LIPS TRACKING IDENTIFICATION OF A CORRECT QURANIC LETTERS PRONUNCIATION FOR TAJWEED TEACHING AND LEARNING
Abstract: Mastering the recitation of the Holy Quran is an obligation among Muslims and a prerequisite for other acts of worship (Ibadat) such as prayer, pilgrimage, and zikr. However, the traditional way of teaching Quranic recitation is demanding, requiring extensive training time and effort from both teacher and learner. Learning the correct pronunciation of the Quranic letters is the first step in mastering Tajweed (the rules and guidance of Quranic recitation), and the pronunciation of each Arabic letter is determined by its point of articulation and its characteristics. In this paper we implement a lip-tracking technique on video recordings of an expert reciter to extract lip-movement data during the correct pronunciation of the Quranic letters. The extracted lip-movement data allow the letters to be categorized into five groups and the final lip shape for each group to be determined. The technique was then tested on a public (non-expert) reciter, and the results were compared with the expert's for similarity verification. The system extracts the lip movement of an arbitrary user, plots the displacement graph, and compares it with the expert's pronunciation; if the user mispronounces a letter, the error is reported and an improvement is suggested. In the near future, more subjects with different backgrounds will be tested with feedback instructions, and machine learning techniques will be implemented at a later stage for real-time use in the learning process.
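The comparison step described above, deriving a lip-displacement curve per frame and scoring its similarity against the expert's curve, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the landmark layout, function names, and the normalized-correlation score are our assumptions.

```python
import numpy as np

def lip_displacement(landmarks):
    """Vertical lip opening per frame.

    `landmarks` is a hypothetical array of shape (frames, 2, 2):
    for each frame, an (x, y) point on the upper lip and one on the
    lower lip. Returns the Euclidean distance between them per frame.
    """
    upper = landmarks[:, 0, :]
    lower = landmarks[:, 1, :]
    return np.linalg.norm(upper - lower, axis=1)

def similarity_score(expert, user):
    """Normalized cross-correlation of two displacement curves.

    Both curves are standardized (zero mean, unit variance) so the
    score reflects the *shape* of the lip movement, not its scale.
    A score near 1.0 means the user's movement closely tracks the
    expert's; a low or negative score flags a likely mispronunciation.
    """
    e = (expert - expert.mean()) / (expert.std() + 1e-9)
    u = (user - user.mean()) / (user.std() + 1e-9)
    n = min(len(e), len(u))          # truncate to the shorter recording
    return float(np.dot(e[:n], u[:n]) / n)
```

In a full pipeline the landmark points would come from a lip-tracking stage on the video frames; a threshold on `similarity_score` would then decide whether to report an error and suggest a correction to the learner.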