Combining Image and Text Processing for the Computational Reading of Arabic Calligraphy
General material designation
[Thesis]
Name of first author
Alsalamah, Seetah
Names of other authors
Batista-Navarro, Riza Theresa
Publication, distribution, etc.
Name of publisher, distributor, etc.
The University of Manchester (United Kingdom)
Date of publication, distribution, etc.
2020
Physical description
Specific material designation and extent of item
221
Dissertation notes
Dissertation details and type of degree
Ph.D.
Degree-granting institution
The University of Manchester (United Kingdom)
Year degree granted
2020
Summary or abstract notes
Text of note
The Arabic language originally made use of ancient calligraphy, which is found in historical documents and the Holy Quran. This calligraphy represents Arabic text in a more cursive style with a mixture of complex, constructed word forms. Such writing styles make it difficult to segment the letters and read the text. Since they originated long ago, they have mostly been associated with Islamic culture and quotations from the Holy Quran, and this art form is still used today for various purposes in Arabic representation and Islamic calligraphy. The challenges posed by this type of text motivate the search for ways to simplify the reading and digitisation processes. To the best of our knowledge, this is the first attempt to investigate the recognition of Arabic calligraphy images and the reading of the text drawn in them. Owing to the lack of resources in the calligraphy domain, several datasets were developed for this research: an Arabic calligraphy image dataset was collected, calligraphy letter image datasets were generated from it, and a calligraphy quotations corpus was manually annotated based on the image dataset. All of these datasets were used for training, testing and support in the different phases applied to achieve the primary goal of reading calligraphy. A new approach to the recognition of Arabic calligraphy was developed that processes a scanned image and extracts a list of probable quotations. It compares two detection methods, maximally stable extremal regions (MSER) and the sliding window (SW), to identify the intersecting letters in the image.
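The sliding-window side of the detection comparison can be sketched roughly as follows. This is an illustrative toy, not the thesis implementation: the window size, stride, and function names are assumptions, and the classifier that would score each candidate patch is omitted.

```python
# Hypothetical sketch of sliding-window (SW) candidate generation:
# enumerate fixed-size patches over a scanned page; each patch would
# then be passed to a letter-recognition model (not shown here).
import numpy as np

def sliding_windows(image, win=32, stride=16):
    """Yield (x, y, patch) for every window position inside the image."""
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, image[y:y + win, x:x + win]

page = np.zeros((64, 64), dtype=np.uint8)  # toy stand-in for a scanned page
patches = list(sliding_windows(page))
print(len(patches))  # 9 candidate windows for these toy dimensions
```

MSER, by contrast, proposes regions adaptively from image intensity rather than exhaustively enumerating positions, which is one plausible reason it detects letters better on cursive calligraphy.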
Features of the detected letters were extracted by comparing histogram of oriented gradients (HOG) features with a bag of speeded-up robust features (SURF), which were used to train two different recognition models: support vector machines (SVM) and a convolutional neural network (CNN). To investigate which of these models and image feature descriptors best fit the calligraphy letters, the results of the recognition process were placed in a bag-of-letters (BOL) feature. This feature was used to search the corpus via two different methodologies to produce the list of probable quotations. The first method compares the target BOL with the corpus index BOL for each element, while the second generates a list of related words from the BOL and then searches the corpus for any quotations containing two or more of these words. The results from reading 388 calligraphy images showed that the MSER method outperforms the SW method in detecting letters. Moreover, BOL matching against the corpus predicts more accurate lists of quotations than the word-generation process. The best methodology combines the SVM recognition model with HOG feature extraction, correctly predicting more than 74% of the top-ten quotations using the BOL matching process.
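The first search methodology (BOL matching) can be sketched as below. This is a minimal illustration under stated assumptions: the corpus is modeled as a plain list of quotation strings, the overlap score is a simple multiset intersection, and all names are illustrative rather than taken from the thesis.

```python
# Hypothetical sketch of bag-of-letters (BOL) matching: rank corpus
# quotations by how many recognised letters they share with the target.
from collections import Counter

def bag_of_letters(text):
    """Count each non-space character, giving an unordered bag of letters."""
    return Counter(ch for ch in text if not ch.isspace())

def rank_quotations(target_bol, corpus, k=10):
    """Return the top-k quotations by multiset overlap with the target BOL."""
    return sorted(
        corpus,
        key=lambda q: -sum((target_bol & bag_of_letters(q)).values()),
    )[:k]

corpus = ["بسم الله", "الحمد لله", "سلام"]   # toy quotation corpus
target = bag_of_letters("بسم")               # letters recognised from an image
print(rank_quotations(target, corpus)[0])    # "بسم الله" shares the most letters
```

Because the bag discards letter order, this matching tolerates the jumbled sequence in which intersecting calligraphy letters are detected, which is plausibly why it outperforms the word-generation method.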
Subject (topical term or general noun phrase)
Uncontrolled subject term
Computer science
Uncontrolled subject term
Linguistics
Personal name as main entry (primary intellectual responsibility)