TY - INPR
A1 - Wolf, Norbert Richard
T1 - Die Darwinsche Theorie und die Sprachwissenschaften
N2 - No abstract available.
Y1 - 1989
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-41739
ER -
TY - INPR
A1 - Kiesler, Reinhard
T1 - Por uma fonética arábigo-portuguesa
N2 - The article deals with the phonetic adaptation of Arabic loanwords in Portuguese. It covers stress, vocalism, and consonantism, as well as various context-dependent sound changes.
KW - Phonetik
KW - Lehnwort
KW - Portugiesisch
KW - Arabismus
KW - Historische Phonetik
KW - Arabisch
KW - phonetics
KW - Portuguese
KW - Arabic
Y1 - 1992
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-83519
ER -
TY - INPR
A1 - Kiesler, Reinhard
T1 - A propósito dos arabismos na língua portuguesa
N2 - The article describes the Portuguese words of Arabic origin from several points of view: in comparison with the Arabisms of Spanish, Catalan, and Italian; the number of Portuguese Arabisms; their lexical structure; their distribution across subject areas; and their geographical spread. Only certain and direct Arabisms that are still in use today are examined. Some remarks on internal borrowing (loan translation, semantic loan, and the like) conclude the article.
KW - Portugiesisch
KW - Wortschatz
KW - Arabismus
KW - Struktur
KW - Verbreitung
KW - Anzahl
KW - Sachbereich
Y1 - 1992
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-83608
ER -
TY - INPR
A1 - Kiesler, Reinhard
T1 - Zum Stand der Forschung auf dem Gebiete der französischen Umgangssprache (1994)
N2 - The article surveys research on colloquial French from its beginnings up to about 1992 in four sections: 1. the beginnings, 2. research on language strata, 3. the contributions of sociolinguistics and variationist linguistics, and 4. the work of the "present day" around 1990. A summary and a detailed bibliography conclude the survey. All linguistic levels are covered, from pronunciation through vocabulary to grammar. Where appropriate, references to work on other colloquial languages are included.
KW - Französisch
KW - Umgangssprache
KW - Variationslinguistik
KW - Soziolinguistik
KW - Stilistik
KW - Aussprache
KW - Wortschatz
KW - Grammatik
Y1 - 1994
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-103365
ER -
TY - INPR
A1 - Ruhe, Ernstpeter
T1 - Fantasia en Alsace : Les Nuits de Strasbourg d’Assia Djebar
N2 - No abstract available.
KW - Djebar, Assia / Les nuits de Strasbourg
Y1 - 2000
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-102397
ER -
TY - INPR
A1 - Ruhe, Ernstpeter
T1 - La legende de la ville : l'espace urbain dans la culture francophone issue de l'immigration
N2 - No abstract available.
KW - Kultur
KW - Frankreich
KW - Einwanderung
Y1 - 2001
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-102408
ER -
TY - INPR
A1 - Dandekar, Thomas
T1 - Some general system properties of a living observer and the environment he explores
N2 - In a nice essay published in Nature in 1993, the physicist Richard Gott III started from a human observer and drew a number of witty conclusions about our future prospects, giving estimates for the remaining lifetime of the Berlin Wall, the human race, and all the rest of the universe.
In the same spirit, we derive implications for "the meaning of life, the universe and all the rest" from a few principles. Adams' absurd answer "42" teaches the lesson "garbage in / garbage out" - or suggests that the question is not calculable. We show that the experience of "meaning" and the ability to decide fundamental questions that cannot be decided by formal systems imply central properties of life: ever higher levels of internal representation of the world and an escalating tendency to become more complex. An observer "collecting observations" and three measures of complexity are examined. A theory of living systems is derived, focusing on their internal representation of information. Living systems are more complex than Kolmogorov complexity ("life is NOT simple") and overcome the decision limits of formal systems (Gödel's theorem), as illustrated for the cell cycle. Only a world with very finely tuned environments allows life. Such a world is itself rather complex and hence exceedingly large in its space of different states - a living observer thus has a high probability of residing in a complex and fine-tuned universe.
N2 - This article is a preprint and discussion paper and attempts - much like an outstanding example by the physicist Richard Gott III (published in Nature in 1993) - to derive very general principles for us from simple basic assumptions. In my article these are, in particular, principles for observing, for the existence of an observer, and even for the existence of our complex world, the further development of life, the emergence of meaning, and how humans decide fundamental questions. At first pass, such a far-reaching undertaking cannot really succeed completely and accurately; the article is therefore meant only as an amusing speculation, but exact (and more modest) partial results will also be published later, after peer review.
KW - Komplex
KW - Entscheidung
KW - Natürliche Auslese
KW - Evolution
KW - Bedeutung
KW - Komplexität
KW - Gödel
KW - Entscheidungen
KW - complexity
KW - decision
KW - evolution
KW - selection
KW - meaning
Y1 - 2007
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-33537
ER -
TY - INPR
A1 - Dandekar, Thomas
T1 - Why are nature's constants so fine-tuned? The case for an escalating complex universe
N2 - Why is our universe so fine-tuned? In this preprint we argue that this is not a strange accident but that fine-tuned universes can be considered exceedingly large if one counts the number of observable different states (i.e. one aspect of the more general preprint http://www.opus-bayern.de/uni-wuerzburg/volltexte/2009/3353/). Looking at parameter variation for the same set of physical laws, simple and complex processes (including life) and worlds in a multiverse are compared in simple examples. Next, the anthropocentric principle is extended, as many conditions that are generally given an anthropocentric interpretation in fact only ensure a large space of different system states. In particular, the observed over-tuning beyond the level required for our existence is explainable by these system considerations. More formally, the state space of different systems becomes measurable and comparable by looking at their output behaviour. We show that highly interacting processes are more complex than Chaitin complexity, the latter denoting processes not compressible by shorter descriptions (Kolmogorov complexity).
The complexity considerations help to better study and compare different processes (programs, living cells, environments, and worlds), including their dynamic behaviour, and can be used for model selection in theoretical physics. Moreover, the large size (in terms of different states) of a world allowing complex processes, including life, can be determined in a model calculation by applying discrete histories from loop quantum theory. Nevertheless, a lot remains to be done - hopefully this preprint stimulates further efforts in this area.
N2 - This preprint elaborates on one aspect of the preprint http://www.opus-bayern.de/uni-wuerzburg/volltexte/2009/3353/, namely the balance between the constants of our laws of nature. The question of such a balance arises only if one imagines a multiverse with many alternative universes with different values for the constants of nature and then finds that in our universe in particular they are tuned optimally for life and, more generally, for complex, self-organizing structures (so-called fine-tuning). This is frequently explained by the anthropocentric principle. However, this does not explain, for example, why the fine-tuning is considerably finer and more precise than is necessary for the existence of an observer. We show instead that our universe is especially complex and has a very large state space, and that conditions which permit high complexity also make an observer and complex processes such as life possible. In general, an especially complex state space takes up the lion's share of all alternatives. Our complexity analysis can be applied to a wide variety of processes (worlds, environments, living cells, computer programs), helps with model selection in theoretical physics (examples are shown), and can also be computed directly; this is carried out for a model calculation in loop quantum theory. Nevertheless, much further work remains to be done here; this preprint can only provide a first impulse.
KW - Natur
KW - Naturgesetz
KW - Beobachter
KW - Kolmogorov-Komplexität
KW - Berechnungskomplexität
KW - Fundamentalkonstante
KW - Nature constants
KW - complexity
KW - observer
KW - fine-tuning
KW - multiverse
Y1 - 2008
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-34488
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Markup overlap: Improving Fragmentation Method
N2 - Overlapping is a common word used to describe documents whose structural dimensions cannot be adequately represented using a tree structure, for instance a quotation that starts in one verse and ends in another. The problem of overlapping hierarchies is a recurring one that has been addressed by a variety of approaches. There are XML-based solutions as well as non-XML ones. The XML-based solutions are: multiple documents, empty elements, fragmentation, out-of-line markup, JITT, and BUVH. The non-XML approaches comprise CONCUR/XCONCUR, MECS, LMNL, etc. This paper briefly presents the state of the art in overlapping hierarchies and introduces two variations on the TEI fragmentation markup that have several advantages.
KW - XML
KW - Überlappung
KW - Fragmentierung
KW - Overlapping
KW - Fragmentation
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49084
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Understanding the Vex Rendering Engine
N2 - The Visual Editor for XML (Vex) [1], used by TextGrid [2] and other applications, has a rendering engine and a layout engine. The layout engine is well documented, but the rendering engine is not. This lack of documentation has made refactoring and extending the editor hard and tedious; for instance, many CSS 2.1 and upcoming CSS3 properties have not been implemented. Software developers in projects that use Vex, such as TextGrid, would like to update its CSS rendering engine in order to provide advanced user interfaces and to support different document types. To minimize the effort of extending Vex's functionality, I found it beneficial to write basic documentation of the Vex software architecture in general and of its CSS rendering engine in particular. The documentation is mainly based on the idea of architectural layered diagrams; layered diagrams can help developers understand a program's source code faster and more easily in order to alter it and fix errors. This paper is written to provide direct support for exploring and comprehending the Vex source code. It discusses the Vex software architecture, describing the organization of the packages that make up the software, the architecture of its CSS rendering engine, and an algorithm explaining the working principle of the rendering engine.
KW - Cascading Style Sheets
KW - Softwarearchitektur
KW - CSS
KW - Processing model
KW - Software architecture
KW - Software design
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51333
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Reference Architecture, Design of Cascading Style Sheets Processing Model
N2 - The technique of using Cascading Style Sheets (CSS) to format and present structured data is called a CSS processing model. For instance, a CSS processing model for XML documents describes the steps involved in formatting and presenting XML documents on screen or on paper. Many software applications, such as browsers and XML editors, have their own CSS processing models, which are part of their rendering engines. Each browser renders CSS layout differently, based on its own CSS processing model; as a result, an inconsistency in the support of CSS features arises: some browsers support more CSS features than others, and the rendering itself varies. Moreover, the W3C standards are not even adhered to by some browsers, such as Internet Explorer. Test suites and other hacks and filters cannot definitively solve these problems, because such solutions are temporary and fragile. To mitigate this inconsistency and the browser compatibility issues with respect to CSS, a reference CSS processing model is needed; by extension, it could even allow interoperability across CSS rendering engines. A reference architecture would provide a common software architecture and common interfaces, and would facilitate refactoring, reuse, and automated unit testing. In [2] a reference architecture for browsers has been proposed. However, that reference architecture is a macro reference model which does not consider the individual components of rendering and layout engines separately. In this paper, an attempt to develop a reference architecture for CSS processing models is discussed.
In addition, the rendering and layout engines of the Vex editor [3], as well as an extended version of the editor used in the TextGrid project [5], are presented in order to validate the proposed reference architecture.
KW - Cascading Style Sheets
KW - XML
KW - Softwarearchitektur
KW - CSS
KW - Processing Model
KW - Reference Architecture
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51328
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Empirical Study on Screen Scraping Web Service Creation: Case of Rhein-Main-Verkehrsverbund (RMV)
N2 - The Internet is the biggest database that science and technology have ever produced. The World Wide Web is a large repository of information that cannot be used for automation by many applications because of its limited target audience. One solution to the automation problem is to develop wrappers. Wrapping is a process whereby unstructured extracted information is transformed into a more structured form such as XML, which can be provided as a web service to other applications. A web service is a web page whose content is well structured so that a computer program can consume it automatically. This paper describes the steps involved in constructing wrappers manually in order to automatically generate web services.
KW - HTML
KW - XML
KW - Wrapper
KW - Web service
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49396
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Java Web Frameworks: Which One to Choose?
N2 - This article discusses web frameworks that are available to a software developer in the Java language. It introduces the MVC paradigm and some frameworks that implement it. The article presents an overview of the Struts, Spring MVC, and JSF frameworks, as well as guidelines for selecting one of them as a development environment.
KW - Java Frameworks
KW - MVC
KW - Struts
KW - Spring
KW - JSF
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49407
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Doing Webservices Composition by Content-based Mashup: Example of a Web-based Simulator for Itinerary Planning
N2 - Web services composition is traditionally carried out using composition technologies such as the Business Process Execution Language (BPEL) [1] and the Web Service Choreography Interface (WSCI) [2]. The composition technology involves the processes of web service discovery, invocation, and composition. However, these technologies are neither easy nor flexible enough, because they are mainly developer-centric. Moreover, the majority of websites have not yet embarked on the world of web services, although they have very important and useful information to offer. Is it because they have not understood the usefulness of web services, or is it because of the costs? Whatever the answers to these questions might be, time and money are definitely required in order to create and offer web services. To avoid these expenditures, wrappers [7] that automatically generate web services from websites would be a cheaper and easier solution. Mashups offer a different way of doing web services composition. In a web environment, a mashup is a web application that brings together data from several sources using web services, APIs, wrappers, and so on, in order to create an entirely new application that was not available before.
This paper first presents an overview of mashups and the process of web service invocation and composition based on mashups, and then describes an example of a web-based simulator for a navigation system in Germany.
KW - Mashup
KW - Wrapper
KW - Webservice Composition
KW - Wrappers
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-50036
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - A Rule-based Statistical Classifier for Determining a Base Text and Ranking Witnesses In Textual Documents Collation Process
N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try to reconstruct it from the diverging documents. Whether one follows eclecticism, stemmatics, or copy-text editing, one is expected to explicitly or indirectly select one of the documents as a starting point, or base text, which can be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately, the process of giving priority to one of the documents, also known as witnesses, is a subjective one. In fact, even cladistics, which could be considered a computer-based approach to implementing stemmatics, does not recommend that users select a certain witness as a starting point for reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is employed to assist text scholars in their attempts to reconstruct a non-existing document from the available witnesses. The method developed in this study consists of selecting a base text successively and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. Witnesses' scores are weighted using a weighting system based on the effects of the types of textual modification on the process of reconstructing original documents. Users can choose between baseless and base-text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise, a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a histogram.
KW - Textvergleich
KW - Text Mining
KW - Gothenburg Modell
KW - Bayes-Klassifikator
KW - Textual document collation
KW - Base text
KW - Gothenburg model
KW - Bayesian classifier
KW - Textual alterations weighting system
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-57465
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Assisting Understanding, Retention, and Dissemination of Religious Texts Knowledge with Modeling, and Visualization Techniques: The Case of The Quran
N2 - Learning a book generally involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage one's knowledge in such a way that it can be easily remembered and efficiently transmitted to others. In this paper, books organized in chapters consisting of verses are considered the source of the knowledge to be modeled.
The knowledge model consists of verses with their metadata and semantic annotations. The metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which will be published on a web mashup. The meta-model, with links between its elements, constitutes a knowledge base. An XML-based annotation system that breaks the learning process down into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata and for annotating chapters' contents according to user-selected semantics, as well as templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of the knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the results obtained show that multiple perspectives of information modeling can be successfully applied to religious texts. It is hoped that this short ongoing study will motivate others to engage in devising and offering software systems for cross-religion learning.
KW - Wissensmanagement
KW - Koran
KW - Knowledge Modeling
KW - Meta-model
KW - Knowledge Management
KW - Content Management
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55927
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Design and Implementation of Architectures for Interactive Textual Documents Collation Systems
N2 - One of the main purposes of collating textual documents is to identify a base text, or the closest witness to the base text, by analyzing and interpreting the differences, also known as types of changes, that might exist between those documents. Based on this fact, it is reasonable to argue that the explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process. The identification could be carried out by an interpretation module after alignment has taken place. Unfortunately, existing collation software such as CollateX and Juxta's collation engine does not have interpretation modules; in fact, they implement the Gothenburg model [1] of the collation process, which does not include an interpretation unit. Currently, neither CollateX nor Juxta's collation engine distinguishes between the types of changes in its critical apparatus, and neither offers statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves on the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Moreover, two architectures implementing the model in order to solve the problem of interactive collation are discussed as well. Each architecture uses the CollateX library and provides, on the one hand, preprocessing functions for transforming input documents into the CollateX input format and, on the other hand, a post-processing module for enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes and for linking collated source documents with the collation results are also introduced.
KW - Softwarearchitektur
KW - Textvergleich
KW - service based software architecture
KW - service brokerage
KW - interactive collation of textual variants
KW - Gothenburg model of collation process
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-56601
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Assisting Analysis and Understanding of Quran Search Results with Interactive Scatter Plots and Tables
N2 - The Quran is the holy book of Islam, consisting of 6236 verses divided into 114 chapters called suras. Many verses are similar and even identical. Searching for similar texts (e.g. verses) can return thousands of verses, which, when displayed completely or partly as a textual list, would make analysis and understanding difficult and confusing. Moreover, it would be visually impossible to instantly figure out the overall distribution of the retrieved verses in the Quran. As a consequence, reading and analyzing the verses would be tedious and unintuitive. In this study, a combination of interactive scatter plots and tables has been developed to assist analysis and understanding of the search results. Retrieved verses are clustered by chapter, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and figure out the places of revelation of the verses. Users visualize the complete result and can select a region of the plot to zoom in, or click on a marker to display a table containing the verses with an English translation side by side.
KW - Text Mining
KW - Visualisierung
KW - Koran
KW - Information Visualization
KW - Visual Text Mining
KW - Scatter Plot
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55840
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran
N2 - Computationally categorizing the Quran's chapters has mainly been confined to determining the chapters' places of revelation. However, this broad classification is not sufficient for understanding and interpreting the Quran effectively and thoroughly. The chronology of revelation would improve not only comprehension of the philosophy of Islam, but also the ease of implementing and memorizing its laws and recommendations. This paper attempts to estimate the chapters' possible dates of revelation through their lexical frequency profiles. A hybrid statistical classifier consisting of stemming and clustering algorithms for comparing the lexical frequency profiles of chapters and deriving dates of revelation has been developed. The classifier is trained using some chapters with known dates of revelation. It then classifies chapters with uncertain dates of revelation by computing their proximity to the training ones. The results reported here indicate that the proposed methodology yields usable results in estimating the dates of revelation of the Quran's chapters based on their lexical contents.
KW - Text Mining
KW - Maschinelles Lernen
KW - text categorization
KW - Bayesian classifier
KW - distance-based classifier
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-54712
ER -
TY - INPR
A1 - Nassourou, Mohamadou
T1 - Using Machine Learning Algorithms for Categorizing Quranic Chapters by Major Phases of Prophet Mohammad's Messengership
N2 - This paper discusses the categorization of Quranic chapters by the major phases of Prophet Mohammad's messengership using machine learning algorithms.
First, the chapters were categorized by place of revelation using Support Vector Machine and naïve Bayesian classifiers separately, and their results were compared to each other, as well as to the existing traditional Islamic and Western orientalist classifications. The chapters were categorized into Meccan (revealed in Mecca) and Medinan (revealed in Medina). After that, the chapters of each category were clustered using a kind of fuzzy single-linkage clustering approach, in order to correspond to the major phases of Prophet Mohammad's life. The major phases of the Prophet's life were manually derived from the Quranic text, as well as from the secondary Islamic literature, e.g. hadiths and exegesis. Previous studies on computing the places of revelation of Quranic chapters relied heavily on features extracted from existing background knowledge of the chapters. For instance, it is known that Meccan chapters contain mostly verses about faith and related problems, while Medinan ones encompass verses dealing with social issues, battles, etc. These features are by themselves insufficient as a basis for assigning the chapters to their respective places of revelation; in fact, there are exceptions, since some chapters contain both Meccan and Medinan features. In this study, the features of each category were automatically created from very few chapters, whose places of revelation have been determined through the identification of historical facts and events such as battles, the migration to Medina, etc. Chapters with unanimously agreed places of revelation were used as the initial training set, while the remaining chapters formed the testing set. The classification process was made recursive by regularly augmenting the training set with correctly classified chapters, in order to classify the whole testing set. Each chapter was preprocessed by removing unimportant words, stemming, and representation with a vector space model. The results of this study show that the two classifiers produced usable results, with the support vector machine classifier outperforming the naïve Bayesian one. This study indicates that the proposed methodology yields encouraging results for arranging Quranic chapters by the phases of Prophet Mohammad's messengership.
KW - Koran
KW - Maschinelles Lernen
KW - Text categorization
KW - Clustering
KW - Support Vector Machine
KW - Naïve Bayesian
KW - Place of revelation
KW - Stages of Prophet Mohammad's messengership
KW - Quran
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66862
ER -