TY - INPR A1 - Volkmann, Armin A1 - Bock, Sina A1 - Seibt, Daniela A1 - Kümmet, Sonja A1 - Weiß, Michael A1 - Dietz, Elisabeth A1 - Huss, Patrick A1 - Heer, Anna A1 - El Hassan, Naitelqadi T1 - Geisteswissenschaft und Geografische Informationssysteme (GIS): Erstellung von Kartierungen mit kommerzieller und Open Source Software im Vergleich T1 - Digital Humanities and Geographic Information Systems (GIS): Making maps with commercial and open source software N2 - Der Einsatz von Geographischen Informationssystemen (GIS) bietet auch für die Geisteswissenschaften zahlreiche Ansätze zur Generierung von neuem Wissen. Die GIS-Software ist jedoch unterschiedlich geeignet für geisteswissenschaftliche Fragestellungen. Getestet wurden daher zwei kommerzielle und vier Open Source GIS-Programme: MapInfo, ArcGIS, Quantum GIS, gvSIG, DIVA-GIS und SAGA. MapInfo zeichnet sich besonders für GIS-Anfänger durch seine große Benutzerfreundlichkeit aus. Jedoch sind die Anschaffungskosten recht hoch. ArcGIS weist den größten Nutzungsumfang auf, wobei jedoch keine oder kaum eine „intuitive“ Nutzung möglich ist. Zudem sind die laufenden Kosten durch aufwändige Abo-Lizenzverträge besonders hoch. Quantum GIS ist eine freie Software, die benutzerfreundlich ist und auch Anfängern einen leichten Einstieg ermöglicht. Hunderte Erweiterungen machen Quantum GIS sehr leistungsstark und universal einsetzbar. gvSIG ist nicht ganz leicht zu bedienen, da zudem die Dokumentation nur fragmentarisch vorliegt. Der große Funktionsumfang macht es jedoch zu einem vollwertigen GIS, wenn auch manch ergänzende Funktion fehlt. DIVA-GIS ermöglicht einen schnellen Einstieg durch seine gute Dokumentation. Man gelangt jedoch recht bald an die Grenzen des Nutzungsumfangs durch die eingeschränkte Funktionalität. 
SAGA hingegen erfüllte alle hier gestellten Anforderungen, sodass es, trotz der geringeren Anzahl von Erweiterungen, zusammen mit Quantum GIS als Open Source eine echte Alternative zu kommerziellen GIS-Programmen darstellt. N2 - The use of Geographic Information Systems (GIS) also offers the Humanities an interesting method for analyzing questions of space and time. To generate new results, suitable GIS software for regular use has to be identified. In this article we tested two commercial and four open source GIS programs: MapInfo, ArcGIS, Quantum GIS, gvSIG, DIVA-GIS and SAGA. ArcGIS has the greatest functionality, but it is very expensive and not easy to use. MapInfo stands out for GIS beginners thanks to its high usability; however, its cost is also quite high. Quantum GIS is free software that is user-friendly and easy for beginners to get started with; hundreds of extensions make it very powerful and versatile. gvSIG is not very easy to use, and some ancillary functions are missing. DIVA-GIS provides a quick start thanks to its good documentation, but its limited functionality is reached rather soon. SAGA's many functions make it a full-fledged GIS, despite its lower number of extensions. Altogether, for the Humanities the open source Quantum GIS represents a viable alternative to expensive commercial GIS software. KW - Geoinformationssystem KW - Literaturwissenschaft KW - Open Source KW - Digital Humanities KW - Geographisches Informationssystem KW - GIS KW - Digital Humanities KW - Geographic Information Systems KW - GIS KW - Literary Studies KW - Open Source Software Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-74470 ER - TY - INPR A1 - Volkmann, Armin T1 - Stempelverzierte Keramikfunde der Völkerwanderungszeit im Barbaricum – Neue Funde vom frühmittelalterlichen Burgwall bei Kopchin (Lkr.
Bautzen) T1 - Stamp-decorated pottery from the Migration Period in the Barbaricum – New finds from the early medieval hill fort of Kopchin in Upper Lusatia (East Germany) N2 - Durch die systematische Sichtung des Fundmaterials des frühmittelalterlichen Burgwalls von Kopchin in der Oberlausitz konnten einige Keramikscherben identifiziert werden, die wohl älter als bisher angenommen sind und in die Völkerwanderungszeit datieren. Dies ist von besonderer Relevanz, da für Nordostdeutschland traditionell eine Besiedlungslücke im 5.–7. Jh. AD postuliert wird. Dieser Hiatus ist offenbar teils auch der schwierigen sicheren Datierung der oft recht unspezifischen Keramiktypen geschuldet. So konnten mit wachsendem Kenntnisstand dieser Keramiken in den letzten Jahren auch einige völkerwanderungszeitliche Fundstellen, besonders in Nordbrandenburg und im deutsch-polnischen Pommern lokalisiert werden. In Nordost-Sachsen sind die vorgestellten singulären Funde des 5.–6. Jhs. AD jedoch bisher ohne sichere Parallelen, auch wenn mittlerweile einige Fundstellen der Völkerwanderungszeit in der Region erkannt worden sind. N2 - Through a systematic review of the finds from the early medieval Slavic hill fort of Kopchin in Upper Lusatia, some pottery fragments were identified that are probably older than previously assumed: they date to the Migration Period. This is of particular relevance because a settlement gap in north-eastern Germany during the 5th–7th centuries AD is traditionally postulated. This hiatus is apparently due in part to the difficulty of securely dating the often unspecific ceramic types of this period. With growing knowledge of these ceramics over the last years, new sites of the Migration Period have been located, especially in northern Brandenburg and in German-Polish Pomerania. In north-eastern Saxony, however, the pottery fragments presented here are still singular finds of the 5th–6th centuries AD without secure parallels in the region, even though several Migration Period sites have meanwhile been recognized there. 
KW - Germanen KW - Spätantike KW - Vor- und Frühgeschichte KW - Frühmittelalter KW - Barbaricum KW - Burgwall KW - slawisch KW - Archäologie KW - Völkerwanderungszeit KW - Stempelkeramik KW - Late Antiquity KW - early medieval Slavic hill fort KW - Migration Period Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-74432 ER - TY - INPR A1 - Volkmann, Armin T1 - Eisenproduktionswerkplätze der späten römischen Kaiserzeit (3.–5. Jh. AD) im inneren Barbaricum T1 - Workshops of iron production from the late Roman period (3rd–5th century AD) in Central Barbaricum N2 - Durch systematische Prospektionen in Südbrandenburg wurden auch bei den devastierten Ortschaften Klein Görigk und Kausche zahlreiche bisher unbekannte Fundplätze entdeckt (vgl. Abb. 1). Diese verdeutlichen den Fundreichtum dieser kargen Landschaft als „archäologisches Fenster“ einer fallbeispielhaft intensiv erforschten Region. Die sehr zahlreichen Werkplätze der späten römischen Kaiserzeit (3.–5. Jh. AD) belegen eine massenhafte Eisenproduktion, die über den Eigenbedarf weit hinausging und die Grundlage für Handel darstellte. Interessanterweise sind im Eisenverhüttungszentrum des Niederlausitzer Grenzwalls keine zeitgleichen Siedlungen und Gräberfelder entdeckt worden. Diese liegen etwas weiter entfernt in den fruchtbareren Niederungs- und Beckenlandschaften der Umgebung. Die Werkplätze sind also nur temporär zur Eisenverhüttung aufgesucht worden. Die stereotyp errichteten Eisenproduktionsstätten wurden in unmittelbarer Nähe zum lokal vorkommenden „Raseneisenerz“ im waldreichen Gebiet errichtet. Durch die massenhafte Eisenproduktion, die äußerst viel Holzkohle benötigte, ist auch von negativen Folgen auf die prähistorische Umwelt auszugehen. Indizien einer mutmaßlichen „ökologischen Krise“ zum Ende der spätgermanischen Kultur (Mitte 5. Jh. AD) konnten jedoch bisher nicht sicher belegt werden. 
N2 - Through systematic surveys, numerous previously unknown sites were discovered at the devastated villages "Klein Görigk" and "Kausche" in southern Brandenburg (Fig. 1). This illustrates the richness of archaeological sites in this barren landscape, an "archaeological window" into an intensively examined region. The numerous workshops of the late Roman period (3rd to 5th century AD) indicate mass iron production in this region, which went far beyond local needs and was the economic basis for trade. Interestingly, no settlements or cemeteries of this period have been discovered in the iron-smelting centres of the "Niederlausitz"; these lie a little further away in the more fertile lowland basins. The workshops were thus visited only temporarily for iron smelting. The stereotypically built iron-production workshops were established in the woodlands, in close proximity to locally occurring bog iron ore. Because the mass production of iron required very large amounts of charcoal, a negative impact on the prehistoric environment is also presumed. However, evidence of the alleged "ecological crisis" at the end of the late Germanic culture (mid-5th century AD) could not yet be proven. KW - Eisenproduktion KW - Germanen KW - Spätantike KW - Vor- und Frühgeschichte KW - Barbaricum KW - Landschaftsarchäologie KW - Umweltarchäologie KW - Wirtschaftsarchäologie KW - Barbaricum KW - iron production KW - Germanic KW - Late Antiquity KW - Landscape Archaeology Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-74420 ER - TY - INPR A1 - Reiss, Harald T1 - Time scales and existence of time holes in non-transparent media N2 - The analysis presented in this paper applies to experimental situations where observers or objects to be studied, all at stationary positions, are located in environments the optical thickness of which is strongly different. 
Non-transparent media comprise thin metallic films, packed or fluidised beds, superconductors, the Earth’s crust, and even dark clouds and other cosmological objects. The analysis applies mapping functions that correlate physical events, e, in non-transparent media, with their images, f(e), tentatively located on a standard physical time scale. The analysis demonstrates, however, that physical time, in its rigorous sense, does not exist under non-transparency conditions. A proof of this conclusion is attempted in three steps: (i) the theorem “there is no time without space and events” is accepted, (ii) images f[e(s,t)] do not constitute a dense, uncountably infinite set, and (iii) sets of images that are not uncountably infinite do not create physical time but only time-like sequences. As a consequence, mapping f[e(s,t)] in non-transparent space does not create physical analogues to the mathematical structure of the ordered, dense half-set R+ of real numbers, and reverse mapping, f-1f[e(s,t)], the mathematical inverse problem, would not allow unique identification and reconstruction of original events from their images. In these cases, causality, as well as invariance of physical processes under time reversal, might be violated. An interesting problem is whether temporal cloaking (a time hole) in a transparent medium, as very recently reported in the literature, can be explained by the present analysis. Existence of time holes could perhaps be possible, not in transparent but in non-transparent media, as follows from the sequence of images, f[e(s,t)], that is not uncountably infinite, in contrast to R+. Impacts are expected for understanding physical diffusion-like, radiative transfer processes and stability models to protect superconductors against quenches. There might be impacts also in relativity, quantum mechanics, nuclear decay, or in systems close to their phase transitions. The analysis is not restricted to objects of laboratory dimensions. 
KW - Zeitrichtung KW - Strahlungstransport KW - Supraleiter KW - Nicht-Transparente Medien KW - Physikalische Zeit KW - Inverse Probleme KW - Time hole KW - mapping function KW - Monte Carlo simulation Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-73554 N1 - Überarbeitung des Artikels urn:nbn:de:bvb:20-opus-67268 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Towards a Knowledge-Based Learning System for The Quranic Text N2 - In this research, an attempt to create a knowledge-based learning system for the Quranic text has been performed. The knowledge base is made up of the Quranic text along with detailed information about each chapter and verse, and some rules. The system offers the possibility to study the Quran through web-based interfaces, implementing novel visualization techniques for browsing, querying, consulting, and testing the acquired knowledge. Additionally the system possesses knowledge acquisition facilities for maintaining the knowledge base. KW - Wissensbanksystem KW - Wissensmanagement KW - Text Mining KW - Visualisierung KW - Koran KW - Knowledge-based System KW - Knowledge Management System KW - Text Mining KW - Visualization KW - Quran Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-70003 ER - TY - INPR A1 - Reiss, Harald T1 - Physical time and existence of time holes in non-transparent media N2 - The analysis presented in this paper applies to experimental situations where observers or objects to be studied (both stationary, with respect to each other) are located in environments the optical thickness of which is strongly different. By their large optical thickness, non-transparent media are clearly distinguished from their transparent counterparts. Non-transparent media comprise thin metallic films, packed or fluidised beds, the Earth’s crust, and even dark clouds and other cosmological objects. 
As a representative example, a non-transparent slab is subjected to transient disturbances, and a rigorous analysis is presented whether physical time reasonably could be constructed under such condition. The analysis incorporates mapping functions that correlate physical events, e, in non-transparent media, with their images, f(e), tentatively located on a standard physical time scale. The analysis demonstrates, however, that physical time, in its rigorous sense, does not exist under non-transparency conditions. A proof of this conclusion is attempted in three steps: (i) the theorem “there is no time without space and events” is accepted, (ii) images f[e(s,t)] do not constitute a dense, uncountably infinite set, and (iii) sets of images that are not uncountably infinite do not create physical time but only time-like sequences. As a consequence, mapping f[e(s,t)] in non-transparent space does not create physical analogues to the mathematical structure of the ordered, dense half-set R+ of real numbers, and reverse mapping, f-1f[e(s,t)] would not allow unique identification and reconstruction of original events from their images. In these cases, causality and determinism, as well as invariance of physical processes under time reversal, might be violated. Existence of time holes could be possible, as follows from the sequence of images, f[e(s,t)], that is not uncountably infinite, in contrast to R+. Practical impacts are expected for understanding physical diffusion-like, radiative transfer processes, stability models to protect superconductors against quenches or for description of their transient local pair density and critical currents. Impacts would be expected also in mathematical formulations (differential equations) of classical physics, in relativity and perhaps in quantum mechanics, all as far as transient processes in non-transparent space would be concerned. 
An interesting problem is whether temporal cloaking (a time hole) in a transparent medium, as very recently reported in the literature, can be explained by the present analysis. The analysis is not restricted to objects of laboratory dimensions: because of obviously existing radiation transfer analogues, it is tempting to discuss consequences also for much larger structures, in particular if an origin of time is postulated. KW - Strahlungstransport KW - Zeitrichtung KW - Supraleiter KW - Computersimulation KW - Non-transparency KW - disturbance KW - physical time KW - time hole Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-67268 N1 - Von diesem Artikel gibt es eine überarbeitete Version unter urn:nbn:de:bvb:20-opus-73554. ER - TY - INPR A1 - Volkmann, Armin T1 - Die Germanen: Mythos und Forschungsrealität T1 - "Germanen": Myth and reality of research N2 - Der Begriff Germanen ist eine Fremdbezeichnung griechisch-römischer Autoren der Antike. Die so bezeichneten Gruppen hatten aber keine gemeinsame germanische Identität. Die Germanen wurden schon in der Antike als mächtige Gegner stilisiert, was wiederum im Mittelalter im Zuge der Staatenbildungen gerne in den schriftlichen Quellen aufgegriffen wurde. Retrospektiv kann keine "Ursprache" oder "Urheimat" der Germanen rekonstruiert werden. In der Archäologie gibt es jedoch aufgrund des Fundmaterials Kulturräume einer materiellen Kultur, die als germanisch interpretiert werden. Diese sind jedoch nicht mit einer "germanischen Ethnie" zu verwechseln. N2 - The term "Germanic" was coined by Greco-Roman authors of antiquity. However, the so-called Germanic groups had no common Germanic identity. In antiquity, the Germanic tribes were stylized as a powerful opponent by Caesar. In the written sources of the Middle Ages this antique information helped to create a Germanic identity. This myth was an important basis of medieval state formation. 
It is not possible to reconstruct an "original language" or "homeland" of the Germanic people. In archaeology, cultural areas are defined by the finds of archaeological cultures, which are material cultures only. Some of these material cultures, found in Central Europe between the 1st century BC and the 5th century AD, can be labelled Germanic. But they are not identical with a "Germanic ethnicity". KW - Vor- und Frühgeschichte KW - Paläoethnologie KW - Germanen KW - Germanic Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66789 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Using Machine Learning Algorithms for Categorizing Quranic Chapters by Major Phases of Prophet Mohammad’s Messengership N2 - This paper discusses the categorization of Quranic chapters by major phases of Prophet Mohammad’s messengership using machine learning algorithms. First, the chapters were categorized by places of revelation using Support Vector Machine and naïve Bayesian classifiers separately, and their results were compared to each other, as well as to the existing traditional Islamic and Western Orientalist classifications. The chapters were categorized into Meccan (revealed in Mecca) and Medinan (revealed in Medina). After that, chapters of each category were clustered using a kind of fuzzy single-linkage clustering approach, in order to correspond to the major phases of Prophet Mohammad’s life. The major phases of the Prophet’s life were manually derived from the Quranic text, as well as from the secondary Islamic literature, e.g. hadiths and exegesis. Previous studies on computing the places of revelation of Quranic chapters relied heavily on features extracted from existing background knowledge of the chapters. 
For instance, it is known that Meccan chapters contain mostly verses about faith and related problems, while Medinan ones encompass verses dealing with social issues, battles, etc. These features are by themselves insufficient as a basis for assigning the chapters to their respective places of revelation. In fact, there are exceptions, since some chapters do contain both Meccan and Medinan features. In this study, features of each category were automatically created from very few chapters, whose places of revelation have been determined through identification of historical facts and events such as battles, the migration to Medina, etc. Chapters having unanimously agreed places of revelation were used as the initial training set, while the remaining chapters formed the testing set. The classification process was made recursive by regularly augmenting the training set with correctly classified chapters, in order to classify the whole testing set. Each chapter was preprocessed by removing unimportant words, stemming, and representation with the vector space model. The results of this study show that the two classifiers produced usable results, with the support vector machine classifier outperforming the naïve Bayesian one. This study indicates that the proposed methodology yields encouraging results for arranging Quranic chapters by phases of Prophet Mohammad’s messengership. KW - Koran KW - Maschinelles Lernen KW - Text categorization KW - Clustering KW - Support Vector Machine KW - Naïve Bayesian KW - Place of revelation KW - Stages of Prophet Mohammad’s messengership KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66862 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Computing Generic Causes of Revelation of the Quranic Verses Using Machine Learning Techniques N2 - Because many verses of the holy Quran are similar, there is a high probability that similar verses addressing the same issues share the same generic causes of revelation. 
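The recursive training procedure described in the categorization study above (seed a classifier with chapters whose places of revelation are unanimously agreed, then fold newly classified chapters back into the training set) can be sketched with a toy naïve Bayesian classifier. All tokens and labels below are invented stand-ins, not actual Quranic data, and the loop is a simplification of the study's methodology (it does not rank predictions by confidence before augmenting the training set):

```python
import math
from collections import Counter

def train_nb(labeled):
    """labeled: list of (tokens, label) pairs. Returns priors, per-class
    token counts, and the vocabulary."""
    priors = Counter(label for _, label in labeled)
    counts = {label: Counter() for label in priors}
    vocab = set()
    for tokens, label in labeled:
        counts[label].update(tokens)
        vocab.update(tokens)
    return priors, counts, vocab

def classify(model, tokens):
    """Multinomial naive Bayes with Laplace smoothing."""
    priors, counts, vocab = model
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        n = sum(counts[label].values())
        lp = math.log(priors[label] / total)
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

def self_train(seed, unlabeled):
    """Recursive classification: each newly classified chapter is folded
    back into the training set before the next one is classified."""
    labeled, results = list(seed), {}
    for tokens in unlabeled:
        label = classify(train_nb(labeled), tokens)
        results[tuple(tokens)] = label
        labeled.append((tokens, label))
    return results

# Invented toy "chapters": faith vocabulary vs. social/battle vocabulary.
seed = [(["faith", "resurrection", "judgment"], "Meccan"),
        (["battle", "law", "community"], "Medinan")]
results = self_train(seed, [["faith", "judgment"], ["battle", "treaty"]])
```

The study additionally compares this against a support vector machine; a naïve Bayes is shown here only because it fits in a few self-contained lines.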
In this study, machine learning techniques have been employed in order to automatically derive causes of revelation of Quranic verses. The derivation of the causes of revelation is viewed as a classification problem. Initially the categories are based on the verses with known causes of revelation, and the testing set consists of the remaining verses. Based on a computed threshold value, a naïve Bayesian classifier is used to categorize some verses. After that, using a decision tree classifier the remaining uncategorized verses are separated into verses that contain indicators (resultative connectors, causative expressions, etc.) and those that do not. As for the verses having indicators, each one is segmented into its constituent clauses by identification of the linking indicators. Then a dominant clause is extracted and considered either as the cause of revelation, or post-processed by adding or subtracting some terms to form a causal clause that constitutes the cause of revelation. Concerning the remaining unclassified verses without indicators, a naïve Bayesian classifier is again used to assign each one of them to one of the existing classes based on feature and topic similarity. As for verses that could not be classified so far, manual classification was made by considering each verse as a category on its own. The results obtained in this study are encouraging and show that automatic derivation of the generic causes of revelation of Quranic verses is achievable, and reasonably reliable for understanding and implementing the teachings of the Quran. 
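The indicator-based step described above (segmenting a verse at causal connectors and extracting the dominant trailing clause as a candidate cause of revelation) can be sketched as follows. The indicator list is a hypothetical English stand-in for the resultative and causative expressions the study actually works with:

```python
# Hypothetical English stand-ins for the causative/resultative indicators.
INDICATORS = ("because", "so that", "therefore")

def extract_cause(verse):
    """Return the clause following the first causal indicator, or None if
    the verse contains no indicator (such verses would instead go to the
    Bayesian classifier, as in the study)."""
    low = verse.lower()
    for ind in INDICATORS:
        pos = low.find(ind)
        if pos >= 0:
            # The trailing clause is taken as the dominant, causal clause;
            # the study may further post-process it by adding/removing terms.
            return verse[pos + len(ind):].strip(" ,.")
    return None
```

A verse without any indicator simply falls through to `None`, mirroring the study's split between indicator-bearing and indicator-free verses.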
KW - Text Mining KW - Koran KW - Text mining KW - Statistical classifiers KW - Text segmentation KW - Causes of revelation KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-66083 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Philosophical and Computational Approaches for Estimating and Visualizing Months of Revelations of Quranic Chapters N2 - The question of why the structure of the Quran does not follow its chronology of revelation is a recurring one. Some Islamic scholars such as [1] have answered the question using hadiths, as well as other philosophical reasons based on internal evidence of the Quran itself. Unfortunately, many still wonder about this issue today. Muslims believe that the Quran is a summary and a copy of the content of a preserved tablet called Lawhul-Mahfuz located in the heaven. Logically speaking, this suggests that the arrangement of the verses and chapters is expected to be similar to that of the Lawhul-Mahfuz. As for the arrangement of the verses in each chapter, there is unanimity that it was carried out by the Prophet himself under the guidance of Angel Gabriel with the recommendation of God. But concerning the ordering of the chapters, there are reports about some divergences [3] among the Prophet’s companions as to which chapter should precede which one. This paper argues that Quranic chapters might have been arranged according to months and seasons of revelation. In fact, based on some verses of the Quran, it is defendable that the Lawhul-Mahfuz itself is understood to have been structured in terms of the months of the year. In this study, philosophical and mathematical arguments for computing chapters’ months of revelation are discussed, and the result is displayed on an interactive scatter plot. 
KW - Text Mining KW - Visualisierung KW - Koran KW - Text mining KW - Visualization KW - Chronology of revelation KW - Chapters arrangement KW - Quran KW - Lawhul-Mahfuz Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65784 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Computer-based Textual Documents Collation System for Reconstructing the Original Text from Automatically Identified Base Text and Ranked Witnesses N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether it is eclecticism, stemmatics, or copy-text, one is expected to explicitly or indirectly select one of the documents as a starting point or as a base text, which could be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately the process of giving priority to one of the documents, also known as witnesses, is a subjective approach. In fact even Cladistics, which could be considered as a computer-based approach to implementing stemmatics, does not require or recommend that users select a certain witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is used to assist text scholars in their attempts at reconstructing a non-existing document from some available witnesses. The method developed in this study consists of successively selecting a base text and collating it with the remaining documents. Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. 
Witnesses’ scores are weighted using a weighting system based on the effects of types of textual modifications on the process of reconstructing original documents. Users have the possibility to select between baseless and base text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a bar diagram. Additionally, this study includes a recursive algorithm for automatically reconstructing the original text from the identified base text and ranked witnesses. KW - Textvergleich KW - Text Mining KW - Textual document collation KW - Base text KW - Reconstruction of original text KW - Gothenburg model KW - Bayesian classifier KW - Textual alterations weighting system Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65749 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Design and Implementation of a Model-driven XML-based Integrated System Architecture for Assisting Analysis, Understanding, and Retention of Religious Texts: The Case of The Quran N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage her/his knowledge in such a way that it can be easily remembered and efficiently transmitted to others. This paper discusses modeling religious texts using semantic XML markup based on frame-based knowledge representation, with the purpose of assisting understanding, retention, and sharing of the knowledge they contain. In this study, books organized in terms of chapters made up of verses are considered as the source of knowledge to model. Some metadata representing the multiple perspectives of knowledge modeling are assigned to each chapter and verse. 
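The weighted witness scoring described in the collation study above can be sketched on top of Python's `difflib` alignment. The weights are invented for illustration only; the study derives its weighting system from the effects of each type of textual modification, which is not reproduced here:

```python
from difflib import SequenceMatcher

# Hypothetical weights: a replacement (mutation) is assumed here to distort
# the text more than a plain insertion or deletion.
WEIGHTS = {"insert": 1.0, "delete": 1.0, "replace": 2.0}

def collation_score(base, witness):
    """Weighted dissimilarity between a base text and a witness
    (0.0 means the witness is identical to the base text)."""
    sm = SequenceMatcher(a=base.split(), b=witness.split())
    penalty = 0.0
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op != "equal":
            penalty += WEIGHTS[op] * max(i2 - i1, j2 - j1)
    return penalty

def rank_witnesses(base, witnesses):
    """Rank witnesses by closeness to the base text (closest first)."""
    return sorted(witnesses, key=lambda w: collation_score(base, w))

base = "in the beginning was the word"
ranked = rank_witnesses(base, ["in beginning was word", base])
```

Ranking witnesses against a selected base text is the reduced task the study mentions; the baseless case repeats this with every document as base in turn.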
Chapters and verses with their metadata form a meta-model, which is represented using frames and published on a web mashup. An XML-based annotation and visualization system has been developed, equipped with user interfaces for creating static and dynamic metadata and annotating chapters’ contents according to user-selected semantics, as well as templates for publishing the generated knowledge on the Internet. The system has been applied to the Quran, and the result obtained shows that multiple perspectives of information modeling can be successfully applied to religious texts, in order to support analysis, understanding, and retention of the texts. KW - Wissensrepräsentation KW - Wissensmanagement KW - Content Management KW - XML KW - Koran KW - Knowledge representation KW - Meta-model KW - Frames KW - XML model KW - Knowledge Management KW - Content Management KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65737 ER - TY - INPR A1 - Schmidt, Karin Stella T1 - Zur Musik Mesopotamiens. Erste Ergänzung (2011) T1 - Mesopotamian Music. 
First Supplement N2 - Literaturzusammenstellung zur Musik Mesopotamiens: zu Musiktheorie, Notenschriften, Instrumentenkunde, Aufführungspraxis in Sumer, Akkad, Babylonien, Assyrien N2 - Literature on the music of Mesopotamia and related areas KW - Mesopotamien KW - Musik KW - Musikgeschichte KW - Sumer KW - Akkad KW - Hurritisch KW - Musikinstrumente KW - Notenschriften KW - Instrumentenkunde KW - Musiktheorie KW - Music KW - Mesopotamia KW - Sumer KW - Akkad KW - Musical Instruments KW - Music Theory KW - Hurrian KW - History of Music Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-65169 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - A Rule-based Statistical Classifier for Determining a Base Text and Ranking Witnesses In Textual Documents Collation Process N2 - Given a collection of diverging documents about some lost original text, any person interested in the text would try reconstructing it from the diverging documents. Whether it is eclecticism, stemmatics, or copy-text, one is expected to explicitly or indirectly select one of the documents as a starting point or as a base text, which could be emended through comparison with the remaining documents, so that a text that could be designated as the original document is generated. Unfortunately the process of giving priority to one of the documents, also known as witnesses, is a subjective approach. In fact even Cladistics, which could be considered as a computer-based approach to implementing stemmatics, does not require or recommend that users select a certain witness as a starting point for the process of reconstructing the original document. In this study, a computational method using a rule-based Bayesian classifier is used to assist text scholars in their attempts at reconstructing a non-existing document from some available witnesses. The method developed in this study consists of successively selecting a base text and collating it with the remaining documents. 
Each completed collation cycle stores the selected base text and its closest witness, along with a weighted score of their similarities and differences. At the end of the collation process, the witness selected most often by the majority of base texts is considered the probable base text of the collection. Witnesses’ scores are weighted using a weighting system based on the effects of types of textual modifications on the process of reconstructing original documents. Users have the possibility to select between baseless and base text collation. If a base text is selected, the task is reduced to ranking the witnesses with respect to the base text; otherwise a base text as well as a ranking of the witnesses with respect to it are computed and displayed on a histogram. KW - Textvergleich KW - Text Mining KW - Gothenburg Modell KW - Bayes-Klassifikator KW - Textual document collation KW - Base text KW - Gothenburg model KW - Bayesian classifier KW - Textual alterations weighting system Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-57465 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Design and Implementation of Architectures for Interactive Textual Documents Collation Systems N2 - One of the main purposes of textual documents collation is to identify a base text or the closest witness to the base text, by analyzing and interpreting differences, also known as types of changes, that might exist between those documents. Based on this fact, it is reasonable to argue that explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process. The identification could be carried out by an interpretation module after alignment has taken place. Unfortunately existing collation software such as CollateX and Juxta’s collation engine do not have interpretation modules. 
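The base-text election loop described above (use each document as base in turn, record its closest witness, and elect the witness chosen most often) can be sketched like this. The similarity measure is `difflib`'s token-level ratio, a simple stand-in for the study's weighted score of similarities and differences:

```python
from collections import Counter
from difflib import SequenceMatcher

def closeness(a, b):
    """Token-level similarity in [0, 1]; a stand-in for the weighted score."""
    return SequenceMatcher(a=a.split(), b=b.split()).ratio()

def probable_base_text(documents):
    """Each document serves as base text once; its closest witness receives
    a vote. The witness with the most votes is the probable base text."""
    votes = Counter()
    for i, base in enumerate(documents):
        witnesses = [d for j, d in enumerate(documents) if j != i]
        votes[max(witnesses, key=lambda w: closeness(base, w))] += 1
    winner, _ = votes.most_common(1)[0]
    return winner

# Invented toy witnesses: the first reading is closest to the other two.
docs = ["a b c d e", "a b c d f", "a b c x y"]
elected = probable_base_text(docs)
```

With real witnesses, `closeness` would be replaced by the weighted scoring of textual modifications that the study describes.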
In fact, they implement the Gothenburg model [1] for the collation process, which does not include an interpretation unit. Currently, neither CollateX nor Juxta's collation engine distinguishes between the types of changes in its critical apparatus, and neither offers statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves on the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Moreover, two architectures implementing the model in order to solve the problem of interactive collation are discussed. Each architecture uses the CollateX library and provides, on the one hand, preprocessing functions for transforming input documents into the CollateX input format and, on the other hand, a post-processing module for enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes and for linking collated source documents with the collation results are introduced. KW - Softwarearchitektur KW - Textvergleich KW - service based software architecture KW - service brokerage KW - interactive collation of textual variants KW - Gothenburg model of collation process Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-56601 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Assisting Understanding, Retention, and Dissemination of Religious Texts Knowledge with Modeling, and Visualization Techniques: The Case of The Quran N2 - Learning a book in general involves reading it, underlining important words, adding comments, summarizing some passages, and marking up some text or concepts. Once deeper understanding is achieved, one would like to organize and manage one's knowledge in such a way that it can be easily remembered and efficiently transmitted to others.
In this paper, books organized into chapters consisting of verses are considered as the source of knowledge to be modeled. The knowledge model consists of verses with their metadata and semantic annotations. The metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which is published on a web Mashup. The meta-model, with links between its elements, constitutes a knowledge base. An XML-based annotation system that breaks down the learning process into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata and for annotating chapters' contents according to user-selected semantics, as well as templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of the knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the results obtained show that multiple perspectives of information modeling can be successfully applied to religious texts. It is hoped that this short ongoing study will motivate others to devise and offer software systems for cross-religion learning. KW - Wissensmanagement KW - Koran KW - Knowledge Modeling KW - Meta-model KW - Knowledge Management KW - Content Management KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55927 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Assisting Analysis and Understanding of Quran Search Results with Interactive Scatter Plots and Tables N2 - The Quran is the holy book of Islam, consisting of 6236 verses divided into 114 chapters called suras. Many verses are similar or even identical. Searching for similar texts (e.g. verses) can return thousands of verses, which, when displayed completely or partly as a textual list, make analysis and understanding difficult and confusing.
Moreover, it would be visually impossible to instantly grasp the overall distribution of the retrieved verses in the Quran. As a consequence, reading and analyzing the verses would be tedious and unintuitive. In this study, a combination of interactive scatter plots and tables has been developed to assist analysis and understanding of the search results. Retrieved verses are clustered by chapter, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and determine the places of revelation of the verses. Users visualize the complete result, can select a region of the plot to zoom in, and can click on a marker to display a table containing the verses with their English translation side by side. KW - Text Mining KW - Visualisierung KW - Koran KW - Information Visualization KW - Visual Text Mining KW - Scatter Plot KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-55840 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran N2 - Computational categorization of the Quran's chapters has mainly been confined to determining the chapters' places of revelation. However, this broad classification is not sufficient for effectively and thoroughly understanding and interpreting the Quran. The chronology of revelation would improve not only comprehension of the philosophy of Islam, but also the ease of implementing and memorizing its laws and recommendations. This paper attempts to estimate the chapters' possible dates of revelation from their lexical frequency profiles. A hybrid statistical classifier, consisting of stemming and clustering algorithms for comparing the lexical frequency profiles of chapters and deriving dates of revelation, has been developed. The classifier is trained on chapters with known dates of revelation.
It then classifies chapters with uncertain dates of revelation by computing their proximity to the training chapters. The results reported here indicate that the proposed methodology yields usable estimates of the dates of revelation of the Quran's chapters based on their lexical contents. KW - Text Mining KW - Maschinelles Lernen KW - text categorization KW - Bayesian classifier KW - distance-based classifier KW - Quran Y1 - 2011 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-54712 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Reference Architecture, Design of Cascading Style Sheets Processing Model N2 - The technique of using Cascading Style Sheets (CSS) to format and present structured data is called a CSS processing model. For instance, a CSS processing model for XML documents describes the steps involved in formatting and presenting XML documents on screen or on paper. Many software applications such as browsers and XML editors have their own CSS processing models, which are part of their rendering engines. Each browser renders CSS layout differently according to its own CSS processing model; as a result, the support of CSS features is inconsistent. Some browsers support more CSS features than others, and the rendering itself varies. Moreover, the W3C standards are not even adhered to by some browsers, such as Internet Explorer. Test suites and other hacks and filters cannot definitively solve these problems, because such solutions are temporary and fragile. To mitigate this inconsistency and the browser compatibility issues with respect to CSS, a reference CSS processing model is needed. By extension, it could even allow interoperability across CSS rendering engines. A reference architecture would provide a common software architecture and common interfaces, and would facilitate refactoring, reuse, and automated unit testing. In [2] a reference architecture for browsers has been proposed.
However, this reference architecture is a macro reference model that does not consider the individual components of rendering and layout engines separately. In this paper, an attempt to develop a reference architecture for CSS processing models is discussed. In addition, the rendering and layout engines of the Vex editor [3], as well as an extended version of the editor used in the TextGrid project [5], are presented in order to validate the proposed reference architecture. KW - Cascading Style Sheets KW - XML KW - Softwarearchitektur KW - CSS KW - XML KW - Processing Model KW - Reference Architecture Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51328 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Understanding the Vex Rendering Engine N2 - The Visual Editor for XML (Vex) [1], used by TextGrid [2] and other applications, has rendering and layout engines. The layout engine is well documented, but the rendering engine is not. This lack of documentation has made refactoring and extending the editor hard and tedious. For instance, many CSS2.1 and upcoming CSS3 properties have not been implemented. Software developers in projects such as TextGrid that use Vex would like to update its CSS rendering engine in order to provide advanced user interfaces and to support different document types. In order to minimize the effort of extending Vex's functionality, I found it beneficial to write basic documentation about Vex's software architecture in general and its CSS rendering engine in particular. The documentation is mainly based on architectural layered diagrams. Layered diagrams can help developers understand a program's source code more quickly and easily in order to alter it and fix errors. This paper is written to provide direct support for exploring and comprehending the Vex source code. It discusses Vex's software architecture.
The organization of the packages that make up the software, the architecture of its CSS rendering engine, and an algorithm explaining the working principle of the rendering engine are described. KW - Cascading Style Sheets KW - Softwarearchitektur KW - CSS KW - Processing model KW - Software architecture KW - Software design Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-51333 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Doing Webservices Composition by Content-based Mashup: Example of a Web-based Simulator for Itinerary Planning N2 - Webservices composition is traditionally carried out using composition technologies such as the Business Process Execution Language (BPEL) [1] and the Web Service Choreography Interface (WSCI) [2]. Composition technology involves the processes of web service discovery, invocation, and composition. However, these technologies are neither easy nor flexible enough, because they are mainly developer-centric. Moreover, the majority of websites have not yet entered the world of web services, although they have very important and useful information to offer. Is it because they have not understood the usefulness of web services, or is it because of the costs? Whatever the answers to these questions may be, time and money are definitely required in order to create and offer web services. To avoid these expenditures, wrappers [7] that automatically generate webservices from websites would be a cheaper and easier solution. Mashups offer a different way of doing webservices composition. In a web environment, a Mashup is a web application that brings together data from several sources using webservices, APIs, wrappers, and so on, in order to create an entirely new application that was not provided before. This paper first presents an overview of Mashups and the process of web service invocation and composition based on Mashups, and then describes an example of a web-based simulator for a navigation system in Germany.
KW - Mashup KW - Wrapper KW - Mashup KW - Webservice Composition KW - Wrappers Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-50036 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Java Web Frameworks Which One to Choose? N2 - This article discusses web frameworks that are available to a software developer in the Java language. It introduces the MVC paradigm and some frameworks that implement it. The article presents an overview of the Struts, Spring MVC, and JSF frameworks, as well as guidelines for selecting one of them as a development environment. KW - Java Frameworks KW - MVC KW - Struts KW - Spring KW - JSF Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49407 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Empirical Study on Screen Scraping Web Service Creation: Case of Rhein-Main-Verkehrsverbund (RMV) N2 - The Internet is the biggest database that science and technology have ever produced. The World Wide Web is a large repository of information that cannot be used for automation by many applications due to its limited target audience. One of the solutions to the automation problem is to develop wrappers. Wrapping is a process whereby unstructured extracted information is transformed into a more structured format such as XML, which can be provided as a webservice to other applications. A web service is a web page whose content is well structured so that a computer program can consume it automatically. This paper describes the steps involved in constructing wrappers manually in order to automatically generate web services.
KW - HTML KW - XML KW - Wrapper KW - Web service KW - HTML KW - XML KW - Wrapper KW - Web service Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49396 ER - TY - INPR A1 - Nassourou, Mohamadou T1 - Markup overlap: Improving Fragmentation Method N2 - "Overlapping" is a common word used to describe documents whose structural dimensions cannot be adequately represented using a tree structure, for instance a quotation that starts in one verse and ends in another. The problem of overlapping hierarchies is a recurring one that has been addressed by a variety of approaches. There are XML-based solutions as well as non-XML ones. The XML-based solutions are multiple documents, empty elements, fragmentation, out-of-line markup, JITT, and BUVH; the non-XML approaches comprise CONCUR/XCONCUR, MECS, LMNL, etc. This paper briefly presents the state of the art in overlapping hierarchies and introduces two variations on the TEI fragmentation markup that have several advantages. KW - XML KW - Überlappung KW - Fragmentierung KW - XML KW - Overlapping KW - Fragmentation Y1 - 2010 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-49084 ER - TY - INPR A1 - Wolf, Norbert Richard T1 - Die Darwinsche Theorie und die Sprachwissenschaften N2 - No abstract available Y1 - 1989 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-41739 ER - TY - INPR A1 - Dandekar, Thomas T1 - Why are nature's constants so fine-tuned? The case for an escalating complex universe N2 - Why is our universe so fine-tuned? In this preprint we argue that this is not a strange accident, but that fine-tuned universes can be considered exceedingly large if one counts the number of observable different states (i.e. one aspect of the more general preprint http://www.opus-bayern.de/uni-wuerzburg/volltexte/2009/3353/).
Looking at parameter variation for the same set of physical laws, simple and complex processes (including life) and worlds in a multiverse are compared in simple examples. Next, the anthropocentric principle is extended, as many conditions that are generally given an anthropocentric interpretation in fact simply ensure a large space of different system states. In particular, the observed over-tuning beyond the level required for our existence is explainable by these system considerations. More formally, the state spaces of different systems become measurable and comparable by looking at their output behaviour. We show that highly interacting processes are more complex than Chaitin complexity, which denotes processes not compressible to shorter descriptions (Kolmogorov complexity). These complexity considerations help to better study and compare different processes (programs, living cells, environments, and worlds), including their dynamic behaviour, and can be used for model selection in theoretical physics. Moreover, the large size (in terms of different states) of a world allowing complex processes, including life, can be determined in a model calculation by applying discrete histories from quantum spin-loop theory. Nevertheless, much remains to be done; hopefully this preprint stimulates further efforts in this area. N2 - Dieses Preprint vertieft einen Aspekt des Preprints http://www.opus-bayern.de/uni-wuerzburg/volltexte/2009/3353/, nämlich die Balance zwischen den Konstanten für unsere Naturgesetze. Die Frage nach einer solchen Balance entsteht nur, wenn man sich ein Multiversum mit vielen alternativen Universen mit anderen Gewichten für die Naturkonstanten vorstellt und dann feststellt, dass diese gerade in unserem Universum optimal für Leben und überhaupt für komplexe, selbstorganisierende Strukturen eingestellt sind (sogenanntes Fine-Tuning). Dies wird häufig mit dem anthropozentrischen Prinzip erklärt.
Dies erklärt aber beispielsweise nicht, warum dieses Fine-Tuning noch deutlich feiner und genauer eingestellt ist, als es für die Existenz eines Beobachters nötig wäre. Wir zeigen dagegen, dass unser Universum besonders komplex ist und einen sehr großen Zustandsraum hat und dass Bedingungen, die eine hohe Komplexität erlauben, auch einen Beobachter und komplexe Prozesse wie Leben ermöglichen. Allgemein nimmt ein besonders komplexer Zustandsraum den Löwenanteil aller Alternativen ein. Unsere Komplexitätsbetrachtung kann auf verschiedenste Prozesse (Welten, Umwelten, lebende Zellen, Computerprogramme) angewandt werden, hilft bei der Modellauswahl in der theoretischen Physik (Beispiele werden gezeigt) und kann auch direkt ausgerechnet werden; dies wird für eine Modellrechnung zur Quantenschleifentheorie durchgeführt. Dennoch bleibt hier noch viel weitere Arbeit zu leisten; das Preprint kann hier nur einen Anstoß liefern. KW - Natur KW - Naturgesetz KW - Beobachter KW - Kolmogorov-Komplexität KW - Berechnungskomplexität KW - Fundamentalkonstante KW - Nature constants KW - complexity KW - observer KW - fine-tuning KW - multiverse Y1 - 2008 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-34488 ER - TY - INPR A1 - Dandekar, Thomas T1 - Some general system properties of a living observer and the environment he explores N2 - In a nice essay published in Nature in 1993, the physicist Richard Gott III started from a human observer and drew a number of witty conclusions about our future prospects, giving estimates for the remaining existence of the Berlin Wall, the human race, and all the rest of the universe. In the same spirit, we derive implications for "the meaning of life, the universe and all the rest" from a few principles. Adams' absurd answer "42" teaches the lesson "garbage in, garbage out", or suggests that the question is not computable.
We show that the experience of "meaning" and the ability to decide fundamental questions that cannot be decided by formal systems imply central properties of life: ever higher levels of internal representation of the world and an escalating tendency to become more complex. An observer "collecting observations" and three measures of complexity are examined. A theory of living systems is derived, focussing on their internal representation of information. Living systems are more complex than Kolmogorov complexity ("life is NOT simple") and overcome decision limits (Gödel's theorem) for formal systems, as illustrated for the cell cycle. Only a world with very finely tuned environments allows life. Such a world is itself rather complex and hence exceedingly large in its space of different states; a living observer thus has a high probability of residing in a complex and fine-tuned universe. N2 - Dieser Aufsatz ist ein Preprint und Discussion Paper und versucht, ähnlich wie ein hervorragendes Beispiel eines Physikers, Richard Gott III (1993 in Nature veröffentlicht), mit einfachen Grundannahmen sehr generelle Prinzipien für uns abzuleiten. In meinem Aufsatz sind das insbesondere Prinzipien für das Beobachten, für die Existenz eines Beobachters und sogar für die Existenz unserer komplexen Welt, die Fortentwicklung von Leben, die Entstehung von Bedeutung und das menschliche Entscheiden von Grundlagenfragen. Aufs Erste kann ein so weitgehendes Anliegen nicht wirklich vollständig und akkurat gelingen; der Aufsatz möchte deshalb auch nur eine amüsante Spekulation sein. Exakte (und bescheidenere) Teilaussagen werden später auch nach Peer-Review veröffentlicht werden. KW - Komplex KW - Entscheidung KW - Natürliche Auslese KW - Evolution KW - Bedeutung KW - Komplexität KW - Gödel KW - Entscheidungen KW - complexity KW - decision KW - evolution KW - selection KW - meaning Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:20-opus-33537 ER -