This thesis approaches the language corpus from two angles. The technical part deals with the preparation of texts for the German-Czech corpus DeuCze, describing the workflow from the digitization of the books to the creation of well-formed and valid XML files. These files are segmented down to sentence level and thus allow the texts of the two compared languages to be displayed in parallel, segment by segment. The analytical part is devoted to the linguistic analysis of thematic development within a selected text. The outcome is therefore both the prepared files for the corpus and an analysis of the development of sub-themes.
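As a rough illustration of this sentence-level segmentation, the sketch below builds two tiny well-formed XML files and pairs their segments for parallel display; the element and attribute names and the toy sentences are assumptions, not the actual DeuCze schema.

```python
# Minimal sketch, assuming a simple schema; not the actual DeuCze format.
import re
import xml.etree.ElementTree as ET

def segment(text, lang):
    """Wrap each sentence of a digitized text in its own numbered element."""
    root = ET.Element("text", {"lang": lang})
    # Naive splitter; real corpus preparation would use a proper tokenizer.
    for n, sentence in enumerate(re.split(r"(?<=[.!?])\s+", text.strip()), 1):
        el = ET.SubElement(root, "s", {"n": str(n)})
        el.text = sentence
    return root

de = segment("Der Hund schläft. Die Katze wacht.", "de")
cs = segment("Pes spí. Kočka bdí.", "cs")
# Parallel display: segments of both languages are paired by their number.
for s_de, s_cs in zip(de, cs):
    print(s_de.get("n"), s_de.text, "|", s_cs.text)
```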
Design and Implementation of Architectures for Interactive Textual Documents Collation Systems
(2011)
One of the main purposes of textual document collation is to identify a base text, or the witness closest to the base text, by analyzing and interpreting the differences, also known as types of changes, that might exist between those documents. It is therefore reasonable to argue that the explicit identification of types of changes such as deletions, additions, transpositions, and mutations should be part of the collation process; this identification could be carried out by an interpretation module after alignment has taken place. Unfortunately, existing collation software such as CollateX and Juxta's collation engine lacks interpretation modules: both implement the Gothenburg model [1] of the collation process, which does not include an interpretation unit. Currently, neither CollateX nor Juxta's collation engine distinguishes between the types of changes in its critical apparatus, and neither offers statistics about those changes. This paper presents a model for both integrated and distributed collation processes that improves on the Gothenburg model. The model introduces an interpretation component for computing and distinguishing between the types of changes that documents could have undergone. Two architectures implementing the model to solve the problem of interactive collation are discussed as well. Each architecture uses the CollateX library and provides, on the one hand, preprocessing functions for transforming input documents into CollateX's input format and, on the other hand, a post-processing module enabling interactive collation. Finally, simple algorithms for distinguishing between types of changes and for linking collated source documents with the collation results are introduced.
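As a hedged sketch of what such an interpretation module might do after alignment, the following code classifies the differences between a base text and a witness as deletions, additions, mutations, or transpositions. Python's difflib serves as a stand-in aligner; this is not CollateX's or Juxta's API, nor the paper's actual algorithm.

```python
from difflib import SequenceMatcher

def classify_changes(base_tokens, witness_tokens):
    """Label aligned differences as deletion, addition, mutation, or transposition."""
    changes = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(
            a=base_tokens, b=witness_tokens).get_opcodes():
        if tag == "delete":
            changes.append(("deletion", base_tokens[i1:i2]))
        elif tag == "insert":
            changes.append(("addition", witness_tokens[j1:j2]))
        elif tag == "replace":
            changes.append(("mutation", base_tokens[i1:i2], witness_tokens[j1:j2]))
    # Material deleted in one place and added in another is reinterpreted
    # as a single transposition.
    deleted = {tuple(c[1]) for c in changes if c[0] == "deletion"}
    added = {tuple(c[1]) for c in changes if c[0] == "addition"}
    moved = deleted & added
    result = []
    for c in changes:
        if c[0] == "deletion" and tuple(c[1]) in moved:
            continue
        if c[0] == "addition" and tuple(c[1]) in moved:
            result.append(("transposition", c[1]))
        else:
            result.append(c)
    return result

print(classify_changes("the quick brown fox".split(),
                       "the brown quick fox".split()))
```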
Learning a book generally involves reading it, underlining important words, adding comments, summarizing passages, and marking up text or concepts. Once deeper understanding is achieved, one would like to organize and manage this knowledge in such a way that it can be easily remembered and efficiently transmitted to others. In this paper, books organized into chapters consisting of verses are considered as the source of knowledge to be modeled. The knowledge model consists of verses with their metadata and semantic annotations; the metadata represent the multiple perspectives of knowledge modeling. Verses with their metadata and annotations form a meta-model, which is published on a web Mashup. The meta-model, with links between its elements, constitutes a knowledge base. An XML-based annotation system that breaks the learning process down into specific tasks helps construct the desired meta-model. The system is made up of user interfaces for creating metadata, annotating chapters' contents according to user-selected semantics, and templates for publishing the generated knowledge on the Internet. The proposed software system improves comprehension and retention of the knowledge contained in religious texts through modeling and visualization. The system has been applied to the Quran, and the results show that multiple perspectives of information modeling can be successfully applied to religious texts. It is hoped that this short ongoing study will motivate others to devise and offer software systems for cross-religion learning.
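A minimal sketch of the verse-centred meta-model follows; the field names (metadata, annotations, and so on) are illustrative assumptions, not the system's actual schema.

```python
# Illustrative data structure for verses with metadata and annotations.
from dataclasses import dataclass, field

@dataclass
class Verse:
    chapter: int
    number: int
    text: str
    metadata: dict = field(default_factory=dict)     # multiple modeling perspectives
    annotations: list = field(default_factory=list)  # user-selected semantics

knowledge_base = {}  # meta-model entries keyed by (chapter, verse)

def annotate(verse, label, span):
    """One task of the broken-down learning process: mark up a span of text."""
    verse.annotations.append({"label": label, "span": span})
    knowledge_base[(verse.chapter, verse.number)] = verse

v = Verse(chapter=1, number=1, text="In the name of God, the Most Gracious...")
v.metadata["theme"] = "opening"          # hypothetical metadata perspective
annotate(v, "invocation", (0, 18))
```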
This thesis contributes to the conversation-linguistic analysis of literary dialogue. Since scholarship regards Theodor Fontane as a writer who attaches particular importance to conversation and its linguistic design, his last great novel Der Stechlin is especially attractive from a linguistic point of view: in it, nineteenth-century society is strikingly characterized through the medium of conversation. A theoretical part first clarifies what distinguishes literary dialogue from everyday conversation and examines the historical context, i.e. the culture of politeness and conversation in aristocratic circles of the nineteenth century. On this basis, the core of the thesis is a detailed analysis of three dialogue excerpts from the novel: an official visit, a table conversation, and a gossip conversation among acquaintances.
The Quran, the holy book of Islam, consists of 6236 verses divided into 114 chapters called suras. Many verses are similar or even identical. Searching for similar texts (e.g., verses) can return thousands of hits which, displayed completely or partly as a textual list, make analysis and understanding difficult and confusing. Moreover, it is visually impossible to figure out at a glance the overall distribution of the retrieved verses in the Quran, so reading and analyzing the verses becomes tedious and unintuitive. In this study a combination of interactive scatter plots and tables has been developed to support analysis and understanding of search results. Retrieved verses are clustered by chapter, and a weight is assigned to each cluster according to the number of verses it contains, so that users can visually identify the most relevant areas and figure out the places of revelation of the verses. Users see the complete result, can select a region of the plot to zoom into, and can click on a marker to display a table containing the verses side by side with an English translation.
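The following sketch illustrates the weighting idea with made-up search hits: verses are grouped by chapter, and each chapter's marker is sized by the number of hits it contains (using matplotlib; the data and styling are assumptions, not the study's actual plots).

```python
# Weighted-cluster view over fabricated search results.
from collections import Counter
import matplotlib.pyplot as plt

# (chapter, verse) pairs returned by some similarity search (assumed data).
hits = [(2, 30), (2, 34), (2, 125), (2, 286), (7, 11), (7, 12), (20, 116)]

weights = Counter(chapter for chapter, _ in hits)
chapters = sorted(weights)
plt.scatter(chapters, [weights[c] for c in chapters],
            s=[80 * weights[c] for c in chapters])  # marker area ~ cluster weight
plt.xlabel("Chapter (sura)")
plt.ylabel("Number of retrieved verses")
plt.title("Distribution of search hits across the Quran")
plt.show()
```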
A Knowledge-based Hybrid Statistical Classifier for Reconstructing the Chronology of the Quran
(2011)
Computational categorization of the Quran's chapters has mainly been confined to determining the chapters' places of revelation. This broad classification, however, is not sufficient to understand and interpret the Quran effectively and thoroughly. A chronology of revelation would improve not only comprehension of the philosophy of Islam, but also the ease of implementing and memorizing its laws and recommendations. This paper attempts to estimate possible dates of revelation of the chapters through their lexical frequency profiles. A hybrid statistical classifier consisting of stemming and clustering algorithms for comparing lexical frequency profiles of chapters and deriving dates of revelation has been developed. The classifier is trained on chapters with known dates of revelation; it then classifies chapters with uncertain dates by computing their proximity to the training ones. The results reported here indicate that the proposed methodology yields usable estimates of the dates of revelation of the Quran's chapters based on their lexical contents.
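A hedged sketch of the underlying idea: each chapter is reduced to a lexical frequency profile, and an undated chapter receives the period of the most similar training chapter. The trivial suffix-stripping stemmer and the toy training data below are placeholders, not the paper's actual algorithms.

```python
# Nearest-profile classification over lexical frequency profiles.
from collections import Counter
from math import sqrt

def profile(text):
    stems = [w.lower().rstrip("s") for w in text.split()]  # placeholder stemmer
    return Counter(stems)

def cosine(p, q):
    dot = sum(p[t] * q[t] for t in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm

training = {  # chapters with known dates of revelation (toy stand-ins)
    "early Meccan": profile("warning signs judgment resurrection"),
    "Medinan": profile("law community contracts believers"),
}

def classify(text):
    """Assign the revelation period of the closest training profile."""
    p = profile(text)
    return max(training, key=lambda label: cosine(p, training[label]))

print(classify("the believers wrote their contracts according to law"))
```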
Since 1985, the Universitätsbibliothek Würzburg has regularly hosted Werkstattgespräche (workshop talks) with authors of contemporary literature. These author readings are organized by the Universitätsverbund and the Institut für deutsche Philologie. This article describes the concept of the literary Werkstattgespräche and looks back at the first five readings:
- Manfred Bieler, 27 November 1985
- Walter Kempowski, 11 December 1985
- Reiner Kunze, 15 January 1986
- Leonie Ossowski, 29 January 1986
- Horst Bienek, 19 February 1986
The technique of using Cascading Style Sheets (CSS) to format and present structured data is called a CSS processing model. A CSS processing model for XML documents, for instance, describes the steps involved in formatting and presenting XML documents on screens or on paper. Many software applications, such as browsers and XML editors, have their own CSS processing models as part of their rendering engines. Each browser renders CSS layout differently according to its own processing model, which leads to inconsistent support of CSS features: some browsers support more CSS features than others, and the rendering itself varies. Moreover, some browsers, such as Internet Explorer, do not even adhere to the W3C standards. Test suites and assorted hacks and filters cannot solve these problems definitively, because such solutions are temporary and fragile. To mitigate these inconsistencies and browser compatibility issues, a reference CSS processing model is needed; by extension it could even enable interoperability across CSS rendering engines. A reference architecture would provide a common software architecture and common interfaces, and would facilitate refactoring, reuse, and automated unit testing. A reference architecture for browsers has been proposed in [2], but it is a macro reference model that does not consider the individual components of rendering and layout engines separately. This paper discusses an attempt to develop a reference architecture for CSS processing models. In addition, the rendering and layout engines of the Vex editor [3], as well as an extended version of the editor used in the TextGrid project [5], are presented in order to validate the proposed reference architecture.
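One thing a reference processing model can pin down is the sequence of stages and their interfaces, so that engines become comparable, testable, and swappable. The five-stage decomposition sketched below is an assumed, typical pipeline for illustration, not the paper's actual reference architecture.

```python
# Skeleton of a staged CSS processing model with a common interface.
from abc import ABC, abstractmethod

class Stage(ABC):
    """Interface every stage of the processing model implements."""
    @abstractmethod
    def run(self, data): ...

class ParseCSS(Stage):        # stylesheet text -> rule set
    def run(self, data): ...

class MatchSelectors(Stage):  # rule set + document tree -> matched declarations
    def run(self, data): ...

class ComputeStyles(Stage):   # cascade, inheritance, defaults -> computed values
    def run(self, data): ...

class Layout(Stage):          # computed values -> positioned box tree
    def run(self, data): ...

class Paint(Stage):           # box tree -> drawing operations
    def run(self, data): ...

PIPELINE = [ParseCSS(), MatchSelectors(), ComputeStyles(), Layout(), Paint()]

def process(data):
    """A conforming engine runs the stages in this fixed order."""
    for stage in PIPELINE:
        data = stage.run(data)
    return data
```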
The Visual Editor for XML (Vex) [1], used by TextGrid [2] and other applications, has rendering and layout engines. The layout engine is well documented, but the rendering engine is not. This lack of documentation has made refactoring and extending the editor hard and tedious; for instance, many CSS 2.1 and upcoming CSS 3 properties have not been implemented. Software developers in projects such as TextGrid that use Vex would like to update its CSS rendering engine in order to provide advanced user interfaces and to support different document types. To minimize the effort of extending Vex's functionality, I found it beneficial to write basic documentation of Vex's software architecture in general and of its CSS rendering engine in particular. The documentation is mainly based on architectural layered diagrams; layered diagrams help developers understand a program's source code faster and more easily in order to alter it and fix errors. This paper is written to provide direct support for exploration in the process of comprehending the Vex source code. It discusses Vex's software architecture: the organization of the packages that make up the software, the architecture of its CSS rendering engine, and an algorithm explaining the working principle of that rendering engine.
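As an illustration of the kind of working principle such documentation describes (and emphatically not Vex's actual code), the sketch below shows a layout engine's core idea: recursively turning a styled element tree into boxes with computed positions and heights.

```python
# Toy block-layout pass over a styled element tree.
class Box:
    def __init__(self, element, y, height):
        self.element, self.y, self.height = element, y, height

def layout_block(element, y=0):
    """Lay out an element and its children vertically; return (boxes, height)."""
    boxes, child_y = [], y
    for child in element.get("children", []):
        child_boxes, child_height = layout_block(child, child_y)
        boxes.extend(child_boxes)
        child_y += child_height            # stack children top to bottom
    total = max(element.get("height", 0), child_y - y)
    return [Box(element.get("name"), y, total)] + boxes, total

tree = {"name": "body", "children": [
    {"name": "p", "height": 20}, {"name": "p", "height": 35}]}
boxes, height = layout_block(tree)
for b in boxes:
    print(b.element, "y =", b.y, "height =", b.height)
```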
This thesis investigates the metaphorical concepts underlying heart and hand somatisms in German and Albanian. Drawing on cognitive metaphor theory and holistically oriented cognitive semantics, the selected somatisms are classified semantically into metaphorical concepts. Somatisms belong to the core vocabulary of every language, and since they are doubly anthropocentric, metaphorical concepts based on them are held to be universal in character. Furthermore, the frequency and behavior of the German somatisms in fiction and in press texts are examined on a corpus basis, in order to determine which concepts are alive in the contemporary language and which phraseologisms survive only as fossils in dictionaries. Finally, the stylistic function of selected German somatisms in newspaper texts is analyzed. In Albanian phraseology, the cognitive approach has hardly been the subject of research so far, and contrastive linguistic studies of the German-Albanian language pair in cognitive linguistics and conceptual metaphor are very rare. This thesis therefore aims to partially fill a research gap.
Using the analysis of the construction lassen with a bare infinitive as an example, this contribution demonstrates work with the parallel DeuCze corpus. It carries out a semantic-syntactic analysis of all corpus attestations and, based on their interpretation, proposes a classification of the construction under investigation. The Czech equivalents of lassen + infinitive are then examined, and conclusions are drawn for both languages.
D'Webi stirbt - Zur gegenwärtigen Krise in der Textilindustrie im Wiesental, am Hoch- und Oberrhein
(1986)
No abstract available
Web service composition is traditionally carried out using composition technologies such as the Business Process Execution Language (BPEL) [1] and the Web Service Choreography Interface (WSCI) [2]. Composition involves the processes of web service discovery, invocation, and composition. These technologies, however, are neither easy nor flexible enough, because they are mainly developer-centric. Moreover, the majority of websites have not yet embarked on the world of web services, although they have very important and useful information to offer. Is it because they have not understood the usefulness of web services, or is it because of the costs? Whatever the answers to these questions, time and money are definitely required to create and offer web services. To avoid these expenditures, wrappers [7] that automatically generate web services from websites would be a cheaper and easier solution. Mashups offer a different way of composing web services: in the web environment, a Mashup is a web application that brings together data from several sources using web services, APIs, wrappers, and so on, in order to create an entirely new application that was not available before. This paper first presents an overview of Mashups and of web service invocation and composition based on Mashups, and then describes an example: a web-based simulator for a navigation system in Germany.
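A toy mashup in the sense described above might look like the following: data from two independent sources is combined into something neither offers alone. The endpoints are hypothetical placeholders, not real services.

```python
# Minimal mashup sketch over two hypothetical JSON endpoints.
import json
from urllib.request import urlopen

def fetch_json(url):
    with urlopen(url) as response:
        return json.load(response)

def route_with_weather(origin, destination):
    """Compose a routing source and a weather source into one answer."""
    # Placeholder URLs; a real mashup might first wrap a plain website
    # into a web service if no API exists.
    route = fetch_json(f"https://example.org/route?from={origin}&to={destination}")
    weather = fetch_json(f"https://example.org/weather?city={destination}")
    return {"route": route, "weather_at_arrival": weather}
```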
This article discusses web frameworks that are available to a software developer in the Java language. It introduces the MVC paradigm and some frameworks that implement it. The article presents an overview of the Struts, Spring MVC, and JSF frameworks, as well as guidelines for selecting one of them as a development environment.
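The MVC split these frameworks implement, reduced to a few generic lines; the frameworks themselves are Java and add far more machinery, so this sketch only illustrates the paradigm, not any particular framework's API.

```python
# Bare-bones model-view-controller separation.
class Model:                    # holds application state
    def __init__(self):
        self.items = []

class View:                     # renders state, knows no business logic
    def render(self, model):
        print("Items:", ", ".join(model.items))

class Controller:               # translates user input into model updates
    def __init__(self, model, view):
        self.model, self.view = model, view

    def add_item(self, name):
        self.model.items.append(name)
        self.view.render(self.model)

Controller(Model(), View()).add_item("first entry")
```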
Empirical Study on Screen Scraping Web Service Creation: Case of Rhein-Main-Verkehrsverbund (RMV)
(2010)
The Internet is the biggest database that science and technology have ever produced, yet the World Wide Web is a large repository of information that many applications cannot use for automation, because its content is aimed at human readers. One solution to this automation problem is to develop wrappers. Wrapping is a process whereby extracted unstructured information is transformed into a more structured form such as XML, which can then be provided as a web service to other applications; a web service, in this sense, is a web page whose content is structured well enough for a computer program to consume it automatically. This paper describes the steps involved in constructing wrappers manually in order to generate web services automatically.
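The wrapping step itself can be illustrated in a few lines: the made-up timetable fragment below (not RMV's actual markup) is transformed into structured XML that a generated web service could return.

```python
# Toy wrapper: unstructured HTML rows -> structured XML.
import re
import xml.etree.ElementTree as ET

html = """
<tr><td>09:12</td><td>S8</td><td>Wiesbaden</td></tr>
<tr><td>09:19</td><td>S9</td><td>Hanau</td></tr>
"""

root = ET.Element("departures")
for time, line, dest in re.findall(
        r"<td>(.*?)</td><td>(.*?)</td><td>(.*?)</td>", html):
    dep = ET.SubElement(root, "departure")
    ET.SubElement(dep, "time").text = time
    ET.SubElement(dep, "line").text = line
    ET.SubElement(dep, "destination").text = dest

print(ET.tostring(root, encoding="unicode"))
```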
Overlap is the common term for documents whose structural dimensions cannot be adequately represented using a tree structure, for instance a quotation that starts in one verse and ends in another. The problem of overlapping hierarchies is a recurring one that has been addressed by a variety of approaches, both XML-based and non-XML. The XML-based solutions include multiple documents, empty elements, fragmentation, out-of-line markup, JITT, and BUVH; the non-XML approaches comprise CONCUR/XCONCUR, MECS, LMNL, etc. This paper briefly presents the state of the art in overlapping hierarchies and introduces two variations on the TEI fragmentation markup that have several advantages.
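To make the fragmentation approach concrete: TEI provides a part attribute (on elements such as <l>) with values I, M, and F for the initial, medial, and final pieces of a split element, and a consumer stitches the logical element back together. The sample document below borrows that idea but is purely illustrative, not valid TEI.

```python
# Reassembling a quotation that overlaps verse boundaries.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<text>
  <verse n="1">He said, <quote part="I">truly the promise</quote></verse>
  <verse n="2"><quote part="F">shall be fulfilled</quote>.</verse>
</text>
""")

fragments = []
for q in doc.iter("quote"):
    fragments.append(q.text)
    if q.get("part") == "F":   # the final fragment closes the logical element
        print(" ".join(fragments))
        fragments = []
```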