Project lead: Thomas Weitin
Funding: DFG
The 19th century, in which our current system of scientific disciplinary cultures emerged, was an epoch of collecting and organizing bodies of knowledge that often lay between these emerging disciplines.
Criminal cases served both the law, which used them to teach the individual presentation of evidence in criminal proceedings, and literature, which drew on them to learn the realistic art of narration for a modern mass audience. As the first globally comparative and also most comprehensive collection of criminal cases in the German-speaking world, the Neue Pitaval (1842-1890) played a decisive role in the emergence of a general legal consciousness. In order to reconstruct the inaugural discourse of this collection, it has to be analyzed in all its diversity as a corpus of 540 case studies.
Our project follows an approach based on historical research questions, adapting the different methods of digital corpus analysis to the respective epistemic goal. Semantic progress analyses examine which topics were in vogue and when, and how the stories were framed in terms of legal policy under changing circumstances. Narratological analyses get to the bottom of the narrative patterns that are developed in this process. To distinguish between legal and literary modes of representation, digital methods with varying degrees of context sensitivity are combined.
In stylometric analyses, which operate at the level of sentences and words, different levels of abstraction can be analyzed algorithmically. Narrative analyses, on the other hand, require collaborative annotations of larger sections of text, making questions of automatability themselves appear as hermeneutic problems, which in turn give insight into the research object.
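As an illustration of what such a word-level, algorithmic comparison can look like, the following sketch computes Burrows's Delta over the most frequent words of a small corpus. It is a minimal, hypothetical example in Python (the texts, tokenization, and feature count are placeholders), not the project's actual stylometric pipeline.

```python
from collections import Counter
import math

def burrows_delta(texts, target_tokens, n_features=100):
    """Burrows's Delta between `target_tokens` and each text in `texts`
    (a dict mapping text name -> token list). Illustrative sketch only."""
    counts = {name: Counter(tokens) for name, tokens in texts.items()}
    rel = {name: {w: n / sum(c.values()) for w, n in c.items()}
           for name, c in counts.items()}
    # feature set: the n most frequent words of the whole reference corpus
    total = Counter()
    for c in counts.values():
        total.update(c)
    mfw = [w for w, _ in total.most_common(n_features)]
    # corpus mean and standard deviation of each word's relative frequency
    mean = {w: sum(r.get(w, 0.0) for r in rel.values()) / len(rel) for w in mfw}
    sd = {w: math.sqrt(sum((r.get(w, 0.0) - mean[w]) ** 2
                           for r in rel.values()) / len(rel)) or 1e-9
          for w in mfw}
    z = lambda freqs, w: (freqs.get(w, 0.0) - mean[w]) / sd[w]
    target = Counter(target_tokens)
    target_rel = {w: n / len(target_tokens) for w, n in target.items()}
    # Delta = mean absolute z-score difference over the most frequent words
    return {name: sum(abs(z(target_rel, w) - z(r, w)) for w in mfw) / len(mfw)
            for name, r in rel.items()}

# usage with toy data:
# scores = burrows_delta({"text_a": tokens_a, "text_b": tokens_b}, query_tokens)
```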
Between law and literature, the legal case study was a popular medium of knowledge in the 19th century, in which the normative orientations and the understanding of law of bourgeois society become observable. The genre poetics of this text type is therefore of particular importance. We will thus employ comparative analyses to contrast the Pitaval with other contemporary corpora that can be described as its direct competitors in the media system: the crime stories in the family journal Die Gartenlaube and the crime novellas of the Deutscher Novellenschatz potentially entertained the same audience. By relating global and local thematic conjunctures to the signal strengths determined in classification, and by including the affect intensity of the texts in our analyses, we will also be able to examine aspects of aesthetic response.
Project lead: Prof. Dr. Thomas Weitin
Funding: "LOEWE-Exploration"
German classes are supposed to inspire enthusiasm for literature and teach students to make independent judgments as well as to be empathetic. However, empirical evidence for such effects of literature is lacking. This project aims to examine whether there is a level of emotionality that is optimal for understanding literary texts. It combines emotion-oriented methods of text analysis with the measurement of emotional reactions during reading. The aim is to provide concrete guidance for teaching German at the upper secondary level.
For its bold approach, the TU project "Evidence-based Literary Comprehension in German Classes" will receive around 300,000 euros from the "LOEWE-Exploration" funding program. A total of four university research teams were selected by the Hessian Ministry of Science for unconventional, innovative research; funds totaling around one million euros are available for them.
The project combines emotion-oriented methods of text analysis with the measurement of emotional reactions during reading: eye movements, brain activity, and certain physiological responses are recorded. In addition, questionnaires will be used to capture self-perceived emotionality during reading. The test subjects are high school students. For the analyses and experiments, contemporary literary prose from three thematic areas will be used: ecological crises and sustainability, human rights and international conflicts, and living contemporary history. According to the researchers, this makes it possible to check whether the requirements of teaching German are met using the example of current reading material. The aim is to provide concrete orientation aids for German lessons at the upper secondary level.
The Hessian Minister of Science, Angela Dorn, expressed her delight at the funding of four projects with "LOEWE-Exploration". The researchers were given "the freedom to pursue new, highly innovative research ideas," she said: "With up to 300,000 euros per project for up to two years, they can test an unconventional hypothesis, a radically new approach. Such freedom has become rare in research funding."
Project lead: Prof. Dr. Evelyn Gius
Funding: State of Hesse, "LOEWE-Exploration" funding line
The overarching goal of KatKit is to reduce the complexity of ontology design and to support humanistic reasoning through mathematically informed methods. This reduces the risk of accidentally creating incoherent and inconsistent models of the relevant objects of study, while strengthening the conceptualization and operationalization of research questions in the humanities.
The project aims to support the development of category systems in the Digital Humanities by means of mathematical principles. As a result, the ontology editor KatKit (KATegorien toolKIT) will be developed as a proof of concept that 1) is based on mathematical models ensuring properties such as consistency and coherence of category systems, 2) supports humanistic reasoning, and 3) can be integrated into existing DH toolchains, e.g. by connecting it to the annotation software CATMA. In this project, the problem of category development is approached in an interdisciplinary way by the digital humanities, philosophy, and the formal sciences of mathematics and computer science, using applied category theory.
Information about the project can be found here: Learn more
Project lead: Prof. Dr. Thomas Stäcker (ULB Darmstadt), Prof. Dr. Marcus Müller (TU Darmstadt)
Funding: DFG
The Darmstädter Tagblatt, which was discontinued in 1986, was one of the oldest periodicals and the longest continuously published daily newspaper in the German-speaking world, serving as the most important medium in Darmstadt and the region of southern Hesse. The aim of the first phase is to digitize and linguistically process (parts of) the contents up to 1941.
The digitization process includes image digitization with structural data acquisition, OCR full-text generation, (semi-)manual article separation of selected sequences, automatic identification of person and place names with assignment to authority data where possible, and automatic part-of-speech annotation. As a first use case, a team led by Prof. Müller will use the segmented, OCR-processed, corrected, and linguistically annotated part of the data to conduct a discourse study on the development of the public concept of risk in the period 1850-1915, using the Darmstädter Tagblatt as an example.
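To give a rough idea of what automatic person and place name identification involves, the following minimal sketch runs an off-the-shelf German named-entity and part-of-speech model over an invented sample sentence. It assumes spaCy and its de_core_news_sm model are installed and is purely illustrative; it is not the pipeline used in the project.

```python
import spacy

# assumes: pip install spacy && python -m spacy download de_core_news_sm
nlp = spacy.load("de_core_news_sm")

# invented sample sentence, not taken from the Tagblatt corpus
text = "Der Darmstädter Kaufmann Johann Weber reiste gestern nach Frankfurt."
doc = nlp(text)

# person (PER) and place (LOC) entities found by the model
for ent in doc.ents:
    if ent.label_ in ("PER", "LOC"):
        print(ent.text, ent.label_)

# part-of-speech annotation for every token
for token in doc:
    print(token.text, token.pos_)
```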
On the technical basis of an enhanced version of the collaborative annotation tool WebAnno, this will also involve crowdsourcing, not only to achieve further text improvements but also to motivate contributors to provide annotations. Experience has shown that key search criteria are often entities such as person and place names, but also the coding of grammatical structures. The goal is to develop a corpus of high quality and broad scope. The collaboration between the library and the institute is intended as a prototype of a collaborative structure that productively combines discipline-specific and interdisciplinary interests.
Project by: Prof. Dr. Andrea Rapp
The two established research infrastructures CLARIN-D and DARIAH-DE have joined forces to contribute to a joint research data infrastructure for the humanities in Germany.
During the project period, the cooperation – which was successfully established over the last few years – is to be developed further to ensure that a national research data infrastructure for the humanities, consisting of common technical components and approved procedures, can be offered as of 2021.
Funding: BMBF
Associated partners: Berlin-Brandenburg Academy of Sciences, Eberhard Karls University of Tübingen, Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen, Hamburg Center for Language Corpora, Institute for German Language Mannheim, Otto-Friedrich-University Bamberg / Faculty of Business Informatics and Applied Computer Science, Göttingen State and University Library, Darmstadt University of Technology / Institute of Linguistics and Literature, University of Leipzig, Julius-Maximilians-Universität Würzburg / Department of German Philology / Department of Computational Philology and Modern German Literature History.
Project by: Prof. Dr. Nina Janich. Funded by the Klaus Tschira Foundation.
Based on interviews and text analyses, the project aims to examine when and how communication problems arise between scientists and the fields of science journalism or public relations. The goal is to be able to make specific suggestions for improving communication in this area and, thus, to support junior researchers in publishing their research results.
Project by: Prof. Dr. Thomas Weitin (TU Darmstadt), Ulrik Brandes (ETH Zurich)
Funding: Volkswagen Foundation
The 'Reading at Scale' project is based on the following approach: if hermeneutic and statistical methods each have their own strengths in detailed individual analysis and in dealing with large amounts of data, respectively, then a mixed-methods approach is better suited for the middle level than either method alone. Literary texts and text corpora allow analyses at different levels of resolution – from the characters in individual works to entire literatures – whereby literary studies and literary history traditionally address many research questions at the middle level. The focus of our studies is a historical collection of 86 novellas, published under the title "Der deutsche Novellenschatz" (24 volumes, 1871-1876) by the editors Paul Heyse and Hermann Kurz. We have already prepared this realism-oriented anthology as a TEI/XML corpus, and more similar collections will follow. As it is of medium size, the novella collection is still suitable for individual reading, yet large enough to be promising for statistical analysis. Our text corpus is examined in two dissertations at different levels of operationalization: (1) a network analysis addresses problems of distinction within popular literature, while (2) a comparative study examines the "Deutscher Novellenschatz" as an effective instrument of canonization and as a programmatic attempt at non-narrative literary history. Both project leads contribute individual studies from the perspective of basic methodological research, while an algorithmic subproject investigates concepts of position in network research and a literary studies subproject focuses on problems of validation in digital analysis.
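For readers unfamiliar with the network-analytic vocabulary of "position", the following toy sketch (Python with networkx, using invented character co-occurrence data rather than results from the Novellenschatz corpus) shows the kind of centrality measures such an analysis works with.

```python
import networkx as nx

# toy character co-occurrence network: an edge means two characters
# appear in the same text segment (invented data, for illustration only)
edges = [("Anna", "Bruno"), ("Anna", "Clara"),
         ("Bruno", "Clara"), ("Clara", "David")]
G = nx.Graph(edges)

# two common operationalizations of a node's "position" in the network
print(nx.degree_centrality(G))       # how many direct connections a character has
print(nx.betweenness_centrality(G))  # how often a character bridges other pairs
```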
Publications in the project context
• Brandes, Ulrik, Weitin, Thomas, Päpcke, Simon, Pupynina, Anastasia, Herget, Katharina (2019): Distance measures in a non-authorship context. The effect on the “Deutsche Novellenschatz” (forthcoming).
• Weitin, Thomas (2019): Burrows's Delta und Z-Score-Differenz im Netzwerkvergleich. Analysen zum Deutschen Novellenschatz von Paul Heyse und Hermann Kurz (1871-1876), in: Digitale Literaturwissenschaft. Beiträge des DFG-Symposiums, Fotis Jannidis (ed.), Stuttgart (forthcoming).
• Weitin, Thomas (ed.) (2017): Scalable Reading. Zeitschrift für Literaturwissenschaft und Linguistik, 47.1.
• Weitin, Thomas (2017): Literarische Heuristiken: Die Novelle des Realismus, in: Komplexität und Einfachheit. DFG-Symposion 2015, Albrecht Koschorke (ed.), Stuttgart, pp. 422-442.
• Weitin, Thomas, Herget, Katharina (2017): Falkentopics: Über einige Probleme beim Topic Modeling literarischer Texte, in: Zeitschrift für Literaturwissenschaft und Linguistik, 47.1, pp. 29–48.
• Weitin, Thomas (2016): Heuristik des Wartens. Literatur lesen unter dem Eindruck von big data, in: Warten als Kulturmuster, Julia Kerscher, Xenia Wotschal (ed.), Würzburg, pp. 180-196.
• Weitin, Thomas (2016): Selektion und Distinktion. Paul Heyses und Hermann Kurz' Deutscher Novellenschatz als Archiv, Literaturgeschichte und Korpus, in: Archiv/Fiktionen. Verfahren des Archivierens in Literatur und Kultur des langen 19. Jahrhunderts, Daniela Gretz, Nicolas Pethes (ed.), Freiburg 2016, pp. 385-408.
• Weitin, Thomas, Gilli, Thomas, Kunkel, Nico (2016): Auslegen und Ausrechnen: Zum Verhältnis hermeneutischer und quantitativer Verfahren in den Literaturwissenschaften, in: Zeitschrift für Literaturwissenschaft und Linguistik, 46.1, pp. 103-115.
Corpora
• Weitin, Thomas (2016): Volldigitalisiertes Textkorpus. Der Deutsche Novellenschatz. Paul Heyse, Hermann Kurz (ed.), 24 volumes, 1871-1876. Darmstadt/Konstanz.
• Weitin, Thomas (2018): Volldigitalisiertes Textkorpus. Der Neue Deutsche Novellenschatz. Paul Heyse, Ludwig Laistner (ed.), 24 volumes, 1884-1887. Darmstadt, forthcoming.
Project by: Prof. Dr. Nina Janich
Funding: German Research Foundation (DFG).
In 2013 (and again in 2018), the EU banned a specific group of pesticides that is suspected to be responsible for bee mortality. From the viewpoint of discourse linguistics, the project aims to examine the dispute between agribusiness, environmental organizations, beekeeper associations, political parties, and scientists. It investigates how, in such a controversial public discourse, attributions of knowledge and non-knowledge are verbalized and used to advocate for specific goals or interests.
Project by: Prof. Dr. Andrea Rapp
The innovation objective of the project is the analysis and validation of the potentials of different forms of usage of virtual research environments in the humanities. By means of a process analysis of joint work in a specific field of research in the humanities (using digital research applications), novel applications and methods of collaboration are identified, and steps for their further development are outlined. To this end, 19 international research groups will be introduced to the various digital tools and contents, and their usage practices and research processes within the virtual research environment will be examined. The results of the study serve as reference models for the implementation and design of virtual scientific research projects as well as for best-practice recommendations for the structural and procedural linking of technical, methodological, and content-related working levels.
Funding:
Federal Ministry of Education and Research
Partners:
JGU Mainz, University of Mainz, TU Darmstadt
Project lead: Prof. Dr. Matthias Luserke-Jaqui
This project concludes a long-term, research-intensive endeavor on the cultural history of literature (Francke Verlag).
Project lead: Prof. Dr. Andrea Rapp
Digital methods and contents can be both means and (research) subject of Didactics. However, the diverse, far-reaching, and profound practices of digitally-based research and scientific work established in the Digital Humanities (DH) are no longer covered by the concepts of media didactics and e-learning alone. Therefore, the project aims at developing and testing new concepts of digitality in subject-related didactics. The project results will be documented in an online manual, and they will be incorporated in a specific module “Digitalität als Praxis in den Geisteswissenschaften”, which will be offered in all teacher training programs and beyond.
Funding: TU Darmstadt
Project lead: Prof. Dr. Andrea Rapp
The guest project aims at developing methods and software for the automatic measurement, documentation, and visual analysis of design features on manuscript or book pages. Together with selected subprojects, properties of books (and of units within them) are identified that are relevant to the analysis of movements and their modes – i.e. transfers and thus de- and recontextualizations of knowledge in books and corpora – and that can be detected automatically by means of image processing. The results will be seamlessly integrated into the data infrastructure of the SFB 980.
Funding: German Research Foundation
Partner: Freie Universität Berlin
Project lead: Prof. Dr. Thomas Weitin
Sustainable and high-quality digital applications and operationalizations of (literary) research questions necessarily rest on appropriate and stable corpora. Many canonized classics and works are now freely available on the Internet and can be downloaded from websites such as the Gutenberg-DE project. From a philological and corpus-critical perspective, however, these digital texts are often unreliable: sometimes the underlying text source and edition are not indicated, and the files are often only available in plain TXT format – without formatting or in-depth text markup. The error rates of the OCR software used (i.e. programs for optical character recognition that, for example, extract machine-readable text from PDF files) vary widely, which in turn has a significant influence on the quality of the corpora. Initiatives such as the German Text Archive try to counteract this trend by striving for a historical reference corpus based on strict guidelines and high quality standards (including, among other aspects, the use of first editions).
At the same time, digital literary scholarship also aims to disassociate its analyses and research subjects from the traditional canon of literature – or to extend it. Accordingly, the constant creation and expansion of literary corpora is an integral part of many research projects.
The corpus workflow using the example of the new “Novellenschatz”
In June 2015 – in the scope of the preparations for the workshop “Scalable Reading. Paul Heyses Deutscher Novellenschatz zwischen Einzeltext und Makroanalyse”, under the auspices of Thomas Weitin – the first TEI-XML-Corpus of the “Novellenschatz” was created: a historical collection of 86 novellas, published by Paul Heyse and Hermann Kurz (24 volumes, 1871-1876). This corpus was continually improved and enriched with metadata in order to allow for further research into the popular novella collection of the 19th century.
Since then, the corpus workflow has been continuously expanded and professionalized. The corpora are created by means of a corrected OCR method:
The digital representation of the text (typically PDF files) is first converted into machine-readable text using the Abbyy FineReader software, which is particularly well suited for reading blackletter (Gothic) typefaces. In a second step, the digitized text is manually checked, corrected, and stored in TXT format by specially trained assistants; some corpora are also transferred into a TEI-compliant XML schema.
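As a rough illustration of the final step, the following minimal sketch wraps a corrected plain-text file in a deliberately reduced TEI skeleton. File names, metadata values, and the element selection are placeholders and do not reflect the project's actual TEI schema.

```python
from xml.etree import ElementTree as ET

def txt_to_tei(txt_path, title, author, out_path):
    """Wrap a corrected plain-text novella in a minimal TEI skeleton.
    Element selection and metadata are placeholders, not the project schema."""
    tei = ET.Element("TEI", xmlns="http://www.tei-c.org/ns/1.0")
    header = ET.SubElement(tei, "teiHeader")
    file_desc = ET.SubElement(header, "fileDesc")
    title_stmt = ET.SubElement(file_desc, "titleStmt")
    ET.SubElement(title_stmt, "title").text = title
    ET.SubElement(title_stmt, "author").text = author
    body = ET.SubElement(ET.SubElement(tei, "text"), "body")
    with open(txt_path, encoding="utf-8") as fh:
        # blank lines in the corrected TXT are taken as paragraph breaks
        for para in fh.read().split("\n\n"):
            if para.strip():
                ET.SubElement(body, "p").text = " ".join(para.split())
    ET.ElementTree(tei).write(out_path, encoding="utf-8", xml_declaration=True)

# usage with placeholder file and metadata names:
# txt_to_tei("novelle.txt", "Beispielnovelle", "N. N.", "novelle.xml")
```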
Other corpus-projects
In addition to the "Deutscher Novellenschatz", the "Neuer Deutscher Novellenschatz" by Paul Heyse and Ludwig Laistner (70 novellas in 24 volumes, 1884-1887) was digitized and reviewed as well. We have also started to work on the last missing Novellenschatz collection, the "Novellenschatz des Auslandes", which consists of 57 translated foreign novellas, likewise published by Paul Heyse and Hermann Kurz (14 volumes, 1872-1876). Thus, our novella corpus is almost complete and ready for analysis. In parallel, other historical sources are being prepared and digitized as well, e.g. the extensive correspondence between Paul Heyse and Hermann Kurz (1858-1873, over 700 letters), which was created during the publication process of the "Novellenschatz" collection.
With “Der neue Pitaval”, edited by Julius Eduard Hitzig and Willibald Alexis (Wilhelm Häring) (60 volumes, 1842-1890), we also digitized “a collection of the most interesting international crime stories, from the past to the present”.
The resulting digital corpora are published in the "Deutsches Textarchiv" and made available to researchers as Open Access.
Project publications
- Weitin, Thomas (2016). Volldigitalisiertes Textkorpus. Der Deutsche Novellenschatz. Paul Heyse, Hermann Kurz (ed.), 24 volumes, 1871-1876. Darmstadt/Konstanz.
- Weitin, Thomas (2018). Volldigitalisiertes Textkorpus. Der Neue Deutsche Novellenschatz. Paul Heyse, Ludwig Laistner (ed.), 24 volumes, 1884-1887. Darmstadt (forthcoming).
Further links
- Deutsches Textarchiv. Grundlage für ein Referenzkorpus der neuhochdeutschen Sprache. Berlin-Brandenburgische Akademie der Wissenschaften (ed.), Berlin 2019
- Project Gutenberg. Project Gutenberg Literary Archive Foundation (ed.)
- Projekt Gutenberg-DE. Hille & Partner GbR (ed.)
Project lead: Dr. Sabine Bartsch
The goal of this shared task was to encourage the developers of NLP applications to adapt their tools and resources for the processing of written German discourse in genres of computer-mediated communication (CMC). Examples of CMC genres are chats, forums, wiki talk pages, tweets, blog comments, social networks, SMS and WhatsApp dialogues.
Processing CMC discourse is a desideratum and a relevant task in different research fields and application contexts in the Digital Humanities – e.g.:
- in the context of building, processing and analyzing corpora of computer-mediated communication / social media (chat corpora, news corpora, WhatsApp corpora, …)
- in the context of collecting, processing and analyzing large, genre-heterogeneous web corpora as resources in the field of Language Technology / Data Mining
- in the context of dealing with CMC data in corpus-based analyses on contemporary written language, language variation and language change
- in all research fields beyond linguistics which address social, cultural and educational aspects of social media and CMC technologies using language data from CMC genres
The shared task consisted of two subtasks:
- Tokenization of CMC discourse
- Part-of-speech tagging of CMC discourse
The two subtasks made use of two different data sets:
- CMC data set: a selection of data from different CMC genres (social chat, professional chat, Wikipedia talk pages, blog comments, tweets, WhatsApp dialogues).
- Web corpora data set: a selection of data which represents written discourse from heterogeneous WWW genres. It consists of crawled websites including small portions of CMC discourse (e.g. webpages, blogs, news sites, blog commentary etc.).
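To illustrate why CMC discourse requires adapted tokenization in the first place, here is a minimal rule-based sketch that keeps URLs, @-mentions, hashtags, and emoticons together as single tokens. It is an invented illustration of the problem, not one of the systems evaluated in the shared task.

```python
import re

# token classes that standard newspaper tokenizers tend to split apart
TOKEN_RE = re.compile(r"""
      https?://\S+                   # URLs
    | [@#]\w+                        # @-mentions and hashtags
    | [:;=8][-o*']?[)(\]\[dDpP/]     # western-style emoticons such as :-) or ;D
    | \w+(?:[-']\w+)*                # words, incl. hyphen/apostrophe compounds
    | \S                             # any other non-space character
""", re.VERBOSE)

def tokenize_cmc(text):
    """Very small rule-based tokenizer for German CMC text (illustrative only)."""
    return TOKEN_RE.findall(text)

# invented sample message
print(tokenize_cmc("hey @maria, kommst du morgen? :-) #wochenende http://example.org"))
```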
Project by: Prof. Dr. Andrea Rapp
In contrast to the predominantly chiseled hieroglyphs, the cursive manuscripts represent the everyday writing of Ancient Egypt – written on papyrus, linen, leather, wood, ceramics, plaster or stone, using rush stems and black and red ink. The hieratic script was used for Egypt's different language stages for 3,000 years, before it was in some areas replaced by the demotic script in the middle of the 1st millennium BC. The so-called cursive hieroglyphs are a handwritten, form-oriented rendering of individual hieroglyphs. The exploration of both scripts and of the differences between hieroglyphs and the demotic script is still a desideratum of Egyptology and codicology. The academy project aims to create a digital palaeography – in order to make the scripts accessible for a variety of search options as well as for cooperation with international experts – to present it online, and to provide extensive metadata on all relevant sources. Further partial or special palaeographies are being made available as downloads or book publications. In addition, there are systematic research projects on the cursive scripts, focusing on their origins and development, functional areas, regionality, and datability. Other relevant questions concern, for example, the economics and materiality of writing, the layout of manuscripts, or the identification of individual hands. While modules on information technology are rooted in the field of digital humanities, there are also internships focusing on writing and copying hieratic texts, and on didactics in the field of Egyptology.
Funding:
Union der Deutschen Akademien der Wissenschaften
Partners:
Johannes Gutenberg Universität Mainz – Egyptology, Academy of Sciences and Literature – Mainz, TU Darmstadt – Institute of Linguistics and Literature
Project by: Prof. Dr. Andrea Rapp
This project aims at establishing a generic metadata management system for scientific data, based on an application-oriented metadata description. The implementation is accompanied by users from heterogeneous fields of application. Darmstadt and Mainz are jointly responsible for managing the humanities use case.
Funding: German Research Foundation
Associated partners: TU Dresden – ZIH, KIT – Institute for Data Processing and Electronics, Leibniz Institute of Ecological Urban and Regional Development – Monitoring of Settlement and Open Space Development, RWTH Aachen, Akademie der Wissenschaften und der Literatur Mainz – Digitale Akademie, TU Darmstadt – Institute of Linguistics and Literature
Project lead: Prof. Dr. Nina Janich
Funding: German Research Foundation (DFG).
The project aimed to linguistically examine children's university lectures, children's science formats on television, as well as non-fiction books for children – trying to find out how knowledge is processed for children (language and imagery) and how these non-school education formats differ.
Project lead: Prof. Dr. Andrea Rapp
ePoetics aims to further develop the eHumanities by testing current information technology methods on key texts of the humanities: poetics and aesthetics from 1770 to 1960. As a project partner, TU Darmstadt is particularly interested in creating added value with regard to new insights in the field of development methods and metadata while building up this special corpus – also with regard to automatic language analysis and processing and targeted visualization. In addition, the open design of the corpus and its publication in virtual research environments and infrastructures allow the body of texts to be extended and the scholarly community to be involved in the research work on and with the texts and metadata, which in turn will lead to new scientific results.
Funding: Federal Ministry of Education and Research
Network partners: Institute for Visualization and Interactive Systems (Computer Science) and Institute for Natural Language Processing (Computational Linguistics), University of Stuttgart; Department of Linguistics and Literature, Technical University of Darmstadt
Project lead: Prof. Dr. Daniel Barben (University of Klagenfurt) & Prof. Dr. Nina Janich
Funding: German Research Foundation (DFG) as part of a priority program “Climate Engineering: Risks, Challenges, Opportunities?”
In this interdisciplinary linguistic-political science project, it was examined – in the scope of climate research in general and in the priority program in particular – how (scientific) responsibility is defined and addressed. The focus was on the responsibility of researchers who investigate the opportunities and risks of so-called climate engineering technologies (for example means of storing CO2 in the soil or in the sea, or interventions in the radiation balance of the earth).
Project lead: Prof. Dr. Andrea Rapp
Funding: Federal Ministry of Education and Research
The project draws on the pool of about 500 medieval manuscripts from the Benedictine abbey of St. Matthias in Trier. The goal is to develop, test, and optimize new algorithms that automatically detect the macro- and microstructural elements of a manuscript page and add this information to the metadata of each image. Examples are (metric) data such as page size, number of lines, labels, registers, paratexts, marginalia, the relationship between image and text, and many more. Furthermore, methods for the statistical evaluation of this metadata are developed and tested.
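As a simple illustration of one macrostructural feature mentioned above, the number of written lines on a page can be estimated from a binarized scan with a horizontal projection profile. The following sketch (Python with Pillow and NumPy, thresholds chosen arbitrarily) shows the general idea only and is not one of the algorithms developed in the project.

```python
import numpy as np
from PIL import Image

def count_text_lines(image_path, ink_threshold=128, min_ink_ratio=0.02):
    """Estimate the number of written lines on a scanned page by counting
    bands of rows that contain a minimum share of dark ('ink') pixels.
    Both thresholds are arbitrary and would need tuning per manuscript."""
    page = np.array(Image.open(image_path).convert("L"))  # grayscale image
    ink = page < ink_threshold                            # dark pixels = ink
    row_profile = ink.mean(axis=1)                        # share of ink per row
    is_text_row = row_profile > min_ink_ratio
    # a new line band starts wherever a text row follows a non-text row
    rising_edges = np.count_nonzero(is_text_row[1:] & ~is_text_row[:-1])
    return int(rising_edges + (1 if is_text_row[0] else 0))

# number_of_lines = count_text_lines("page_scan.png")  # hypothetical file name
```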
Partners: Institute for Process Data Processing and Electronics / Karlsruhe Institute of Technology KIT, Competence Center for Electronic Dissemination and Publication Methods in the Humanities at the University of Trier, Municipal Library and City Archive Trier
Project leads: Prof. Dr. Damaris Nübling (University of Mainz) & Prof. Dr. Nina Janich. Responsible at TU Darmstadt: Prof. Dr. Andrea Rapp.
Funding: Mainz Academy of Sciences and Literature
In the course of the project, the most common (German and foreign-language) surnames in Germany are interpreted from a language-historical perspective and listed in a digital dictionary of names. This surname dictionary also contains onomastic information and is available to the interested public at http://www.namenforschung.net/dfd/woerterbuch/liste/.
Project lead: Prof. Dr. Nina Janich
Funding: German Research Foundation (DFG) within the framework of an environmental-historical project group “Wege zur nachhaltigen Entwicklung von Städten”
Based on an exemplary comparison of Mainz and Wiesbaden, this project aimed to linguistically examine how sustainability is locally discussed and established as a development program according to the “Lokale Agenda 21”. One focus was on how the aspect of sustainability is related to specific urban spaces and places – and how, in turn, this affects the use of spatial metaphors in programmatic texts of the cities.
Project lead: Prof. Dr. Andrea Rapp
Funding: Federal Ministry of Education and Research
DARIAH-DE is a research infrastructure project of the ESFRI Roadmap which aims to support and expand the use of digital methods in the humanities under the European project DARIAH-EU. The 17 academic partners started working on the project on March 1, 2011.
DARIAH-DE supports and enhances the use of digital methods in the humanities. Together with successful initiatives in the field of the digital humanities in Germany:
• DARIAH-DE furthers the establishment of virtual research environments in the humanities by providing advice, by connecting previously separate activities, and by technical infrastructure.
• DARIAH-DE also uses and connects existing interdisciplinary and cross-cutting digital resources, services, and insights.
• DARIAH-DE aims to establish and investigate a decentralized technical infrastructure to implement and apply specific methods of the humanities.
This goal involves not only technical tasks but also local and international activities. It is mainly designed for the long term.
At the TU Darmstadt, apart from the Institute of Linguistics and Literary Studies, the Institute of Philosophy (Prof. Petra Gehring) and the UKP Lab (Prof. Iryna Gurevych) are involved.
Project lead: Prof. Dr. Nina Janich
Funding: German Research Foundation (DFG) as part of the Priority Program “Wissenschaft und Öffentlichkeit”
The project investigated how different researchers and (science) journalists phrase references to a lack of knowledge. The example of the public and political debate on experiments with iron fertilization of the oceans (in 2009) showed significant differences in style, rhetoric, and argumentation.
Project by: Dr. Sabine Bartsch
The LOEWE-Schwerpunkt Digital Humanities is a collaboration of the University of Frankfurt, the Technical University of Darmstadt, and the Freies Deutsches Hochstift / Goethe Museum Frankfurt. Its objective is to connect basic research in the participating humanities disciplines, focusing on information technology procedures.
LOEWE Schwerpunkt Digital Humanities – Integrated editing and evaluation of text-based corpora, co-applicant and PI in the project area “Contemporary Corpora”, January 2011 to December 2013
Partners: Prof. Dr. Iryna Gurevych, Prof. Dr. Gert Webelhuth, January 2011 to December 2013
Funded by the State of Hesse as part of the LOEWE initiative of excellence.
Project lead: Prof. Dr. Andrea Rapp
Funding: German Research Foundation
The digitization project Virtual Scriptorium St. Matthias presents the surviving manuscripts from the medieval library of the Benedictine Abbey of St. Matthias in Trier – approximately 500 codices kept in 25 locations all over the world. The majority, about 450 manuscripts, are still in Trier today. In addition to the 434 codices held by the City Library Trier and the library of the episcopal seminary, there are further manuscripts in the archive of the Diocese of Trier and in the library of the monastery of St. Matthias. These cultural assets were digitized and made accessible to the public, and it has become much easier to use them for scientific research. The works presented here are valuable for different disciplines: classical philology, German studies, history, art history, theology, medicine, and legal history. The purpose of such a reconstructed library is to preserve the spiritual profile of an important educational center and its development, and to provide novel insights into the conditions of production and reception of its holdings.
Partners: University of Trier, TU Darmstadt, Stadtbibliothek (City Library) Trier
Project lead: Prof. Dr. Andrea Rapp
Funding: Federal Ministry of Education and Research
Information and knowledge can be coded by grouping individual characters according to certain rules, and there are basic structural similarities between genome codes and linguistic code – as well as between biological development and language development (see keywords such as “ABC of Humanity”, “Book of Life”, “Language Family”). Another key feature, in addition to the aspect of development and thus “historicity”, is the variety (or variance) of the phenomenon. A more sophisticated understanding of the mechanisms and rules of evolution and variance enables new and more accurate methods of obtaining information, as well as of storing, processing, and evaluating the data obtained. TU Darmstadt is responsible for the consortium management of this project.
Associated partners: TU Darmstadt, University of Würzburg, Institute for German Language Mannheim, Competence Center for Electronic Dissemination and Publication Methods in the Humanities at the University of Trier (later TU Darmstadt)
Project lead: Prof. Dr. Andrea Rapp
Funding: Federal Ministry of Education and Research
As a project in the 3rd D-Grid Call, WissGrid focuses on the sustainable establishment of organizational and technical structures for the academic field in the scope of D-Grid. WissGrid unites the heterogeneous requirements from various scientific disciplines in order to develop conceptual foundations for the sustainable implementation of grid infrastructure and IT-technical solutions. In this context, the project aims to further scientific collaborations in the grid, lowering the entry threshold for new community grids.
Alliance partners:
Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research, Bergische Universität Wuppertal, German Electron Synchrotron (DESY), German Climate Computing Centre (Deutsches Klimarechenzentrum, DKRZ), Institute for German Language, Institute for Information Systems, Leibniz University Hannover, Competence Center for Electronic Dissemination and Publication Methods in the Humanities at the University of Trier (later Technical University of Darmstadt), Konrad Zuse Center for Information Technology Berlin, Leibniz Institute for Astrophysics Potsdam, Lower Saxony State and University Library Göttingen, Technical University of Dortmund, Technical University of Munich, University of Stuttgart, University Hospital Göttingen / Department of Medical Computer Science, Center for Astronomy
Project lead: Prof. Dr. Nina Janich .
Funding: German Research Foundation (DFG).
The project consisted of accompanying linguistic research on a research project based in physics and political science, examining how the project participants agreed on a common language for their interdisciplinary project – and which challenges arose in drawing up a joint project application.
Project by: Dr. Sabine Bartsch
The project linguisticsweb.org addresses the development and creation of tutorials, how-tos, links, tools, and corpus-based approaches, focusing on research in the fields of linguistics, corpus and computational linguistics, and other digital philologies.
The aim of linguisticsweb.org is to support students and researchers in corpus- and computer-based research by providing materials and guidance for self-study and teaching, and to further the independent use of technologies and methods of Linguistics and other philological sciences.
The portal linguisticsweb.org is used by international researchers and teachers in research and teaching as well as in workshops.
linguisticsweb.org was created as an independent online project in 2008-09, and it has been developed further ever since.
Project lead: Prof. Dr. Andrea Rapp
Funding: Federal Ministry of Education and Research
Since 2006, TextGrid has been developed within the framework of a joint project consisting of ten institutional and university partners, funded by the Federal Ministry of Education and Research (BMBF) until June 2015 (grant number: 01UG1203A). In 2016, TextGrid became part of the DARIAH-DE research infrastructure.
As part of the project, a virtual research environment for the humanities was developed. Key pillars include the TextGrid Laboratory, which provides open source tools and services, a repository for the long-term preservation of research data, and community-wide support services.
Partner: DARIAH-DE