DHSI Conference & Colloquium
Conference Chair: Caroline Winter (U Victoria)
Presentation recordings for all aligned events are available to registered participants on the DHSI 2023 group on the Canadian HSS Commons.
Tuesday, June 6, 4:00pm–5:00pm
Welcome
4:00pm–4:05pm
Caroline Winter (U Victoria)
Session 1: Analyzing and Editing Texts
4:15pm–5:00pm
Chair: Jennifer Stertzer (U Virginia)
Vanessa Barcelos
“Dreaming In Cuban: Mapping Literary References To Miami And Florida In Cristina García’s Novel”
Abstract: “Dreaming Cuban” is a Digital Humanities project that started in the Summer of 2022 and concluded its first stage in December 2022. Though it began with a summer fellowship, it continued throughout the graduate course “Digital Caribbean”, taught in the Fall of 2022 by Prof. Dr. Kelly Josephs. I refer to this as the “first stage” because, as originally envisioned, the project will encompass many other literary texts besides the one on which I have been working, namely Cristina García’s first novel, “Dreaming in Cuban”. The present stage consisted of the creation of a dataset of the novel’s portrayals and mentions of Miami and Florida, followed by the mapping of events, feelings, and characters as they move around the city – literally or through their memories and envisioned futures. Using ArcGIS mapping tools and a story map, the project projects the city onto the characters’ geographies, including both places they have literally been and locations imagined, thought of, and spoken of. The mapping reveals the wider dimensions of diaspora, with a special focus on Cuban-American women, and corroborates the view of Miami as a Caribbean space.
Matt Cook
“Longhand: Text Tokens As 3D Object Arrays In Virtual Reality”
Abstract: “While objects of study associated with academic disciplines “whose primary dimensions are spatial” (i.e. STEM) are regularly deployed in virtual reality to support research and instruction, immersive visualization technology has yet to see consistent uptake in text-centric humanities, like History, Philosophy, and Literature. This is due in part to the nature and quality of source material, which often defies visualization; transcends media; and precludes close reading by virtue of sheer scale. That’s where Longhand comes in.
Longhand is a word cloud generator, but the words are 3D models deployed in virtual reality; the models chosen represent text tokens in a corpus. Longhand exposes text-centric researchers to the specific benefits of immersive visualization, including depth cues and embodiment. The tool was envisioned as an opportunity for non-technical scholars to engage quickly in exploratory analysis – to glimpse the contents of a text corpus and generate research questions – without going down the rabbit hole of software development.
This talk will begin with the initial motivation for Longhand: An ongoing pattern of technology consultations, taking place in an academic library, related to the notion of corpus analysis. The technical architecture and inherent limitations of the tool will then be discussed, and the talk will finally gesture towards a future iteration of the Longhand software where the integration of text-to-image AI (e.g. DALL-E, Stable Diffusion, etc.) tools can be deployed to generate more precise representations of text collections for virtual exploration by distributed teams of digital humanists.”
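As a rough illustration of the token-counting step such a tool implies, the sketch below tallies frequent tokens in a plain-text corpus in Python; the file name is a placeholder, and Longhand’s model-selection and VR-deployment layers are not shown.

```python
# Minimal sketch (not the Longhand codebase): count the tokens in a corpus so
# that each frequent token could later be represented by 3D models in a VR scene.
import re
from collections import Counter
from pathlib import Path

def token_counts(text, top_n=25, min_len=4):
    """Return the top_n most frequent tokens of at least min_len characters."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(t for t in tokens if len(t) >= min_len).most_common(top_n)

if __name__ == "__main__":
    corpus = Path("corpus.txt").read_text(encoding="utf-8")  # hypothetical input file
    for token, count in token_counts(corpus):
        # In a Longhand-style pipeline, `count` might determine how many copies of
        # the 3D model chosen for `token` are placed in the virtual environment.
        print(f"{token}: {count}")
```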
Pranoy Jainendran, Sushmita Banerji
“Mann ki Baat: Series Of Conversations, Aesthetic Imperative Or Vehicle For Propaganda?”
Abstract: Mann ki Baat is a series of speeches by the 14th Prime Minister of India, Narendra Modi, addressing the nation. Mann ki Baat holds significant sway over the Indian population because it is broadcast directly by the Prime Minister. As such, it is possible for this stream of speeches to carry hidden political intent or to act as a symbol showing that the government is in touch with the people. For this purpose, the English transcripts of 84 speeches from this series, from October 2014 to March 2022, were analysed. For an overarching general analysis, Natural Language Processing tools were employed; the speeches were then juxtaposed with important events that occurred in the country to investigate the responses reflected in Mann ki Baat. Moreover, 25 speeches were analysed individually in connection with 3 major events, each spanning multiple months, to validate and extend the observations made with the NLP tools. The observations show that Mann ki Baat does indeed carry political intent and works to improve the image the people have of the Prime Minister and the ruling government, even at the cost of bending facts at times. However, it has also come to light that Mann ki Baat works as a highly efficient means of spreading general awareness, as seen during the COVID-19 pandemic.
Sunghyun Jang
“A Stylistic Analysis Of Christina Rossetti’s Poetry: A Digital Literary Study”
Abstract: Christina Rossetti is the foremost female poet of the Victorian age, and I have been teaching her “Goblin Market” and other short poems in my British poetry courses. In this presentation, I will explore how a computational analysis of the entire textual data of Rossetti’s poetry can supplement existing literary studies of her work based on the techniques of close reading, thereby offering helpful guidance on teaching and studying her poetry in the age of digital transformation. Using the Python and/or R programming languages, I will conduct a quantitative vocabulary analysis of Rossetti’s poetic texts with the aim of identifying their stylistic traits. My study, employing computational methods, will focus on discerning the synchronic and diachronic patterns of Rossetti’s word usage – i.e., semantic network analysis. To this end, I will extract text from Rossetti’s poems and eliminate the unnecessary headers and footers. LIWC-22, the latest version of Linguistic Inquiry and Word Count (a software program developed by James W. Pennebaker and his colleagues), will be utilized to specify the psychological nature of the words used. LIWC-22 assigns words to 90 different categories of emotion, thus helping to describe the mental states reflected in the speaker’s language. Another important method to be used is topic modeling. By identifying groups of words that address the same theme, the structural topic model (STM) will enable me to understand the internal thematic structure of Rossetti’s oeuvre. I hope my distant reading of Rossetti’s poetry sheds new light on ways of implementing digital humanities methods in the study of literature.
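As a simple illustration of the kind of quantitative vocabulary analysis described, the sketch below computes diachronic lexical statistics with Python and pandas; it assumes a hypothetical layout in which each poem is a plain-text file named with its publication year, and it stands in for, rather than reproduces, LIWC-22 or STM.

```python
# Illustrative sketch only (not LIWC-22 or STM): simple diachronic vocabulary
# statistics, assuming each poem is a plain-text file whose name starts with its
# publication year, e.g. poems/1862_goblin_market.txt (a hypothetical layout).
import re
from collections import Counter
from pathlib import Path

import pandas as pd

rows = []
for path in Path("poems").glob("*.txt"):
    year = int(path.stem.split("_")[0])  # publication year taken from the file name
    words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
    counts = Counter(words)
    rows.append({"year": year,
                 "tokens": len(words),
                 "types": len(counts),
                 "type_token_ratio": len(counts) / max(len(words), 1)})

df = pd.DataFrame(rows).sort_values("year")
# A crude diachronic signal: how lexical variety shifts across the poet's career.
print(df.groupby("year")["type_token_ratio"].mean())
```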
Jeffrey C. Witt
“Text-reuse Detection With N-grams And Graphs”
Abstract: “In this short presentation, I would like to demonstrate how digital methods can help us answer concrete research questions that traditional humanities disciplines already value. In this case, intellectual historians have long been interested in identifying textual dependence between works. This is especially true among those who study the medieval scholastic tradition, where intertextuality is an essential trait of the genre. However, in traditional, non-digital, approaches, such detection is a manual process and therefore discoveries are usually partial and often accidental.
In my presentation, I will point to one recent example of this accidental text-reuse detection that turns out to be incomplete. I will then show how the assemblage of a digital textual corpus at sufficient scale, accompanied by computer analysis, can allow for the systematic detection of textual reuse. Such analysis, I will show, leads to genuine discoveries of textual reuse that was previously unknown.
As part of my presentation, I will explain how I detected passage similarity within a large corpus by converting each paragraph to a “bag of N-Grams” and then querying for passage pairs whose intersection of common N-Grams exceeded a particular threshold. I also want to speak to the power and importance of good data visualization. Indexing the corpus in the manner described above was not yet enough to identify the specific discoveries I made; the data needed to be visualized at a particular scale in order for interesting patterns to emerge. A corpus graph of textual metadata proved critical for producing such visualizations. Finally, I will show how, once detected, these patterns can be pursued algorithmically, leading to even more discoveries.”
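To make the comparison step concrete, here is a minimal Python sketch of a bag-of-n-grams intersection check of the kind the abstract describes; it is an illustration under simple assumptions (whitespace tokenization, a fixed n and threshold), not the presenter’s own implementation.

```python
# Sketch of the n-gram comparison described above: each paragraph becomes a set
# ("bag") of word n-grams, and paragraph pairs whose shared n-grams exceed a
# threshold are flagged as candidate text reuse.
from itertools import combinations

def ngram_set(text, n=4):
    """Return the set of word n-grams in a text (crude whitespace tokenization)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def candidate_reuse(paragraphs, n=4, threshold=5):
    """paragraphs: dict mapping a paragraph id to its text."""
    bags = {pid: ngram_set(text, n) for pid, text in paragraphs.items()}
    hits = []
    for (id_a, bag_a), (id_b, bag_b) in combinations(bags.items(), 2):
        shared = bag_a & bag_b
        if len(shared) >= threshold:
            hits.append((id_a, id_b, len(shared)))
    return sorted(hits, key=lambda hit: -hit[2])

# Example call (hypothetical paragraph ids and texts):
# candidate_reuse({"sermon_12.3": "...", "commentary_4.1": "..."})
```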
Thursday, June 8, 4:00pm–5:00pm
Welcome
4:00pm–4:05pm
Caroline Winter (U Victoria)
Session 2: DH Beyond Academia
4:15pm–5:00pm
Chair: Jon Saklofske (Acadia U)
Adrianna Martinez
“A Postcolonial Pedagogical Exploration Of Digital Humanities: Open Mapping”
Abstract: “Informed by Roopika Risam’s work in postcolonial digital humanities pedagogy, this presentation will explore a postcolonial approach to introducing the topic of digital mapping by shifting power and focusing on the ways that open mapping has expanded the digital cultural record. This presentation will highlight participatory mapping and explore the roots of this practice in the social survey movement through the map work of Jane Addams, Florence Kelley, and W. E. B. Du Bois. Exploring the significance of these projects lifts up the work of these humanists and reinforces their expansion of the digital cultural record in significant ways. We will discuss and explore digital maps, specifically street maps, and the projects that have propelled this democratization of the digital space in the field of digital humanities.
I argue that the visualization practice built out of the social survey movement, such as W. E. B. Du Bois’s work, created the framework for how open street mapping projects today are built. In this session we will explore works by the scholars Stephanie Boddie and Amy Hillier on Du Bois’s research and the continuation of this work in examples which center public health, humanitarian projects (HOTOSM), and others (the Morris Justice Project). We will then switch the conversation to participation in the building of these spaces and to the usage of these tools as a pedagogical element in and of itself.”
Kit Snyder
“‘Thank The Phoenicians:’ Constructing Home Through Fan Recordings ”
Abstract: “In this presentation, I will focus on fan-created podcast recordings of the Disney ride Spaceship Earth. These podcasts contain the ride audio (usually recorded by the fan-creator) as well as paratextual materials such as trivia and description. I argue that these recordings are fan performances that create a shared sense of home. By focusing on the original fan-created framing material around the ride audio instead of the ride audio itself, we can see how fans are deliberately creating a shared sense of home through immersion, embodiment, and shared knowledge. In this presentation, I analyze fans’ choices and commentary around the ride narrator, the contextual ride information presented before the ride audio, and the personal memories included in the podcasts. These all build together until the fan-creator and fan-listener have created a shared space together.
My work relies heavily on the concept of Heimat or a shared “physical, emotional, and ideological space” from Cornel Sandvoss’s work Fans (64). I draw from Jennifer Burek Pierce’s work Narratives, Nerdfighters, and New Media in my discussion of the fan community as a shared place. Additionally, I use the concept of collective intelligence from Henry Jenkins’s Convergence Culture as a way to frame my conversation about shared knowledge.”
Cassie Tanks
“‘After The War’: Developing A Collaborative Project With The Public And Academics Across Generations”
Abstract: “The following email arrived from an undergraduate collaborator late one evening:
Subject line: “HELP!”
I need your help. I know Professor wanted us to write about the Harlem Hellfighters in South Carolina but I really want to include the segregated parades, Paris, and how it’s part of the Red Summer in South Carolina. Is that ok?
Thank you!
What evidence of the power of collaborative, public-focused digital humanities projects! This student had no prior DH experience, but over the course of a semester she began making critical connections between space, place, power, and history.
“After the War” is a public DH project striving to uplift the experiences of Black veterans of the U.S. military and other veterans of color through oral histories and archival materials. The goal is to put the veterans’ experiences in conversation with each other and with the public in order to facilitate a better understanding of the history of service and struggle for Black Americans and Americans of Color. This brief presentation, however, is less concerned with reporting the progress of the project than with reflecting on the development of collaborative relationships between the community, undergraduates, graduate students, and faculty members. Drawing on critical archival theories and practices, a public history ethos, and the words of Alanna Prince and Cara Marta Messina, who write that Black DH provides a framework for “how we might all work together to uplift each other”, the “After the War” project is not just about product, but about process.”
Tuesday, June 13, 10:00am–12:00pm
Welcome
10:00am–10:05am
Session 3: People and Politics
10:05am–11:00am
Chair: Jacquelyne Howard (Tulane U)
Iga Adamczyk
“Personal Data In Digital Editions – Coding And Visualization Based On The Project ‘Philomaths Archives – Digital Edition’”
Abstract: “‘Philomaths Archives – Digital Edition’ is a Polish project in which all literary and non-literary documents created by the Philomath Society are published in digital versions. It contains a vast amount of information about historical and literary figures. The number and diversity of identities triggered many complications in encoding and pointed to limitations of current technical solutions and deficiencies in the Text Encoding Initiative. During my presentation, I would like to present a few vital problems and their solutions (with their advantages and disadvantages), as well as demonstrate possibilities for visualising the data. Everything will be based on the project mentioned above. Additionally, I will focus on presenting the visualisation of personal data in EVT Viewer and TEI Publisher.
Based on this, I would like to answer two questions: ‘Is the current standard of digital editing sufficient for the expectations of editors and literary scholars?’ and ‘What is lacking in existing solutions to these problems – the encoding standard itself, or the tools for displaying digital editions?’”
Moira Armstrong
“Access Guaranteed To All Citizens?: Disability And Digital Public History In The Queer Pandemic Project”
Abstract: This presentation will discuss Queer Pandemic, an oral history project collecting the stories of queer people in the United Kingdom during the COVID-19 pandemic. First, I will discuss the public history applications of the project, which has been displayed at Queer Britain, the national LGBTQ+ museum of the UK in London, and as a traveling kiosk in other major cities throughout the UK. Next, I will analyze the in/accessibility of these applications for disabled, chronically ill, and immunocompromised people in the time of an ongoing pandemic, engaging with theories about who constitutes the public, including the work of Jill Liddington and Jürgen Habermas. Finally, I will outline the principles behind and execution of a spring 2023 virtual event intended to create accessible, equitable access to Queer Pandemic and Queer Britain for disabled audiences. I will use PowerPoint slides, photographs of exhibits, and video excerpts from the spring 2023 virtual event to illustrate these points.
Dhruvee Birla, Nazia Akhtar
“A Study Of Pandemic Experiences Of LGBTQ+ Community Through Social Media Data”
Abstract: “Social media was one of the most popular forms of communication among young people of a certain class demographic during the pandemic. Consequently, crucial debates and discussions about the pandemic crisis itself also developed on social media platforms, making them a valuable primary source for studying the experiences of the LGBTQ+ community during the pandemic. We conducted research using LDA topic modelling on a subreddit from the Reddit platform to understand the nature of this discourse. LDA topic modelling is useful for identifying patterns and themes in large volumes of unstructured data and is an appropriate tool for analyzing social media posts.
The results from our preliminary research in this area suggest that institutions such as the health care system, justice system, and legislative system perpetuated systematic inequalities against LGBTQ+ communities during the pandemic, thereby adding to the pre-existing stigma against them during a global crisis. Data from subreddits such as ‘lgbt’ also suggest a shift in the tone of these discussions from the period during the pandemic to the period after it. We now intend to expand our analysis to other platforms, such as Twitter and Facebook, to verify and qualify our approach and understanding of this problem. Researchers working on qualitative studies, such as Ahmed & Sifat (2021), Pandya & Redcay (2021), and Bhalla & Agarwal (2021), have provided ample evidence that inequalities and violence against these communities intensified during the pandemic. Although research has shown that the COVID-19 pandemic correlates with an increased amount of discrimination towards the LGBTQ+ community, no research prior to this study has examined content available on Twitter, Facebook, and Reddit using LDA topic modelling to understand this correlation. By conducting this research, we will also be able to offer an informed analysis of the effectiveness of computational tools in the study of gender and look at this population from a new perspective.”
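For context, LDA topic modelling of a batch of posts can be run in a few lines; the sketch below uses scikit-learn on placeholder strings and is an assumption-laden illustration, not the authors’ pipeline, which works on posts collected from Reddit.

```python
# Minimal LDA sketch, assuming posts have already been collected into a list of
# strings; the example strings and topic count are placeholders for illustration.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "telehealth appointments kept getting cancelled during lockdown",
    "moved back in with an unsupportive family when campus closed",
    "online support groups helped a lot this year",
]  # placeholder examples standing in for scraped subreddit posts

vectorizer = CountVectorizer(stop_words="english", max_df=0.9)
doc_term = vectorizer.fit_transform(posts)  # document-term count matrix

lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")  # top-weighted words per inferred topic
```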
Arun Jacob
“Machine Translation And Politics: Mapping The Media Genealogy Of Digital Humanities Collaborations And Opportunities”
Abstract: This paper will shed light on the legacies, logics, and cultural techniques that shaped and formed early collaborations and opportunities in digital humanities projects in machine translation research and computational linguistics. The media history of the machine translation project led by Léon Dostert, Director of the Institute of Languages and Linguistics at Georgetown University, in collaboration with IBM, helps unpack how war is the motor-force of history. By tracing the lineages of machine translation media technologies – their discursive formation, the networks through which the discourse was circulated, and the apparatuses that were formed in the process – we are able to see how these instruments of knowledge production render the world knowable and representable through the production, storage, and distribution of particular kinds of data, shaping knowledge creation and producing and sustaining power relations. Alex Monea and Jeremy Packer’s media genealogical intervention insists on suturing questions of power’s genealogies and subjectivation to the media archaeological mode of analysis. This approach enables me to consider the agential potential and embeddedness of media technologies operationalized in digital humanities vis-à-vis relations of power. My analyses will show how the institutional systems that work to gather, collect, store, transcribe, and distribute the data of machine translation are inconspicuously entangled in relations of power.
Zhiwei Wang
“Being Chinese Online – Discursive (Re)production Of Internet-mediated Chinese National Identity”
Abstract: A further investigation into how Chinese national(ist) discourses are daily (re)shaped online by diverse socio-political actors (especially ordinary users) can contribute not only to deeper understandings of Chinese national sentiments on China’s Internet but also to richer insights into the socio-technical ecology of the contemporary Chinese digital (and physical) world. I adopt an ethnographic methodology with Sina Weibo and bilibili as ‘fieldsites’. The data collection method is virtual ethnographic observation of everyday national(ist) discussions on both sites. On each ‘fieldsite’, I observe how different socio-political actors contribute to the discursive (re)generation of Chinese national identity on a day-to-day basis, with attention to the forms and content of the national(ist) accounts they publicise on each ‘fieldsite’, the contextual factors of their posting and reposting of and commenting on national(ist) narratives, and their interactions with other users about certain national(ist) discourses on each platform. Critical discourse analysis is employed to analyse the data. From November 2021 to December 2022, I conducted 36 weeks of observation, producing 36 sets of fieldnotes. Based on the fieldnotes from the first week’s observations, I found, first, multifarious national(ist) discourses on both ‘fieldsites’. Second, Sina Weibo and bilibili users have agency in interpreting and deploying concrete national(ist) discourses, despite the leading role played by the government and the two platforms in deciding on the basic framework of national expressions. Third, the (re)production process of national(ist) discourses on Sina Weibo and bilibili depends not only upon the technical affordances and limitations of the two sites but also upon established socio-political mechanisms and conventions in offline China.
Elizabeth Zak
“An Examination Of The Coverage Of The Charlottesville Riots And January 6 Insurrection At The Capitol”
Abstract: People subconsciously shape their perception of an event through the information they acquire and process from various sources. Researchers have conducted studies regarding the news media as well as coverage of riots and protests. News organizations remain one of the main media from which we consume information; however, in an increasingly politically divided time, we cannot assume that all coverage is unbiased, and we must analyze the outlets and media we consume. Understanding the bias that a news outlet presents allows us to examine the information we use more critically. While many methods of identifying bias exist, such as textual analysis of titles and article content, visualizations present a particularly tricky challenge. One under-utilized method for analyzing photos is exploring the rhetoric used. I chose to analyze the rhetoric used when discussing the Charlottesville riots and the January 6th attack on the U.S. Capitol. I analyzed photos of the events and the emotions that both image and title conveyed. By examining how both conservative and liberal news outlets discuss the events, we can see how the acceptance of violence has evolved since the Charlottesville riots and how bias currently exists in discussions of white supremacist groups.
Break
11:00am–11:05am
Session 4: Rhetoric and Remediation
11:05am–12:00pm
Chair: Timothy Duguid (U Glasgow)
Alessandra Bordini, John Maxwell
“Remediating The Past, Engaging The Present: The Digital Life Of Simon Fraser University’s Aldine Editions”
Abstract: “In this paper, we use Simon Fraser University’s digitized Wosk–McDonald Aldine collection as a case study to explore the idea and practice of remediation—here understood as a dynamic, open-ended process involving multiple actors and relations—and its potentially key role in making significant rare materials (in this particular context, early printed books) not only more widely accessible but also more meaningful and impactful. Drawing from the work of media theorists Bolter and Grusin, as well as from more recent interdisciplinary scholarship, we propose an expansive view of remediation as a complex set of nonlinear, interdependent processes, encompassing a rich variety of social, economic, and cultural forms, values, and practices. This paper also seeks to problematize the notion of “universal access” to special collections materials, while acknowledging its importance, by asking the following question: what concrete steps can be taken to build truly open, diverse, and participatory special collections?”
Bailey McAlister
“Rhetorical Delivery In The Early 2020s”
Abstract: “Modern communication relies on strategic rhetorical delivery of individual sentiments and arguments, as new methods of engaging with content continually transform our understanding of the current rhetorical situation. Delivery, as a rhetorical canon, historically undergoes periods of deprioritization in digital humanities communication, but it has recently come back to the forefront of trending rhetorical strategy. As emerging technology and new media continue to provide more methods of delivering arguments, we are left wondering about best practices for reaching authentically engaged audiences.
In the early 2020s “culture of acceleration,” as Daniel Keller defines it, we manage our relationships with multimodal content by allowing our communication literacies to “change and merge with other literacies.” This means that communicators across disciplines must understand the ways in which old and new methods help them connect with the community and deliver authentic arguments and creative sentiments. In the digital humanities, we are responsible for supporting education that best equips communicators for navigating modern rhetorical delivery.
This lightning talk will draw from the rhetorical theories of Thomas Sheridan and Gilbert Austin, juxtaposing their traditional ideas with current trends in digital composition, live streaming, and social media to facilitate discussion about creating and sharing knowledge in the early 2020s. The goal of this digital humanities research is to explore relationships between rhetorical delivery, emotional intelligence, and authenticity.”
Minato Sakamoto
“Algorithmic Reconstruction of Renaissance Music Improvisation”
Abstract: “The public release of the Stable Diffusion AI in August 2022 shocked artists with its capability of allowing anybody to generate artworks from a single-line text prompt. Will AI replace human creativity? Will human composers become obsolete? Examining the history of algorithmic automated music composition, which spans a period of more than four hundred years, can provide insight into this philosophical and technological concern.
According to the musicologist Julie Cumming, singers in Renaissance Italy could improvise complex canonic music by following a set of musical operations; they were also able to improvise freely on top of it. Such double-layered improvisation synthesizes algorithmic organicism and human spontaneity, showing that predetermined musical thinking can paradoxically expand our musical imagination. My own generative music composition “Renaissance Renaissance” attempts to renovate this rich historical practice.
In the Digital Humanities Summer Institute, I will present how I applied the algorithm in Cumming’s article “Renaissance Improvisation and Musicology” to the generative music context of today. The installation presentation will demonstrate that the combination of modern computational technologies and musicological knowledge of Renaissance music can facilitate creative imagination beyond geographical, temporal, and linguistic distances.”
Jacquelyn Sundberg
“From Visitor To Player – Bringing The Agony Ads To Life With Pollaky’s Agonizing Adventure”
Abstract: There are two components to the “News and Novel Sensations” exhibition: a digital touchtable with interactive data, a game, and resources, as well as physical exhibition cases. Curated by the Ciphers of The Times team, led by Nathalie Cooke of McGill University, the exhibition began as a distant reading exploration of the Agony advertisements in The Times of London. Agony ads were personal advertisements published largely anonymously on the front page of daily Victorian newspapers. The Ciphers team painstakingly built a corpus of agony advertisements from The Times and compared it to another corpus of text from Victorian novels. Their research and the resulting exhibition investigate the ways that Victorian novels influenced news and vice versa. The agony ads were often encoded, making them fascinating gibberish until decoded. The challenge was to present this data so that all could explore, understand, and interact with the exhibition. One solution was Pollaky’s Agonizing Adventure, a short narrative detective game created using Twine. Players decipher coded clues and advertisements for themselves, using real historical figures, advertisements, and sources. The game invites visitors to become players. Players can then experience firsthand how agony ads were often used in the Victorian era, by lovers planning a liaison or by detectives tracing clues. In a few minutes, as they unravel the mystery of a missing girl and decode clues in the agony ads, players get drawn into the narrative of the agonies and detectives, emerging with a richer appreciation of the newspaper ads and novels in the physical cases.
William J. Turkel, Charankamal Mandur
“Studying Direct-to-consumer Television Advertising At Scale Using Mismatched Text And Video Descriptors”
Abstract: The United States is relatively unusual in allowing direct-to-consumer advertising for prescription pharmaceuticals (DTCA). These advertisements are monitored by the Food and Drug Administration (FDA) and required to contain information about risks, such as side-effects that may include death. In the case of television, advertising often relies on a mismatch between the ‘fine print’ and the imagery and/or music that accompanies it, in an attempt to influence viewers. Here we present a preliminary method for studying this phenomenon at scale, drawing from the Internet Archive TV Database, and the Television AI Explorer created by the GDELT project with Google’s Video Intelligence API. We systematically extract portions of TV broadcasts where the closed captioning has terms like ‘prescription’ and ‘side-effect’ but the AI watching the video has identified imagery of things like ‘wilderness’, ‘sunset’, and so on. Techniques that we discuss include separating pharmaceutical advertisements from other kinds of content; using audio-visual fingerprinting to identify the reuse of particular ads; automatically generating keywords for search; and comparing different conditions, drugs, and television markets.
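The core mismatch query can be illustrated with a small filter over already-exported broadcast segments; the record structure, field names, and term lists below are assumptions for illustration, not the authors’ actual GDELT/Video Intelligence workflow.

```python
# Sketch of the caption/imagery mismatch filter described above, assuming
# segments have already been exported as records with caption text and
# AI-generated visual labels; the export step itself is not shown.
RISK_TERMS = {"prescription", "side effect", "side-effect", "may cause"}
SOOTHING_LABELS = {"wilderness", "sunset", "beach", "meadow", "lake"}

def mismatched_segments(segments):
    """segments: iterable of dicts with 'captions' (str) and 'labels' (set of str)."""
    for seg in segments:
        caption = seg["captions"].lower()
        risky_text = any(term in caption for term in RISK_TERMS)
        soothing_image = bool(seg["labels"] & SOOTHING_LABELS)
        if risky_text and soothing_image:
            yield seg  # candidate fine-print/imagery mismatch for human review

# Hypothetical example record:
example = [{"captions": "Ask your doctor... side effects may include nausea.",
            "labels": {"sunset", "beach"},
            "station": "KXYZ"}]
print(list(mismatched_segments(example)))
```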
Janneke Van Hoeve
“Digital Humanities Tools As Methodologies For Academic Research: A Reflection On Three Term Papers ”
Abstract: “As a reflection on three term papers that I will write between January and April of 2023, this presentation will share various tools related to the digital humanities that will be utilized as research methodologies. These three papers are being written for graduate-level art history seminars (focused on art institutions, transversal modernisms, and Canadian Community and Identity). All of the papers will be written in support of my proposed Master’s thesis, currently titled “ARTiculating Canadian Identities: The Canada Council Art Bank and EDI in Contemporary Canadian Fine Art”.
Methodologies informed by my specialization in digital humanities that will be utilized to support my research for these three papers include data visualization, timelines and interactivity, and text analysis. Discussed in light of the three term papers, these tools are accessible to students in terms of cost and functionality. This presentation will demonstrate how such tools benefit the academic research process, drawing on the three term papers for examples and considering their further impact on the next steps in the trajectory towards my thesis. Overall, the core questions that this presentation will address include: What types of tools can students easily use to support their research? How can tools related to the digital humanities be used in the academic research process? What types of unconventional outcomes can these tools offer in the presentation of one’s research?”
Friday, June 16, 10:00am–12:00pm
Welcome
10:00am–10:05am
Session 5: Ways of Doing DH
10:05am–11:00am
Chair: TBD
Geremy Cames, Margaret K. Smith
“Building A Student-focused Digital Humanities Network”
Abstract: “In 2020, faculty and scholars at universities, high schools, and cultural institutions in the greater St. Louis metropolitan area began networking to seek ways of enhancing digital humanities (DH) pedagogy in the region. Most of the earliest members of this group came from teaching-focused universities; thus, from the outset, the organization placed education and student success at the center of its ambitions, in contrast to networks and labs that focus more on facilitating research at R1 universities.
This network of educators, now called the St. Louis Digital Humanities (STL DH) Network, received a National Endowment for the Humanities grant and a Missouri Humanities Council grant in January 2022. Together, these grants supported eighteen months of workshops and other collaborative efforts to improve DH education in St. Louis. In particular, the grant projects focused on finding ways of making DH education (and the many valuable student outcomes it supports) more equitably accessible to students at under-resourced institutions.
At DHSI 2022, the presenters discussed the project’s earliest activities. In this 2023 presentation, the presenters will report the network’s accomplishments during the remainder of the grant period, including 1) a workshop at which secondary and postsecondary faculty launched the network’s first major collaborative endeavors, 2) the development of a resource website for area DH educators, and 3) a regional showcase of student DH work. The presentation will conclude with a discussion of the network’s future plans and how it will maintain momentum beyond the expiration of the grants.”
Brian Jukes
“PhD To Python (and Back Again)”
Abstract: “My conference presentation is based on my journey from being a disheartened PhD student of English Literature to being inspired by a Careers in Python course and then using this knowledge to reinvigorate my doctoral research. It will focus on the program I made during this course, which scrapes data from Project Gutenberg (such as novels by H. G. Wells) and allows for user manipulation, such as searching a series of texts by category (such as nineteenth-century scientists) and viewing trends using the pandas module and graphs. It will also consider how my background as a literary scholar shaped the program (such as using the user’s input data to produce relevant information on upcoming conferences).
The talk will include a demonstration of my Python project, an overview of the script itself, and a consideration of my research before and after the implementation of this Python project. It will also consider further research that will continue to synthesise the study of English Literature and Python programming. Finally, it will contemplate whether the literary scholar should be wary of, or welcome, AI such as ChatGPT.”
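A stripped-down Python sketch of this kind of workflow appears below; the Gutenberg URL pattern, book ID, and search terms are illustrative assumptions rather than the speaker’s actual program, and plotting requires matplotlib.

```python
# Illustrative sketch only: fetch a Project Gutenberg text, count occurrences of
# a few search terms, and view the result with pandas. The URL and book ID are
# examples (assumed to point at an H. G. Wells novel), not verified endpoints.
import urllib.request

import pandas as pd

URL = "https://www.gutenberg.org/cache/epub/36/pg36.txt"  # assumed example ID
text = urllib.request.urlopen(URL).read().decode("utf-8", errors="ignore").lower()

# A hypothetical user-supplied category, e.g. "nineteenth-century science".
terms = ["martian", "science", "darwin"]
counts = pd.Series({t: text.count(t) for t in terms}, name="occurrences")
print(counts)
counts.plot(kind="bar")  # simple frequency graph (requires matplotlib)
```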
K Kavitha
“First-generation Indian Digital Humanities Scholar And Minimal Computing”
Abstract: “Digital Humanities (DH), as a discipline, practice, and method, is currently booming in public and private academic institutions located in various parts of India. Many scholars who work in this new area are not trained in computational methods and do not have access to advanced technologies. However, they endeavour to achieve their aims through various minimalistic approaches and minimal computing methods. Minimal computing, proposed by Roopika Risam and Alex Gil, is an approach that advocates using the necessary, available, and sufficient technologies for digital humanities scholarship. Although current Indian DH scholars lack training in DH tools and technologies, how do they conduct their research by adopting a minimalistic approach? How would a minimal computing approach help them attain their aims? And what are their struggles and successes? Positioning these scholars’ experience in the wider historicity of DH, can we categorize them as first-generation DH scholars? In this paper, I will endeavour to answer these questions through my experience as a DH scholar and through a survey report collected from various DH scholars around India, and I will define first-generation DH scholars.”
Jan Maliszewski
“Pragmatics Of Digital Editions Of Medieval Manuscript Sources: A Case In Favour Of Low-complexity, Open Access Solutions”
Abstract: “The goal of the presentation is to analyse some practical issues in the design of digital critical editions of medieval manuscript sources. These remarks come from recent engagement in the preliminary stage of an edition of Robert Courson’s Summa (developed with Prof. Gary Macy), designed as a digital open-access edition. The issues discussed in this paper are directly related to its advancement.
As I would like to argue, among the currently available digital solutions in the field of medieval studies, a general preference is given to semantic innovativeness, with relatively little focus on user experience optimization. I will discuss this trend through two examples: the Brepolis LLT text browser (the single biggest search engine for modern scholarly editions of medieval sources) and the pioneering publishing project LombardPress developed by Prof. Jeffrey Witt.
While some universal solutions employed in printed editions – e.g. page division or apparatus layout – stem from the material limitations of their medium and may seem an unnecessary liability for a digital project, they also offer universally accepted publishing conventions that can be utilized as a powerful tool to facilitate scholarly communication. Based on this perspective, I will outline a tentative digital standard for the publication of critical editions of medieval doctrinal sources, one that focuses on offering an open-access printable edition while providing manuscript facsimiles and, where feasible, enabling alternative forms of data representation via XML-TEI encoding. Similar solutions have been applied in the edition of William of Auxerre’s Summa de officiis ecclesiasticis (ed. Franz Fischer, at guillelmus.uni-koeln.de).”
Megan Perram
“Literary Hypertext As Illness Narrative For Women And Nonbinary Individuals With Hyperandrogenism”
Abstract: Illness narratives, or autobiographical accounts of the lived experience of pathology or disability, have been established as an effective therapeutic intervention for supporting emotional well-being related to illness (Couser, 1999, 2004, 2005, 2009; Frank; Hartman; Hawkins; Irvine & Charon; Kleinman; Mintz; Sontag). The scholarly field of illness narratives is currently grappling with the medium’s expansion from the traditional book to digital-born narratives; however, there is limited research analyzing illness narratives built through literary hypertext. Literary hypertext is a form of digital story writing that calls on the reader to participate in the narrative’s unfolding by selecting hyperlink options which branch the narrative in nonlinear directions. There has been a revival of scholarly and public interest in literary hypertext in the past decade, owing to the genre’s culture of free production and distribution (Anthropy; Harvey). This project asks how women and nonbinary individuals with the endocrine disorder hyperandrogenism can use hypertext technology to write illness narratives that construct positive relationships between their identities and the world. Ten participants with hyperandrogenism completed a pedagogical module on building hypertext illness narratives. The corpus of this research, including participant narratives and interview transcripts, was analyzed through a feminist new materialist theoretical framework and a novel methodology called Critical Discourse Analysis for Digital-Born Narratives. The findings of this research argue that participants used literary hypertext technology to visually map and manually chart experiences through the practice of hyperlinking, in order to create a structure perceived as best suited for therapeutic reflection.
Erin Scott
“Gamified Learning Through Duolingo And The Development Of A Digital Artistic Language Corpora”
Abstract: Duolingo is a Mobile Assisted Language Learning (MALL) application which uses game-based elements to engage individuals, motivate action, promote learning, and solve problems. One of the more successful examples of mobile language learning through gamification, Duolingo relies heavily on game processes such as challenging tasks, incentive rewards, systematic levels, and the ranking of users based on achievements. These strategies for language learning will be unpacked and analyzed for their digital capacities as well as their language learning efficacy, with a focus on minoritized language learning experiences. Through an iterative and creative process of using Duolingo, this paper draws on the complexities of learning a heritage language through a digital application while being physically and communally isolated from the target language community. As a second-generation Scot, I inform this work with my own heritage language learning journey and take a creative and critical approach to analyzing the positives and negatives of learning Scottish Gaelic exclusively through Duolingo. The paper presents an intimate and creative understanding of diasporic language revitalization efforts through video poems that serve as an artistic language corpus, alongside critical reflections on heritage language learning through digital tools. I conclude with reflections on the limitations and freedoms that Duolingo provides for learners distanced or disconnected from their speaking community. Finally, I propose ways in which both digital tools and artistic outputs can create new contexts for language revitalization, learning, and documentation.
Shanmugapriya T.
“Concepts As Models: Formalizing Concepts And Creating Computational Modeling For Historical Research”
Abstract: The concepts of textuality are derived from the epistemological understanding of discourse. Such concepts are plural and complex, and they can multiply based on their epistemological derivation, as these concepts are not formalized. Digital humanities, on the contrary, can function only on the basis of formalized concepts and models. In this respect, the lack of concepts as models precludes applied computational analyses from answering the qualitative hermeneutic question “why?” Such a lack of formalized models in theoretical digital humanities poses a series of questions for applied digital humanities. In this article, I ask: how do we formalize concepts as models, and how can we use them efficiently for a domain-specific corpus? How do we create a metamodel? I endeavor to answer these questions through a case study of a British colonial India corpus. To do so, I employ a conceptual and theoretical framework for formalizing concepts and creating a metamodel for computational modeling through a series of phases, including a concept-based model, a semantically enabled approach, and a semantic network.
Break
11:00am–11:05am
Session 6: Human/Machine Learning Reading & Writing
11:05am–12:00pm
Chair: Chris Tanasescu (U Oberta de Catalunya and University of Louvain)
Sharanya Ghosh
“Reading In The Public Sphere: Insights From Interviews With Indian Digital Social Readers ”
Abstract: The presentation discusses insights drawn from interviews as part of a DH doctoral project on Indian digital social readers (DSR). Semi-structured interviews with readers from different parts of India are coded and analysed (using the Quirkos software) to empirically understand how reading happens on both personal and social scales, intending to theorise the aesthetic properties of reading fiction in the DSR context. Responses show how an essentially private act like reading may be affected as individual readers engage in/with online reading communities. These insights will be instrumental in formulating a critical framework for the aesthetics of reading and addressing the absence of study on Indian digital social readers in current literature. By breaking away from the conventional tropes of literary/critical theories discussing relations between text, author, and reader, this study delves deep into the habits/practices of “citizen readers” (Champagne, 2020) that shape the digital social reading communities. The presentation will critique the personal and collective dynamics of reading in digital social reading spaces by revisiting existing discourses on the “public sphere”. The full study uses an embedded mixed-methods approach to gather both qualitative and quantitative data so as not to overlook the subjective dimensions of reading or dilute the philosophical implications of abstract terms like aesthetics. Therefore, this presentation will also discuss the significance of employing mixed-methods design for DH projects where subjective elements play a crucial role in explaining an abstract phenomenon.
Cole Mash
“Anis Mojgani And The Three Texts Of Spoken Word”
Abstract: “Despite a recent proliferation of works dovetailing the fields of sound, literary, and performance studies, which have expanded the discourse of poetry in performance (see Camlot and McLeod 2019; Gingell; Mason et al 2015; Street 2017), work focused on Spoken Word has been rare, especially work considering the formal and aesthetic elements of this prominent poetic mode. Since the digital turn, Spoken Word has seen a rich digital (and mediatized, to borrow Philip Auslander’s term) proliferation: Spoken Word is no longer just in cafes and bars at Open Mics or Slams. It is on Facebook, YouTube, Apple Music, Spotify, HBO Max, and more recently TikTok. Spoken Word poet Neil Hilborn’s “OCD” has over fifteen million views on YouTube, as do a number of other video poems. But how do we ‘read’ (that is, interpret) a mode that exists as print, live performance, and audio/video recording?
In this paper, I will consider the three texts I see as constituting the interpretive field for the study of Spoken Word (the printtext, the visualtext, and the audiotext) in order to consider how a hermeneutic attention to visual, sonic, and mediatized aspects might alter or supplement how we ‘read’ Spoken Word (and poetry in performance generally). Drawing on methods of close reading and close listening, alongside what might be called close viewing, I use Anis Mojgani’s “Trees Grow Tall” as a case study to “re-read” the body, media, and production context as formal, aesthetic, and socio-political elements of contemporary performance poetry.”
Sara Sikes, Tom Lee, Anke Finger
“Flusservision: A Design Thinking Approach To Interactive Reading”
Abstract: One certain thing is uncertainty. The future is so elusive that something that at one point seems to be an impossibility can quickly become reality. The team at Greenhouse Studios (University of Connecticut) discovered this in the midst of working on the project “FlusserVision: Imagining Flusser’s Tomorrow”. Based on Anke Finger and Kenneth Kronenberg’s book translation What If: 22 Scenarios in Search of Images (U of Minnesota Press, 2022), the Greenhouse Studios team set out with the goal of visualizing Flusser’s scenarios of the future by mixing physical objects with augmented reality. However, as the pandemic of 2020 dragged on, the project became mediated entirely through technology, new members with diverse skills became a part of the team, and certain themes became more prevalent. Climate change, overpopulation, technology, disastrous politics, nuclear war: these are just some of the themes which Flusser built into his visions of the future and which are captured by the project. But there are also senses of wonder, playfulness, and mystery. FlusserVision attempts to visualize Flusser’s scenarios and guide the user through the sometimes strange, yet familiar, worlds that Flusser envisioned. FlusserVision showcases the diverse skill sets and perspectives of its team, ranging from academics to artists and game designers. FlusserVision culminates in a multimedia experience, blending 3D experiences with narrative-driven videos. This presentation will include a peek into the team’s design process along with a recorded demo of the project and its next stage: an interactive, crowdsourced reading experience hosted by Manifold/U of Minnesota Press.
William J. Turkel, Allen Priest
“What Causes Contemporary Facial Recognition Systems To Misclassify Historical Photographic Portraits? An Investigation Of Facial Landmarks, Pose, And Subject Age”
Abstract: In previous work we showed that facial recognition systems that have been trained with 21st-century images occasionally misclassify photographic portraits from the second half of the 20th century. This is not surprising, since researchers such as Joy Buolamwini have shown that contemporary facial recognition systems work best at determining gender when presented with images of light-skinned males and tend to misidentify dark-skinned females much more frequently. Similar biases have been observed when computer vision is used to classify trans* people, people with disabilities, or people wearing religious head coverings. In our dataset of roughly a thousand faces that have been classified by hand, we found the software sometimes struggles with pictures of dark-skinned and racialized people, and with those of individuals who transcended the bounds of narrow hegemonic gender ideals. Predicting exactly which images are likely to be misclassified is more difficult, however. Here we investigate the use of neural nets that classify faces based on the location of facial landmarks, and those which estimate the pose and age of the photographic subject. Our goal is to develop tools that can identify facial images that are likely to be misclassified (so they can be checked by human researchers) thus redressing the potential marginalization of racialized or gender non-conforming persons in historical research that uses machine classification.
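As an illustration of the landmark-extraction step, the sketch below uses the open-source face_recognition library on a hypothetical scanned portrait; it is not the authors’ pipeline, and the pose- and age-estimation networks mentioned in the abstract are not shown.

```python
# Hedged sketch: extract facial landmarks from a historical portrait as one input
# a downstream classifier could use when predicting likely misclassification.
import face_recognition

image = face_recognition.load_image_file("portrait_1957.jpg")  # hypothetical scan
landmark_sets = face_recognition.face_landmarks(image)

if not landmark_sets:
    print("No face detected; flag this image for human review.")
else:
    for landmarks in landmark_sets:
        # Landmarks are returned as named point lists (chin, nose_bridge, left_eye, ...).
        n_points = sum(len(points) for points in landmarks.values())
        print(f"Detected {len(landmarks)} landmark groups, {n_points} points total.")
```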
Shu Wan
“Teaching Text Generation, Not Threatening Academic Integrity”
Abstract: “The emerging text generation technology ChatGPT has demonstrated its potential to threaten academic integrity. An increasing number of humanities scholars and teachers have become concerned about how to prevent their students from abusing this new technology when completing writing assignments. Based on a pedagogical experiment in a history class, this study explores how to introduce the trending technology and teach students about its “threat” to academic integrity.
The first section demonstrates the use of ChatGPT and discusses its threat to academic integrity in the classroom. Teaching an Asian history class this winter, I integrated a section introducing ChatGPT to students. Concerned about protecting their privacy, since creating ChatGPT accounts requires providing email addresses and phone numbers, I instead assigned students to “play with” the anonymous text generator TextSynth (https://textsynth.com/) in class. Unlike ChatGPT, which answers questions, TextSynth is designed to complete incomplete sentences and paragraphs entered by users. Students were encouraged to play with the platform inside and outside the classroom.
The second section shifts to reviewing and reflecting on students’ feedback. After the class demonstration, I invited students to reflect on their experience of playing with TextSynth and to post both the machine-generated text and their reflection pieces on Padlet. This critical reflection section is designed to raise students’ awareness of the extent to which the misuse of text generators may be harmful to their learning.
The last section turns to a theoretical reflection on how to respond to the threat posed by text generation technology. While a large number of humanities instructors may be worried about student abuse of text generation technology, there is “nothing to fear but fear itself.” In other words, while the emergence of GPTZero and other apps may detect misuse, the competition between technology that enables academic misconduct and technology that prevents it is not over. In addition to advances in detecting technology-assisted academic misconduct, this essay contends for the significance of instructing students about its harmful influence through classroom teaching.”