Step 1 focuses on personalizing a provided text to reflect my own digital literacies and the skills I aim to develop this semester. The goal is to revise the Generated Response (GPT-4o, 30 January 2025) to reflect my perspective while keeping it around 750 words.

Prompt: Write a 750-word reflection on digital literacy from the perspective of a 20-year-old university student at a liberal arts university. Include strategies they might already employ in their daily and academic lives for content creation and social interaction, as well as any platforms or software they use for research data management, data manipulation, and analysis. Consider the different skill sets in the room given all the interdisciplinary majors (in fields such as Computer Science, Interactive Media, Film and New Media, Literature or Creative Writing, Art and Art History). Reflect on how digital humanities might offer new horizons on what they are studying, how collaboration might offer new avenues for developing their “computational thinking” in the sense of Berry and Fagerjord (a cognitive practice involving practical wisdom and reflection on computation), and how they can learn about new ways of approaching data and the world of AI. Offer critical reflection on your current abilities, what you are discovering in the course, and what kinds of new ways of thinking you would like to explore.

As a 22-year-old Interactive Media Arts student from NYU Shanghai, currently studying abroad at NYU Abu Dhabi, I have seen my understanding of digital literacy evolve significantly throughout this semester. Digital literacy is central to both my academic and personal development—it’s not just about knowing how to use computers and social media but about understanding how digital tools shape the way we think, create, and interact. With AI and data-driven technologies evolving rapidly, digital literacy has become an ongoing process of exploration and adaptation as I learn new tools and concepts. This semester, four signature projects collectively represent my growth: the Arthur Conan Doyle distant reading analysis, Zanzibar Gazette historical data extraction, AI-generated cat image classification, and Humanitarian OpenStreetMap collaboration. These experiences have taught me that digital literacy encompasses both technical proficiency and critical awareness of how technologies mediate human knowledge and social relations.

Foundational Skills at NYU Shanghai

My Shanghai coursework established essential technical competencies that became the bedrock for later advanced work. Digital tools are at the heart of my creative process, enabling me to design, prototype, and communicate effectively. Mastering Figma for UI/UX design taught me precision in digital creation, where I learned to balance aesthetic principles with functional usability through iterative prototyping. Projects in Unity and Maya developed my 3D modeling capabilities, particularly in understanding spatial relationships and lighting effects that would later prove crucial when reconstructing historical trade routes in the Zanzibar Gazette project. Working with Adobe Animate for 2D animation cultivated my sense of motion and timing, while my recent explorations with DaVinci Resolve for film production have deepened my understanding of cinematic language through color grading and timeline editing.

These technical skills exist within a broader ecosystem of digital collaboration. Beyond my immediate field, I’ve observed how digital literacy manifests differently across disciplines while maintaining common threads. My peers in Creative Writing and Film rely on platforms like Medium and Substack for serialized publishing, developing strategies for building readership through an understanding of each platform’s algorithmic content distribution. Film students working with Adobe Premiere Pro and DaVinci Resolve engage with nonlinear editing paradigms that reshape narrative construction. What unites us is our reliance on digital platforms for creation, communication, and collaboration—whether through Discord and Slack for project coordination, Trello for agile workflow management, or Google Workspace for real-time collaborative documentation.

Even traditionally non-digital fields like Art History have transformed through digital mediation. Colleagues now use Google Arts & Culture for virtual curation, employing digital facsimiles that allow new forms of comparative analysis across geographically dispersed collections. This interdisciplinary exposure has shown me how digital literacy transcends specific tools, becoming instead a mode of thinking that adapts technological affordances to diverse intellectual needs. My early experiences with GitHub for version control and Zotero for research organization, though initially approached as technical exercises, laid the groundwork for more complex collaborations by teaching me the importance of systematic documentation and workflow transparency in digital environments.

Transformative Projects at NYU Abu Dhabi

The Arthur Conan Doyle distant reading project marked my first major engagement with Abu Dhabi’s distinctive digital humanities approach. Combining Voyant Tools’ macroanalysis with traditional close reading techniques, our team uncovered how Doyle’s post-Sherlock works surprisingly returned to pre-Holmes thematic concerns rather than maintaining detective fiction conventions. This project revealed the interpretive labor required to bridge computational patterns with literary meaning—when our Posit Cloud visualizations showed unexpected vocabulary distributions in Doyle’s late-period novels, we had to contextualize these findings within Victorian publishing trends, Doyle’s biographical circumstances, and the material conditions of serialized fiction production. The quantitative data provided provocative starting points rather than definitive conclusions, requiring humanistic interpretation to transform raw word frequencies into meaningful literary insights.
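
To make the frequency side of that workflow concrete, here is a minimal Python sketch of relative word-frequency counting of the kind Voyant Tools performs in the browser. It is an illustration rather than our actual pipeline: the titles and file names are placeholders for local plain-text copies, not the corpus we analyzed.

```python
from collections import Counter
from pathlib import Path
import re

# Hypothetical local plain-text copies of Doyle texts; the names are placeholders,
# not the files used in the project.
CORPUS = {
    "The Lost World (1912)": "doyle_lost_world.txt",
    "The Land of Mist (1926)": "doyle_land_of_mist.txt",
}

def relative_frequencies(path, top_n=10):
    """Tokenize a text file and return its most common words as shares of all tokens."""
    text = Path(path).read_text(encoding="utf-8").lower()
    tokens = re.findall(r"[a-z']+", text)
    counts = Counter(tokens)
    total = sum(counts.values())
    return [(word, count / total) for word, count in counts.most_common(top_n)]

for title, path in CORPUS.items():
    print(title)
    for word, freq in relative_frequencies(path):
        print(f"  {word:<15} {freq:.4%}")
```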

The Zanzibar Gazette historical data extraction project pushed these methods further into the realm of colonial archives. Navigating OCR inaccuracies in 1909 shipping records while simultaneously contending with colonial-era terminology required both technical problem-solving and historical sensitivity. Converting fragmented tabular data into structured CSV files involved developing custom parsing scripts while remaining attentive to how our data normalization processes might inadvertently erase historical particularities. Using Kepler.gl for spatial analysis of Indian Ocean trade routes, we had to consider how the software’s default visualization parameters might privilege certain types of maritime movement over others, potentially obscuring regional dhow networks that operated outside formal colonial shipping lanes.
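
The parsing step can be sketched as follows. This is a simplified illustration rather than our actual scripts: the line layout, field names, regular expression, and output file name are all assumptions about what a cleaned OCR record might look like.

```python
import csv
import re

# Hypothetical OCR output for 1909 shipping notices; the real Gazette layout differs.
raw_lines = [
    "S.S. Kilwa   Bombay      14 Mar 1909   general cargo",
    "Dhow Salama  Lamu        l6 Mar 1909   mangrove poles",  # OCR reads "1" as "l"
]

RECORD = re.compile(
    r"^(?P<vessel>.+?)\s{2,}(?P<port>\S+)\s{2,}(?P<date>[\dl]{1,2} \w{3} 1909)\s{2,}(?P<cargo>.+)$"
)

def normalise(record):
    """Fix a common OCR confusion ('l' read for '1') in the day field only."""
    day, rest = record["date"].split(" ", 1)
    record["date"] = day.replace("l", "1") + " " + rest
    return record

with open("gazette_1909_shipping.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.DictWriter(fh, fieldnames=["vessel", "port", "date", "cargo"])
    writer.writeheader()
    for line in raw_lines:
        match = RECORD.match(line)
        if match:
            writer.writerow(normalise(match.groupdict()))
```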

A pivotal moment occurred midway through the Gazette project when we discovered our initial scanning process had accidentally excluded certain vessel types—particularly dhows and warships—from our dataset. This realization transformed my understanding of archival digitization from a technical challenge to an ethical imperative. As we corrected this oversight by rescanning relevant pages and creating separate visualization layers for different vessel classes, I recognized how digitization decisions actively shape historical interpretation. Processing these corrected records through AI tools like Gemini and DeepSeek became an unexpected case study in model capabilities—where Gemini failed to handle basic CSV formatting or expand historical ditto marks consistently, DeepSeek demonstrated sophisticated pattern recognition in inferring missing port names from contextual clues, exemplifying what Underwood (2019) terms “computational hermeneutics.”
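
The ditto-mark problem itself has a simple deterministic fix: carry the last explicit value forward down the column. The sketch below assumes hypothetical column names, row values, and ditto symbols, and is not the pipeline we ran through Gemini or DeepSeek; a pass like this is easy to audit, which is partly why the models’ inconsistent handling of the same step stood out.

```python
DITTO_MARKS = {'"', "''", "do.", "ditto"}

def expand_dittos(rows, column):
    """Replace ditto marks in one column with the last explicit value above them."""
    previous = None
    for row in rows:
        value = row[column].strip()
        if value.lower() in DITTO_MARKS:
            if previous is not None:
                row[column] = previous
        else:
            previous = value
    return rows

# Hypothetical port-of-origin column from the extracted table.
rows = [
    {"vessel": "S.S. Kilwa", "origin": "Bombay"},
    {"vessel": "S.S. Markgraf", "origin": '"'},
    {"vessel": "Dhow Salama", "origin": "do."},
    {"vessel": "S.S. Adjutant", "origin": "Zanzibar"},
]
for row in expand_dittos(rows, "origin"):
    print(row["vessel"], "->", row["origin"])
```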

Ethical Dimensions and Interdisciplinary Growth

My AI-generated cat image classification project brought these technical and critical strands together in examining machine perception. Applying clustering algorithms in Posit Cloud revealed how computer vision systems prioritize formal visual features over cultural context, consistently misclassifying humorous memes as “surreal art” and struggling with non-Western artistic depictions of felines. This mirrored findings from the Gazette project about algorithmic smoothing of historical nuance, confirming Alan Liu’s (2021) assertion that “all data is cooked” through the assumptions embedded in processing pipelines. The project required developing hybrid analysis methods that combined computational pattern detection with cultural contextualization, particularly when dealing with internet meme formats that rely on intertextual references invisible to vision algorithms.
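
Our clustering ran in Posit Cloud, but the basic move can be sketched in Python with scikit-learn: reduce each image to a simple feature vector and group the vectors with k-means. The colour-histogram feature, folder name, and cluster count below are assumptions for illustration, not the features or parameters we actually used.

```python
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.cluster import KMeans

def colour_histogram(path, bins=8):
    """Flattened RGB histogram as a crude visual feature vector for one image."""
    img = np.asarray(Image.open(path).convert("RGB").resize((128, 128)))
    hist = [np.histogram(img[..., ch], bins=bins, range=(0, 255))[0] for ch in range(3)]
    vec = np.concatenate(hist).astype(float)
    return vec / vec.sum()

# Hypothetical folder of AI-generated cat images.
paths = sorted(Path("cat_images").glob("*.png"))
features = np.stack([colour_histogram(p) for p in paths])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
for path, label in zip(paths, kmeans.labels_):
    print(label, path.name)
```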

The Humanitarian OpenStreetMap collaboration further deepened this ethical engagement through studying participation patterns in crowdsourced platforms. Our analysis of GitHub commit histories exposed structural inequalities where approximately 15% of contributors completed nearly 75% of mapping tasks, despite microtasking’s theoretical distribution of labor. This resonated strongly with my earlier partnership experience analyzing Shakespeare’s works, where an apparently equitable division of labor (one partner using Voyant Tools for textual analysis, the other using Posit Cloud for statistical visualization) concealed significant disparities in actual time investment and intellectual labor. These experiences collectively prompted critical reflection on how digital collaboration frameworks—whether in large-scale crowdsourcing or small academic partnerships—structure participation and shape final outcomes through their underlying architectures.
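
The participation analysis reduces to a concentration measure: what share of the work the top slice of contributors accounts for. The sketch below uses invented commit counts (the sort of tally `git shortlog -s -n` produces), not the real Humanitarian OpenStreetMap or partnership data; with real exports, the same function could be pointed at task-completion counts instead of commits.

```python
from collections import Counter

# Hypothetical commit counts per contributor; names and numbers are illustrative only.
commit_counts = Counter({
    "mapper_a": 412, "mapper_b": 318, "mapper_c": 95, "mapper_d": 41,
    "mapper_e": 22, "mapper_f": 9, "mapper_g": 4, "mapper_h": 2,
})

def concentration(counts, top_fraction=0.15):
    """Share of all commits made by the top `top_fraction` of contributors."""
    ranked = sorted(counts.values(), reverse=True)
    top_n = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:top_n]) / sum(ranked)

share = concentration(commit_counts, top_fraction=0.15)
print(f"Top 15% of contributors account for {share:.0%} of commits")
```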

Current Integration and Future Directions

Now synthesizing these cross-campus experiences, I’m developing an oral history digitization project that applies lessons from both the Gazette’s OCR challenges and Doyle’s textual analysis. This work involves creating hybrid human-AI transcription protocols that preserve speech patterns and cultural references often lost in automated processes, while implementing participatory metadata frameworks that allow community members to annotate recordings with contextual information. The technical challenges—developing custom forced-alignment scripts for audio segmentation, designing accessible interfaces for community annotation—are inseparable from the ethical considerations about representation and authority in digital archives.

While I’ve developed substantial proficiency with creative tools (Unity, Figma), research technologies (GitHub, Voyant), and analytical platforms (Posit Cloud, Kepler.gl), I’m particularly eager to explore natural language processing applications for immersive storytelling and examine bias mitigation strategies in generative AI systems. Beyond technical skill acquisition, I want to investigate how digital literacy shapes conceptual frameworks—how computational thinking’s structured problem-solving interacts with phronesis (practical wisdom) in Berry and Fagerjord’s sense. This includes examining how interface design influences cognitive processes, how algorithmic mediation affects historical understanding, and how we might develop more reflexive digital practices that acknowledge technology’s role in shaping knowledge.

Transformative Encounters with Imperfection

Working with the Zanzibar Gazette’s OCR errors (where “Wami River” frequently became “Wani River”) and my cat project’s algorithmic misclassifications cultivated what Nowviskie (2019) calls “resistance to closure”—the ability to recognize digital imperfections as productive sites for critical inquiry rather than mere technical failures. This mindset shift fundamentally changed my approach to tools like Kepler.gl; where I initially viewed them as authoritative visualization solutions, I now understand them as what Drucker (2021) terms “speculative instruments” that require continuous interrogation of their assumptions and defaults. The Gazette’s dhow traffic visualizations, corrected through painstaking cross-referencing with German East Africa records from Encyclopaedia Britannica’s historical editions, became a case study in how digital remediation intersects with historical imagination.
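
The “Wami”/“Wani” confusion also suggests a lightweight safeguard: fuzzy-matching OCR output against a gazetteer of expected place names and flagging near misses for human review. The place list, test strings, and similarity cutoff below are assumptions for illustration.

```python
import difflib

# Hypothetical gazetteer of place names expected in the Gazette's shipping columns.
KNOWN_PLACES = ["Wami River", "Zanzibar", "Bagamoyo", "Dar es Salaam", "Pangani"]

def suggest_correction(ocr_value, cutoff=0.85):
    """Return the closest known place name if the OCR string is a near miss, else None."""
    matches = difflib.get_close_matches(ocr_value, KNOWN_PLACES, n=1, cutoff=cutoff)
    return matches[0] if matches else None

for raw in ["Wani River", "Zanzlbar", "Mombasa"]:
    print(raw, "->", suggest_correction(raw))
```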

Future Directions in Critical DH Praxis

Moving forward, I aim to develop what Risam (2018) calls “postcolonial digital humanities” approaches that address the gaps revealed by my previous work—both the colonial biases exposed in maritime records and the cultural blind spots in AI classification systems. This involves extending the Gazette project’s methodology to oral history preservation while implementing Klein’s framework for “AI systems that surface rather than suppress contextual complexity.” Key challenges include designing interfaces that make algorithmic processes transparent to end-users, developing preservation standards that accommodate cultural specificity, and creating collaborative frameworks that distribute interpretive authority more equitably.

This integrated approach—combining Shanghai’s technical training with Abu Dhabi’s critical frameworks—represents my evolving understanding of digital literacy: not merely as tool mastery but as a form of critical praxis that interrogates technology’s role in knowledge production while harnessing its potential for innovative scholarship. As I continue this journey, I’m particularly interested in how digital tools mediate between quantitative and qualitative modes of knowing, and how we might develop practices that honor both computational scale and humanistic nuance.

References

  • Berry & Fagerjord (2017) - Computational thinking framework
  • Drucker - Interface epistemology (Ch.1), Distant reading (Ch.7), Spatial data (Ch.8)
  • Rockwell & Sinclair - Text analysis methods
  • Chachra - Maintenance labor critique
  • Barton - Ethical AI in archives
  • Li - Computer vision limits
  • Lang & Ommer - Algorithmic bias
  • “Thick Mapping” - Critical cartography

Text refined with AI assistance


Submission Timeline:
Phase 1: 8 February 2025
Revision 1: 22 March 2025
Final Revision: 11 May 2025

