Program Stream: Content as Data – Archival Approaches

There is growing interest in viewing “content as data.” Generally speaking, work in this space explores what becomes possible when archives and cultural heritage organizations begin to think about, prepare, describe, and provide access to content in ways that make it amenable to computational use. This program stream focuses on work being done to manage digital content from a data perspective. Come and see emerging technologies that are shaping the way we manage and present our digital archives.

11:15 AM – 12:15 PM (Pacific)
Hip Hop and Human-Computer Interaction with Citizen DJ
Andrea Leigh, Library of Congress
Brian Foo, American Museum of Natural History
Jaime Mears, Library of Congress

Citizen DJ is an experimental web browser application for creating hip hop music with Library of Congress free-to-use sound and moving image collections. By embedding these historic materials in hip hop music, users are encouraged to generatively and critically engage with A/V archives. Brian Foo, data artist, former b-boy, and a 2020 Library of Congress Innovator in Residence, will discuss the philosophy and development of Citizen DJ. The open source tool uses machine learning to automatically generate sonically diverse samples from hundreds of hours of material. Jaime Mears, a Senior Innovation Specialist at LC Labs, will discuss the slow work of collection identification and rights clearance for explicit commercial use, as well as some of the broader implications for LC Labs machine learning experiments. The session will also include a brief tutorial and beat-making session with attendees.
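
As a rough illustration of how a sample-extraction step like this can work, the sketch below (a generic approach using librosa and scikit-learn, not the actual Citizen DJ pipeline) detects onsets, describes each clip with MFCC features, and clusters the clips to keep one representative per cluster:

    # Illustrative only: pull sonically diverse clips from a long recording.
    import librosa
    import numpy as np
    from sklearn.cluster import KMeans

    def diverse_samples(path, n_samples=16, clip_seconds=1.0):
        y, sr = librosa.load(path, sr=22050)
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
        clip_len = int(clip_seconds * sr)
        clips = [y[o:o + clip_len] for o in onsets if o + clip_len <= len(y)]
        # Summarize each clip as the mean of its MFCC frames.
        feats = np.array([librosa.feature.mfcc(y=c, sr=sr, n_mfcc=13).mean(axis=1)
                          for c in clips])
        labels = KMeans(n_clusters=min(n_samples, len(clips)), n_init=10).fit_predict(feats)
        # Keep the first clip found in each cluster as a representative sample.
        picks = {}
        for clip, label in zip(clips, labels):
            picks.setdefault(label, clip)
        return list(picks.values())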

1:00 PM – 2:00 PM (Pacific)
Content as Data Stream: The First Heritage Video Stored on DNA – A Case Study About the Future of Digital Storage

Jan Müller, National Film and Sound Archive of Australia
Yasmin Meichtry, Olympic Foundation for Culture and Heritage

It is projected that by 2025, humanity will have outgrown its capacity to store the large volumes of data it creates. In just five years, storing data using spinning or solid-state disk drives will no longer be sustainable, economically viable, or environmentally responsible. DNA storage has the potential to vastly exceed the capacity of disk and tape while occupying dramatically less physical space, requiring far less energy, and offering greatly increased stability. This is the first time two renowned international institutions, the International Olympic Committee and the National Film and Sound Archive of Australia, have used DNA to store video data, and it is a world first for archives. In this presentation, Yasmin Meichtry, Associate Director of the Olympic Foundation for Culture and Heritage, and Jan Müller, CEO of the NFSA, will demonstrate the potential of DNA as an archival storage mechanism. The presentation follows the joint pilot project to store a video on DNA. The chosen video represents a significant moment in both Australian and Olympic history: Cathy Freeman's gold-medal-winning run at the Sydney 2000 Olympics, a meaningful and appropriate part of both the IOC's and the NFSA's archives, defining the culture and values of Australia and the Olympics. Step by step, the IOC and NFSA will present the nature and background of the partnership, the process of synthesising and preserving the video on DNA (in collaboration with a technology partner and a university), and finally show what a digitally preserved video on DNA looks like.
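
As a toy illustration of the underlying idea (not the encoding scheme used in the pilot, which also requires error correction, indexing, and sequence constraints), the sketch below maps each pair of bits in a file to one of the four nucleotides:

    # Toy bits-to-bases mapping: 2 bits per nucleotide.
    BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BITS_FOR_BASE[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    assert decode(encode(b"Sydney 2000")) == b"Sydney 2000"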

2:15 PM – 2:45 PM (Pacific)
Content as Data Stream: DeepFake Detection in the Age of Misinformation
David Güera, Purdue University
Emily Bartusiak, Purdue University

The prevalence of inauthentic multimedia continues to rise. As machine learning tools and editing software improve and become easier to use, they enable almost anyone with a computer to alter images, videos, and audio. Some tools assist users in swapping one person's face for another in an image or video. Others allow users to alter an existing audio track or create a new audio track of a person speaking. The tools produce believable imagery and audio that deceive users into believing they are real. Such digitally manipulated content is referred to as DeepFakes. Although DeepFakes may be used for entertainment and comedy, they can also be used for nefarious purposes. To prevent dissemination of misleading information, we developed a set of methods to detect DeepFakes. We use artificial intelligence to analyze multiple media modalities – pixels, audio signals, and metadata – to determine the authenticity of the content.
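
As a minimal sketch of the general multi-modal idea (the weights, threshold, and score sources below are placeholders, not the presenters' detectors), per-modality manipulation scores can be fused into a single authenticity decision:

    # Illustrative late fusion of per-modality manipulation scores.
    from dataclasses import dataclass

    @dataclass
    class ModalityScores:
        pixels: float    # probability the visual track is manipulated
        audio: float     # probability the audio track is manipulated
        metadata: float  # probability the container metadata is inconsistent

    def fuse(scores: ModalityScores, weights=(0.5, 0.3, 0.2), threshold=0.5) -> bool:
        """Return True if the weighted evidence suggests the clip is a DeepFake."""
        combined = (weights[0] * scores.pixels
                    + weights[1] * scores.audio
                    + weights[2] * scores.metadata)
        return combined >= threshold

    print(fuse(ModalityScores(pixels=0.9, audio=0.4, metadata=0.2)))  # True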

11:15 AM – 12:15 PM (Pacific)
Content as Data Stream: Cloud Computing and Storage Workflows for Digital Media
Jim Duran, Vanderbilt Television News Archive
Steve Davis, Vanderbilt Television News Archive
Dana Currier, Vanderbilt Television News Archive
Nathan Jones, Vanderbilt Television News Archive

The Vanderbilt Television News Archive (VTNA) is innovating and iterating several of its core workflows by adopting cloud computing and storage for more reliable and streamlined digital media management. Using Amazon Web Services, Trint, and OrangeLogic, the VTNA has switched from several analog or manual tasks to partial or complete automation. This panel consists of practitioners who have learned these new tools and completely transformed their previous workflows. The panel will discuss how serverless functions, automated speech recognition, machine learning, and modern DAMS have made their work easier while also presenting new challenges.
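
As a minimal sketch of what one serverless step in such a workflow can look like (assuming an S3-triggered AWS Lambda function and a placeholder output bucket; this is not the VTNA's actual code), a new upload can automatically start an Amazon Transcribe job:

    # Illustrative Lambda handler: start speech recognition for a new upload.
    import boto3

    transcribe = boto3.client("transcribe")

    def handler(event, context):
        record = event["Records"][0]["s3"]
        bucket = record["bucket"]["name"]
        key = record["object"]["key"]
        transcribe.start_transcription_job(
            TranscriptionJobName=key.replace("/", "-"),
            Media={"MediaFileUri": f"s3://{bucket}/{key}"},
            MediaFormat="mp4",
            LanguageCode="en-US",
            OutputBucketName="vtna-transcripts-example",  # placeholder bucket name
        )
        return {"status": "transcription started", "key": key}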

1:00 PM – 2:00 PM (Pacific)
Content as Data Stream: Using Machine Learning for Real-time Translation, Transcription, and Captioning Workflows
Shaun Lile, Senior Solutions Architect at Amazon Web Services

Since the start of the pandemic, AWS has created a real-time translation, transcription, and captioning workflow for medical training videos from the World Health Organization. The presentation will discuss the development of that workflow and its relationship to machine learning and Rekognition.
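
As a simplified sketch of the captioning step (batch rather than streaming, with placeholder segments and language codes; not the presenter's actual workflow), timed transcript segments can be translated with Amazon Translate and written out as SRT captions:

    # Illustrative: translate transcript segments and emit SRT-style captions.
    import boto3

    translate = boto3.client("translate")

    def to_srt(segments, source_lang="en", target_lang="es"):
        lines = []
        for i, seg in enumerate(segments, start=1):
            result = translate.translate_text(
                Text=seg["text"],
                SourceLanguageCode=source_lang,
                TargetLanguageCode=target_lang,
            )
            lines.append(f"{i}\n{seg['start']} --> {seg['end']}\n"
                         f"{result['TranslatedText']}\n")
        return "\n".join(lines)

    # Example usage with a hypothetical transcript segment:
    captions = to_srt([{"start": "00:00:01,000", "end": "00:00:04,000",
                        "text": "Wash your hands thoroughly."}])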

2:15 PM – 2:45 PM (Pacific)
Content as Data Stream: AI Techniques for Classification and Filtering over A/V Assets
Kyeongmin Rim, Brandeis University
Victoria Steger, Brandeis University
James Pustejovsky, Brandeis University

The process of making archival content available for access through online platforms can be time- and money-intensive. High quality often comes with a high cost, and open source tools that are well maintained and produce strong results are rare. This presentation will introduce one tool from a set of tools that aims to address this problem. We will cover our recent work on analyzing audio data, specifically detecting acoustic elements in the audio and using them as a filter for speech recognition (speech-to-text) software. Its development and results on real and test data will be discussed, as well as its potential use with the broader set of tools.
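
As a minimal sketch of the general filtering idea (the classify_window() heuristic below is a stand-in for a trained acoustic model, not the presenters' tool), a recording can be split into short windows and only the likely-speech windows passed on to a speech-to-text engine:

    # Illustrative pre-filter: keep only windows that look like speech.
    import librosa
    import numpy as np

    def speech_windows(path, window_seconds=2.0, energy_threshold=0.01):
        y, sr = librosa.load(path, sr=16000)
        win = int(window_seconds * sr)
        kept = []
        for start in range(0, len(y) - win, win):
            chunk = y[start:start + win]
            if classify_window(chunk, energy_threshold):
                kept.append((start / sr, chunk))
        return kept  # (timestamp, audio) pairs to pass to an ASR engine

    def classify_window(chunk, energy_threshold):
        # Placeholder heuristic: treat low-energy windows as non-speech.
        return float(np.mean(chunk ** 2)) > energy_threshold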

2:45 PM – 3:15 PM (Pacific)
Content as Data Stream: The Wrap
John Polito, Audio Mechanics
Randal Luckow, HBO
