Abstract
It is near-impossible for casual consumers of images to authenticate digitally altered images without a keen understanding of how to "read" the digital image. Just as Photoshop did for photographic alteration, advances in artificial intelligence and computer graphics have made seamlessly altered video appear real to the untrained eye. The colloquialism used to describe these videos is "deepfakes": a portmanteau of deep-learning AI and faked imagery. The implications of these videos passing as authentic representations matter, especially within rhetorics around "fake news." Yet this alteration software, deployable through both high-end editing software and free mobile apps, remains critically underexamined. One troubling example of deepfakes is the superimposing of women's faces onto pornographic videos. The implication here is a reification of women's bodies as things to be visually consumed, with consent circumvented entirely. This use is confounding considering that the very bodies used to perfect deepfakes were those of men. This paper explores how the emergence and distribution of deepfakes continue to enforce gendered disparities within visual information. It rejects, however, the inevitability of deepfakes, arguing that feminist-oriented approaches to building artificial intelligence and critical approaches to visual information literacy can stifle the distribution of violently sexist deepfakes.
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Pages (from-to) | 32-46 |
| Number of pages | 15 |
| Journal | Open Information Science |
| Volume | 3 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 1 2019 |
| Externally published | Yes |
Keywords
- artificial intelligence
- gender
- information literacy
- media studies
- programming
ASJC Scopus subject areas
- Library and Information Sciences