Critical AI’s main focal point for AY 2021–22 is data. Our DATA ONTOLOGIES workshop (via Zoom) begins in February 2022 and follows last semester’s workshop on the ETHICS OF DATA CURATION. Both workshops are the product of an international collaboration between Rutgers and the Australian National University, sponsored by the National Endowment for the Humanities and Rutgers Global. The lead organizers for the series are Katherine Bode and Baden Pailthorpe at ANU and Lauren M.E. Goodlad at Rutgers. All of the workshops and associated talks are free and open to the public, but space is limited, so please register well in advance (see schedule and Zoom registration links below).
“Artificial Intelligence” (AI) today centers on the technological affordances of data-centric machine learning. While talk of making AI ethical, democratic, human-centered, and inclusive abounds, that discourse suffers from a lack of interdisciplinary collaboration and public understanding.
At the heart of AI’s social impact is the determinative power of data: the leading technologies derive their “intelligence” from mining huge troves of data (often the product of unconsented surveillance) through opaque and resource-intensive computation.
Our Data Ontologies workshop invites you to join a network of cross-disciplinary scholars including leading thinkers on data and its relationship to knowledge, world-building, and the nature of being.
Please join the discussion, or if the time doesn’t work for you, watch the recordings of our workshop meetings and join us on Critical AI’s blog for asynchronous conversations.
SCHEDULE AND REGISTRATION LINKS
DATA ONTOLOGIES WORKSHOP + MARCH 4 BOOK EVENT
Meeting 1: MARKET ONTOLOGIES: a facilitated discussion of recent articles on the possibility of general artificial intelligence through reinforcement learning and the moral effects of market-mediated data analysis.
Th Feb. 10, 5:30 PM EST (Feb. 11, 9:30 AM AEDT)
SPECIAL BOOK EVENT: Speculative Communities: Living with Uncertainty in a Financialized World: a book talk with
Aris Komporozos-Athanasiou (Sociology, University College London)
Fri. March 4, 12:00PM EST
Moderator/Introducer: Jamie Pietruska (History, Rutgers)
Respondent: Justin Joque (Visualization Librarian, U. Michigan)
Primary Reading: Komporozos-Athanasiou, “Introduction” to Speculative Communities
Optional Readings: Komporozos-Athanasiou, “Speculating on Chaos in Financialized Capitalism” & “Winning in the Real Fake”
Check out the video and look out for Jamie Pietruska’s blog coming soon.
Meeting 2: SUBMARINE ONTOLOGIES/ONTOLOGIES OF JUDGMENT: a facilitated discussion of The Promise of Artificial Intelligence: Reckoning and Judgment by philosopher and information scientist Brian Cantwell Smith.
Th March 10, 5:30PM EST (March 11, 9:30AM AEDT)
Co-facilitators: Katherine Bode (Data-Rich Literary History, ANU), Christopher Newfield (Director of Research, ISRF), and Mark S.D. Sammons, PhD (NLP Researcher and former Research Assistant Professor, University of Illinois, Urbana)
Primary Readings: “Introduction” and Chapters 3, 5, 6, 9, 10, and 13 from Smith’s The Promise of Artificial Intelligence: Reckoning and Judgment
Check out the video and look out for Christopher Newfield’s blog coming soon!
Meeting 3: THE EVERYTHING IN THE WHOLE WIDE WORLD BENCHMARK: a talk and discussion with Emily M. Bender (Howard and Frances Nostrand Endowed Professor of Linguistics, U. of Washington) sharing research prepared in collaboration with Inioluwa Deborah Raji, Amandalynne Paullada, Emily Denton, and Alex Hanna.
Th March 24, 5:30PM EDT (March 25, 8:30AM AEDT)
Moderator/Introducer: Matthew Stone (Computer Science, Rutgers)
Optional Reading: Paullada, Raji, Bender, Denton, and Hanna, “Data and its (Dis)contents: A Survey of Dataset Development and Use in Machine Learning Research”
Meeting 4: INDIGENOUS ONTOLOGIES: an interview with the creators of an indigenous artworks installation, the Tracker Data Project. Genevieve Bell (School of Cybernetics, ANU) interviews Adam Goodes, Angie Abdilla, and Baden Pailthorpe
Th April 21, 5:30PM EDT (April 22, 7:30AM AEST)
Panel: Adam Goodes (Go Foundation), Angie Abdilla (Art, Architecture, and Design, UNSW), Baden Pailthorpe (School of Art & Design, ANU)
View in Advance: Tracker Data Project Teaser Video
Optional Readings: Goodes, Abdilla, and Pailthorpe, “Ngapulara Ngarngarnyi Wirra (Our Family Tree)” & Powles and Walsh, “Has the Monitoring of Professional Athletes’ Intimate Information Gone too Far?”
Check out the video and look out for Kirsty Anantharajah’s blog after the event!
Meeting 5: DATAFIED ONTOLOGIES: a facilitated discussion of recent articles on the onto-epistemological dimensions of human data assemblages and the “ghost workers” behind the curtain.
Th April 28, 5:30PM EDT (April 29, 7:30AM AEST)
Co-facilitators: Gavin J.D. Smith (Sociology, ANU) and Lori Moon (PhD and NLP Researcher, Elemental Cognition, NYC)
Primary Readings: Lupton, “How Do Data Come to Matter? Living and Becoming with Personal Data” and the “Introduction,” Ch. 1, and Ch. 3 from Gray and Suri’s Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass
Check out the video and look out for Serap Firat’s blog coming soon!
Meeting 6: THE ONTOLOGICAL LIMITS OF CODE: a facilitated discussion of recent publications on algorithmic ethics and the notion of interpretable models.
Th. May 5, 5:30PM EDT (May 6, 7:30AM AEST)
Co-facilitators: Paola Ricaurte Quijano (Media & Digital Culture, Tecnológico de Monterrey) and Sean Silver (English, Rutgers)
Primary Readings: Ch. 2 from Amoore’s Cloud Ethics: Algorithms and the Attributes of Ourselves and Others and Lipton’s “The Mythos of Model Interpretability”
Optional Readings: “Introduction” and Ch. 4 from Cloud Ethics
Suggested Further Reading: Mackenzie & Munster, “Platform Seeing: Image Ensembles and Their Invisualities”
Check out the video and look out for Mark Aakhus’s blog coming soon!
Meeting 7: ADVENTURES IN SCRAPISM: a discussion with Tega Brain and Sam Lavigne, Brooklyn-based artists who create online experiences from massive datasets. Their recent collaborative work explores, in their words, “the artistic potential of web scraping, the commodification of everything, and the digital traces we leave behind.”
Th. May 12, 5:30PM EDT (May 13, 7:30AM AEST)
Moderator: Atif Akin (Art & Design, Rutgers)
Check out the video and look out for Atif Akin’s blog after the event!
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
October – December 2021 Ethics of Data Curation Events:
Meeting 1: STOCHASTIC PARROTS: a comprehensive discussion of the social and technological dimensions of large language models (LLMs).
Th Oct. 7, 5:30 PM EDT (Oct. 8, 8:30 AM AEDT)
Co-facilitators: Katherine Bode (Data-Rich Literary History, ANU) and Matthew Stone (Computer Science, Rutgers)
Primary Reading: Emily M. Bender, Timnit Gebru et al. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜”
Check out the video and Lauren Goodlad’s blog of this event!
Meeting 2: DATA JOURNALISM: a talk and discussion with Meredith Broussard, Research Director at the NYU Alliance for Public Interest Technology and author of the award-winning book, Artificial Unintelligence: How Computers Misunderstand the World (MIT, 2018). Professor Broussard will be introduced by Caitlin Petre (Journalism and Media Studies, Rutgers).
Th Oct. 14, 5:30 PM EDT (Oct. 15, 8:30 AM AEDT)
Readings: Chapter 4 and Chapter 6 of Artificial Unintelligence: How Computers Misunderstand the World (MIT, 2018).
Check out the video and Rutgers Undergraduate Nidhi Salian’s blog of this event!
Meeting 3: BIG DATA: a workshop discussion about two recent publications of importance to data curation and its discontents.
Th Oct. 28, 5:30 PM EDT (Oct. 29, 8:30 AM AEDT)
Co-facilitators: Ella Barclay (Design and Digital Media, ANU) and Britt Paris (Critical Informatics, Rutgers)
Primary Readings: Chapter 2 of Catherine D’Ignazio and Lauren Klein’s Data Feminism and Emily Denton, Alex Hanna, et al.’s “On the Genealogy of Machine Learning Datasets.”
Check out the video and Ryan Heuser’s blog of this event!
Meeting 4: DATA RELATIONALITIES: a talk and discussion with Salomé Viljoen (Columbia Law) on her pioneering work on the relationality of data. Professor Viljoen will be introduced by Michele Gilman (Venable Professor of Law, University of Baltimore).
Th Nov. 11, 5:30 PM EST (Nov. 12, 9:30 AM AEDT)
Primary Reading: “Data as Property?” (2020)
Check out the video and Kayvon Paul’s blog of this event!
Meeting 5: DATA JUSTICE: an interview and open discussion with Sasha Costanza-Chock (Director of Research & Design, Algorithmic Justice League), joined by Kate Henne (School of Regulation and Global Governance, ANU), Sabelo Mhlambi (Berkman Klein Center for Internet & Society), and Anand Sarwate (Electrical & Computer Engineering, Rutgers).
Th Dec. 2, 5:30 PM EST (Dec. 3, 9:30 AM AEDT)
Primary Readings: from Design Justice: Community-Led Practices to Build the Worlds We Need (2020), the Introduction, “#TravelingWhileTrans, Design Justice, and Escape from the Matrix of Domination,” and Ch. 2, “Design Practices: Nothing About Us Without Us”
Check out the video and Jonathan Calzada’s blog of this event!
Meeting 6: IMAGE DATASETS: a special event on AI & the Arts with Katrina Sluis (Photography and Media Arts, ANU) and Nicolas Malevé (Visual Artist and Researcher, CSNI). Both will be introduced by Baden Pailthorpe (School of Art & Design, ANU).
Th Dec. 16, 5:30 PM EST (Dec. 17, 9:30 AM AEDT)
Primary Readings: Katrina Sluis’s Photography must be Curated! Part 4: Survival of the Fittest Image (2019) and Nicolas Malevé’s On the Dataset’s Ruins (2020).
Check out the video and ANU graduate Madeleine Hepner’s blog of this event!
Save the Date! Emily M. Bender will be joining us on March 24, 2022 as our workshop continues on the related theme, Data Ontologies.
SUGGESTED FURTHER READINGS
For Meeting 1, 10/7/21, STOCHASTIC PARROTS
Thompson, Greenewald, Lee, Manso: “Deep Learning’s Diminishing Returns” (2021)
Dodge, Sap, Marasović, et al.: “Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus” (2021)
Abid, Farooqi, and Zou: “Persistent Anti-Muslim Bias in Large Language Models” (2021)
Myers: “Rooting Out Anti-Muslim Bias in Popular Language Model GPT-3” (2021)
Welbl, Glaese, and Uesato: “Challenges in Detoxifying Language Models” (2021)
(Watch) Emily M. Bender’s keynote at the Alan Turing Institute (2021)
For Meeting 2, 10/14/21, DATA JOURNALISM
Chapters 1-3 of Artificial Unintelligence: How Computers Misunderstand the World (MIT, 2018).
For Meeting 4, 11/11/21, DATA RELATIONALITIES
Viljoen: “Democratic Data: A Relational Theory of Data Governance” (2020)
(Watch) Tech Conversation Series—Democratic data: privacy harms and data governance (2021)
For Meeting 6, 12/16/21, IMAGE DATASETS
Fei-Fei Li: Where Did ImageNet Come From? An invited talk given to the public on the 10th Anniversary of the Dataset at The Photographers’ Gallery, London (2019)
Fei-Fei Li: Large-scale Image Classification: ImageNet and ObjectBank, a Google Tech talk given to computer scientists (2011)
Exhibiting Imagenet at The Photographers’ Gallery (2019), includes a link to a 12 hr YouTube feed of ImageNet organised by synset
Commissioned texts by artists and writers relating to Data/Set/Match programme at The Photographers’ Gallery (2019-2020) at Unthinking Photography
* * * * * * * * * *
February – March 2021 Events:
March 26, 2021, 12 pm EDT
A Conversation with Artist Mimi Onuoha
- Mimi Onuoha, Visiting Arts Professor at NYU
Mimi Onuoha is a Nigerian-American artist creating work about a world made to fit the form of data. By foregrounding absence and removal, her multimedia practice uses print, code, installation, and video to make sense of the power dynamics that result in disenfranchised communities’ different relationships to systems that are digital, cultural, historical, and ecological. Onuoha has spoken and exhibited internationally and has been in residence at Studio XX (Canada), Data & Society Research Institute (USA), the Royal College of Art (UK), Eyebeam Center for Arts & Technology (USA), and Arthouse Foundation (Nigeria, upcoming). She lives and works in Brooklyn.
March 5, 2021, 12 pm EST
Thinkpiece Panel II
An interdisciplinary conversation with three leading AI specialists
- Dwaipayan Banerjee, Associate Professor of Science, Technology, and Society at MIT; cultural anthropologist and sociologist, formerly a Mellon Postdoctoral Fellow at Dartmouth College
“Decolonizing Computing: An Aesthetic and Demonic Energy”
Dwaipayan Banerjee is an Associate Professor of Science, Technology, and Society (STS) at MIT. He earned his doctorate in cultural anthropology at NYU and has been a Mellon Postdoctoral Fellow at Dartmouth College. He also holds an M.Phil. and an MA in sociology from the Delhi School of Economics. His research is guided by a central theme: how do various kinds of social inequity shape medical, scientific, and technological practices? In turn, how do scientific and medical practices ease or sharpen such inequities? Banerjee’s ongoing research pushes science and technology studies into the global South, developing postcolonial and subaltern orientations in the scholarship on science, medicine, and technology.
Optional Reading: “The Aesthetics of Decolonization”
- Lily Hu, Applied Mathematics and Philosophy, Harvard; Berkman Klein Center for Internet & Society
“What is ‘objective’ and what is ‘political’ about data and algorithms?”
Debates about whether predictions issued by data-based algorithmic systems come out of a process that is “objective” or “political” set forth a false dichotomy between the empirical and the normative. I will focus, instead, on how theoretical, empirical, and political considerations interact in the creation and use of such systems.
Lily Hu is a PhD candidate in Applied Mathematics and Philosophy at Harvard University. She works in philosophy of (social) science and political and social philosophy. Her current project is on causal inference methodology in the social sciences and focuses on how various statistical frameworks treat and measure the “causal effect” of social categories such as race, and ultimately, how such methods are seen to back normative claims about racial discrimination and inequalities broadly. Previously, she has worked on topics in machine learning theory and algorithmic fairness.
Optional Reading: “What is ‘Race’ in Algorithmic Discrimination on the Basis of Race?”
- Safiya Noble, Associate Professor of Information Studies and African American Studies at UCLA
“New Paradigms of Justice: How Knowledge Curators can Respond to the Information Crisis”
Data discrimination is a real social problem, exacerbated by the monopoly status and private interests of a small number of internet companies. This talk offers provocations for imagining and creating new paradigms of justice in the technology sector, helmed by information professionals like librarians, museum curators and knowledge managers.
Safiya Noble is an Associate Professor at UCLA in the Departments of Information Studies and African American Studies. She is the author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, entitled Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). Dr. Noble is the co-editor of two edited volumes: The Intersectional Internet: Race, Sex, Culture and Class Online and Emotions, Technology & Design. She currently serves as an Associate Editor for the Journal of Critical Library and Information Studies, and is the co-editor of the Commentary & Criticism section of the Journal of Feminist Media Studies. She is a member of several academic journal and advisory boards, including Taboo: The Journal of Culture and Education.
Optional Reading: “Searching for People and Communities”
February 26, 2021, 12 pm EST
Keynote Interview on AI Ethics and Global Contexts with Sabelo Mhlambi
- Sabelo Mhlambi, Technology and Human Rights Fellow, Harvard Kennedy School Carr Center for Human Rights Policy, Fellow, Harvard Berkman Klein Center for Internet & Society
Sabelo Mhlambi is a computer scientist and researcher whose work focuses on the ethical implications of technology in the developing world, particularly in Sub-Saharan Africa, along with the creation of tools to make Artificial Intelligence more accessible and inclusive to underrepresented communities. His research centers on examining the risks and opportunities of AI in the developing world, and in the use of indigenous ethical models as a framework for creating a more humane and equitable internet. His current technical projects include the creation of Natural Language Processing models for African languages, alternative design of web-platforms for decentralizing data and an open-source library for offline networks.
February 19, 2021, 11 am EST
Thinkpiece Panel I
An interdisciplinary conversation with three leading AI specialists
- Michele Gilman, Venable Professor of Law at U of Baltimore, director of the Saul Ewing Civil Advocacy Clinic
“Poverty Lawgorithms: The Economic Injustices of Automated Decision-Making”
As a result of automated decision-making systems, low-income people can find themselves excluded from mainstream opportunities, such as jobs, education, and housing; targeted for predatory services and products; and surveilled by systems in their neighborhoods, workplaces, and schools. These dynamics impede people’s economic security and potential for social mobility, and yet the law provides scant recourse. Thus, we must consider how to challenge opaque and unfair algorithmic systems as part of an economic justice agenda.
Michele Gilman is the Venable Professor of Law at the University of Baltimore School of Law. Professor Gilman teaches in the Civil Advocacy Clinic, where she supervises students representing low-income individuals in a wide range of litigation and law reform matters. She also teaches evidence and federal administrative law. Professor Gilman writes extensively about privacy, poverty, and social welfare issues, and her articles have appeared in journals including the California Law Review, the Vanderbilt Law Review, and the Washington University Law Review. In 2019-2020, she was a Faculty Fellow at Data & Society, where she researched the intersection of privacy law, data-centric technologies, and low-income communities.
Optional Reading: “Poverty Lawgorithms”
- Katina Michael, Professor in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering at Arizona State
“Misdirected Dreams? Trusting in AI: the hopes, the needs, and the challenges.”
Technology is edging ever closer to interfacing with the human, or even brazenly replacing human functions. As we pursue dreams of automation through artificial intelligence, the question is whether we are engaged in a process of deep techno-utopian distraction or are in fact on the right path to addressing our critical global needs. What are the challenges? How will we ensure a sustainable future?
Katina Michael is a professor at Arizona State University, holding a joint appointment in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering. She is also the director of the Society Policy Engineering Collective (SPEC) and the Founding Editor-in-Chief of the IEEE Transactions on Technology and Society. Katina is a senior member of the IEEE and a Public Interest Technology advocate who studies the social implications of technology.
Optional Reading: “Big Data: New Opportunities and New Challenges”
- Tae Wan Kim, Associate Professor of Business Ethics and Xerox Junior Chair, Carnegie Mellon University
“Flawed Like Us and the Starry Moral Law: A Critical Perspective to Artificial Intelligence.”
AI is an imitation game. “What is a good AI system?” is the same question as “What is a good human being?” In this talk, engaging with Ian McEwan’s Machines Like Me, I invite the audience to rethink what it means to be human in the age of AI.
Tae Wan Kim is Associate Professor of Business Ethics and Xerox Junior Faculty Chair at Carnegie Mellon’s Tepper School of Business. Kim is a faculty member of the Block Center for Technology and Society at Heinz College, and CyLab at Carnegie Mellon’s School of Computer Science. Prior to joining Tepper’s faculty in 2012, Kim earned his PhD in the Department of Legal Studies and Business Ethics at The Wharton School, University of Pennsylvania. Kim is on the editorial boards of Business Ethics Quarterly, Journal of Business Ethics, and Business & Society Review.
Optional Reading: “Taking Principles Seriously: A Hybrid Approach to Value Alignment”
February 12, 2021, 12 pm EST
Meredith Whittaker Keynote:
“AI and Social Control”
Moderator: David Pennock (Director, DIMACS)
- Meredith Whittaker, Co-founder and co-director, AI Now Institute, Minderoo Research Professor at New York University, Founder of Google’s Open Research Group
This talk examines the limits of AI technologies and their insistence on enforcing normative categories that necessarily exclude “that which doesn’t fit.” It then reviews the political economy driving the AI industry, AI’s recent history and capacity for social control, and how movements for justice must go beyond corporate-sponsored “ethics” and a fascination with technical mechanisms to adopt more militant tactics that contend with concentrated power and with the capacity of AI to exacerbate inequality and facilitate minority rule.
Meredith Whittaker’s research and advocacy focuses on the social implications of artificial intelligence and the tech industry responsible for it. Prior to NYU, she worked at Google for over a decade, where she led product and engineering teams, and co-founded M-Lab, a globally distributed network measurement platform that now provides the world’s largest source of open data on internet performance. As a long-time tech worker, she helped lead labor organizing at Google, driven by the belief that worker power and collective action are necessary to ensure meaningful tech accountability in the context of concentrated industrial power. She has advised the White House, the FCC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.
Optional Reading: AI Now Institute’s report “Disability, Bias, and AI”