
Critical AI’s main focal point for Fall 2021 is our Ethics of Data Curation workshop (to be held over Zoom), the product of an international collaboration between Rutgers and the Australian National University sponsored by the National Endowment for the Humanities and Rutgers Global. The lead organizers for the series are Katherine Bode and Baden Pailthorpe at ANU and Lauren M.E. Goodlad at Rutgers. All of the workshops and associated talks are free and open to the public, but space is limited, so please register well in advance (see schedule and registration links below).

“Artificial Intelligence” (AI) today centers on the technological affordances of data-centric machine learning. While talk of making AI ethical, democratic, human-centered, and inclusive abounds, it suffers from a lack of interdisciplinary collaboration and public understanding.
At the heart of AI’s social impact is the determinative power of data: the leading technologies derive their “intelligence” from mining huge troves of data (often the product of unconsented surveillance) through opaque and resource-intensive computation. The Big Tech tendency to favor ever-larger models trained on data “scraped” from the internet creates complications of many kinds, including the under-representation of women, people of color, and people in the developing world; the mistaken belief that stochastic text-generating software like GPT-3 truly “understands” natural language; the misguided haste to uphold this technology as the “foundation” on which the future of all AI will be built; and the environmental and social costs of privileging ever-larger models that emit tons of carbon and cost millions of dollars to train.

Our Ethics of Data Curation workshop invites you to join a network of cross-disciplinary scholars including leading thinkers on the question of data curation and data-centric machine learning technologies. Please join the discussion, or if the time doesn’t work for you, watch the recordings of our workshop meetings and join us on Critical AI’s blog for asynchronous conversations.

Note: we are still finalizing the details of some sessions, including the readings, but if you register in advance we will be certain to email you as soon as the links to readings are live!

SCHEDULE AND REGISTRATION LINKS

Meeting 1: STOCHASTIC PARROTS: a comprehensive discussion of the social and technological dimensions of large language models (LLMs).
Th Oct. 7, 5:30 PM EDT (Oct. 8, 8:30 AM AEDT)
Co-facilitators: Katherine Bode (Data-Rich Literary History, ANU) and Matthew Stone (Computer Science, Rutgers)
Primary Reading: Emily M. Bender, Timnit Gebru, et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”
Check out the video and Lauren Goodlad’s blog of this event!

Meeting 2: DATA JOURNALISM: a talk and discussion with Meredith Broussard, Research Director at the NYU Alliance for Public Interest Technology and author of the award-winning book, Artificial Unintelligence: How Computers Misunderstand the World (MIT, 2018). Professor Broussard will be introduced by Caitlin Petre (Journalism and Media Studies, Rutgers).
Th Oct. 14, 5:30 PM EDT (Oct. 15, 8:30 AM AEDT)
Readings: Chapter 4 and Chapter 6 of Artificial Unintelligence: How Computers Misunderstand the World (MIT, 2018).
Check out the video and Rutgers undergraduate Nidhi Salian’s blog of this event!

Meeting 3: BIG DATA: a workshop discussion about two recent publications of importance to data curation and its discontents.
Th Oct. 28, 5:30 PM EDT (Oct. 29, 8:30 AM AEDT)
Co-facilitators: Ella Barclay (Design and Digital Media, ANU) and Britt Paris (Critical Informatics, Rutgers)
Primary Readings: Chapter 2 of Catherine D’Ignazio and Lauren Klein’s Data Feminism and Emily Denton, Alex Hanna, et al.’s “On the Genealogy of Machine Learning Datasets.”
Look out for guest blogger Ryan Heuser’s blog after the event.

Meeting 4: DATA RELATIONALITIES: a talk and discussion with Salomé Viljoen (Columbia Law) on her pioneering work on the relationality of data. Professor Viljoen will be introduced by Michele Gilman (Venable Professor of Law, University of Baltimore).
Th Nov. 11, 5:30 PM EST (Nov. 12, 9:30 AM AEDT)
Primary Reading: “Data as Property?” (2020)
NEW: Check out the video and Kayvon Paul’s blog of this event!

Meeting 5: DATA JUSTICE: an interview and open discussion with Sasha Costanza-Chock (Director of Research & Design, Algorithmic Justice League), with discussants Kate Henne (School of Regulation and Global Governance, ANU), Sabelo Mhlambi (Berkman Klein Center for Internet & Society), and Anand Sarwate (Electrical & Computer Engineering, Rutgers).
Th Dec. 2, 5:30 PM EST (Dec. 3, 9:30 AM AEDT)
Primary Readings: From Design Justice: Community-Led Practices to Build the Worlds We Need (2020): Introduction: “#TravelingWhileTrans, Design Justice, and Escape from the Matrix of Domination”
and Ch.2 “Design Practices: Nothing About Us Without Us”
For suggested further readings, please scroll to the bottom of the events announcement.
Look out for guest blogger Jonathan Calzada’s blog after the event.

Meeting 6: IMAGE DATASETS: a special event on AI & the Arts with Katrina Sluis (Photography and Media Arts, ANU) and Nicolas Malevé (Visual Artist and Researcher, CSNI). Both will be introduced by Baden Pailthorpe (School of Art & Design, ANU).
Th Dec. 16, 5:30 PM EST (Dec. 17, 9:30 AM AEDT)
Primary Readings: Katrina Sluis’s “Photography must be Curated! Part 4: Survival of the Fittest Image” (2019) and Nicolas Malevé’s “On the Dataset’s Ruins” (2020).
For suggested further readings, please scroll to the bottom of the events announcement.
Free and open to the public, but space is limited!
Register here for the talk at 5:30
Register here for the workshop discussion to follow
Look out for ANU undergraduate blogger Madeleine Hepner’s blog after the event.

Save the Date! Emily M. Bender will be joining us on March 24, 2022 as our workshop continues on the related theme, Data Ontologies.   

SUGGESTED FURTHER READINGS

For Meeting 1, 10/7/21, STOCHASTIC PARROTS
Thompson, Greenewald, Lee, and Manso: “Deep Learning’s Diminishing Returns” (2021)
Dodge, Sap, Marasović, et al.: “Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus” (2021)
Abid, Farooqi, and Zou: “Persistent Anti-Muslim Bias in Large Language Models” (2021)
Myers: “Rooting Out Anti-Muslim Bias in Popular Language Model GPT-3” (2021)
Welbl, Glaese, and Uesato: “Challenges in Detoxifying Language Models” (2021)
(Watch) Emily M. Bender’s keynote at the Alan Turing Institute (2021)

For Meeting 2, 10/14/21, DATA JOURNALISM
Chapters 1-3 of Artificial Unintelligence: How Computers Misunderstand the World (MIT, 2018).

For Meeting 3, 10/28/21, BIG DATA
Chapter 1 of Catherine D’Ignazio and Lauren Klein’s Data Feminism

For Meeting 4, 11/11/21, DATA RELATIONALITIES
Viljoen: “Democratic Data: A Relational Theory of Data Governance” (2020)
(Watch) Tech Conversation Series—Democratic data: privacy harms and data governance (2021)

For Meeting 6, 12/16/21, IMAGE DATASETS
Fei-Fei Li: Where Did ImageNet Come From? An invited talk given to the public on the 10th Anniversary of the Dataset at The Photographers’ Gallery, London (2019)
Fei-Fei Li: Large-scale Image Classification: ImageNet and ObjectBank, a Google Tech talk given to computer scientists (2011)
Exhibiting Imagenet at The Photographers’ Gallery (2019), includes a link to a 12 hr YouTube feed of ImageNet organised by synset
Commissioned texts by artists and writers relating to Data/Set/Match programme at The Photographers’ Gallery (2019-2020) at Unthinking Photography

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

PAST EVENTS

March 26, 2021, 12 pm EST

A Conversation with Artist Mimi Onuoha
Moderator: Mindy Seu (Assistant Professor, Art & Design)
Mimi Onuoha, Visiting Arts Professor at NYU

Mimi Onuoha is a Nigerian-American artist creating work about a world made to fit the form of data. By foregrounding absence and removal, her multimedia practice uses print, code, installation, and video to make sense of the power dynamics that result in disenfranchised communities’ different relationships to systems that are digital, cultural, historical, and ecological. Onuoha has spoken and exhibited internationally and has been in residence at Studio XX (Canada), Data & Society Research Institute (USA), the Royal College of Art (UK), Eyebeam Center for Arts & Technology (USA), and Arthouse Foundation (Nigeria, upcoming). She lives and works in Brooklyn.

* * * * * * * * * *

March 5, 2021, 12 pm EST

Thinkpiece Panel II
Moderator: Britt Paris (Rutgers School of Communication & Information)

An interdisciplinary conversation with three leading AI specialists

Dwaipayan Banerjee, Associate Professor of Science, Technology, and Society at MIT

“Decolonizing Computing: An Aesthetic and Demonic Energy”

Dwaipayan Banerjee is an Associate Professor of Science, Technology, and Society (STS) at MIT. He earned his doctorate in cultural anthropology at NYU and has been a Mellon Postdoctoral Fellow at Dartmouth College. He also holds an M.Phil. and an MA in sociology from the Delhi School of Economics. His research is guided by a central theme: how do various kinds of social inequity shape medical, scientific, and technological practices? In turn, how do scientific and medical practices ease or sharpen such inequities? In pursuing these questions, Banerjee pushes science and technology studies into the global south, developing postcolonial and subaltern orientations in the scholarship on science, medicine, and technology.

Optional Reading: “The Aesthetics of Decolonization”


Lily Hu, Applied Mathematics and Philosophy, Harvard; Berkman Klein Center for Internet & Society

“What is ‘objective’ and what is ‘political’ about data and algorithms?”

Debate about whether predictions issued by data-based algorithmic systems come out of a process that is “objective” or “political” sets up a false dichotomy between the empirical and the normative. I will focus, instead, on how theoretical, empirical, and political considerations interact in the creation and use of such systems.

Lily Hu is a PhD candidate in Applied Mathematics and Philosophy at Harvard University. She works in philosophy of (social) science and political and social philosophy. Her current project is on causal inference methodology in the social sciences and focuses on how various statistical frameworks treat and measure the “causal effect” of social categories such as race, and ultimately, how such methods are seen to back normative claims about racial discrimination and inequalities broadly. Previously, she has worked on topics in machine learning theory and algorithmic fairness.

Optional Reading: “What is ‘Race’ in Algorithmic Discrimination on the Basis of Race?”

Safiya Noble, Associate Professor of Information Studies and African American Studies at UCLA

“New Paradigms of Justice: How Knowledge Curators can Respond to the Information Crisis”

Data discrimination is a real social problem, exacerbated by the monopoly status and private interests of a small number of internet companies. This talk offers provocations for imagining and creating new paradigms of justice in the technology sector, helmed by information professionals like librarians, museum curators and knowledge managers.

Safiya Noble is an Associate Professor at UCLA in the Departments of Information Studies and African American Studies. She is the author of a best-selling book on racist and sexist algorithmic bias in commercial search engines, entitled Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press). Dr. Noble is the co-editor of two edited volumes: The Intersectional Internet: Race, Sex, Culture and Class Online and Emotions, Technology & Design. She currently serves as an Associate Editor for the Journal of Critical Library and Information Studies, and is the co-editor of the Commentary & Criticism section of the Journal of Feminist Media Studies. She is a member of several academic journal and advisory boards, including Taboo: The Journal of Culture and Education.

Optional Reading: “Searching for People and Communities”

* * * * * * * * * *

February 26, 2021, 12 pm EST

Keynote Interview on AI Ethics and Global Contexts with Sabelo Mhlambi
Moderator: Mukti Mangharam (Associate Professor, English)

Sabelo Mhlambi in a Keynote Conversation with Alex Guerrero (Philosophy, Rutgers) and Matthew Stone (Computer Science, Rutgers)

Sabelo Mhlambi, Technology and Human Rights Fellow, Harvard Kennedy School Carr Center for Human Rights Policy; Fellow, Harvard Berkman Klein Center for Internet & Society

Sabelo Mhlambi is a computer scientist and researcher whose work focuses on the ethical implications of technology in the developing world, particularly in Sub-Saharan Africa, along with the creation of tools to make Artificial Intelligence more accessible and inclusive to underrepresented communities. His research centers on examining the risks and opportunities of AI in the developing world, and in the use of indigenous ethical models as a framework for creating a more humane and equitable internet. His current technical projects include the creation of Natural Language Processing models for African languages, alternative design of web-platforms for decentralizing data and an open-source library for offline networks.

Optional Reading: “From Rationality to Relationality: Ubuntu as an Ethical and Human Rights Framework for Artificial Intelligence Governance”

* * * * * * * * * *

February 19, 2021, 11 am EST

Thinkpiece Panel I
Moderator: Ellen P. Goodman (Law, Rutgers Institute for Information & Policy Law)

An interdisciplinary conversation with three leading AI specialists

Michele Gilman, Venable Professor of Law at the University of Baltimore, director of the Saul Ewing Civil Advocacy Clinic

“Poverty Lawgorithms: The Economic Injustices of Automated Decision-Making”

As a result of automated decision-making systems, low-income people can find themselves excluded from mainstream opportunities, such as jobs, education, and housing; targeted for predatory services and products; and surveilled by systems in their neighborhoods, workplaces, and schools. These dynamics impede people’s economic security and potential for social mobility, and yet the law provides scant recourse. Thus, we must consider how to challenge opaque and unfair algorithmic systems as part of an economic justice agenda.

Michele Gilman is the Venable Professor of Law at the University of Baltimore School of Law. Professor Gilman teaches in the Civil Advocacy Clinic, where she supervises students representing low-income individuals in a wide range of litigation and law reform matters. She also teaches evidence and federal administrative law. Professor Gilman writes extensively about privacy, poverty, and social welfare issues, and her articles have appeared in journals including the California Law Review, the Vanderbilt Law Review, and the Washington University Law Review. In 2019-2020, she was a Faculty Fellow at Data & Society, where she researched the intersection of privacy law, data-centric technologies, and low-income communities.

Optional Reading: “Poverty Lawgorithms”

Katina Michael, Professor in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering at Arizona State

“Misdirected Dreams? Trusting in AI: the hopes, the needs, and the challenges.”

Technology is edging ever closer to interfacing with the human, or even brazenly replacing the human function. As we pursue dreams of automation through artificial intelligence, the question centers on whether we are engaged in a process of deep techno-utopian distraction or are in fact on the right path to addressing our critical global needs. What are the challenges? How will we ensure a sustainable future?

Katina Michael is a professor at Arizona State University, holding a joint appointment in the School for the Future of Innovation in Society and School of Computing, Informatics and Decision Systems Engineering. She is also the director of the Society Policy Engineering Collective (SPEC) and the Founding Editor-in-Chief of the IEEE Transactions on Technology and Society. Katina is a senior member of the IEEE and a Public Interest Technology advocate who studies the social implications of technology.

Optional Reading: “Big Data: New Opportunities and New Challenges”

Tae Wan Kim, Associate Professor of Business Ethics and Xerox Junior Chair, Carnegie Mellon University

“Flawed Like Us and the Starry Moral Law: A Critical Perspective to Artificial Intelligence.”

AI is an imitation game. “What is a good AI system?” is the same question as “What is a good human being?” In this talk, engaging with Ian McEwan’s Machines Like Me, I invite the audience to rethink what it means to be human in the age of AI.

Tae Wan Kim is Associate Professor of Business Ethics and Xerox Junior Faculty Chair at Carnegie Mellon’s Tepper School of Business. Kim is a faculty member of the Block Center for Technology and Society at Heinz College, and CyLab at Carnegie Mellon’s School of Computer Science. Prior to joining Tepper’s faculty in 2012, Kim earned his PhD in the Department of Legal Studies and Business Ethics at The Wharton School, University of Pennsylvania. Kim is on the editorial boards of Business Ethics Quarterly, Journal of Business Ethics, and Business & Society Review.

Optional Reading: “Taking Principles Seriously: A Hybrid Approach to Value Alignment”

* * * * * * * * * *

February 12, 2021, 12 pm EST

Meredith Whittaker Keynote:
“AI and Social Control”

Moderator: David Pennock (Director, DIMACS)

Meredith Whittaker, Co-founder and co-director, AI Now Institute, Minderoo Research Professor at New York University, Founder of Google’s Open Research Group

This talk examines the limits of AI technologies and their insistence on enforcing normative categories that necessarily exclude “that which doesn’t fit.” It then reviews the political economy driving the AI industry, along with AI’s recent history and capacity for social control, and argues that movements for justice must go beyond corporate-sponsored “ethics” and a fascination with technical mechanisms to adopt more militant tactics that contend with concentrated power and with AI’s capacity to exacerbate inequality and facilitate minority rule.

Meredith Whittaker’s research and advocacy focuses on the social implications of artificial intelligence and the tech industry responsible for it. Prior to NYU, she worked at Google for over a decade, where she led product and engineering teams, and co-founded M-Lab, a globally distributed network measurement platform that now provides the world’s largest source of open data on internet performance. As a long-time tech worker, she helped lead labor organizing at Google, driven by the belief that worker power and collective action are necessary to ensure meaningful tech accountability in the context of concentrated industrial power. She has advised the White House, the FCC, the City of New York, the European Parliament, and many other governments and civil society organizations on artificial intelligence, internet policy, measurement, privacy, and security.

Optional Reading: AI Now Institute’s report “Disability, Bias, and AI”