AI Ethics & Law
Building upon his work as a First Amendment and human rights educator, Dr. Walker teaches “AI Ethics & Law” at the Rutgers Honors College. The seminar grows out of research for his current book project, Moral Imagination: Ethics & Empathy in the Age of AI, in which he explores the public’s questions about artificial intelligence through a series of morality plays. These plays introduce the practice of moral imagination: the ability to place yourself within an ethical dilemma in order to understand competing points of view. Each chapter of the book presents real-life case studies in AI ethics, inviting readers to immerse themselves in complex scenarios that reveal the interplay between technology and human agency.
In writing this book, Dr. Walker draws upon his doctorate in First Amendment Law and his master’s degrees from Columbia University, where he studied how the effective use of technology can enhance students’ moral and intellectual development.
As a specialist in education technology, Dr. Walker has designed two dozen websites and three mobile apps and founded the social learning communities at ReligionAndPublicLife.org, EducationLaw.org, and AbolitionistSanctuary.org. He previously served as an advisor at New York University’s Virtual College, and he raised funds for and hired a team of forty faculty and staff to build blended learning courses for the Newseum Institute in Washington, DC, a museum dedicated to the five freedoms of the First Amendment.
Dr. Walker is a contributor to the Munich Convention on AI & Human Rights and served as a visiting academic at the University of Oxford’s Institute for Ethics in AI and a resident fellow in First Amendment law at Harvard University. Dr. Walker is an Expert AI Trainer for OpenAI’s Human Data Team, providing subject-matter expertise and democratic inputs to ensure the safety and accuracy of frontier models.
He earned professional certificates in AI Ethics from DeepLearning.AI, IBM, the London School of Economics, Lund University in Sweden, the Polytechnic University of Milan in Italy, and the University of Pennsylvania.
AI Ethics & Law: An Honors Seminar
EAV 50:525:155 or GCM 50:525:153
Mondays, September 9 to December 9, 2024, from 12:30 to 3:20 PM
The dawn of artificial intelligence (AI) has raised new ethical and legal questions. These rapidly evolving technologies span the gamut from foundational algorithms—sequential instructions used in all AI systems—to sophisticated machine learning models that leverage neural networks to digest vast amounts of data. Large language models and other deep learning systems build upon these innovations to generate text and images, produce multimedia, and power speech recognition programs. These advancements range from narrow AI for highly specified tasks, such as chatbots, language translation, medical imaging, navigation apps, and self-driving cars, to the theoretical realm of artificial general intelligence, which seeks someday to simulate broad human intelligence. While artificial general intelligence remains aspirational, understanding this distinction is necessary to evaluate the current ethical and legal landscape.
In this interactive research seminar, honors students will hone their critical thinking skills as they examine the multinational, national, local, and corporate regulatory systems that seek to govern the development and deployment of AI technologies. Case studies will reveal moral dilemmas that people experience across the professions, enabling students to analyze the ethical and legal implications of AI systems that seek to emulate human learning, reasoning, self-correction, and perception. This exploration aims to illuminate these rapidly changing innovations and foster a nuanced understanding of these technologies, with an eye toward AI’s influence on humanity and the natural world.
Specifically, students will ask how stakeholders legally define “artificial intelligence” and what core ethics are used to evaluate these definitions. For instance:
- What are the moral and legal relationships between the ethic of human dignity and AI systems?
- How can stakeholders in the AI movement apply principles of explainability and interpretability to ensure transparency and foster public trust—aware that fully explaining how some AI technologies work may not be possible?
- What regulations are necessary to ensure that AI systems protect people’s privacy, curb government surveillance, and guard against abuses of civil liberties and human rights while maximizing freedom and autonomy?
- How can technology prevent harm (nonmaleficence) and do good (beneficence), ensuring that AI systems are just, fair, and equitable?
- What strategies can AI developers use to maximize accuracy to defend democracies, combat mis- and disinformation, and protect against propaganda, authoritarianism, and extremism?
- How can AI prevent, monitor, and mitigate bias and invidious discrimination while promoting inclusion and equality?
- How can AI be used responsibly and with integrity?
- What accountability systems are needed to ensure AI will benefit humanity and the environment and provide a more equitable and sustainable future? What testing, operational, and monitoring tools are necessary to protect people, societies, and the environment?
Given its multinational focus, this course fills the Global Communities (GCM) requirement by examining technology law through a global lens. Given its in-depth review of legal and ethical frameworks, the course also meets the Ethics & Values (EAV) requirement. No previous experience in computer science, philosophy, or law is required.
AI Ethics & Law is made possible by the Chancellor’s Grant for Pedagogical Innovation offered through the Honors College at Rutgers University.