
Building on his work as a First Amendment and human rights educator, Dr. Walker teaches “AI Ethics & Law” at the Rutgers Honors College. The seminar draws on research for his current book project, Moral Imagination: Ethics & Empathy in the Age of AI, in which he harnesses the public’s questions about artificial intelligence through a series of morality plays. These plays introduce the practice of moral imagination, the ability to picture yourself in an ethical dilemma in order to understand competing points of view. Each chapter of the book presents real-life case studies on AI ethics, inviting readers to immerse themselves in complex scenarios that reveal the interplay between technology and human agency.

In writing this book, Dr. Walker draws upon his doctoral training in First Amendment law and his master’s degree from Columbia University, where he studied how the effective use of technology can enhance students’ moral and intellectual development.

As a specialist in education technology, Dr. Walker has designed two dozen websites and three mobile apps, including founding several social learning communities. He previously served as an advisor at New York University’s Virtual College, and he raised funds for and hired a team of forty faculty and staff to build blended learning courses for the Newseum Institute in Washington, DC, a museum dedicated to the five freedoms of the First Amendment.

Later this spring, Dr. Walker will serve as a short-term visiting researcher at the University of Oxford’s Institute for Ethics in AI, where he will study AI ethics and law. He has previously earned professional certificates in AI ethics from IBM, the London School of Economics, Lund University in Sweden, the Polytechnic University of Milan in Italy, and the University of Pennsylvania.

AI Ethics & Law: An Honors Seminar

EAV 50:525:155 or GCM 50:525:153

Mondays, September 9 to December 9, 2024, from 12:30 to 3:20 PM

The dawn of artificial intelligence (AI) has raised new ethical and legal questions. These rapidly evolving technologies run the gamut from foundational algorithms—the sequential instructions used in all AI systems—to sophisticated machine learning models that leverage neural networks to digest vast amounts of data. Large language models and other deep learning systems build upon these innovations to generate language and to power image creation, multimedia, and speech recognition programs. These advancements range from narrow AI designed for highly specific tasks, such as chatbots, language translation, medical imaging, navigation apps, and self-driving cars, to the theoretical realm of general AI, which seeks to someday simulate broad human intelligence. While artificial general intelligence remains aspirational, understanding this distinction is necessary to evaluate the current ethical and legal landscape.

In this interactive research seminar, honors students will hone their critical thinking skills as they examine the multinational, national, local, and corporate regulatory systems that seek to govern the development and deployment of AI technologies. Case studies will reveal moral dilemmas that people experience across the professions, allowing students to analyze the ethical and legal implications of AI systems that seek to emulate human learning, reasoning, self-correction, and perception. This exploration aims to illuminate these rapidly changing innovations and to foster a nuanced understanding of the technologies, with an eye on AI’s influence on humanity and the natural world.

Specifically, students will ask how stakeholders legally define “artificial intelligence” and which core ethical principles are used to evaluate these definitions. For instance:

  1. What are the moral and legal relationships between the ethic of human dignity and AI systems?
  2. How can stakeholders in the AI movement apply principles of explainability and interpretability to ensure transparency and foster public trust—aware that fully explaining how some AI technologies work may not be possible?
  3. What regulations are necessary to ensure that AI systems protect people’s privacy, curb government surveillance, and guard against abusing civil liberties and human rights while maximizing freedom and autonomy?
  4. How can technology prevent harm (nonmaleficence) and do good (beneficence), ensuring that AI systems are just, fair, and equitable?
  5. What strategies can AI developers use to maximize accuracy to defend democracies, combat mis- and disinformation, and protect against propaganda, authoritarianism, and extremism?
  6. How can AI prevent, monitor, and mitigate bias and invidious discrimination and promote inclusion and equality?
  7. How can AI be used responsibly and with integrity?
  8. What accountability systems are needed to ensure AI will benefit humanity and the environment and provide a more equitable and sustainable future? What testing, operational, and monitoring tools are necessary to protect people, societies, and the environment?

Given its multinational focus, this course fills the Global Communities (GCM) requirement by examining technology law through a global lens. With its in-depth review of legal and ethical frameworks, the course also meets the Ethics & Values (EAV) requirement. No previous experience in computer science, philosophy, or law is required.

AI Ethics & Law is made possible by the Chancellor’s Grant for Pedagogical Innovation offered through the Honors College at Rutgers University.