
What is generative artificial intelligence (GenAI)?

GenAI models are technologies trained on datasets that use algorithms to "generate" outputs such as text, images, videos, and audio. These models are evolving rapidly and influencing teaching and learning environments. There are numerous models (e.g., ChatGPT, DALL-E), and their increasing sophistication and accessibility make them easier to use. As a result of these technologies, reimagining course designs and teaching approaches to support student learning has become a critical emphasis in higher education today.

What teaching and learning opportunities does it afford?

GenAI technologies can support teaching and learning in various ways, including, but not limited to:

  • Brainstorming for ideation or topic selection
  • Improving writing and copyediting
  • Developing examples and case studies
  • Creating assessment questions
  • Obtaining ideas for course design and lesson planning
  • Organizing text
  • Summarizing text
  • Creating tailored graphics, media, and audio files
  • Role-playing
  • Analysis and research (e.g., with models trained on reputable sources)

What are its limitations?

In a study of student perspectives, respondents identified several challenges to the use of GenAI; a few of the top concerns are noted below (study cited in Addy et al., 2023):

  • Hindering learning: shortcutting critical learning processes
  • Academic integrity concerns: impermissible usage or cheating (see Rutgers Academic Integrity Policy)
  • Inaccurate information: outputs containing incorrect information, highlighting the importance of fact-checking

What are equity concerns surrounding GenAI?

  • Access: Students with greater financial means may be able to purchase more advanced and robust tools, which could create educational disparities for low-income students
  • Bias: GenAI technologies are trained on specific datasets that may contain biased information, which can harm minoritized groups
  • Algorithmic epistemic injustice: Kay, Kasirzadeh, and Mohamed (2024) use this term to describe representational harms and the creation of epistemic inequities in multilingual scenarios involving GenAI

What are other ethical concerns?

Various ethical concerns have been raised around GenAI tools, with several noted below.

  • Intellectual property/copyright: Questions have been raised regarding copyright as it pertains both to the datasets on which the technologies are trained and to their outputs
  • Energy consumption: Training and using these models requires substantial energy, raising climate and sustainability concerns
  • Privacy: Protections for personal information entered into tools and prompts may vary
  • GenAI company regulation: Regulatory processes may lack transparency and clarity
  • Labor exploitation: In some instances, the work involved in training and maintaining models has been outsourced to low-paid workers

How do I address GenAI usage in my courses?

What about GenAI detectors?

Academic integrity violations are a common concern regarding generative AI, and a first instinct might be to focus on detection. However, various studies have uncovered limitations of GenAI detectors. Perkins et al. (2024) studied six generative AI detectors and found that when a few simple techniques were used to modify the text, detector accuracy decreased to 17%. The researchers indicated that, “The varying performances of GenAI tools and detectors indicate they cannot currently be recommended for determining academic integrity violations due to accuracy limitations and the potential for false accusation which undermines inclusive and fair assessment practices.” Weber-Wulff et al. (2023) studied 21 detectors and concluded that they were neither valid nor reliable at detecting GenAI-generated text. Liang et al. (2023) also found that GenAI detectors can falsely flag students whose first language is not English. Given these considerations, and concerns over a focus on punitive measures, GenAI detectors should be used with caution.

Rethinking assessment

If detection has such challenges, instructors might wonder what other choices they can make. Some might choose to use approaches such as those described previously to take advantage of the opportunities that GenAI can afford their learning environments. Still, depending on teaching contexts and other concerns over GenAI, some instructors might want to design robust assignments that support learning without GenAI. Below are sample approaches in this second category.

  • Non-digital projects: creative or other hands-on assignments that naturally limit the usage of GenAI
    • Arts – painting, drawing, acting, designing a set, etc.
    • Humanities – building a physical model of a historical site
    • Social Sciences – creating a concept map of course topics on poster paper
    • STEM – building a physical model
    • Business – creating a physical prototype of a product
  • Experiential learning: learning beyond the classroom
    • Arts – attending a live performance and writing a reflection about it that applies course concepts
    • Humanities – gathering an oral history and creating video vignettes
    • Social Sciences – interviewing people as part of a class project
    • STEM – performing science demonstrations at a local elementary school
  • Class-specific content
    • Any discipline – writing a response, reflection, or synthesis of specific topics from class discussion
  • Personal application
    • Any discipline – applying course material to a specific experience
  • Social annotation: using applications such as Perusall to support more robust online engagement. Examples of course materials that could be annotated based on specific prompts include:
    • Arts – a play or musical score
    • Humanities – a novel or poem
    • Social Sciences – a research study
    • STEM – a technical report
    • Business – a business plan
  • Scaffolding: breaking larger assignments into parts with milestones and check-ins to observe student processes and give feedback. Consider integrating a journal where students reflect on their learning along the way. Below are sample projects.
    • Humanities – an essay
    • STEM – laboratory report
    • Business – business plan

Sample assignments

For those interested in assignments that address generative AI, consider the following repositories:

AI Pedagogy Project

TRAIL (Teaching Repository for AI-Infused Learning)

Resources

Bowen, J. A., & Watson, C. E. (2024). Teaching with AI: A practical guide to a new era of human learning. JHU Press.

Long, L. Cyborgs and Centaurs: Academic Writing in the Age of Generative AI. Open Press Books.
https://cwi.pressbooks.pub/longenglish102/

References

Addy, T., Kang, T., Laquintano, T., & Dietrich, V. (2023). Who Benefits and Who is Excluded?: Transformative Learning, Equity, and Generative Artificial Intelligence. Journal of Transformative Learning, 10(2), 92-103.

Furze, L., Perkins, M., Roe, J., & MacVaugh, J. (2024). The AI Assessment Scale (AIAS) in action: A pilot implementation of GenAI supported assessment. arXiv preprint arXiv:2403.14692.

Kay, J., Kasirzadeh, A., & Mohamed, S. (2024, October). Epistemic injustice in generative AI. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (Vol. 7, pp. 684-697).

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7).

Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). Simple techniques to bypass GenAI text detectors: implications for inclusive education. International Journal of Educational Technology in Higher Education, 21(1), 53.

Weber-Wulff, D., Anohina-Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero-Dib, J., Popoola, O., … & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19(1), 26.

Licensed under a Creative Commons CC BY-NC-SA license.