
Building critical AI literacy and fit-for-purpose application of generative AI in Digital Literacy: Screen, Web and New Media

Associate Professor Jennifer Stokes, Dr John Pike and Dr Thomas Folber

Presented at the Online Symposium, 22nd July 2025

Summary of activity:

In this case study, we will outline strategies embedded within the Digital Literacy: Screen, Web and New Media course at UniSA College (Education Futures), which support students to build understanding of ethical use cases and the limitations of GenAI. Our teaching is informed by critical AI literacy, which builds awareness of wider social implications, including ethical dimensions, and explores when, how, and whether to use AI tools (Velander et al., 2024). We have developed approaches that contextualise GenAI alongside human strengths to enable purposeful learning and to support students in navigating this new context.


In this course, students pitch and produce a digital project. Students are supported to develop creativity and critical thinking, and to better understand the value of these human-centred skills in the context of AI (Cropley & Cropley, 2023; Marrone et al., 2024). The course is designed to empower students through purposeful learning, aligned with the ADEPT framework for enabling pedagogy (Stokes, 2023). Students are guided to develop critical AI literacy and the ability to identify where GenAI is fit for purpose, building skillsets in evaluation and judgement (Bearman et al., 2024). Course content models transparent AI application, and each assessment outline provides GenAI use cases. Dialogic approaches (Shor & Freire, 1987) support students to develop tailored digital projects and to discuss AI in the context of their degree and career aspirations. This case study will showcase student work that incorporates generative AI in informed and constructive ways.


Challenges faced:

We have worked to address a number of challenges through this approach. Primarily, we wanted to address more subtle uses of AI through a relational approach that would support students to achieve their learning goals. We identified a few cases where undeclared student AI use was limiting the quality of work. This non-strategic AI use stemmed from low confidence or time pressure, and it meant that students were missing out on aspects of university study and transformative learning. We wanted to open a dialogue on AI use, while emphasising the importance of developing human-centred skills first.

As AI use becomes more difficult to identify, we wanted to explore how we could incorporate it in acceptable, explicit and ethical ways, while also supporting the choice to avoid AI where the use case is not strong. We recognise that if we simply say 'no' to AI use, we risk losing credibility, especially if a student is using AI as a learning tool. We were wrestling with our position as academics learning to use emerging tools at the same time as students, and questioning whether guidance to undertake more manual processes was misguided and would date quickly. We were also constrained by what can be achieved within the limits of assessment design in a single course.


Outcomes & Impact:

Drawing upon the literature and our experience, we explored these new technologies together with students to determine fit-for-purpose uses of AI that complement human-centred skills. Students have appreciated this guidance and now declare AI use in their assessments. We are seeing them make informed selections of relevant AI tools to complement their creative work. We also find that some students conscientiously reject AI for ethical reasons related to content generation, political context, or environmental impact. Whether they choose to use AI in informed ways or reject it based on contextual understanding, our approach supports students to develop a critical understanding of AI, which they can take forward into industry as part of a broader suite of new literacies required to operate successfully in 21st-century society.


Reflection & the Future:

Our approach has increased transparency and accountability around AI use. We engage in conversations about how students use AI and where this may be appropriate for their digital productions. These conversations foster an open environment in which we can better guide students in effective AI use and also learn from their experiences, reflecting authenticity and contemporary workforce expectations.

AI is rapidly developing, and we are honest about the complexities of learning about these emerging technologies together. Our approach supports students to see how AI can enhance or limit their work, as well as the repercussions for creatives more broadly. Supporting students to adopt lifelong learning approaches and to consider ethical, fit-for-purpose use of AI prepares them for contemporary practice and further study.


References

Bearman, M., Tai, J., Dawson, P., Boud, D., & Ajjawi, R. (2024). Developing evaluative judgement for a time of generative artificial intelligence. Assessment & Evaluation in Higher Education, 1–13. https://doi.org/10.1080/02602938.2024.2335321

Cropley, D., & Cropley, A. (2023). Creativity and the Cyber Shock: The Ultimate Paradox. The Journal of Creative Behavior, 57(4), 485–487. https://doi.org/10.1002/jocb.625

Shor, I., & Freire, P. (1987). What is the ‘Dialogical Method’ of teaching? Journal of Education, 169(3), 11–31.

Stokes, J. (2023). Enabling pedagogy. HERDSA Connect, 45(1), 20.

Velander, J., Otero, N., & Milrad, M. (2024). What is Critical (about) AI Literacy? Exploring Conceptualizations Present in AI Literacy Discourse. In A. Buch, Y. Lindberg, & T. Cerratto Pargman (Eds.), Framing Futures in Postdigital Education (pp. 139–160). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-58622-4_8



(c) ASPERA Inc NSW 9884893
