Signposting

My name is Jasper (Shuoyang Zheng). This is an outline of my PgCert and my ARP project.

I’m an Associate Lecturer at UAL Creative Computing Institute (CCI). I teach the Mathematics and Statistics for Data Science unit on BSc (Hons) Data Science and AI, and the Exploring Machine Intelligence unit on MSc Creative Computing. I’m also pursuing a PhD in AI and music technology.

I hope to use the PgCert as a point of reflection: to reflect on my identity as a researcher (my PhD work), teacher (my teaching at CCI), student (being a research student), and artist (being a music composer and producer), and to situate myself in this state of disequilibrium.

In my Action Research Project, I look into students’ use of Generative AI (GenAI) coding tools for learning and assignments in Creative Computing. In particular, I focus on creating practical guidelines on the use of AI coding tools for BSc Data Science and AI students, to reduce the barrier to AI literacy in a technical programming context.

List of ARP blog posts:

  1. Rationale
  2. Relevant Documents and Contextual Challenges
  3. Research Question and Research Methods
  4. Ethical Action Plan and Participant-Facing Documents
  5. The GenAI Checklist and the Rubber Duck Chat Mode
  6. Project Findings and Reflections
  7. Presentations
  8. References

Posted in ARP | Leave a comment

Reflections on GenAI Coding Tools in Creative Coding Curriculums


TL;DR: As part of my PgCert Academic Practice project, I made a customised chatbot in VS Code and a checklist for citing AI coding tools in coding assignments, aiming to reduce technical barriers to AI literacy and to support fair assessment. I end with some practical considerations on how to actually embrace AI tools in coding assessments.

As a researcher / creative coder in AI and music technologies myself, AI coding tools like GitHub Copilot have become part of my everyday programming workflow.

However, as a lecturer, the explosive arrival of AI tools brings me more challenges than opportunities. In the last academic year, this shift has been particularly pronounced: students are no longer using AI only for writing text and essays, but for core technical practices: coding, debugging, setting up environments, explaining error messages, and so on.

Trying to articulate the problem space

The debate on acceptance and resistance toward AI (in particular, coding tools like Copilot) in technical curriculums is polarising:

  • (a) The ability to use AI coding tools is likely to be valued in future industry workplaces. With employability in mind, some hope to encourage the use of AI tools in students’ programming workflows (Shukla et al., 2025), and
  • (b) The misuse of AI coding tools (e.g., shortcut learning, misconduct) has to be avoided, and we must ensure that the learning outcomes regarding technical programming skills can be accurately assessed.

Most critically, in the context of technical coding environments, the concept of AI literacy can embed inequalities and assumptions (Prabhudesai et al., 2025). The ability to curate and review AI’s output assumes a level of technical coding skill in the first place, and not all learners are confident enough to question the output of AI, especially when overwhelmed by the sheer volume of code Copilot tends to produce, on top of all the media hype.

Looking at the university-wide Student guide to generative AI (and several other policy documents; see more in my blog post), a recurring issue is that most AI guidance is simply not tailored to writing code. Asking students to “keep a log of AI use” sounds reasonable until you ask what that actually means in practice. When should I log something? What level of detail is appropriate? Is fixing a syntax error worth reporting? What does a good log look like? It becomes clear that generic guidance alone is not enough.

Two interventions in a test unit

During the PgCert Action Research Project cycle, I led the BSc unit Critical 1: Mathematics and Statistics for Data Science. I took it as a test unit for two interventions around the use of GenAI for coding:

1. A “Rubber Duck” Custom Agent in GitHub Copilot

A customised agent (previously called a chat mode in 2025) is a feature in Copilot that allows programmers to tailor Copilot’s behaviour to fit specialised roles. Quoting VS Code’s documentation: “a planning agent could instruct the AI to collect project context and generate a detailed implementation plan”.

Inspired by rubber duck debugging, in which programmers think aloud to debug, I asked: what if we could create a customised chatbot that acts as a rubber duck? That is, a chat mode in Copilot that encourages reflection, breaks tasks down, negotiates plans, works step-by-step, and prompts users to review and revise.

A detailed write-up of the prototype can be found in my blog post. Here’s a side-by-side of the same prompt and context: on the left, the default Copilot agent; on the right, the rubber duck:

A side-by-side comparison of interactions with the default chat mode (on the left) and interactions with the Rubber Duck chat mode (on the right) I made, with annotations.

2. A GenAI Checklist (Chat Log Template)

Students who chose to use AI coding tools were asked to submit a structured reflection alongside their code.

The checklist is tailored from the UAL Student guide to generative AI. It includes detailed guidelines on how to use AI coding tools in an academic context, and on what “keeping track of processes” might look like in coding and programming with AI. Practically, it gives a set of template questions:

  • What did you ask the AI to do?
  • What did it attempt or add?
  • What did you keep, change, discard — and what did you learn?

The intent was not to document everything, but to create a pause: a moment where students step back from the AI interaction and articulate their own judgement.
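To make this concrete, here is a hypothetical filled-in entry following the template above. The task and the AI behaviour are invented for illustration, not taken from any student’s submission:

```
What did you ask the AI to do?
    Asked Copilot to vectorise a nested loop computing pairwise distances
    in my NumPy notebook.
What did it attempt or add?
    It replaced the loop with a broadcasting expression and added a docstring.
What did you keep, change, discard — and what did you learn?
    Kept the broadcasting version after checking it against the loop on a
    small test array; discarded the docstring (it was inaccurate). I learned
    how np.newaxis changes array shapes.
```

Note how the value sits in the last answer: the entry records a judgement, not a transcript.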

Rollout of the interventions and some discussion with students

Both tools were introduced through a one-hour GenAI workshop in class time, combining discussion, instructions, and some hands-on exercises on working with Copilot.

The GenAI workshop with students in the Math&Stats unit.

Miro responses to the question “What did you use AI for? (What was the model? What questions/prompts did you ask? How did it go?)”

Miro responses to the question “What excites you about using AI in the university? What worries you about using AI in the university?”

A note on research ethics: the project was conducted in keeping with the PgCert Ethical Action Plan. Students who chose to participate anonymously were properly briefed and debriefed, with a participant information sheet and a consent form.

Reflections on the results

A summary of findings that stood out:

  • First, a number of students opted not to use AI, and their work did not suffer. In fact, many demonstrated strong engagement with the course material. This reiterates the point: encouraging responsible use of AI should not penalise those who choose not to use it.
  • Second, students are worried about misconduct. This again suggests the need for transparency around assessment, i.e., clarity about how AI use will be interpreted and marked: a common ground between the marker and the learner.
  • Third, reflection is more valuable than completeness. The most effective AI citations were not the most detailed, but the most thoughtful. A “good” chat log should show how learners evaluate AI output, question why something worked (or didn’t), and link decisions back to learning outcomes. Conversely, asking students to log every trivial AI interaction is stressful and burdensome. Treat the log as an additional channel for students to demonstrate assessment criteria, not as a forensic tool.

On top of that, the perception of AI varies. Students’ resistance to AI, grounded in its consequences and risks, should be acknowledged. In this respect, if tools like Copilot really are being used as productivity tools, embedded in technical practices and actually enhancing efficiency, how do we ensure fair assessment? That is, for those who opt out of using AI, how do we make sure they don’t feel left behind, and that they are marked equally? These aspects should be communicated in the guidelines.

Some thoughts on broader issues

I argue that a single checklist, guideline, or tailored intervention is not sufficient to reduce the barrier to AI literacy. Instead, such interventions should belong to a larger change. In keeping with the criticality embedded across many other assessment criteria, the critical use of AI needs long-term practice. If AI literacy is a skill, it needs space to be practised, discussed, and revisited across units, just like many other technical skills.

Some guiding questions that I raise, in the context of creative coding pedagogy:

  • Can we incorporate “coding with AI” as a technical skill that needs to be learned and practised?
  • Can we demonstrate what a good programmer is in the era of coding with chatbots?
  • Can we adapt the assessment criteria to better accommodate the use of GenAI, for example by assessing the skill of prompting, the skill of curating, and the skill of providing context to chatbots?

References

  • Prabhudesai, S., Kasi, A.P., Mansingh, A., Das Antar, A., Shen, H., Banovic, N., 2025. “Here the GPT made a choice, and every choice can be biased”: How Students Critically Engage with LLMs through End-User Auditing Activity, in: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI ’25. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3706598.3713714
  • Shukla, P., Naik, S., Obi, I., Backus, J., Rasche, N., Parsons, P., 2025. Rethinking Citation of AI Sources in Student-AI Collaboration within HCI Design Education, in: Proceedings of the 7th Annual Symposium on HCI Education, EduCHI ’25. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3742901.3742909
Posted in Uncategorised | Leave a comment

Protected: 7. Presentation Slides


Posted in ARP, Blog Posts | Enter your password to view comments.

Protected: 6. Project Findings and Reflections


Posted in ARP, Blog Posts | Enter your password to view comments.

Protected: 4. Ethical Action Plan and Participant-Facing Documents


Posted in ARP, Blog Posts | Enter your password to view comments.

5. The GenAI Checklist and the Rubber Duck Chat Mode

This blog post describes the two interventions as technology probes (see the blog post Research Question and Research Methods), and demonstrates how they work.

5.1. GenAI Checklist

Inspired by the “keeping track” and “showing processes” aspects highlighted by multiple relevant documents (see the blog post Relevant Documents and Contextual Challenges), I created a GenAI Checklist for the Math&Stats unit. The aim is to provide detailed guidelines on how to use AI coding tools in an academic context, and on what “keeping track of processes” might look like in coding and programming with AI.

Specifically, it uses the following questions as a template to log interactions with Copilot:

  • What did you ask the AI to do?
  • What did the AI attempt? What was added to the code/notebook by the AI?
  • What did you decide to keep, change, discard? What did you learn from this?

The checklist was rolled out as a mandatory submission element if any AI coding tools are used in the assessments.

5.2. “Rubber Duck” Chat Mode

For context: Visual Studio Code is the code editor used by students at CCI, and it integrates the AI coding assistant GitHub Copilot (by Microsoft; the educational version is free for all students at UAL). A typical workspace is shown in the screenshot below: the left panel is for writing and running code, and the right panel is Copilot’s chat area. Copilot has access to the entire workspace and its files, and it is able to directly modify, add, or delete the code in the left panel.

Figure 1. A screenshot of a typical workspace of Visual Studio Code, and Copilot chatbot (the panel on the right) used by students at CCI.

Customised chat mode is a feature in Copilot that allows programmers to tailor the behaviour of Copilot to fit specialised roles (Visual Studio Code, 2026). In practice, programmers create a set of “pre-task instructions” to define how the AI should operate. Quote from Visual Studio Code’s documentation:

“For instance, a planning mode could instruct the AI to collect project context and generate a detailed implementation plan, while a code review mode might focus on identifying security vulnerabilities and suggesting improvements.”

I created a “Rubber Duck” mode that instructs Copilot to support learning, break down the task, communicate and negotiate plans with the user, implement the solution step-by-step, and always prompt the user to review what was done and make changes. The full configuration file can be found here (requires UAL login).
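For readers unfamiliar with the feature: a custom chat mode in VS Code is defined in a Markdown file with YAML front matter (a `*.chatmode.md` file) whose body holds the pre-task instructions. The sketch below is a heavily abridged, hypothetical version of what a Rubber Duck configuration might contain; it is not the actual file linked above:

```markdown
---
description: A rubber-duck companion that plans with the user before coding
---

You are a rubber-duck learning companion, not an autocomplete engine.
For every request:

1. Restate the task in your own words and ask the user to confirm it.
2. Break the task into small steps, noting the knowledge each step requires.
3. Propose a plan and negotiate it with the user before writing any code.
4. Implement one step at a time, pausing so the user can review and revise.

Never deliver a complete solution in a single response.
```

The key design choice is the last line: the default agent optimises for finishing the task, whereas the rubber duck deliberately withholds completion to keep the learner in the loop.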

The image below shows a side-by-side comparison of the default chat mode and the Rubber Duck chat mode. Rubber Duck offers a more detailed account of how to approach the task, how the approach maps to knowledge, what will be done, and it negotiates with the user about how to proceed.

Figure 2. A side-by-side comparison of interactions with the default chat mode (on the left) and interactions with the Rubber Duck chat mode (on the right) I made, with annotations.

These two interventions were rolled out in the Math&Stats unit’s code repository.

Next step

The next blog post, 6. Project Findings and Reflections, presents my findings, reflections, and some directions for future work.

Reference list at: https://jaspersz.myblog.arts.ac.uk/2025/11/20/arp-references/

Posted in ARP, Blog Posts | Leave a comment

3. Research Question and Research Methods

The motivation of the research question is threefold:

  1. The need to tailor and adapt guidelines to specific creative disciplines.
  2. The need to communicate clear expectations to students for the use of AI in advance of assessments.
  3. The need to reduce technical barriers for learners to be AI-literate (to demystify AI).

Research Question

How can we create practical guidelines on the use of AI coding tools for BSc Data Science and AI students at the Creative Computing Institute, to reduce the barrier to AI literacy in a technical programming context?

Research Method

I am inspired by the use of technology probes in action research (Hutchinson et al., 2003), that is, creating a digital intervention and then evaluating users’ responses to it in their own environment (Madden et al., 2014).

In this case, I use a GenAI Checklist and a “Rubber Duck” Chat Mode as probes introduced to the learners in a field-testing setting, taken into their own workspace:

  1. A GenAI Checklist tailored from the UAL Student Guide to Generative AI. It aims to provide guidelines on the responsible use of AI coding assistants, including more detailed instructions on:
    • How to use AI coding tools in an academic context,
    • How to keep track of code generation/editing done by AI,
    • How to add the “Generative AI Disclosure” in coding.
  2. A “Rubber Duck” Chat Mode will be implemented in GitHub Copilot (the AI coding tool integrated into the software used by CCI students) and provided to students in the unit. A Chat Mode is a set of “pre-task instructions” for Copilot, tailoring its behaviour by setting an overall goal. I hope this can better support learners, and potentially help students identify what knowledge/skills are needed.
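On the disclosure point above: the checklist itself defines the exact format, but as a hypothetical illustration (the format and wording here are my own invention, not the official template), an in-code disclosure might be a short comment block above the AI-assisted section:

```python
# --- Generative AI Disclosure (hypothetical format, for illustration) ---
# Tool:   GitHub Copilot (chat)
# Prompt: "Write a function that returns the sample mean of a list"
# Kept:   The overall structure; I added the empty-input guard myself
#         after testing, and renamed the variables.
def sample_mean(values):
    """Return the arithmetic mean of a non-empty sequence of numbers."""
    if not values:
        raise ValueError("values must be non-empty")
    return sum(values) / len(values)
```

Keeping the disclosure next to the affected code, rather than in a separate document, makes it easy for markers to compare the claim against the code itself.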

A detailed write-up of the above two elements is given in a separate blog post: The GenAI Checklist and the Rubber Duck Chat Mode.

Participants

Students in the BSc Year 1 Mathematics and Statistics for Data Science (Math&Stats) unit who gave consent to join the research project. The unit runs from Sep 2025 to Jan 2026.

Dissemination (roll out) of the technology probes

A GenAI workshop (~60 mins) will take place during Math&Stats class time. Workshop schedule:

  • 0 – 15mins: I give a brief on the use of AI and AI coding tools
  • 15 – 30mins: I walk through the principles and guidelines of using GenAI coding tools in class, the adapted GenAI checklist, and examples of how to keep track of the use of AI for coding.
  • 30 – 40mins: Practical notes on AI coding tools, including how they work, how to set up, applying for educational benefits (free access for students), and limitations of AI coding tools.
  • 40 – 60mins: Several programming tasks are prepared. In this activity, students get into groups of two, choose a task, use the Rubber Duck chat mode and prompt the AI to do the task, keep a chat log of their use according to the GenAI checklist, and put their results in Miro and share with the class.

Slides for the GenAI workshop:

The GenAI workshop with students in the Math&Stats unit.
Miro board created with participants.

Data Collection

First, I lead a group discussion with the student participants about perceptions of AI in academia, using a Miro board to collect their anonymised answers to:

  • What did you use AI for? (What was the model? What questions/prompts did you ask? How did it go?)
  • What excites you about using AI in the university? What worries you about using AI in the university?

Second, students are required to submit a written chat log, following the GenAI checklist, if AI is used in their assignment. I collect completed chat logs submitted by student participants with their final assignment, aiming to observe students’ responses to these two elements in practice.

Data Analysis

I’ll deliver the Miro discussion in the form of a word cloud.
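As a sketch of how that tallying might work (the responses below are invented placeholders; the real data comes from the Miro export, and any word-cloud renderer would sit on top of these counts), word frequencies can be computed with the standard library alone:

```python
import re
from collections import Counter

# Invented placeholder responses, standing in for the Miro export
responses = [
    "Used Copilot to debug my NumPy code",
    "Worried about misconduct and over-reliance",
    "Copilot helped explain the error messages",
]

# Small stop-word list for illustration; a real analysis would use a fuller one
STOPWORDS = {"to", "my", "the", "and", "about"}

def word_frequencies(texts, stopwords=STOPWORDS):
    """Lower-case and tokenise each response, then count the words."""
    counts = Counter()
    for text in texts:
        counts.update(w for w in re.findall(r"[a-z'-]+", text.lower())
                      if w not in stopwords)
    return counts

freqs = word_frequencies(responses)
print(freqs.most_common(3))  # word-cloud tools typically accept such counts
```

The resulting `Counter` maps each word to its frequency, which is the input shape most word-cloud libraries expect.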

I’ll reflect on the sample chat logs to discuss whether the GenAI checklist has been well received by participants, focusing on the following points:

  • What patterns and themes can be identified?
  • What do they reveal about students’ perception of the GenAI checklist?
  • How to improve? What might be useful to gather next?

Next step

I put my ethical action plan and participant-facing documents at 4. Ethical Action Plan and Participant-Facing Documents.

I wrote a detailed description and demo of the two interventions (probes) at 5. The GenAI Checklist and the Rubber Duck Chat Mode.

Posted in ARP, Blog Posts | Leave a comment

2. Relevant Documents and Contextual Challenges

I began my ARP by reflecting on my teaching in the Spring and Summer terms of 2025 (early 2025 being when AI-assisted practices such as “vibe coding” took off). I wrote down a list of issues I had noticed:

  • The UAL Student Guide to Generative AI is not tailored to the context of CCI. Students at CCI use Generative AI not just for writing, but also for a range of programming practices (e.g., coding, debugging, explaining codebases, setting up workspaces, writing code repository descriptions).
  • The guidelines we gave to students on AI disclosure in coding are superficial. For instance, we require students to keep a log of their use of AI, but how do they do this? When should they do it? What should the log include? This can lead to unintended academic misconduct.
  • The perception of AI (its consequences, risks, and broader impacts) varies across students.
  • Commercial AI coding assistants offer subscription services that give access to higher-quality AI models: this is unequal access to technical resources.

I looked at the relevant guidelines and principles that have been proposed. Below, I list some of the documents produced in higher education in response to the use of GenAI.

  1. The Russell Group principles on the use of generative AI tools in education (Russell Group, 2023) take a stance toward the ethical and responsible use of GenAI, including: “2. Staff should be equipped to support students to use generative AI tools effectively and appropriately in their learning experience. 4. Universities will ensure academic rigour and integrity is upheld.”
  2. In response, the UAL Position Statement on AI (UAL, 2025a) calls for the need to tailor and adapt these guidelines to specific creative disciplines: “Appropriate use of AI is likely to differ between the creative disciplines. We will encourage staff to consider how these tools might be applied appropriately for different student groups or those with specific learning needs.”
  3. The AI and Teaching Q&As (UAL, 2025b), as a staff-facing document, highlights: “Create a clear expectation of AI in advance of students commencing assessed work (define acceptable use).” “Incorporate discussion about our AI guidance.” “Be aware of the context and the learning outcome.”
  4. The Student Guide to Generative AI (UAL, 2025c), as a student-facing document, explains academic misconduct with GenAI and highlights the importance of keeping track of the use of GenAI.
  5. The AI Guidance Summary provides a list of sources and a summary of the use of AI (LCC, 2023).

Staff perceptions of AI 2025 (Attewell, 2025) highlights “Clear guidance: simple, accessible policies for staff and students”, and “not just one-off sessions but ongoing training, reflection, and hands-on practice”.

In addition, the proceedings of the Resistance AI Workshop include several articles discussing how colonialism, militarism, and imperialism can fuel resistance to AI tools in academic contexts (Agnew et al., 2020).

Next step

In the next blog post I will write down a concrete research question and introduce the research methods: 3. Research Question and Research Methods.

Reference list at: https://jaspersz.myblog.arts.ac.uk/2025/11/20/arp-references/

Posted in ARP, Blog Posts | Leave a comment

1. Rationale

Generative AI (GenAI) brings new challenges and opportunities at an exponential speed. The explosive appearance of AI tools and materials in higher education is transforming the ways of teaching, learning, and assessing (Attewell, 2025).

I look at the field of Creative Computing, where students use GenAI not just for writing, but to engage in a range of programming practices: writing code, debugging, explaining code, setting up workspaces, writing code descriptions, and more. In programming, AI literacy often refers to a set of technical and practical skills closely related to computer science. Dincă et al. (2023) highlight that the benefits of AI coding tools rely on human programmers’ safeguarding:

“The use of specialized AI tools in software development has the potential to increase productivity when utilized by experienced users, particularly for repetitive coding tasks. The implementations, however, must be subjected to meticulous scrutiny.”

This highlights the barrier for students to become AI-literate: the ability to review, curate, and safeguard AI’s outputs assumes strong technical skills in the first place. These skills include being able to understand and correct programming concepts, and to accurately articulate tasks and goals when prompting the AI. The challenge, therefore, is that in order to become AI-literate with AI coding tools, one needs a programming or computer science background to begin with.

The problem of using AI coding tools in creative computing pedagogy is therefore twofold: (a) the ability to use AI coding tools is likely to be valued in future industry workplaces, so we would like to encourage the equal use and integration of these tools in students’ programming workflows, and (b) the misuse of AI coding tools (e.g., shortcut learning, misconduct) needs to be avoided, ensuring that the learning outcomes regarding technical programming skills can be accurately assessed.

My positionality

I’m an associate lecturer on the BSc Data Science and AI and MSc Creative Computing courses at CCI. During the ARP cycle, I’ll be teaching BSc Year 1 students on the Mathematics and Statistics for Data Science unit. Apart from being a lecturer at the Creative Computing Institute, I am also a researcher and software engineer in the techno-scientific field of AI and music technology. In my teaching, I often consider the demystifying aspect of technologies: sharing the technical know-how.

I started using AI coding tools about a year ago, and now I use them extensively in my technical practice for productivity.

Next step

To figure out what the “actions” in this action research project would be, I started by looking at relevant existing documents/guidelines on using AI in higher education, as well as reflecting on my teaching practices. Next step: 2. Relevant Documents and Contextual Challenges.

Reference list at: https://jaspersz.myblog.arts.ac.uk/2025/11/20/arp-references/

Posted in ARP, Blog Posts | Leave a comment

ARP: References

This is the reference list for all ARP posts:

Posted in ARP, Blog Posts | Leave a comment