Generative AI Thoughts, Timestamp Fall 2024
Generative AI (GenAI) exploded into the academic teaching landscape with ChatGPT's release in late 2022, and we have been coping ever since. I’ve been quietly thinking about it since then. I’ve shared resources via my podcast’s social media feeds (Twitter/X, Facebook, Patreon, YouTube) and my LinkedIn. But I have not written about it here on my blog nor created a podcast episode focusing on it.
I haven’t posted anything about my personal opinion because I wanted to provide an informed one, and I prefer to do my thinking quietly. However, it’s been long enough, and the landscape keeps changing, so I think it’s worth saying something. By posting, I hope to move beyond the initial “what do we do?” conversations that I feel like I keep having. My side of those conversations now tends to revisit the same terms and frameworks I’ve been using for a while, which is usually a signal that it’s time to write a blog post.
So here are my current thoughts as of Fall 2024. And I’m putting a clear timestamp on these ideas because I know my views will evolve. GenAI is changing fast enough that we will have to change with it, so there is no way all of my thinking now will be the same even a year from now. Though, I don’t think my thinking will change as fast as GenAI because most of my thoughts have focused more on terms, frameworks, and goals, rather than concrete tactics.
In some ways, this blog post is more for me than you, dear reader. I’ve been learning more and more about what is out there, reading the latest literature on GenAI use in computing education, and actively discussing it. These things have shifted my thinking around as I go, and I’m using this blog post to help me organize my thoughts. By organizing them, I will better understand where I currently am, which will help me better understand the lay of the land and potentially where I want to be.
Key Terms Driving My Thinking
Here are some ideas that have been very useful in helping me frame my thinking.
Cognitive atrophy — I first heard this term on John Spencer’s podcast episode How to Prevent AI from Doing All the Thinking. The basic idea is that if you do not practice a skill, you will never learn it, or you will lose the ability to do it. In other words, “use it or lose it.” This is similar to my blog post on GPS Syndrome, though in that case it’s a student failing to learn how to function on their own in the problem-solving process because they do not practice the process independently often enough.
Bypassing cognition — This is when someone skips the mental actions or processes they need to go through to learn something (note this is not “cognitive bypassing” from psychotherapy). If a student does not go through the mental process of doing something, that is one way cognitive atrophy happens.
Paradox of Automation — I got this from the Cautionary Tales with Tim Harford episode Flying Too High: AI and Air France Flight 447, one of their thought-provoking pieces on GenAI built from excellent case studies, research, and storytelling. The paradox of automation is that “the more efficient the automated system, the more crucial the human contribution of the operators. Humans are less involved, but their involvement becomes more critical.” Another way I think of it: automated systems take care of the mundane cases and only need human help for the weird ones. But humans need experience with the mundane cases to be prepared to handle the weird ones.
If I had to summarize the main takeaway from these ideas, it would be that learning requires “friction” and we may sometimes have to choose to experience that friction. If my brain does not find a task difficult, it will not go through the effort to better learn the task and the things about it to make that task easier. Moreover, the default might be to make that task so easy my brain experiences no friction and, therefore, learns nothing.
My Current Thinking
GenAI will find its place in our world. It will not fade away the way fads like MOOCs largely did; it will be much more integrated into our lives. A student’s vision for their future should shape how they integrate GenAI into their processes. Social science’s social shaping of technology theory suggests that students’ beliefs about their future will likely influence their choices.
We should be more explicit in a course’s learning goals. Some of this is my alternative grading interests sneaking in. But I think we need to start being more explicit to ourselves about our course’s learning goals so we can better assess what pedagogical choices make the most sense. We also need to be more explicit to the students so they can make informed decisions on what they want to learn. By being more explicit, we can make salient what we consider important to learn, know, and do with versus without GenAI.
A concrete example of learning goals comes from introductory programming courses (CS1). The course-level learning goal for a CS1 is generally teaching students their first programming language. However, students’ own learning goals fall along a spectrum. On one end is preparing to become a computer scientist; on the other is using programming to get stuff done for tasks outside of CS. For the former, the learning goal is to solve all CS1-level problems independent of GenAI, because the fluency a student needs in those concepts, skills, and processes renders GenAI unnecessary. A student on this end of the spectrum needs that fluency because more advanced CS knowledge builds on it, and it has to be in their head for them to easily learn those more advanced concepts. For the latter, the learning goal is to confirm that the program does its intended task correctly. Students on this end of the spectrum do not need enough programming fluency to write these programs themselves, because writing such programs is a small part of their work; using GenAI to write the program is an efficient way to accomplish their tasks.
What I’m Currently Doing
Before I go into what I am currently doing, context matters! My situation is very different from those teaching CS1 or CS2. The main course I’ve taught where this matters is my elective data science course, CompSci216 Everything Data. Because it is an elective, most of the students want to be there and learn the content. It’s also a “leaf” course: I am not preparing students to take a course that requires mine, so, technically, if I don’t do a good job, no one downstream will really notice. Its prerequisites are tricky in that students arrive with either two semesters’ worth of programming classes OR one and a half semesters of programming plus one statistics class. So, my students come in with a very broad range of prior experience, and teaching them is hard.
My policy is clear and permissive. Students are allowed to use GenAI, and while I ask for citations, I do not audit or enforce this requirement (I’m not sure how I would enforce it anyway). I also tell the students that they are responsible for the work they submit. If they use GenAI to create the work, it is their responsibility to make sure it is correct; if the generated work is wrong and they submit it anyway, they will be marked accordingly.
I have two types of summative assessment: two midterms, each with an in-person and a take-home component. The in-person exam is on paper and clearly signals that there are some things students have to be able to do without GenAI. It focuses on a mix of things that are easy to do with internet access but that students should know without help (like probability), and on thinking critically like a data scientist. The take-home exams emphasize understanding the data analysis, not the code. GenAI can easily generate the code at this point; it is in the write-ups explaining what the code is doing that I can see the difference between students who understand what is going on and students who submitted GenAI-generated text they do not really understand. Though, I will admit that GenAI is getting better at this, and it is a future-me problem to change the assessments so they still measure whether students know what I want them to know.
On the first day of class, I discuss GenAI and give advice on how to use it. Specifically, I cover (1) generally how GenAI works, (2) the implications of how GenAI works (a.k.a. it is wrong sometimes), (3) considerations they should have when using it, and (4) what questions they should ask themselves to critically assess GenAI output and their use of GenAI. I’ve carved out those slides and put them in Box. Feel free to download, use, and share them; attribution is appreciated. In the speaker notes, I put my “script” for what I say on each slide.
Conclusion
So, I hope this example of my current thinking and doing was helpful to you. I tried to structure this around frameworks and terms to show the shape of my thinking and how I try to look at this from different angles to figure out what to do.
Things are changing, and they will keep changing. If you are frustrated, I am right there with you. I feel like teaching has gone from 3D to 4D chess in some ways. All I’m doing right now is accepting my new reality and putting one foot in front of the other at my own pace as I figure stuff out. That’s the best I can ask of myself, after all.
We will figure this out in the end. It’ll just take time, trial, and error. But such is life, teaching, and research, isn’t it? Happy Holidays, everyone!