Does ChatGPT dream of ones and zeroes?
What will be the impact of AI programs like ChatGPT on education?
As part of the preparation for an in-class activity in a technology and communication course I took in first-year university, we had to go on an AI-chat program and count how many messages it took for the AI to stop making sense. You couldn’t get much past “hi, how are you?” before the AI started spitting out responses like “I mambo dog food in the banana patch.” Throughout my undergrad, the general trend of the lectures and readings dealing with AI was that while things were moving forward, we were still a long way from androids dreaming of electric sheep. While this may still be the case, and sentient AI may still be something for future generations, it has come a long way from where it was seven years ago.
Whether the development of AI is propelling us towards Terminator or something far less sinister remains to be seen. But we seem to have reached a turning point where AI could start changing parts of our world in extreme ways. In researching this piece, we came across online AI that will create art, fight your parking ticket, prepare a presentation, and write a children’s book. The focus of this discussion is the OpenAI program: ChatGPT.
Unlike the AI I played with in first-year university, where you were having a conversation with a machine, ChatGPT is more like Ask Jeeves, if the online butler had written multi-paragraph analyses in response to your queries.
A major concern that quickly emerged around ChatGPT is the impact that it, and similar programs, could have on education. The keen student and writer in me wants to say that no one would ever use a program like this to write their paper for them, because they would be cheating themselves of the chance to research, think, and write about a topic. And for many students, likely at the first-year university level and higher, that is hopefully true. But I know better than to presume that is the case across the board, particularly in middle and high school, when education is mandatory and English or history may not be someone’s subject of choice. If I’m being entirely honest, thinking back to a paper I wrote for a second-year university science class where I was completely out of my depth, I’m not sure that I would never have used a program like this to at least help with writing the paper, had one been available at the time.
There are features of ChatGPT that I did not find or learn how to use in the week I spent playing with it in preparation for writing this, but at its most basic level, users are able to input prompts in the form of questions or statements and the AI will generate a response. The program will conduct research and draw analytical conclusions, summarize text, explain the major themes in literature, write poetry in various styles and much more.
There is a level at which I could see a real advantage to a program like ChatGPT in helping students better understand subject material. The program tends to rely on the simplest language possible to explain a topic. While this is likely because it currently lacks the sophistication to do much more, it could mean that, like teaching aids such as No Fear Shakespeare that translate text into easier language, AI programs may provide a good foundation for students before they move on to reviewing more formal academic sources.
However, this potential benefit is vastly outweighed by the problems it can foreseeably cause for education. SparkNotes and similar summaries have existed for a long time, but they could never answer the specific question your teacher put to you. ChatGPT can. And it can do so without the cost that would generally be associated with buying an essay from a service.
We talk so often about how easy it is for everyone, and young people in particular, to be seduced by whatever they see online. We are also witnessing the increased prevalence of a desire not to think deeply or be challenged in our assumptions. AI writing programs make it even easier to give in to both.
At the surface level, ChatGPT and similar programs straight up provide an out. Rather than taking the time to stop and think about the major themes in To Kill a Mockingbird and practicing articulating them, you can now copy and paste an analysis of them from the program. Furthermore, it remains unclear whether popular plagiarism software like Turnitin, which grade schools and universities typically run papers through, can catch a paper written by the AI (according to some Reddit threads, it can’t).
When generating the screenshot above, the first time I ran the search, my internet crashed when the answer was almost done. That answer focused on Atticus Finch. When I ran the search again to get the answer above, it identified the same theme, but focused on Scout.
There is also the question of the ways you can bend the program, even unconsciously, through the phrasing of your prompt into giving a biased answer. While it does its best to give neutral and balanced answers, elements of bias can creep into the language, such as referring to Israel as an “occupying power in Palestinian territories” when asked to explain the Israeli-Palestinian conflict, rather than referring to land as “disputed” or using other more neutral terms. Given the right phrasing, the AI will defend the bombing of Hiroshima and Nagasaki and explain why Hitler wasn’t an entirely bad guy (he was, after all, a vegetarian who made the trains run on time).
There is always bias on the internet, and it is nothing new that one idiot can always find another. But what happens if students become overly reliant on AI programs like ChatGPT to do their work for them? What happens when they stop challenging themselves outside of the classroom to think critically about the books they read and the information they receive? Google has steadily improved the order in which search results are displayed so that more credible sources appear first. While ChatGPT appears to rely on reputable sources, when asked to cite them, it fails to identify the ways those sources can be biased. For example, it relies on the BBC as a source for understanding the Israeli-Palestinian conflict, despite the BBC having a long history of displaying a bias against Israel.
I have seen the argument made that students already rely on AI programs such as Grammarly, so this shouldn’t come as such a shock. Being a one-man operation with limited outside support, I use Grammarly most weeks when I’m writing and editing to help catch things that I miss. While I believe it is important for students to understand when to use different punctuation and to identify run-on sentences, I also know that I get a lot of value from Grammarly, and did as a student when writing particularly long papers. That being said, Grammarly is often not very good at suggesting how to restructure a sentence so that it doesn’t become clunky, and many of its punctuation suggestions don’t match the context of the sentence. It also can’t write your paper for you.
It is hard to know, only a month into the public release of these OpenAI programs, if and how they will impact education. Furthermore, it is necessary to keep in mind that what we are witnessing is only the first iteration of AI writing programs to hit the market, and that they are likely to continue to improve and become more advanced over time.
An excellent piece, Sadie! Hard to tell if this will lead to progress or regression. Will AI free people up to accomplish more, or will it become a tool too easily wielded by the deceitful and lazy?