Gaskin: In defense of students using AI to write essays
There is a growing effort in education to train teachers to catch students using AI to write their papers. This is the wrong direction. Discouraging students from using AI in their writing process is like insisting they compose on a manual typewriter so they cannot use spell-check, and that they learn to look up words in a physical dictionary.
Using AI is similar to using Microsoft Word’s grammar check or the Grammarly app. These tools can be incredibly helpful. For students with learning disabilities, they can provide personalized guidance and support. AI platforms like ChatGPT can serve as brainstorming partners, editors, and research assistants. With AI, students can ask for feedback on improving their essays, seek ideas on new angles to explore, or obtain research recommendations that might otherwise go overlooked.
Teachers should encourage the use of AI and raise the standards for what constitutes good writing. Grammar and mechanics, which can be perfected with AI, should no longer be the primary grading focus. Instead, assessment should emphasize qualities such as tone, curiosity, and creativity, placing more weight on the originality of ideas, the authenticity of the student’s voice, and the depth of analysis.
For instance, if an assignment asks students to write about George Washington’s crossing of the Delaware, and a student merely requests a 500-word essay on the topic from AI, stopping there would warrant an “F.” That approach is no different from copying a Wikipedia entry and turning it in. Creativity is essential; students should learn to engage thoughtfully with AI.
What about asking the AI to help brainstorm ideas? My first brainstorming question: given the logistical complexity of the crossing, what were George Washington’s other options? I received five options, with the pros and cons of each. I then asked the AI to rank the options from most to least risky, and followed up with, “What examples from military history show similar strategies, and what were the results?” My next question was, “Was it ethically correct for Washington to take the riskiest option given that human lives were at stake?”
I drafted this research into a paragraph and asked the AI to turn it into an opening paragraph for an essay. What the AI produced was good, though I still wanted to edit it. Here is the result:
“When George Washington decided to cross the Delaware River in a surprise counterattack, he weighed at least five other strategic options, each with distinct advantages and disadvantages. Ultimately, he chose the riskiest plan with the highest potential reward. Military history offers comparable examples of high-risk, high-reward decisions, including Hannibal’s crossing of the Alps in 218 BCE, Caesar’s crossing of the Rubicon in 49 BCE, the daring Inchon Landing during the Korean War in 1950, the Allied D-Day invasion of 1944, and the Tet Offensive during the Vietnam War. Each of these operations carried significant risk but aimed at achieving a decisive advantage. This essay will explore why Washington’s decision was not only a brilliant military maneuver but also aligned with the prevailing ethical perspectives of his era, balancing courage with duty in a complex wartime context.”
There is no way I could have produced this in such a short period of time.
To demonstrate a different line of thinking and curiosity, I asked, “In what ways could Washington have used women in his crossing of the Delaware River?” and received the following:
“Women could have played several strategic roles to support George Washington’s crossing of the Delaware River ... based on women’s contributions throughout the Revolutionary War ... providing intelligence as spies, spreading misinformation among British troops, supporting morale and logistics, and providing medical assistance.”
I then asked why, given these potential benefits, Washington didn’t use women in this way. Each student exploring his or her own interests should produce a very different answer, none of which could be created by submitting a single prompt and turning in the output.
AI does make mistakes, and part of the learning is doing the fact-checking. The examples from military history are analysis, not established fact, so the student should review them rather than blindly take AI’s word for it. Responsible AI use involves understanding its limitations. The student is still responsible for proper citations and for avoiding plagiarism. This is all part of good writing.
Avoiding AI in the classroom is neither realistic nor productive in an era where it is an integral part of many industries. Equipping students to use AI effectively and ethically prepares them for real-world applications. Employers increasingly value tech-savvy employees who understand how to leverage tools like AI in productive ways. Educators who encourage responsible AI use can help students become valuable contributors in future workplaces, whereas those who neglect these tools risk leaving students unprepared in a technology-driven job market.
Ed Gaskin is Executive Director of Greater Grove Hall Main Streets and founder of Sunday Celebrations.