Opinion

AI chatbots are ruining education

Jose Gonzalez-Campelo/The Cougar

ChatGPT is extremely popular among students, but unfortunately, it could be catastrophic for the future of education. AI models set a standard that teaches students to accept subpar writing and, therefore, subpar thinking.

This issue can be summed up in one word: tapestry.

Recently, users have noticed that ChatGPT overuses the word so frequently that even college admissions officers have complained on social media about countless essays using the phrase “rich cultural tapestry.”

It’s not just “tapestry” either. When looking into ChatGPT’s preferred vocabulary, it’s apparent how incredibly repetitive it is. Even ChatGPT’s own Twitter account jokes about it.

This repetition is proof of the bland, uninspiring content that AI generates.

Writing is hard. It is a question with no answer, a test on which you can never get a perfect score. If you’re lucky enough to have a rubric, it’s likely vague with room for interpretation. Even with that guideline, following it doesn’t guarantee a good grade.

Considering the American education system prefers to test rather than teach writing skills, students often find AI-generated essays are better at checking boxes than their own writing. Though the AI content will be bland, formulaic and mediocre, a student might accept it as the best possible, since it’s far better than what they could do.

Ultimately, ChatGPT rewards mediocrity.

Imagine a student struggling to write an essay. They brainstorm a long list of ideas and write a few paragraphs before realizing none of it will ever work.

Through perseverance, they come up with a sentence, then a paragraph. The next day, the student keeps testing and scrapping ideas. Eventually, they create something they’re proud to put their name on.

After all that work, their English teacher covers every inch of the paper with red ink and gives it a D.

Through the disbelief, disappointment and anger, maybe the student actually learned a few rules about writing. When writing their next essay, they apply those rules.

What happens after a semester? After a school year? After half a decade? Everything about their brain will be different. The ideas they have will be numerous and original, and they’ll be able to better connect them with other concepts.

As they string them together, they’ll become aware of the tiny details of their plan, and how they add up to create something exceptional. They’ll grow from their experiences, and eventually be able to improve upon their flaws.

What if on that first essay, that student used ChatGPT and their English teacher gave the bland, formulaic essay a decent grade? The student would miss out on that growth.

Looking further, what if millions of students across the nation skip out on developing those cognitive skills because an AI model spit out something mediocre, but passable?

Say a doctor is presented with an ill patient infected by a rare disease. With options and time running thin, he suddenly remembers something from his schooling a decade earlier: an obscure treatment usable only in a specific case. He quickly explains to his team, in no uncertain terms, what must be done to save the patient, and his team suggests detailed improvements to the process. Together, they save the patient’s life.

If the doctor never learned to write well and relied on AI throughout his decades of schooling, could he still think so efficiently? Could he still save the patient’s life? 

This isn’t only a dilemma for doctors. It applies to engineers who build bridges critical to commerce, programmers who create databases for companies and economists who decide how to recover from a recession. It applies to everyone.

As AI becomes more accessible, there are growing incentives for millions of students to accept mediocre writing, and therefore mediocre thinking.

Usage of AI models in schools should not be standard. A less intellectually capable population is a huge price to pay for a small tapestry of convenience, and there will be hell to pay if students continue accepting the mediocrity these models provide.

Rishi Chava is a finance freshman who can be reached at [email protected]
