
Artificial Intelligence Extends Beyond College Cheat-bots

Artificial intelligence extends beyond college cheat-bots, changing society in fundamental ways.

Generative AI Makes Cheating Easy, Undetectable. Ask Any Student  

AI chatbot use is now ubiquitous on campus, and when faculty read student papers, they find it nearly impossible to detect that use. AI has made it astonishingly easy to cheat. In early May, Columbia University undergraduate Owen Kichizo Terry wrote in The Chronicle of Higher Education:

“Look at any student academic-integrity policy, and you’ll find the same message: Submit work that reflects your own thinking or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naïve.”

If students write an essay with chatbot help, many professors assume there will be clues—the writing will have a distinguishing “voice,” the arguments may be simplistic, or the style will be detectable by AI-detection software. “Those are dangerous misconceptions,” Terry said. “In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own. Once that becomes clear, it follows that massive structural change will be needed if our colleges are going to keep training students to think critically.”

Ian Bogost, a professor of computer science and engineering at Washington University, wrote in May in The Atlantic, “Reports from on campus hint that legitimate uses of AI in education may be indistinguishable from unscrupulous ones, and that identifying cheaters—let alone holding them to account—is more or less impossible.”

Bogost, who spoke with dozens of educators, asked whether it’s possible to know for certain that a student used AI, what it even means to “use” AI for writing papers, and when that use amounts to cheating.

Meanwhile, at the University of Utah, some faculty members have adopted a different attitude—that AI might enhance higher education. The Deseret News reported that the first successful AI program was created in 1951. Early applications included bank-fraud detection, flu season prediction, and more recently, facial identification on smartphones.

AI Anxiety: From College Cheating to Existential Threat

The artificial intelligence revolution has generated widespread worries not only about cheating, but also about the classic science-fiction warnings that computers might evolve to simply take over the world.

Author and Tulane University professor Walter Isaacson, writing in The Wall Street Journal, noted that while the digital revolution was taking place, most people did not notice for years how the evolution of personal computers was beginning to change the world in fundamental ways.

By contrast, the world realized in a few weeks after the November 2022 release of the AI program ChatGPT (Generative Pre-Trained Transformer) that “a transformation was happening with head-snapping speed that would change the nature of work, learning and creativity and the tasks of daily life.” Isaacson added, “Is it inevitable that machines will become super-intelligent on their own?”

In late May, executives from leading artificial-intelligence companies—including OpenAI, Google DeepMind, and Anthropic—issued a statement warning that their technology could pose a future existential threat to the world. These leaders cautioned, according to The New York Times, that the technology “should be considered a societal risk on a par with pandemics and nuclear wars.”

Signatories included Geoffrey Hinton and Yoshua Bengio, winners of the Turing Award for their groundbreaking work on neural networks and widely considered “godfathers” of modern AI. Their statement came at a time of growing concern about the potential harms of AI.

Innovations in large language models, used in chatbots, have elevated fears that AI could soon be used to disseminate disinformation and propaganda, might destroy countless professional jobs, and could cause widespread social upheaval—unless science can slow it down.

In a startling, related development that recalls decades of science fiction, Elon Musk’s company Neuralink completed a series of animal studies in May and is applying to the Food and Drug Administration for authorization to implant chips into the brains of human test subjects.

Originally published by Paideia Times. Republished with permission.

For more, see School Reform News.

Paideia Times
Paideia Times is a news quarterly for higher education trustees.
