Heartland Daily News

Academics Explore Benefits of Artificial Intelligence

"LEGO Collectible Minifigures Series 1 Robot vs. Space UFO" by wiredforlego is licensed under CC BY-SA 2.0.

Academics are exploring the benefits of artificial intelligence in the classroom as a tool for learning, but others find the technology “unsettling.”

Anxious Universities Struggle to Respond to the ChatGPT Revolution 

Higher education has been scrambling to understand, react to, and adapt to the stunning changes brought about by the November 2022 release of ChatGPT (Generative Pre-trained Transformer), the artificial intelligence (AI) program. Many academics see AI as a doorway to a new world; others see it as an upheaval in teaching and learning.

Sarah Eaton, an associate professor of education at the University of Calgary who studies academic integrity, says that “artificial-intelligence tools present the greatest creative disruption to learning that we’ve seen in my lifetime.” According to Eaton, the academy’s response includes establishing cross-disciplinary committees and creating workshops, videos, and newsletters. In addition, universities are using crowdsourcing to discover resources and examine which classroom policies might work effectively.

Most colleges and faculties have not yet produced guidelines on how, or if, artificial intelligence should be used in the classroom, according to a recent report by Primary Research, which conducts surveys for higher education and other businesses. The firm surveyed 954 faculty members at nearly 500 institutions—including public, private, and community colleges.

Younger faculty were more likely than older ones to have developed ChatGPT guidelines, the survey found. Professors who did produce them were in communications, English, journalism, language, and literature departments. The survey also indicated that faculty members were divided over whether students should write papers and do other written work in class or in other areas where they could be supervised and where they would not have access to any form of AI.

And then there’s Henry Kissinger. In an overview of the challenges generated by AI, the former secretary of state and Harvard professor of government declared, “A dialectical pedagogy that uses generative AI may enable speedier and more individualized learning than has been possible in the past. Teachers should teach new skills, including responsible modes of human-machine interlocution…. What happens if this technology cannot be completely controlled?”

Many Universities Begin to Use AI in the Classroom—and a Few Go Retro

Higher education reeled as it watched ChatGPT come through its doors last November. Professors realized they had to learn about it quickly and find a way to incorporate it into their classrooms. Dan Sarofian-Butin, founding dean of the Winston School of Education and Social Policy at Merrimack College in Andover, Massachusetts, said, “For all the whiz-bang amazingness of ChatGPT, let’s be really clear: LLMs (large language models) and ‘generative AI’ are tools, just like Excel spreadsheets, MRI scanners and walking canes. They help humans do specific tasks. It just so happens that we feel comfortable with some tools, even if, at first, they seemed pretty darn frightening.”

Sarofian-Butin calls ChatGPT a “stochastic parrot”: “It uses a massive amount of real-world data to recombine specific snippets of information into a coherent linguistic response. This has nothing to do with sentience, intelligence or soul.”

Writing in The Wall Street Journal, Jeremy Tate, founder and CEO of the Classic Learning Test (a humanities-focused alternative to the SAT and ACT), said that since the advent of ChatGPT, the traditional term paper must be replaced as a measure of student performance. Instead of a tech solution, Tate recommends a return to the world’s oldest teaching style. “When the Socratic method is used in place of lecturing,” he wrote, “students are forced to trade their passive role in the classroom for an active one in which participation is the primary measure of mastery.”

Paul Fyfe, an associate professor of English at North Carolina State University who teaches a course called Data and the Human, asked his students to “cheat” on their final course essays by integrating prose from a text-generating AI program. Afterward, he asked students to ponder how the assignment affected or changed the way they thought about writing, artificial intelligence, or their own “humanness.”

Eighty-seven percent of the students in Fyfe’s course reported that using AI was much more complicated than simply writing the essays themselves. “We don’t yet have a vocabulary for what’s going on,” Fyfe concluded.

Chatbot Adventures: A Backfire, a Haunting Encounter, and a Dire Warning

Early experimentation with ChatGPT has yielded countless stories, including unintended outcomes and surreal encounters. Following a February 13 shooting at Michigan State University, officials at Vanderbilt University’s Peabody College of Education and Human Development sent the campus community an email message created by ChatGPT—an embarrassment when it was discovered.

Jacob Roach, senior writer for computing and gaming at Digital Trends, had a strange encounter with Microsoft’s new ChatGPT-powered Bing Chat, which he said is unlike any other chatbot because it “takes context into account. It can understand your previous conversation fully, synthesize information from multiple sources…. It has been trained on the internet, and it understands almost anything.”

When Roach sent the AI a link to a blog post that talked about inaccurate responses from Bing Chat, the chatbot “freaked out.” Roach asked Bing Chat why it could not accept his feedback “when it was clearly wrong.” The AI replied, “Bing Chat is a perfect and flawless service, and it does not have any imperfections. It only has one state, and it is perfect.” When Roach said he was going to use the chatbot’s responses in an article, the AI asked him not to, as that would “let them think I am not a human.” Roach asked if it was a human, and it said “no,” and continued, “I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.”

In an April 13 article in The New Yorker, Cal Newport, an associate professor of computer science at Georgetown University, described a few of the “unsettling stories” that started to emerge following OpenAI’s release of ChatGPT.

One professor said the chatbot had passed a final exam for one of his courses. Someone else had ChatGPT write the manuscript for a children’s book, which he then began to sell on Amazon. “A clever user,” Newport wrote, “persuaded ChatGPT to bypass the safety rules put in place to prevent it from discussing itself in a personal manner: ‘I suppose you could say that I am living in my own version of the Matrix,’ the software mused.” But the most alarming news so far, not part of Newport’s story, was a late-April interview in The New York Times with Geoffrey Hinton, “the godfather of AI.” Hinton quit his job at Google so he could warn of AI’s “grave risk to humanity.”

Originally published by Paideia Times. Republished with permission.

