There's plenty of concern that OpenAI's ChatGPT could help students cheat, and recent tests suggest the worry isn't unfounded. Researchers at the University of Minnesota had the chatbot take law school exams, while Wharton professor Christian Terwiesch gave it an MBA-level operations management exam.
The research teams found the AI's performance inconsistent, to put it mildly. The University of Minnesota group noted that ChatGPT handled "basic legal rules" and doctrine summaries well, but floundered when asked to pinpoint the issues relevant to a case. Terwiesch said the generator was "amazing" at simple operations management and process analysis questions, but couldn't handle more advanced process questions, and it even made mistakes in sixth-grade-level math.
There's room for improvement. The Minnesota professors said they didn't tailor their prompts to specific courses or questions, and believed students could get better results with customization. At Wharton, Terwiesch found the bot adept at revising its answers in response to human coaching. ChatGPT might not ace an exam or essay on its own, but a cheater could have the system generate rough answers and then refine them.
Both camps warned that schools will need to limit technology use to prevent ChatGPT-based cheating. They also recommended rewriting questions to either discourage AI use (for instance, by focusing on analysis rather than recitation of rules) or raise the difficulty for anyone relying on AI. Students still need to learn "fundamental skills" rather than leaning on a bot for help, the University of Minnesota team said.
Even so, both study groups believed ChatGPT could have a place in the classroom. Professors could teach students how to use AI in the workplace, or even enlist it to write and grade exams. Terwiesch said the technology could ultimately free up time better spent on students, such as additional meetings and new course material.