Concerns Arise Among Educators about Student Misuse of OpenAI's ChatGPT

Concern is growing among teachers and professors about students misusing generative AI, given its ability to “create an entire essay or research paper in a matter of seconds, based on a single prompt” for free. Substantiating these concerns, a recent survey by the Center for Democracy & Technology, a technology policy nonprofit, “found 59% of middle- and high-school teachers were sure some students had used AI to help with schoolwork, up 17 points from the prior school year.”

Despite having a method to reliably detect when someone uses ChatGPT to write an essay or research paper, a method that has been ready for release for about a year, OpenAI has not released it. The decision was prompted in part by a survey the company conducted of “loyal ChatGPT users that found nearly a third would be turned off by the anti-cheating technology.” By contrast, a broader survey commissioned by OpenAI in April 2023 found that people worldwide supported the idea of AI detection technology by a margin of four to one.

Simply described, the anti-cheating tool under discussion at OpenAI would leave a watermark that is unnoticeable to the human eye by slightly changing how tokens, the predicted words or word fragments that come next in a sentence, are selected; a detector then scores how likely it is that all or part of a document was written by ChatGPT. A rough illustration of this idea appears in the sketch following this summary.

In the fall of 2023, President Joe Biden signed a “sweeping executive order” intended to rein in the emerging technology, which has sparked both concern and acclaim, by setting new standards for security and privacy protections in AI, with far-reaching impacts on companies, while continuing to support AI research and development. The challenge in developing AI regulation will be striking the right balance: preserving the promise of the enormous benefits the technology can deliver to society through further innovation while addressing the potential societal risks, including bias, transparency, privacy, security, copyright, content regulation, education, and economic impacts such as job loss, workforce adjustment, and productivity. Some anticipate that, unlike the European Union, which adopted the broad EU AI Act in March 2024 as the first-ever legal framework on AI, the United States is unlikely to pass a broad national AI law over the next few years and will instead take a decentralized approach.
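The sketch below is a minimal, hypothetical illustration of how a token-selection watermark of this general kind can work, in the spirit of published “green-list” watermarking schemes; it is not OpenAI's actual, unreleased tool. The idea: a hash of the preceding token pseudo-randomly splits the vocabulary, generation slightly favors one half, and a detector measures how often the favored tokens appear relative to chance. All names, constants (VOCAB, GREEN_FRACTION, BIAS), and the toy vocabulary are illustrative assumptions.

```python
import hashlib
import random

# Toy vocabulary and tuning constants (assumed values for illustration only).
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]
GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" at each step
BIAS = 4.0             # score boost applied to green tokens during generation


def green_list(prev_token: str) -> set[str]:
    """Deterministically derive the green list from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(VOCAB) * GREEN_FRACTION)
    return set(rng.sample(VOCAB, k))


def watermarked_choice(prev_token: str, logits: dict[str, float]) -> str:
    """Pick the next token after nudging green-token scores upward."""
    greens = green_list(prev_token)
    boosted = {tok: score + (BIAS if tok in greens else 0.0)
               for tok, score in logits.items()}
    return max(boosted, key=boosted.get)  # greedy choice, for simplicity


def detector_score(tokens: list[str]) -> float:
    """Fraction of tokens landing on their green list: roughly GREEN_FRACTION
    for unwatermarked text, noticeably higher for watermarked text."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    return hits / (len(tokens) - 1)
```

In this sketch, human-written text scores near GREEN_FRACTION by chance, so a score well above it suggests the text was generated with the watermark, which is the kind of likelihood judgment the detector described above would provide.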

Source: https://www.wsj.com/tech/ai/openai-tool-chatgpt-cheating-writing-135b755a