Understanding OpenAI's Decision Not to Distribute Tools for Identifying Imitation of ChatGPT Conversations
Students may think twice before using ChatGPT to write whole essays and research papers if OpenAI decides to launch its anti-cheating tool. However, it's unclear whether that will ever actually happen.
OpenAI confirmed it has been working on a tool for detecting writing from ChatGPT, following an August 4 report by The Wall Street Journal. An OpenAI spokesperson told TechCrunch the company is taking a deliberate approach to detecting AI-written text because of "the complexities involved and its likely impact on the broader ecosystem beyond OpenAI." The tool relies on watermarks embedded in ChatGPT's output: they are unnoticeable to the human eye, but OpenAI's detection technology can pick them up and calculate a score of how likely it is that ChatGPT wrote the entire document or a portion of it. John Thickstun, a Stanford researcher on a team that has developed similar AI text watermarking, puts the reliability this way: "It is more likely that the sun evaporates tomorrow than this term paper wasn't watermarked."
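For intuition, here is a minimal sketch of how a detector might turn a watermark into a likelihood score. It assumes a published "green list" style scheme from academic research, not OpenAI's undisclosed method; the function names, threshold, and toy vocabulary are all hypothetical.

```python
# Illustrative only: a green-list watermark detector, not OpenAI's actual tool.
import hashlib
import math
import random

GREEN_FRACTION = 0.5  # expected share of "green" tokens in unwatermarked text

def is_green(prev_token: str, token: str, vocab: list[str]) -> bool:
    """Check whether `token` falls in the pseudorandom green list seeded by `prev_token`."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    greens = set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))
    return token in greens

def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    """Return a z-score: the larger it is, the more likely the text carries the watermark."""
    hits = sum(is_green(prev, tok, vocab) for prev, tok in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std if std else 0.0
```

In a scheme like this, a long, unedited document full of green tokens produces an enormous z-score, which is what makes confidence statements like Thickstun's plausible for lengthy term papers.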
OpenAI conducted an internal survey in April 2023, which found that almost one-third of respondents "would be turned off" by such a feature and use ChatGPT less, while 69% believed it would lead to false accusations of using AI. Another survey conducted around the same time showed that users worldwide would support AI detection tools by a margin of four to one.
OpenAI's text-watermarking method can reportedly detect text produced by ChatGPT with 99.9% certainty, yet CEO Sam Altman and CTO Mira Murati have held back the feature over concerns that it isn't foolproof and could hurt the business. It reportedly works by altering ChatGPT's token selection process to create a recognizable pattern, a watermark, that a human reader wouldn't notice but the detection algorithm would. The spokesperson explained, "The text watermarking method we're developing is technically promising, but has important risks we're weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to impact groups like non-English speakers disproportionately."
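To make "altering the token selection process" concrete, here is a toy sketch of one academically published way to do it, by nudging generation toward a pseudorandomly seeded subset of the vocabulary. It is not OpenAI's actual implementation; the vocabulary, seeding rule, and bias strength are invented for illustration.

```python
# Illustrative only: green-list biased token selection, not OpenAI's actual method.
import hashlib
import random

VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slow"]
GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" at each step
BIAS = 2.0             # score boost added to green tokens before picking the next word

def green_list(prev_token: str) -> set[str]:
    """Derive a pseudorandom green list from the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def pick_next(prev_token: str, logits: dict[str, float]) -> str:
    """Choose the highest-scoring token after nudging green-list tokens upward."""
    greens = green_list(prev_token)
    biased = {tok: score + (BIAS if tok in greens else 0.0) for tok, score in logits.items()}
    return max(biased, key=biased.get)
```

Because the nudge only shifts which near-equivalent word gets chosen, a reader sees ordinary prose, while a detector that knows the seeding rule sees a statistically improbable surplus of green tokens.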
As effective as they are, watermarks aren't airtight and can be circumvented easily by rewording with another model, translating the text into another language and back, or deleting emoji characters that ChatGPT may add to the text. OpenAI does claim its tool is resistant to tampering such as paraphrasing. The company also confirmed in a blog post that it is testing cryptographically signed metadata embedded in text as a way to avoid false positives.
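The signed-metadata idea is easier to reason about with a sketch. OpenAI hasn't described its implementation, so the following uses an HMAC from the Python standard library as a stand-in for a real digital signature; the key, field names, and record format are all hypothetical.

```python
# Illustrative only: signed provenance metadata, using HMAC as a stand-in
# for the asymmetric signatures a real provider would use.
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-held secret"  # hypothetical key material

def attach_metadata(text: str, model: str) -> dict:
    """Bundle generated text with provenance metadata and a signature over both."""
    payload = json.dumps({"text": text, "model": model}, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "model": model, "signature": tag}

def verify(record: dict) -> bool:
    """A valid signature proves provenance; a missing or broken one proves nothing."""
    payload = json.dumps({"text": record["text"], "model": record["model"]}, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

That asymmetry is why signed metadata avoids false positives: unsigned human writing can never accidentally "verify" as AI-generated, although any edit to the text also invalidates the signature.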
OpenAI is concerned that it might lose users if the detection feature goes live. The company earlier released a tool to catch images created with DALL-E 3, and its rival Google is testing a text-watermarking tool for Gemini AI, dubbed SynthID. OpenAI has even explored releasing the watermarking tool only to educators or to companies that help schools identify plagiarized work, which could help contain any negative reactions.
It's no secret that chatbots like ChatGPT can be misused to generate an entire essay from a single prompt, and educators are well aware of the issue. A recent survey by the Center for Democracy & Technology found that educators are struggling with generative AI detection, with a whopping 59% of middle and high school teachers suspecting students of using AI to cheat on schoolwork. That's up a massive 17 points from the prior school year.
OpenAI is torn between its commitment to transparency and the need to earn revenue, but it has a moral obligation to do the right thing. The fact that other AI tools lack a detection method isn't an excuse for ChatGPT to skip the feature as well.
Source: OpenAI, The Wall Street Journal, TechCrunch, The Verge