The rise of large language models like ChatGPT has led to a scramble to develop tools that can distinguish human from machine-generated text, since publicly available text increasingly mixes in misinformation and disinformation. DetectGPT, a "zero-shot" approach to spotting the difference, was developed by graduate students at Stanford University. The method asks a language model to score how strongly it "likes" a sample text, then slightly perturbs the text and rescores it: machine-generated text tends to sit at a peak of the model's preference, so its score falls consistently after perturbation, whereas the scores for perturbed human-written text vary more unpredictably. While the method is not foolproof, it correctly distinguished human from machine authorship 95% of the time in early tests. The use of chatbots in scientific publishing remains controversial: some papers have listed an AI as a co-author, and some experts warn that plausible fake text could slip into real scientific submissions.
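The perturbation test described above can be sketched in a few lines. This is a hedged illustration of the general idea, not the Stanford team's implementation: the scoring function and perturbation below are toy stand-ins, and all names (`detectgpt_score`, `toy_log_prob`, `toy_perturb`) are assumptions made for the example.

```python
import random

def detectgpt_score(log_prob, text, perturb, n_perturbations=20):
    """Return log p(text) minus the mean log-probability of perturbed
    copies. A large positive value suggests the text sits at a local
    peak of the model's probability, hinting at machine authorship."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)

# Toy stand-ins (hypothetical, for illustration only). The "model"
# simply prefers three-letter words; the perturbation lengthens one
# randomly chosen word. A real system would use a language model's
# log-probabilities and a mask-and-refill perturbation.
def toy_log_prob(text):
    return -sum(abs(len(w) - 3) for w in text.split())

def toy_perturb(text):
    words = text.split()
    i = random.randrange(len(words))
    words[i] = words[i] + "s"
    return " ".join(words)

random.seed(0)
score = detectgpt_score(toy_log_prob, "the cat sat", toy_perturb)
print(score > 0)  # text at the toy model's peak only loses score when perturbed
```

In the real method, the "likes" come from a language model's log-probabilities and the perturbations are produced by masking and refilling spans of the text; the toy example only mirrors the shape of the comparison.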