Posted: Sun Dec 22, 2024 10:19 am
by hasan018542
While this is a complex problem (machine-generated text doesn’t use sources in quite the same way a human does, and may draw on dozens or hundreds of them), hybrid approaches that reduce the number of sources and provide attribution are possible. Consider Neeva, an alternative search engine focused on privacy (hat tip to Greg Sterling), and its answer to “Should I get a tall latte or a grande?” While this functionality is in beta and is obviously not operating at Google scale, Neeva attempts to provide primary sources alongside the generated answer.
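To make that “hybrid approach” a bit more concrete, here is a minimal, hypothetical sketch in Python. The corpus, the retrieve and answer_with_attribution functions, and the example URLs are all made up for illustration; this is not how Neeva or Google actually implement attribution. It only shows the basic idea of tying a generated answer back to the small set of primary sources it was built from.

# Minimal sketch (hypothetical corpus and scoring) of a "hybrid" answer:
# summarize from a small set of retrieved documents and return the answer
# together with the sources it actually drew on, rather than a free-floating
# block of machine-generated text.

from dataclasses import dataclass

@dataclass
class Doc:
    url: str
    text: str

# Hypothetical primary sources; a real system would pull these from a search index.
CORPUS = [
    Doc("https://example.com/latte-sizes", "A tall latte is 12 oz and a grande is 16 oz."),
    Doc("https://example.com/caffeine", "A grande latte has more milk but a similar amount of espresso."),
    Doc("https://example.com/unrelated", "Dogecoin started as a joke cryptocurrency."),
]

def retrieve(query: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    """Rank documents by naive keyword overlap with the query (a stand-in for real retrieval)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_attribution(query: str) -> dict:
    """Return an answer alongside the sources it was built from."""
    sources = retrieve(query, CORPUS)
    # A real system would pass `sources` to a language model; here we just stitch them together.
    summary = " ".join(d.text for d in sources)
    return {"query": query, "answer": summary, "sources": [d.url for d in sources]}

if __name__ == "__main__":
    result = answer_with_attribution("Should I get a tall latte or a grande?")
    print(result["answer"])
    for url in result["sources"]:
        print("Source:", url)

The point of the design is that the answer never leaves the retrieval step behind: whatever gets shown to the user carries the URLs it was built from, so the reader can inspect, trust, or reject those sources.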


I’ve purposely chosen a ridiculous example because the stakes are low. Imagine a question squarely in what Google calls the YMYL (Your Money, Your Life) realm, such as “Is Bitcoin or Dogecoin better?” This is a question that could seriously impact my financial future, and knowing where the answer is coming from makes a big difference: it allows me to dig deeper and either establish trust in those sources or reject them. Related to trust is a deeper, ethical issue: whether machine-generated text is a form of plagiarism.


While traditional plagiarism generally requires intent and the copying of sizable chunks of text, machine-generated text is still built on the backs of writers and content creators, without (at least in the case of ChatGPT) any clear attribution. In Google’s case, overuse of machine-generated text could discourage content creators and cause us to question whether we want to contribute our efforts to someone else’s machine.

Hallucinations and machine dreams

Without careful constraints, machine learning systems are prone to hallucination.