ChatGPT doesn’t produce sentences the way a reporter does. ChatGPT and other machine-learning large language models may seem sophisticated, but they’re basically just complex autocomplete machines. Only instead of suggesting the next word in an email, they string together the statistically most likely words into much longer passages.
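To make the “autocomplete” comparison concrete, here is a minimal toy sketch of next-word prediction. Every word and probability in it is invented for illustration; a real large language model scores tens of thousands of possible tokens with a neural network, not a hand-written lookup table, but the basic loop of repeatedly picking a statistically likely continuation is the same idea.

```python
# Toy sketch of "autocomplete"-style text generation.
# All words and probabilities below are made up for illustration.

# Hypothetical probabilities of the next word given the previous word.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.4, "dog": 0.3, "moon": 0.3},
    "cat": {"sat": 0.6, "ran": 0.4},
    "sat": {"on": 0.7, "down": 0.3},
    "on":  {"the": 0.9, "a": 0.1},
}

def most_likely_next(word: str) -> str:
    """Pick the single most probable continuation -- plain autocomplete."""
    candidates = NEXT_WORD_PROBS.get(word, {})
    return max(candidates, key=candidates.get) if candidates else "."

def generate(start: str, length: int = 5) -> str:
    """Chain likely words together into a longer 'package' of text."""
    words = [start]
    for _ in range(length):
        words.append(most_likely_next(words[-1]))
    return " ".join(words)

print(generate("the"))  # prints: the cat sat on the cat
```

Note that nothing in this loop checks whether the output is true; it only ever asks which word is likely to come next.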
Because ChatGPT’s “truth” is only a statistical truth, the output this program produces can never be trusted the way we trust the output of a reporter or an academic. It cannot be verified, because the program constructs its output in a fundamentally different way from what we usually think of as “scientific.”
https://www.weforum.org/agenda/2023/02/why-chatgpt-raises-issues-of-trust-ai-science
