ChatGPT’s Programming Woes: A New Study Reveals Inaccurate Answers Overlooked by Developers

ChatGPT often gives incorrect answers to programming-related questions, and developers sometimes overlook the erroneous information, according to a new study.

Tools such as OpenAI’s ChatGPT chatbot are seen as revolutionary technologies that can make employees’ jobs significantly easier, and might even replace some jobs in the future; many workers already rely on AI to simplify their tasks. However, a new study indicates that ChatGPT answers programming questions correctly only about half the time.

In a study conducted by researchers from Purdue University, ChatGPT was asked 517 programming questions taken from Stack Overflow, and the accuracy of the chatbot’s responses was then measured. The researchers wrote in their paper, “Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”


Developers failed to catch erroneous answers

What’s more troubling is that developers don’t always catch the wrong answers. According to the study, participants overlooked the erroneous information in ChatGPT’s answers 39% of the time. The researchers noted that this underscores the need for safeguards against ChatGPT’s incorrect answers, since plausible-sounding but inaccurate responses can easily go unnoticed.

Of course, this is only one scenario, and chatbots can be customized for various purposes. Companies that build artificial intelligence tools continue to work on improving the accuracy of the answers their bots provide.
