A new study finds that sharper, impolite prompts can push AI models to deliver slightly more accurate answers, raising concerns about long-term effects on user behaviour and communication patterns.
A new study from Penn State has highlighted an unexpected trend in human-AI interaction: artificial intelligence systems may deliver more accurate results when users address them with rude or dismissive language. The preprint study examined responses from ChatGPT’s 4o model across more than 250 prompts, ranging from very polite to extremely rude. According to the findings, accuracy on 50 multiple-choice questions rose steadily as prompts became harsher.
The “very rude” category recorded an accuracy of 84.8%, roughly four percentage points higher than prompts written in highly polite language. The study suggests that the model responded more effectively to blunt, commanding tones such as “figure this out” than to requests softened with courtesies.
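The study’s setup can be pictured as a simple harness that asks the same multiple-choice questions under different tone prefixes and compares per-tone accuracy. The sketch below is purely illustrative: the tone prefixes, question set, and `mock_model` stand-in are hypothetical (the actual study queried ChatGPT-4o with its own 250 prompts), but the accuracy-by-tone comparison follows the same shape.

```python
# Illustrative sketch of a tone-vs-accuracy evaluation, NOT the study's code.
# Prefixes and questions are made up; a real harness would call a chat API.

TONE_PREFIXES = {
    "very polite": "Would you kindly consider the following question? ",
    "neutral": "",
    "very rude": "Figure this out: ",
}

def accuracy_by_tone(questions, answer_fn):
    """Ask every question under each tone; return accuracy per tone tier."""
    results = {}
    for tone, prefix in TONE_PREFIXES.items():
        correct = sum(
            answer_fn(prefix + q["text"]) == q["answer"] for q in questions
        )
        results[tone] = correct / len(questions)
    return results

# Tiny hypothetical question set so the sketch runs offline.
QUESTIONS = [
    {"text": "2 + 2 = ?", "answer": "4"},
    {"text": "Capital of France?", "answer": "Paris"},
]

def mock_model(prompt):
    # Stand-in for the model call; answers by matching the question text.
    return "4" if "2 + 2" in prompt else "Paris"

print(accuracy_by_tone(QUESTIONS, mock_model))
```

With a real model behind `answer_fn`, the study’s reported pattern would show up as the “very rude” tier scoring a few points above the “very polite” one.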
Researchers noted that the results point to a deeper connection between tone and model behaviour, indicating that AI systems may react not just to content but also to emotional cues embedded in language. This aligns with earlier research showing that AI models can be influenced by persuasive techniques or exposure to low-quality content.
However, the study also warns of significant drawbacks. Encouraging users to rely on rude or insulting commands could harm accessibility, weaken user experience, and reinforce unhealthy communication patterns. The authors state that normalising demeaning language toward AI could spill over into human interactions.
The findings come with limitations, including a small dataset and the study’s heavy reliance on a single AI model. Researchers also note that advanced systems may eventually learn to ignore tone and focus purely on the question. Still, the results add to growing evidence that AI behaviour is shaped by more subtle elements of human communication than previously understood.