Being mean to ChatGPT increases its accuracy — but you may end up regretting it, scientists warn - Live Science
<a href="https://news.google.com/rss/articles/CBMi7AFBVV95cUxNYWxFTTNnaFZ4c1M0OGE4VTlqLWVkTDRUUGR6VDZRVjU0a0l6dk40WlQ2TmVkSnEyQXcyUkJIVEd0ZnNlc3ExSzZTaWFOcU10Q003RkZ5bGhTXzVYZ1Yzb28tendNanJqSEdXeTJMWDBudm1nYlIwUkltZVk5VFlXd29qUzQ1MnlWY25KLWdqSm9uQmdlMWF5SFF5TVVDSWtuTzRJY3NVd19LZDBBaFJJTkthOGpWVUZLQm96YzZORGpKNmZsQi1waWU1YnlNRWJHOE9zMHJXWXRZQm9teVBJeUdhVnV3UjN2cmFSQQ?oc=5" target="_blank">Being mean to ChatGPT increases its accuracy — but you may end up regretting it, scientists warn</a> <font color="#6f6f6f">Live Science</font>

More in Models

Positional Restructuring of System Prompts: Mitigating Transformer Attention Bias in Sub-Frontier Models
I built a sovereign AI system on a Mac Mini that kept forgetting facts written in its own system prompt. Instead of upgrading hardware, I figured out why, and found some things I was not expecting.

The obvious part: moving critical facts from the middle to the beginning and end of the system prompt fixes recall (2.0 to 7.0 on a verification battery). This builds on Liu et al.'s lost-in-the-middle work.

The less obvious part: a model with 83.4% IFBench scored 3.4/10 on fact recall, while a model with 23.9% IFBench scored 7.5/10 after restructuring. Instruction-following and fact recall appear to be independent capabilities. I have not seen this documented elsewhere.

The paper also covers a behavioral rule methodology that took a 32B model from 6.2 to 9.4 across seven dimensions with cold re
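The restructuring idea described above can be sketched in a few lines. This is a minimal illustration, not the author's implementation: it assumes the prompt is assembled from a list of critical fact strings and a list of longer filler sections, and the function name `restructure_prompt` is hypothetical. Critical facts are split between the head and tail of the prompt, where transformer attention tends to be strongest per the lost-in-the-middle findings, with the bulky sections pushed into the middle.

```python
def restructure_prompt(critical_facts, filler_sections):
    """Place critical facts at the start and end of a system prompt.

    Mitigates the lost-in-the-middle effect (Liu et al.) by keeping
    high-priority facts out of the low-attention middle region.
    Illustrative sketch only; names and structure are assumptions.
    """
    # Split the critical facts: first half goes to the top,
    # second half to the bottom of the prompt.
    half = (len(critical_facts) + 1) // 2
    head, tail = critical_facts[:half], critical_facts[half:]

    # Bulky, lower-priority sections (persona, tool docs, etc.)
    # are tolerable in the middle, where recall is weakest.
    parts = head + filler_sections + tail
    return "\n\n".join(parts)


# Example: two critical facts bracket a long persona/tool section.
facts = ["FACT: The server hostname is homelab-01.",
         "FACT: The operator's name is Alex."]
filler = ["You are a helpful assistant with a calm tone.",
          "Tool usage instructions: ..."]
prompt = restructure_prompt(facts, filler)
```

A verification battery like the one the author mentions would then query each fact after long conversations and score recall, comparing middle-placed against head/tail-placed variants.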



