Asking A.I. if ‘according to’ is a valid prompt qualifier
Shawn here. A good friend of mine shared a post with me which states:
using ‘according to’ as a prompt modifier grounds the LLM in ‘reality’.
The concept is that the phrase acts as a token-level signal: the LLM builds its context around text where that exact phrase appears. ‘According to’ is a lowest common denominator for scientific papers, Wikipedia articles, and properly referenced blogs and papers online. Basically, using ‘according to’ lets you sort of ‘limit’ the pathing of the LLM to ‘reputable’ sources that carry this ‘footprint’, which reduces ‘made up’ facts.
If you are only drawing on material that has references and is ‘well written’, you reduce the chance of pulling from the non-factual or nonsensical data the LLM was trained on. They say their internal tool, which measures whether the text the model presents is actually referenced in its training data, jumped from 5% to 15%, which is a huge jump.
A jump from 5% to 15% is massive, and they claim it comes from something as simple as using “according to”.
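If you want to kick the tires on that claim yourself, here is a minimal sketch of an A/B test, assuming the official openai Python client and an API key in your environment; the example question is a placeholder, the ‘According to Wikipedia’ prefix is straight from the post, and the model name is whatever you have access to.

```python
# Minimal A/B sketch: same question, with and without the "according to" modifier.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

question = "What causes the seasons on Earth?"  # placeholder question

prompts = {
    "baseline": question,
    "grounded": f"According to Wikipedia: {question}",  # the modifier under test
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: substitute whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Eyeballing the two outputs won’t reproduce the 5%-to-15% figure, since that came from their internal grounding metric, but it does show how cheap the modifier is to try.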
I prompted both ChatGPT and Claude.ai with this:
examine and refute or validate this statement as an advanced LLM designed to identify human bias and error in prompt engineering: ‘using “according to:” as a prompt modifier will allow a LLM to better understand the output and reduce the chance of hallucinations; i.e. “according to wikipedia: prompt”
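(If you want to reproduce this through the APIs rather than the web UIs, here is a rough sketch using the official openai and anthropic Python clients; the model names are assumptions, so substitute whatever is current.)

```python
# Sketch: send the same prompt to both providers and print the replies.
# Assumes `pip install openai anthropic` and both API keys in the environment.
from openai import OpenAI
import anthropic

PROMPT = (
    "examine and refute or validate this statement as an advanced LLM designed "
    "to identify human bias and error in prompt engineering: 'using \"according "
    "to:\" as a prompt modifier will allow a LLM to better understand the output "
    "and reduce the chance of hallucinations; i.e. \"according to wikipedia: prompt\"'"
)

gpt = OpenAI().chat.completions.create(
    model="gpt-4",  # assumption: any current GPT model
    messages=[{"role": "user", "content": PROMPT}],
)
print("GPT-4:", gpt.choices[0].message.content)

claude = anthropic.Anthropic().messages.create(
    model="claude-3-5-sonnet-20241022",  # assumption: any current Claude model
    max_tokens=1024,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Claude:", claude.content[0].text)
```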
Here are the A.I. responses:
GPT-4 –
Claude –