Note on Is the LLM Response Wrong, or Have You Just Failed to Iterate It? via Mike Caulfield
I call these “sorting” prompts because they usually push the system to go out and try to find things for each “bucket” rather than support a single side of an issue or argue against it. I keep these for myself in a “follow-ups” file. Here are some of the prompts in my file:
• Read the room: what do a variety of experts think about the claim? How does scientific, professional, popular, and media coverage break down and what does that breakdown tell us?
• Facts and misconceptions about what I posted
• Facts and misconceptions and hype about what I posted (Note: good for health claims in particular)
• What is the evidence for and against the claim I posted
• Look at the most recent information on this issue, summarize how it shifts the analysis (if at all), and provide a link to the latest info (Note: I consider this a sorting prompt because it pushes the system to put evidence into “new” and “old” buckets)
This post gives a really good overview of techniques for getting an LLM (or any human) to actually fact-check claims and go deeper than a cursory response.
I extracted a generalized version into a Claude slash command.
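A Claude Code custom slash command is just a markdown prompt file under `.claude/commands/`, with `$ARGUMENTS` standing in for whatever you type after the command. A minimal sketch of such a generalized sorting command (the filename and wording here are illustrative reconstructions from the prompts above, not the actual command file):

```markdown
<!-- .claude/commands/sort-claims.md (illustrative name) -->
Read the room on the following claim: $ARGUMENTS

Sort what you find into buckets rather than arguing a single side:
- How does scientific, professional, popular, and media coverage break down,
  and what does that breakdown tell us?
- What are the facts, misconceptions, and hype around the claim?
- What is the evidence for and against it?
- Does the most recent information shift the analysis? Link the latest sources.
```

You would then invoke it as `/sort-claims <pasted claim>` inside a Claude Code session.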
Reference
- Notes
- communication, llm, research, tools
- Is the LLM Response Wrong, or Have You Just Failed to Iterate It?
Permalink: 2025.NTE.133 - Insight