I think the golden rule with LLMs is “never trust the output.” If it’s a task you can fully verify, or one that carries virtually no risk, then go right ahead.
It’s just so deeply frustrating to keep seeing people look at LLM results and treat them as truthful rather than merely truthy.