I think the golden rule with LLMs is “never trust the output.” If it’s a task you can 100% verify or has virtually no associated risk, then go right ahead.
It’s just so deeply frustrating to keep seeing people look at LLM results and treat them as truthful instead of truthy.
Absolutely. For legitimate research purposes it’s not there yet. Maybe some day.
But using it as a grammar check, running abstract opinions by it, or just having idle conversations, I find it rather robust.
Sometimes I need a yes-man I won't be embarrassed in front of. 😄