I have no issue with ChatGPT, or any other LLM-based platform. I use it and many other LLMs in my daily life. They’re solid applications that provide real-world problem solving.
Guys stop downvoting this guy. I asked for a controversial opinion and he gave a controversial opinion. If you don’t agree then, well, that’s the point! Sheesh
I think giving upvotes or downvotes in the context of such a question is fine. It actually surfaces the controversial opinions. I wouldn’t downvote one that didn’t already have sufficient upvotes, though.
I think the golden rule with LLMs is “never trust the output.” If it’s a task you can 100% verify or has virtually no associated risk, then go right ahead.
It’s just so deeply frustrating to keep seeing people look at LLM results and treat them as truthful instead of truthy.
Absolutely. For legitimate research purposes it’s not there yet. Maybe some day.
But for using it as a grammar check, running abstract opinions by it, or just engaging in idle conversation, I find it rather robust.
Sometimes I need a yes-man I won’t be embarrassed in front of. 😄
I think, like any technology, there are dangers. It all depends on how it’s used. The issue is that it’s new and society hasn’t adapted yet, so we get cases of ChatGPT exacerbating people’s mental illnesses.