How does one ‘learn AI’?
Properly prompting an LLM is not something most people inherently get.
Ez pz. “Interpret this prompt in the proper manner”
That’s almost correct, actually. One of the best things to do is to prompt it to ask you clarifying questions.
“Properly prompting” is to not prompt. A chat interface is the lowest-fidelity interface to use with an LLM.
Tell me more? It’s the only way I’m familiar with interacting with an LLM
Examples to consider:
An LLM working in a code base with TODOs embedded will make fewer mistakes and spend fewer tokens than if you attempt to direct it with prompting alone.
A file system gives an LLM more context than a flat file (or one large prompt) with the same contents: the tree-like structure makes it less likely the LLM will ingest context it doesn’t need and get confused.
Lastly, consider the efficacy of providing it tools versus using agent skills, which are just another form of prompting. Giving an LLM a deterministic feedback loop beats tweaking your prompts every time.
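That “deterministic feedback loop” point can be sketched as a tiny harness: instead of re-tweaking a prompt by hand after each bad answer, let the test runner’s output drive the next request. Here `run_tests`, `ask_llm`, and `apply_patch` are hypothetical stand-ins (injected as callables) for whatever test command, model call, and patching mechanism you actually use.

```python
def feedback_loop(run_tests, ask_llm, apply_patch, max_rounds=3):
    """Drive an LLM with test results instead of hand-tuned prompts.

    run_tests  -> () -> (passed: bool, output: str)   deterministic signal
    ask_llm    -> (failure_output: str) -> patch       model call (stand-in)
    apply_patch-> (patch) -> None                      applies the suggestion

    Returns the round number on which the tests passed, or None if
    max_rounds was exhausted without a green run.
    """
    for round_no in range(max_rounds):
        passed, output = run_tests()
        if passed:
            return round_no
        # The failure output *is* the next prompt -- no human re-wording.
        apply_patch(ask_llm(output))
    return None
```

The design choice is that the loop never asks a human to rephrase anything: the only input to the next round is machine-generated failure output, which is exactly the “pre-prepared prompt” idea from the comments above.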
Ok so I think I do all of these things and would just describe them as “other ways to prompt an LLM”. I think the nuance you’re shooting for here is that with these methods you are “pre-preparing” the prompt, rather than composing it at prompt time, when you’re likely to miss stuff.
e.g. Feeding it a TODO is just the same as copy-pasting that TODO in as a prompt.
Have I understood you correctly?
I haven’t used an LLM, but it’s probably similar to how people could not Google for shit. I always considered myself something of an expert at using search engines, although they’ve gone to shit obviously, and with the advent of AI it seems like they will fade out.
I don’t know, it seems to me that most people know how to ask a question or make a request. It’s not that different. It’s just that a lot of people don’t understand what is possible and they freeze.
You tell them to ask for anything they want. They uncork and say “So I can ask it for a chocolate cream pie?”. Partially in jest, but they do that because they don’t seem to have a comfortable sense of the limits. A person with little technical background has no need for output that they don’t understand. Once you guide them a little and let them know they can get a recipe for a chocolate cream pie and some practical advice on how to make it, that might be helpful, but little better than just looking up a recipe. You’d have to let them know that they can find multiple variants of recipes and have it rank them, compare them, and produce a summary of the most popular types. By now they’ve stopped listening and have gone to the grocery store to buy a chocolate cream pie, and you’re standing there hoping they will give you a piece.
In summary, I wish I had some pie. What was the question?
You don’t actually use AI in any professional capacity, huh
Yeah, you’re probably right. Probably don’t do anything with it at all, never touched it, don’t understand how it works either. You, on the other hand are probably a seasoned LLM engineer. Shameful of me to not understand that.
So defensive
Brain damage.
Ever see someone using Google and cringe? People who have experience getting AI to do what they want feel the same when they see normies writing their prompts.
“Normies”
Lol ok
I don’t have experience getting ai to do what I want because I want it to go away.
oh give me a fucking break