• zbyte64@awful.systems
      9 hours ago

      Examples to consider:

      A code base with TODOs embedded in it will make fewer mistakes and spend fewer tokens than one where you attempt to direct the LLM through prompting alone.
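      For instance, a minimal sketch (hypothetical file, not any particular agent framework): the TODO sits next to the code it concerns, so an agent reading the file gets the task plus its exact surrounding context for free.

      ```python
      import json

      def parse_config(path: str) -> dict:
          # TODO: support YAML in addition to JSON; keep the same return shape.
          # An agent that reads this file sees the task, the signature it must
          # preserve, and the current behavior, without any extra prompting.
          with open(path) as f:
              return json.load(f)
      ```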

      A file system gives an LLM more context than a flat file (or one large prompt) with the same contents: the tree-like structure makes it less likely that the LLM will ingest context it doesn’t need and get confused by it.
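      A sketch of the idea (hypothetical helper names, assuming a tool-calling setup): expose the tree as paths first, and only read a file’s contents once the model asks for it, so irrelevant files never enter the context window.

      ```python
      import os

      def list_tree(root: str) -> list[str]:
          """Cheap overview for the model: relative paths only, no contents."""
          paths = []
          for dirpath, _dirs, files in os.walk(root):
              for name in files:
                  paths.append(os.path.relpath(os.path.join(dirpath, name), root))
          return sorted(paths)

      def read_file(root: str, rel_path: str) -> str:
          """Fetch one file only after the model requests it by path."""
          with open(os.path.join(root, rel_path)) as f:
              return f.read()
      ```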

      Lastly, consider the efficacy of providing it tools versus using agent skills, which are just another form of prompting. Giving an LLM a deterministic feedback loop beats tweaking your prompts every time.
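      A minimal sketch of such a loop, assuming some `call_llm(source, errors)` callable that returns a patched file (that name is a placeholder, not a real API): the compiler’s verbatim output is the feedback, so nothing depends on how cleverly the prompt was worded.

      ```python
      import subprocess
      import sys

      def fix_until_green(call_llm, source_path: str, max_rounds: int = 5) -> bool:
          """Repeatedly compile the file and feed exact errors back until it passes."""
          for _ in range(max_rounds):
              result = subprocess.run(
                  [sys.executable, "-m", "py_compile", source_path],
                  capture_output=True, text=True,
              )
              if result.returncode == 0:
                  return True  # deterministic success signal: the file compiles
              # Feed the verbatim error text back, rather than rephrasing a prompt
              with open(source_path) as f:
                  patched = call_llm(f.read(), result.stderr)
              with open(source_path, "w") as f:
                  f.write(patched)
          return False
      ```

      Swap `py_compile` for a test suite or linter and the shape stays the same: a machine-checkable pass/fail signal, not a human judging outputs.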

      • el_abuelo@programming.dev
        1 hour ago

        OK, so I think I do all of these things and would just describe them as “other ways to prompt an LLM”. I think the nuance you’re shooting for here is that with these methods you are “pre-preparing” the prompt rather than composing it at prompt time, where you’re likely to miss things.

        e.g. feeding in a TODO is just the same as copy-pasting that TODO in as a prompt.

        Have I understood you correctly?