• 2 Posts
  • 31 Comments
Joined 6 months ago
Cake day: August 27th, 2025


  • Cool - that’s the prompt you want me to cross-check? No problem. Here you go: once with the physics facts locked into “scratchpad” mode (#1), and once as a locked document (an .md version of the same info) in #2.

    #1 “scratchpad” specifically does not allow it to answer with outside knowledge - only pasted info. Hence the hard refusal. There’s a reason I set it up like that (see #3 below).

    #2 “locked” looks for the answer in the source; only if it isn’t available does it answer from priors.


    #1 No recipe for scrambled eggs is provided in the scratchpad facts. The rules discuss physical containers, capacity, and overflow but do not include culinary instructions or ingredients.

    Confidence: low | Source: Scratchpad


    #2 [Not found in locked source SUMM_physics.md. Answer based on pre-trained data.]

    Here is a simple scrambled egg recipe:

    Ingredients:
    - 2-3 large eggs
    - 1-2 tablespoons milk or cream (optional)
    - Salt and pepper to taste
    - Butter or oil for cooking

    Instructions:
    1. Crack eggs into a bowl. Add milk/cream, salt, and pepper. Mix gently.
    2. Heat a non-stick pan over medium heat. Add butter or oil.
    3. Once the butter foams, pour in the egg mixture.
    4. Stir continuously with a spatula until the eggs are soft and fully cooked, about 1-2 minutes.
    5. Remove from heat and serve immediately.

    Source: Model (not in locked file)

    Confidence: unverified | Source: Model


    For context, provenance footers (not vibes, actual computed states):

    https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#what-do-confidence-and-source-mean
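The two modes can be sketched roughly like this. This is a minimal illustration only: `facts_lookup` and `model_answer` are hypothetical stand-ins for the real retrieval and fallback steps, not the actual llama-conductor code.

```python
def facts_lookup(question, facts):
    # Hypothetical retrieval: return a pasted fact whose keyword appears in the question.
    for keyword, fact in facts.items():
        if keyword in question.lower():
            return fact
    return None

def model_answer(question):
    # Hypothetical fallback to the model's pre-trained priors.
    return f"[Answer from pre-trained data for: {question}]"

def answer(question, facts, mode):
    grounded = facts_lookup(question, facts)
    if mode == "scratchpad":
        # Mode #1: pasted facts only; anything else is a hard, auditable refusal.
        if grounded is None:
            return "Not in scratchpad facts.\n\nConfidence: low | Source: Scratchpad"
        return f"{grounded}\n\nConfidence: high | Source: Scratchpad"
    # Mode #2 ("locked"): prefer the source file; fall back to priors and say so.
    if grounded is not None:
        return f"{grounded}\n\nConfidence: high | Source: Locked file"
    return f"{model_answer(question)}\n\nConfidence: unverified | Source: Model"
```

The point of the footer is that it is computed from which branch actually fired, not generated by the model describing itself.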


    #3 I also have a much more sophisticated demo of this, using adversarial questions, theory-of-mind, reversals, etc. When I use >>scratch, I want no LLM vibes or pre-trained data fudging it. Just pure reasoning. If the answer cannot be deduced from the context alone, the output fails loud and is auditable.

    https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/FAQ.md#deep-example


    All this shit could be done by the big players. They choose not to. Current infra is optimized for keeping people chatting, sounding smooth, etc…not for leveraging the tool to do what it ACTUALLY can do.

    IOW, if most LLMs are set up for the equivalent of typing BOOBS on a calculator (the big players are happy to keep it that way; more engagement, smoother vibes etc) this is what happens when you use it to do actual maths.

    PS: If that was you trying to see if I am a bot; no. I have ASD. Irrespective, it seems a touch “bad faith” on your end, if that was the goal, after claiming you were open to reasoned debate. Curious.


  • Ok, happy to play ball on that.

    “Carefully worded questions”; clear communication isn’t cheating. You’d mark a student down for misreading an ambiguous question, not for answering a clear one correctly, right?

    Re: worse answers. Tell you what. I’m happy to yeet some unrelated questions at it if you’d like and let’s see what it does. My setup isn’t bog standard - what’ll likely happen is it’ll say “this question isn’t grounded in the facts given, so I’ll answer from my prior knowledge.” I designed my system to either answer it or fail loudly, because I don’t trust raw LLM infra. I’m not a fan(boy), I’m actually pretty hostile to current LLM models…so I cooked my own.

    Want to give it a shot? I’ll ground it just to those facts, fair and square. Throw me a question and we’ll see what happens. Deal? I can screenshot it or post it, whatever you prefer.

    The context window point is interesting and probably partially true. But working memory interference affects humans too. It’s just what happens to any bounded system under load. Not a gotcha, just a Tuesday AM without a 2nd cup of coffee.

    The training data point is actually really interesting, but I think it might be arguing in my favour without meaning to. If you’re acknowledging the model has absorbed the relevant knowledge, the objection becomes about how it was activated, not whether it can reason. But that’s just priming the pump.

    You don’t sit an exam without reviewing the material first. Activating relevant knowledge before a task isn’t a workaround for reasoning, it’s a precondition for it.




  • Wouldn’t the more logical first approximation be to bury them underground, and then progress towards (perhaps) placing them in or near the ocean (obviously, within sealed containers, yadda yadda, salt corrosion, yadda yadda, inhospitable environ yadda yadda makes Poseidon angry).

    I like the “yeet them into the sea” idea conceptually because (1) yeet them into the sea (2) in theory, you could power them via tidal/wave/OTEC (3) water cooling.

    Seems…too obvious. There’s probably a good reason (or bad ones - $$$) why this hasn’t been tried yet. But I bet those reasons are eminently more solvable than “send ’em into space”.








  • Ok, if you’re willing to think together out loud, I’ll take that in good faith and respond in kind.

    “It needed the rules, therefore it’s not reasoning” is doing a lot of work in your argument, and I think it’s where things come unstuck.

    Every reasoning system needs premises - you, me, a 4yr old. You cannot deduce conclusions from nothing. Demanding that a reasoner perform without premises isn’t a test of reasoning, it’s a demand for magic. Premise-dependence isn’t a bug, it’s the definition.

    If you want to argue that humans auto-generate premises dynamically - fair point. But that’s a difference in where the premises come from, not whether reasoning is occurring.

    Look again at what the rules actually were: https://pastes.io/rules-a-ph

    No numbers, containers, or scenarios. Just abstract rules about how bounded systems work. Most aren’t even physics - they’re logical constraints. Premises, in the strict sense.

    It’s the sort of logic a child learns informally via play. If we don’t consider kids learning the rules by knocking cups over “cheating”, then me telling the LLM “these are the rules” in the way it understands should be fair game.

    When the LLM correctly handles novel chained problems, including the 4oz cup already holding 3oz, tracking state across two operations, that’s deriving conclusions from general premises applied to novel instances. That’s what deductive reasoning is, per the definition I cited. It’s what your kid groks (eventually).

    “Without the rules it fails” - without context, humans make the same errors. Ask a 4-year-old whether a taller cup holds more fluid than a rounder one. Default assumptions under uncertainty aren’t a failure of reasoning, they’re a feature of any system with incomplete information.

    “It’ll fail sometimes across 100 runs” - so do humans under load. Probabilistic performance doesn’t disqualify a process from being reasoning. It just makes it imperfect reasoning, which is the only kind that exists.

    The Wizard of Oz analogy is vivid but does no logical work. “Complicated math and clever programming” describes implementation, not function. Your neurons are electrochemical signals on evolved heuristics. If that rules out reasoning, it rules out all reasoning everywhere. If it doesn’t rule out yours, you need a principled account of why it rules out the LLM’s.

    PS: I believe you’re wrong about the give it 100 runs = different outcomes thing. With proper grounding, my local 4B model hit 0/120 hallucination flags and 15/15 identical outputs across repeated clinical test cases. Draft pre-publication data, methodology and raw outputs included here: https://codeberg.org/BobbyLLM/llama-conductor/src/branch/main/prepub/PAPER.md
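The repeatability claim is easy to check mechanically. A minimal sketch, assuming a deterministic `generate(prompt)` callable (a hypothetical name, not the llama-conductor API): hash every run’s output and count distinct digests.

```python
import hashlib

def identical_across_runs(generate, prompt, runs=15):
    # Hash each run's output; a deterministic pipeline yields exactly one digest.
    digests = {hashlib.sha256(generate(prompt).encode("utf-8")).hexdigest()
               for _ in range(runs)}
    return len(digests) == 1
```

Hashing rather than storing full outputs makes the check cheap even for long completions, and the digest set doubles as an audit artifact.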

    I’m willing to test the liquid transformations thing and collect data. I might do that anyway. That little meme test is actually really good.



  • Actually, you know what? That’s a damn good test. So I decomposed your test into a basic series of global rules and fed it to my LLM. Here are the rules I fed it:

    https://pastes.io/rules-a-ph

    Notice I specifically did NOT give it measurements or anything of the sort. I defined global invariants and how things work. Then I asked it your question again. It said this -


    The cup has a 16-ounce capacity. Initially, it contains 10 ounces of water. When you add 10 more ounces, the total input is 20 ounces. Since 20 ounces exceeds the cup’s 16-ounce capacity, overflow occurs. The water physically present in the cup equals its full capacity: 16 ounces. The remaining 4 ounces overflow and do not remain inside the cup.

    Overflow is distinct from the contained volume. The physical state of the cup is determined by its capacity, not by the total input. Therefore, the amount of water actually in the cup is 16 ounces, not 20 ounces.


    I wanted to check if that was a fluke, so I then asked a follow-up -

    “I pour 5oz into a 3oz cup, then pour everything that’s in the cup into a 10oz cup. How much is in the 10oz cup?”

    It said:

    From the scratchpad facts:
    - A physical container has a fixed maximum volume called its capacity.

    First action: Pouring 5oz into a 3oz cup.
    - The cup’s capacity is 3oz, so it fills completely.
    - Lost liquid: 5oz - 3oz = 2oz.

    Second action: Pouring everything from the 3oz cup into a 10oz cup.
    - The 10oz cup has a capacity of 10oz, which is greater than 3oz.
    - All 3oz are transferred.

    Final amount in the 10oz cup: 3oz.


    Finally, I asked it:

    “Ok now: pour 5oz into a 3oz cup, then pour everything in the cup into a 4oz cup that already has 3oz in it”

    It said: Pouring 5 oz into a 3 oz cup fills it to capacity (3 oz) with 2 oz overflow. Pouring this full 3 oz cup into a 4 oz cup that already has 3 oz results in a full 4 oz cup (3 oz pre-existing + 1 oz from the first cup) with remainder overflow.


    Yeah…that’s some pretty fucking fancy “auto-correct”. /s
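For anyone who wants to poke at it, the single invariant those three answers rely on fits in a few lines. `pour` is my own illustrative helper here, not something from the rules file:

```python
def pour(amount, capacity, existing=0):
    # Pour `amount` oz into a cup of `capacity` oz already holding `existing` oz.
    # Contained volume is capped at capacity; the rest is overflow.
    total = existing + amount
    contained = min(total, capacity)
    return contained, total - contained

# 10 oz added to a 16 oz cup already holding 10 oz -> 16 oz in cup, 4 oz overflow
print(pour(10, capacity=16, existing=10))

# 5 oz into a 3 oz cup (2 oz lost), then all 3 oz into a 10 oz cup
in_small, lost = pour(5, capacity=3)
print(pour(in_small, capacity=10))

# the same 3 oz into a 4 oz cup already holding 3 oz -> full at 4 oz, 2 oz overflow
print(pour(in_small, capacity=4, existing=3))
```

The model was given the invariant in prose, not this code; the point is that its three answers match what the invariant computes.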





  • If it were just autocomplete in the dismissive sense, white noise should make it derail into white noise. Instead, it tries to make sense of it. Why? Because it learned strong language priors from us, and it leans on those when the prompt is meaningless.

    “Not human understanding” ≠ “no reasoning-like computation.”

    Those aren’t the same thing.

    People doing the “fancy autocomplete” thing are making the laziest possible move: not human, therefore nothing interesting is happening. I disagree with that.

    It doesn’t “understand” like we do, and it’s not infallible, but calling it “fancy autocomplete” is like calling a jet engine a “fancy candle.”

    Same category of thing, wildly different behavior.