• pixxelkick@lemmy.world · 26 days ago

      Everything I said was very much correct.

      LLMs are fairly primitive tools; they aren't super complex, and they do exactly what they say they do.

      The hard part is wrapping that up in an API that is actually readable for a human to interact with, because the low-level abstract data an LLM takes in and spits out isn't useful to us.

      And then, even harder, is wrapping THAT API in another one that makes the input/output USEFUL for a human to interact with.

      You have layers upon layers of abstraction on top of the tool to take it from a bunch of raw float values a human wouldn't understand to a tool that actually does a thing.
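      To make that concrete, here's a toy sketch of those layers (made-up numbers and vocabulary, not any real model or API): the engine hands you raw float scores over a token vocabulary, and each wrapper layer makes that a bit more human-usable.

```python
import math
import random

# Toy illustration (NOT a real model): the "engine" emits raw float
# scores (logits) over a vocabulary -- meaningless to a human on their own.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 0.5, 1.0, -1.0]  # what the raw engine spits out

# Wrapper layer 1: turn raw floats into probabilities (softmax).
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Wrapper layer 2: pick an actual token from that distribution.
token = random.choices(vocab, weights=probs)[0]

# Wrapper layer 3 (the "platform"): string tokens together, manage the
# prompt, chat history, stop conditions, safety checks, etc.
print(token)  # finally, a human-readable word
```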

      That “wrapper” is what one calls the “platform”.

      And making a platform that doesn't fuck it up is actually very, very hard, and very, very easy to get wrong. Even a small tweak can substantially shift how it works.

      Think of it a lot like an engine in a car. The LLM is the engine, which on its own is not actually super useful. You have to actually connect that engine to something to make it do anything useful.

      And even just doing that isn't very useful if you can't control it, so we take the engine and wrap it up in a bunch of layers of stuff that allow a human to control it and direct it.

      But, turns out, when you put a V6 engine inside a car, even a tiny bit of engineering gone wrong can cause all sorts of problems: the engine fails to start, or explodes, or falls out of the car, or stalls, or breaks, or leaks… And unlike car engines, these engines are very, very new, and most engineers are only just now starting to break ground on learning how to control them, steer them, and stop them from tearing themselves out of the car, lol.

      So, to bring this back to the original post:

      Most LLMs (engines) are actually pretty good nowadays, but the problem was that Clawdbot (a specific car manufacturer) super fucked up the way they designed their car, so the car itself had a very, very stupid engineering mistake. In this case, the brakes didn't work well enough and the car drove off a cliff.

      That has nothing to do with how good the engine is or is not, the engine was just doing its job. The problem was with some other part of the car entirely, the part of the car Clawdbot made that wraps around the engine.

        • pixxelkick@lemmy.world · 26 days ago

          When using the word “they”, in English it refers to the last primary subject you referred to, so you should be able to infer what “they” referred to in my sentences. I’ll let you figure it out.

          “I love wrenches, they are very handy tools”, in this sentence, the last subject before the word “they” was “wrenches”, so you should be able to infer that “they” referred to “wrenches” in that sentence.

            • Windex007@lemmy.world · 26 days ago

            Ok, well, I was actively trying to avoid jumping to the conclusion that your assertion was that an LLM can tell you what it does.

            I was actively avoiding that conclusion as an act of charity.

              • Windex007@lemmy.world · 25 days ago

                Hence my attempt to give you the space to provide clarity.

                For me, this isn’t a pissing contest. I’m trying to provide you with the latitude to clarify your position. I’ll be honest, I didn’t appreciate your condescending lecture on the English language.

                • pixxelkick@lemmy.world · 25 days ago

                  I apologize for any confusion.

                  I meant LLMs are what they say they are in a non-literal sense.

                  Akin to ascribing the same to any other tool.

                  “I like wrenches cause they are what they say they are, nothing extra to them” in that sort of way.

                  In the sense that the tool is very transparent in function. No weird bells or whistles; it’s a simple machine, and you can see what it does merely by looking at it.

                  • Windex007@lemmy.world · 25 days ago

                    I think I understand your point now.

                    I still want to apply pressure to it, because I disagree with the spirit of your assessment.

                    Once a model is trained, it becomes functionally opaque. Weights shift… but WHY? What does that vector MEAN?

                    I think wrenches are good. Will a 12mm wrench fit a 12mm bolt? Yes.

                    In LLM bizarre world, the answer to everything is not “yes” or “no”, it’s “maybe, maybe not, within statistical bounds… try it… maybe it will… maybe it won’t… and by the way just because it fit yesterday is no guarantee it will fit again tomorrow… and I actually can’t definitively tell you why that is for this particular wrench”
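                    To illustrate that “maybe, maybe not, within statistical bounds” point, here’s a toy sketch (invented numbers, not any real model): feed the exact same input in repeatedly and you get back a distribution of answers, not a fixed one.

```python
import math
import random

def sample(logits, vocab, temperature=1.0, seed=None):
    """Sample one token from toy logits -- same input, statistically
    varying output, which is exactly where the wrench analogy breaks."""
    rng = random.Random(seed)
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(vocab, weights=probs)[0]

vocab = ["yes", "no", "maybe"]
logits = [1.2, 1.1, 0.4]  # "yes" and "no" nearly tied: a coin flip, not a fact

# Ask the same question ten times; the answers form a distribution.
answers = [sample(logits, vocab) for _ in range(10)]
print(answers)
```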

                    LLMs do something, and I agree they do that something well. I further agree with the spirit of most of the rest of your analysis: abstraction layers are doing a lot of heavy lifting.

                    Where I fundamentally disagree is with the claim that “they do what they say they do”, by any definition beyond the simple tautology that everything is what it is.