LLMs are acing the MCAT, the bar exam, the SAT, etc. like they’re nothing. At this point their performance is superhuman. However, they’ll often trip on very simple common-sense questions, and they’ll struggle with creative thinking.

Is this literally proof that standardized tests are not a good measure of intelligence?

    • Tar_Alcaran@sh.itjust.works
      8 months ago

      LLMs don’t “think” at all. They string together words based on where those words typically appear alongside other words in their human-generated training data.

      Though I do agree that the output from a moron is often worth less than the output from an LLM

      • Grimy@lemmy.world
        8 months ago

        This is kind of how humans operate as well though. We just string words along based on what input is given.

        We speak much too fast to be properly reflecting on it; we just regurgitate whatever comes to mind.

        To be clear, I’m not saying LLMs think, but that the difference between our thinking and their output isn’t the chasm it’s made out to be.

        • cynar@lemmy.world
          8 months ago

          The key difference is that your thinking feeds into your word choice. You also know when to shut up and allow your brain to actually process.

          LLMs are (very crudely) a lobotomised speech center. They can chatter and use words, but there is no support structure behind them. The only “knowledge” they have access to is embedded into their training data. Once that is done, they have no ability to “think” about it further. It’s a practical example of a “Chinese Room” and many of the same philosophical arguments apply.

          I fully agree that this is an important step for a true AI. It’s just a fragment, however, just as 4 wheels and 2 axles don’t make a car.

        • starman2112@sh.itjust.works
          8 months ago

          Disagree. We’re very good at using words to convey ideas. There’s no reason to believe that we speak much too fast to be properly reflecting on what we say—the speed with which we speak speaks to our proficiency with language, not a lack thereof. Many people do speak without reflecting on what they say, but to reduce all human speech down to that? Downright silly. I frequently spend seconds at a time looking for a word that has the exact meaning that will help to convey the thought that I’m trying to communicate. Yesterday, for example, I spent a whole 15 seconds or so trying to remember the word exacerbate.

          An LLM is extremely good at stringing together stock words and phrases that make it sound like it’s conveying an idea, but it will never stop to think about the definition of a word that best conveys a real idea. This is the third draft of this comment. I’ve yet to see an LLM write, rewrite, then rewrite again its output.

          • agamemnonymous@sh.itjust.works
            8 months ago

            Kinda the same thing though. You spent time finding the right auto-complete in your head. You weighed the words that fit the sentence you’d constructed in order to find the one most frequently encountered in conversations or documents that include specific related words. We’re much more sophisticated at this process, but our whole linguistic paradigm isn’t fundamentally very different from good auto-complete.
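
            The “good auto-complete” analogy above can be sketched as a toy next-word predictor: count which word most often follows the current one in some sample text, then greedily emit that word. This is a hypothetical minimal bigram model (the corpus, function names, and greedy tie-breaking are all illustrative inventions; real LLMs are vastly more sophisticated), meant only to show the “most frequently encountered next word” idea:

            ```python
            from collections import Counter, defaultdict

            def train_bigrams(text):
                """Count, for each word, which words follow it and how often."""
                words = text.lower().split()
                following = defaultdict(Counter)
                for prev, nxt in zip(words, words[1:]):
                    following[prev][nxt] += 1
                return following

            def autocomplete(following, start, length=5):
                """Greedily emit the most frequent next word at each step."""
                out = [start]
                for _ in range(length):
                    candidates = following.get(out[-1])
                    if not candidates:
                        break  # dead end: we never saw this word mid-sentence
                    out.append(candidates.most_common(1)[0][0])
                return " ".join(out)

            # Tiny made-up corpus, just to exercise the counts.
            model = train_bigrams("the cat sat on the mat and the cat ran")
            print(autocomplete(model, "the", length=3))
            ```

            The human version of the process described above would be far richer (weighing meaning, audience, and intent, not just frequency), which is exactly the point of contention in this thread.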

            • starman2112@sh.itjust.works
              8 months ago

              To me it isn’t just the lack of an ability to delete its own inputs, I mean outputs, it’s the fact that they work by little more than pattern recognition. Contrast that with humans, who use pattern recognition as well as an understanding of their own ideas to find the words they want to use.

              Man, it is super hard writing without hitting backspace or rewriting anything. Autocorrect helped a ton, but I hate the way this comment looks lmao

              This isn’t to say that I don’t think a neural network can be conscious, or self aware, it’s just that I’m unconvinced that they can right now. That is, that they can be. I’m gonna start hitting backspace again after this paragraph