• randomname01 · 1 year ago

    I am convinced LLMs can be used to handle relatively routine communication tasks, maybe even better than a human would. However, they have no underlying intelligence and can’t come up with actual solutions based on logic and understanding.

    They might come up with the right words to describe a solution, but that doesn’t mean they have actually solved the problem - they spewed out text that had a high probability of being a good response to the prompt. Still impressive, but not a sign of intelligence.

    • bankimu@lemm.ee · 1 year ago

      You are ruling out intelligence without (very probably) being able to define it, just because you have a vague knowledge of how it works.

      The problem with this mode of thinking is that a) you put human brains on a pedestal, even though they follow physical processes to “predict the next word” and may very well be neural networks themselves; b) you ignore data that shows intelligence in multiple areas of the more complex models, because “oh, it’s mindless because I know it’s predicting tokens”; and c) you favor data that shows edge cases, probably produced by lower-quality models.

      You’re not alone in this line of thinking.

      Your mind is set. You’ll not recognize intelligence when you see it.

      • randomname01 · 1 year ago

        No, I’m not singling out human brains. Other animals have proven to be quite adept at problem solving as well.

        LLMs, however, just haven’t. Problem solving currently just isn’t part of how they function. In some cases they can mimic actual logic very well, but that’s about it.