I often see people with an outdated understanding of modern LLMs.

This is probably the best interpretability research to date, by the leading interpretability research team.

It’s worth a read if you want a peek behind the curtain on modern models.

  • technocrit@lemmy.dbzer0.com · 1 month ago

    There is no mind. It’s pretty clear that these people don’t understand their own models. Pretending that there’s a mind, along with the other absurd anthropomorphisms, doesn’t inspire any confidence. Claude is not a person jfc.

    • magic_lobster_party@kbin.run · 1 month ago

      You’re reading the title too literally. “Mind” is only mentioned once in the entire article, and that’s in the title.

    • Drewelite@lemmynsfw.com · 1 month ago

      Ah yes, it must be the scientists specializing in machine learning, studying the model full time, who don’t understand it.