• ∟⊔⊤∦∣≶@lemmy.nz
      11 months ago

      It actually is biased, though. UpperEchelon did a video exposing this. I swing to the left myself, but I would prefer it if LLMs were objective.

        • ∟⊔⊤∦∣≶@lemmy.nz
          11 months ago

          I would argue that asking a machine to list known information is not impossible.

          Here’s a very clear example where ChatGPT refused to answer a question about Biden but happily answered the exact same question about Trump.

          https://youtu.be/_Klkr6PtYzI?t=520

          And before anyone starts, NO! I’m not a supporter of the oompaloompa king.

          • Sekoia@lemmy.blahaj.zone
            11 months ago

            Mhm, but with the way LLMs work, it’s not possible to actually remove bias, since it’s baked into the training data. Any adjustment towards “neutral” would be biased by what the adjuster considers neutral.

        • GigglyBobble@kbin.social
          11 months ago

          Only if emotions are involved. Of course, it’s not possible as long as we train our AI on flawed, human-generated data.

            • GigglyBobble@kbin.social
              11 months ago

              That’s how we make them work today. It is possible to stay politically neutral in a language, though, so your generalized statement is incorrect.

              • Renacles@discuss.tchncs.de
                11 months ago

                What does politically neutral even mean though? Because it’s definitely not centrism.

                How would you train AI without biased data anyway? All data carries some bias from whoever generated it.

                • GigglyBobble@kbin.social
                  11 months ago

                  What does politically neutral even mean though?

                  A conversation being void of any kind of politics is politically neutral. And that’s most conversations I have.

      • Phanatik@kbin.social
        11 months ago

        It will never be objective if its dataset is something like the internet. It will always be prone to bias, because that’s the double-edged sword of LLMs: they need vast quantities of data, and the only place they can get that is the internet, which is full of biased opinions.

    • Eggyhead@kbin.social
      11 months ago

      LLMs are only as “fact based” as the data they source. An LLM in the time of Galileo would have told you the earth is flat.

      • Yorick@feddit.ch
        11 months ago

        Weekly reminder that scholars since at least 500 B.C. thought the earth was a sphere, so Galileo’s LLM would have told him that, and even given its size with reasonable accuracy.

        • Eggyhead@kbin.social
          11 months ago

          Galileo’s LLM would tell him that

          Assuming the sources used for training were the correct ones.