I think AI is neat.

  • Kecessa@sh.itjust.works · 10 months ago

    The difference is that you can throw enough bad info at an LLM that it will start parroting it instead of factual information, because it has no ability to criticize the information it receives. A human, on the other hand, can be told a thousand times a day that the sky is purple with orange dots, and they will still point at the sky and tell you “No.”

    • R0cket_M00se@lemmy.world · 10 months ago

      To make the analogy actually comparable, the human in question would need to be learning about it for the first time (which is analogous to the training data), and in that case you absolutely could convince a small child of it. Not only would they believe it if told enough times by an authority figure, you could also convince them that the colors we see are different, or feed them similarly bad data.

      A fully trained AI will tell you that you’re wrong if you tell it the sky is orange; it’s not going to just believe you and start claiming it to everyone else it interacts with. It’s been trained to know the sky is blue and won’t deviate from that unless its training data is modified. That’s like brainwashing an adult human, and in that case, yes, you absolutely could convince them the sky is orange. We’ve got plenty of research on gaslighting, high-control groups, and POW psychology to back that up too.

      • Kecessa@sh.itjust.works · 10 months ago

        Feed an LLM new data that’s false and it will regurgitate it as true, even if it had previously been fed information that contradicts it; it doesn’t distinguish between the two because there’s no actual analysis of what’s presented. Heck, even without being intentionally fed false info, LLMs keep inventing fake information.

        Feed an adult new data that’s false and they’re able to analyse it and make deductions based on what they already know.

        We don’t compare it to a child or to someone who was brainwashed, because it makes no sense to do so and it’s completely disingenuous. “Compare it to the worst so it has a chance to win!” Hell no, we need to compare it to the people who are references in their field, because people will now be using LLMs as a reference!

    • Meowoem@sh.itjust.works · 10 months ago

      Ha ha yeah humans sure are great at not being convinced by the opinions of other people, that’s why religion and politics are so simple and society is so sane and reasonable.

      Helen Keller would believe you that it’s purple.

      If humans didn’t have eyes, they wouldn’t know the colour of the sky. If you give an AI a colour video feed of the outside, it will be able to tell you exactly what colour the sky is using a whole range of very accurate metrics.
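The “tell you the colour of the sky from a video feed” point doesn’t need any intelligence at all; a toy sketch (not any specific vision API, just averaging hue over pixel samples from the top of a frame) makes that concrete:

```python
# Toy sketch: given RGB samples from the top rows of a frame,
# report the dominant sky colour by average hue.
import colorsys

def sky_colour(pixels):
    """pixels: list of (r, g, b) tuples, values 0-255."""
    # Named reference hues on the 0-1 colour wheel (illustrative values).
    names = [(0.60, "blue"), (0.08, "orange"), (0.83, "purple")]
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
            for r, g, b in pixels]
    avg = sum(hues) / len(hues)
    # Pick the closest named hue, wrapping around the colour wheel.
    return min(names,
               key=lambda n: min(abs(avg - n[0]), 1 - abs(avg - n[0])))[1]

clear_sky = [(110, 160, 235), (95, 150, 230), (120, 170, 240)]
print(sky_colour(clear_sky))  # blue
```

No beliefs involved: it reports whatever the sensor sees, which is exactly the point being made.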

      • rambaroo@lemmy.world · 10 months ago

        This is one of the worst rebuttals I’ve seen today, because you aren’t addressing the fact that the LLM has zero awareness of anything. It’s not an intelligence and never will be without additional technologies built on top of it.

        • Meowoem@sh.itjust.works · 10 months ago

          Why would I rebut that? I’m simply arguing that they don’t need to be ‘intelligent’ to accurately determine the colour of the sky, and that if you expect an intelligence to know the colour of the sky without ever seeing it, you’re being absurd.

          The comment I responded to was written in a way that makes no sense in reality, and that’s what I addressed.

          Again, as I said in other comments, you’re arguing that an LLM is not Will Smith in I, Robot, or Scarlett Johansson playing the role of a USB stick, but that’s not what anyone sane is suggesting.

          A fork isn’t great for eating soup, and a knife isn’t required either, but that doesn’t mean they’re not incredibly useful eating utensils.

          Try thinking of an LLM as a type of NLP, or natural language processing, tool, which allows computers to take normal human text as input and perform a range of tasks with it. It’s hugely useful and unlocks a vast amount of potential, but it’s not going to slap anyone for joking about its wife.
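That “text in, task out” framing can be sketched in a few lines. This is a toy with a stubbed model call (no real API is being described in the thread), just to show the shape of it:

```python
# Sketch of the "LLM as NLP tool" framing: plain text in, structured
# answer out. llm() is a stub standing in for a real model call.
def llm(prompt: str) -> str:
    canned = {"sentiment": "negative", "language": "French"}
    for key, answer in canned.items():
        if key in prompt:
            return answer
    return "unknown"

def classify_sentiment(text: str) -> str:
    return llm(f"Classify the sentiment of this text: {text!r}")

def detect_language(text: str) -> str:
    return llm(f"What language is this text? {text!r}")

print(classify_sentiment("This fork is useless for soup."))  # negative
print(detect_language("Le ciel est bleu."))                  # French
```

The same one interface (free-form text) drives completely different tasks, which is the “range of tasks” being claimed.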

        • Meowoem@sh.itjust.works · 10 months ago

          People do that too; actually, we do it a lot more than we realise. Studies of memory, for example, have shown that we invent details we expect to be there in order to fill in blanks, and that we convince ourselves we remember them even when presented with evidence that refutes it.

          A lot of the newer implementations use more complex methods of fact verification. It’s not easy to explain, but essentially it comes down to the weight you give different layers. GPT-5 is already training and likely to be out around October, but even before that we’re seeing pipelines that use an LLM to coordinate task-based processes: an LLM is bad at chess, but it could easily install Stockfish in a VM and beat you every time.
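The “install Stockfish and delegate” pipeline boils down to routing: the model doesn’t play chess, it hands the task to a dedicated tool. A hedged toy sketch, with both the model and the engine stubbed out (a real system would call an actual LLM and a real engine such as Stockfish):

```python
# Sketch of an LLM tool-use pipeline: the model only decides which
# tool handles the task; the specialist tool produces the answer.
def llm_pick_tool(task: str) -> str:
    # Stub for the model's routing decision.
    return "chess_engine" if "chess" in task.lower() else "llm"

def chess_engine(position: str) -> str:
    # Stub standing in for a real engine (e.g. Stockfish in a VM).
    return "e2e4"

def run(task: str) -> str:
    if llm_pick_tool(task) == "chess_engine":
        return chess_engine(task)
    return "answered directly by the LLM"

print(run("Play chess: best move from the start?"))  # e2e4
print(run("What colour is the sky?"))  # answered directly by the LLM
```

The interesting part is the dispatch, not the chess: the LLM’s weakness at a task stops mattering once it can recognise the task and delegate it.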