I’m sure this is a common topic, but things are moving fast these days.

With bots looking more human than ever, I’m wondering what’s going to happen once everyone starts using them to spam the platform. Lemmy, with its simple username/text layout, seems to offer the perfect ground for bots: verifying that someone is real would mean scrolling through all their comments and reading them carefully one by one.

  • RoundSparrow@lemmy.ml · 11 months ago

    One of the cool things to me about Lemmy is that it is like email, where people can have their own custom domain names. Personally, I think posting under one’s real identity should come back into fashion, and the post-9/11/2001 USA culture of terrorism fearmongering should not be the dominant media emotion in 2023.

    “Real humans, not bots” could be a real selling point for the ongoing social media reboot (Twitter since September 2022, Reddit since May 2023), as opposed to the “throwaway” account culture of Reddit.

    ChatGPT GPT-4 is incredibly good at convincing human beings it gives factual information when it is really just great at “sounding good, but being factually wrong”. It’s amazing to me how many people have embraced and even shown deep love towards the machines. It’s pretty weird to me that a computer fed facts spits out anti-facts. Back in March I was doing a lot of research on ChatGPT’s fabrication of facts; it made wild claims like Bill Gates traveling to New Mexico when BASIC was first created. It would even cite pages from Bill Gates’s book that did not contain the quotes it provided. https://www.AuthoredByComputer.com/ has examples I documented.

    EDIT: another example: it would make up facts about simple computer chips described in a book, claiming they had more RAM on the chip than they actually did, etc.: https://www.AuthoredByComputer.com/chatgpt4/chatgpt4-ibm-ps2-uart-2023-03-16a

    • Lmaydev@programming.dev · 11 months ago

      It’s because it isn’t really fed facts. Words are converted into numbers, and it learns the relationships between them.

      It has absolutely no understanding of facts, just how words are used with other words.

      It’s not like it’s looking up things in a database. It’s taking the provided words and applying a mathematical formula to create new words.
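      The point above can be sketched in a few lines. This is a toy illustration, not how GPT-4 actually works (real models use learned embeddings and neural networks, not bigram counts), and the corpus strings are made up: the “model” only ever sees which number follows which number, so it can confidently emit whichever continuation was statistically most common, true or not.

      ```python
      from collections import Counter, defaultdict

      # Hypothetical toy corpus: one true and two false statements about a chip.
      # The model never sees truth values, only word sequences.
      corpus = (
          "the chip has 128 bytes of ram . "
          "the chip has 256 bytes of ram . "
          "the chip has 256 bytes of ram ."
      ).split()

      # Words are converted into numbers: each word gets an integer id.
      vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}
      inv = {i: w for w, i in vocab.items()}
      ids = [vocab[w] for w in corpus]

      # "Understanding the relationship between words" reduced to its simplest
      # form: count which id follows which id (a bigram model).
      follows = defaultdict(Counter)
      for a, b in zip(ids, ids[1:]):
          follows[a][b] += 1

      def predict_next(word):
          """Return the statistically most likely next word -- no fact lookup."""
          next_id, _count = follows[vocab[word]].most_common(1)[0]
          return inv[next_id]

      # "256" wins simply because it occurred more often than "128" in the
      # training text, regardless of which figure is actually correct.
      print(predict_next("has"))
      ```

      Scaled up with neural networks and vastly more text, the same principle holds: the output is the plausible continuation, not a retrieved fact.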

      • RoundSparrow@lemmy.ml · 11 months ago

        > It’s because it isn’t fed facts really.

        That’s an interesting theory of why it works that way. Personally, I think rights usage, as in copyright, is a huge problem for OpenAI and Microsoft (Bing): they are trying to avoid paying for the training material they use, and if the model accurately quoted its source material, they would run into the expensive licensing costs they are trying to avoid.

        !aicopyright@lemm.ee