OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet, and it could be really bad for AI models.

In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

  • Queen HawlSera@lemm.ee · 11 months ago

    How not? Have you ever talked to ChatGPT? It’s full of blatant lies and failures to understand context.

    • void_wanderer@lemmy.world · 11 months ago (edited)

      And? Blatant lies are not exclusive to AI texts. Right-wing media outlets are full of blatant lies, yet they’re written by humans (for now).

      The problem is, if you prompt the AI properly, you get exactly what you want. Prompt it a hundred times, and you get a hundred different texts, posted to a hundred different social media channels, generating hype. How on earth will you be able to detect this?
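      (For illustration, a minimal sketch of the mass-generation pattern described above, assuming the OpenAI Python SDK v1+ with an API key in OPENAI_API_KEY; the model name and prompt are placeholders, not taken from the thread.)

      ```python
      # Minimal sketch: generate many distinct variants of the same message.
      # Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the environment;
      # the model name and prompt are placeholders.
      from openai import OpenAI

      client = OpenAI()
      prompt = "Write a short, enthusiastic social media post about product X."

      variants = []
      for _ in range(100):
          response = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[{"role": "user", "content": prompt}],
              temperature=1.0,  # high temperature -> different wording on every call
          )
          variants.append(response.choices[0].message.content)

      # The result is a hundred texts carrying the same message in different
      # wording and structure, ready for a hundred different channels -- which
      # is exactly what makes reliable detection so difficult.
      ```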

    • diffuselight@lemmy.world · 11 months ago

      Just like your comment, you say? Indistinguishable from human: garbage in, garbage out.

      If you actually used the technology rather than being a stochastic parrot, you’d understand. :)