and as always, the culprit is ChatGPT. Stack Overflow Inc. won’t let their mods take down AI-generated content

  • Pigeon@beehaw.org · 1 year ago

    I dunno about “tomorrow”. Eventually, maybe. But today’s AIs are just language models. If there are no humans answering questions and creating new reporting on new events/tech/etc., then the AI can’t be trained on their output and won’t be able to say a single thing about those new topics. It’ll pretend to and make shit up, but that’s it.

    Being just language models - really great ones, but still without any understanding of the content of what they say - they’re currently in a state of making shit up all the time. All they care about is the likelihood that one word or phrase or paragraph typically follows another (see the toy sketch at the end of this comment), which produces truthy-sounding language that’s often very far from actual truth.

    The only way to get around that is to create AI that isn’t just a pile of language algorithms, and that’s an entirely different beast from what we’re dealing with now - who knows how far off, or if it’s even possible. You can’t just iteratively improve a language algorithm into something that isn’t a language algorithm anymore.
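
    To make the “one word or phrase follows another” point concrete, here’s a deliberately tiny, made-up sketch (the word table and counts below are invented purely for illustration; real models work over tokens with billions of learned parameters, not a lookup table):

    ```python
    # Toy illustration only: a made-up bigram "model" that picks the next word
    # purely from how often words followed each other in some training text.
    # Nothing anywhere in here understands what the words mean.
    import random

    # Invented counts of "word B followed word A" - placeholder data.
    bigram_counts = {
        "the": {"sky": 40, "moon": 25, "answer": 10},
        "sky": {"is": 60, "was": 15},
        "is": {"blue": 50, "falling": 5, "green": 2},
    }

    def next_word(current: str) -> str:
        """Sample the next word in proportion to how often it followed `current`."""
        candidates = bigram_counts.get(current)
        if not candidates:
            return "<end>"
        words = list(candidates)
        weights = list(candidates.values())
        return random.choices(words, weights=weights, k=1)[0]

    # Generate a short, plausible-sounding continuation.
    word, sentence = "the", ["the"]
    for _ in range(3):
        word = next_word(word)
        if word == "<end>":
            break
        sentence.append(word)
    print(" ".join(sentence))  # e.g. "the sky is blue" - fluent, yet nothing was understood
    ```

    Everything the sketch “knows” lives in that frequency table; feed it wrong counts and it will just as happily generate fluent nonsense.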

    • kevin@beehaw.org · 1 year ago

      I imagine it’ll be possible in the near future to improve the accuracy of technical AI content somewhat easily. It’d go something along these lines: have an LLM generate a candidate response, then have a second LLM validate that response. The validator would have access to real references it can use to ensure some form of correctness; e.g. a Python response could be plugged into a Python interpreter to make sure it, to some extent, does what it’s purported to do. The validator then either decides the output is most likely correct or generates some sort of feedback asking the first LLM to revise, until the response passes validation. This wouldn’t catch 100% of errors, but a process like this could significantly reduce the frequency of hallucinations, for example.
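
      Very roughly, and purely as a sketch (ask_generator, ask_validator and run_candidate below are hypothetical placeholders, not any real API), the loop could look something like this:

      ```python
      # Hypothetical sketch of the generate -> validate -> revise loop described above.
      # ask_generator / ask_validator stand in for calls to two LLMs; they are
      # placeholders to be wired up, not a real library.
      import subprocess
      import sys
      import tempfile

      def ask_generator(prompt: str, feedback: str = "") -> str:
          """Placeholder: return candidate Python code for `prompt`, revised using `feedback` if given."""
          raise NotImplementedError("wire this up to the generator model")

      def ask_validator(prompt: str, code: str, run_output: str) -> tuple[bool, str]:
          """Placeholder: return (looks_correct, feedback) after reviewing the code and its actual output."""
          raise NotImplementedError("wire this up to the validator model")

      def run_candidate(code: str, timeout: int = 10) -> str:
          """Execute the candidate in a subprocess and capture what it actually does."""
          with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
              f.write(code)
              path = f.name
          result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=timeout)
          return result.stdout + result.stderr

      def answer_with_validation(prompt: str, max_rounds: int = 3) -> str:
          code = ask_generator(prompt)
          for _ in range(max_rounds):
              run_output = run_candidate(code)        # ground the check in real behaviour
              ok, feedback = ask_validator(prompt, code, run_output)
              if ok:
                  return code                         # validator is satisfied
              code = ask_generator(prompt, feedback)  # ask for a revision and try again
          return code  # best effort after max_rounds; still not guaranteed correct
      ```

      The important bit is that the validator gets to see what the candidate actually did when run, not just what it looks like, which is the closest thing to a “real reference” in this setup.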

      • Tutunkommon@beehaw.org · 1 year ago

        Best description I’ve heard is that an LLM is good at figuring out what the correct answer should look like, not necessarily what it is.

      • orclev@lemmy.ml · 1 year ago

        > The validator would have access to real references it can use to ensure some form of correctness

        That’s the crux of the problem: an LLM has no understanding of what it’s saying, and it doesn’t know how to use references. All it knows is that, in similar contexts, this set of words tended to follow that other set of words. It doesn’t actually understand anything. It’s capable of producing output that looks correct at a casual glance but is often wildly wrong.

        Just look at that legal filing that idiot lawyer used ChatGPT to generate. It produced fake references that were trivial for a real lawyer to spot, because they used the wrong citation format for the district they were supposedly from. They looked like real citations because they were modeled on how real citations look, but the model didn’t understand that citation styles differ by court district and that the claimed district and the citation style have to match.

        LLMs are very good at producing convincing-sounding bullshit, particularly for the uninformed.

        I saw a post here the other day where someone was saying they thought LLMs were great for learning because beginners often don’t know where to start. There might be some merit to that if it’s used carefully, but by the same token it’s incredibly dangerous, because it often takes very deep knowledge to see the various ways an LLM’s output is wrong.

      • FlowVoid@midwest.social · 1 year ago

        > The validator would have access to real references

        And who wrote the “real” references?

        Because that’s the point of the post you replied to. LLMs can’t completely replace humans, because only humans can make new “real references”.