• Balder@lemmy.world
    1 year ago

    Not sure if you’re disagreeing or agreeing with me. What I mean is, if an LLM’s output is in practice indistinguishable from human output, fingerprinting some popular services just creates a false sense of security, since we know for sure that malicious actors won’t fingerprint theirs.

    Isn’t it just better to let humanity accept that an LLM’s output can be identical to a person’s, and to always be skeptical?