I wrangle code, draw pictures, and write things. You might find some of it here.

  • 3 Posts
  • 77 Comments
Joined 4 months ago
Cake day: March 13th, 2024


  • My favorite bullshit test is to have the AI count letters. Something as simple as “How many A’s are in this sentence?” will have the thing give you a seemingly random number (which might sometimes even be correct), but as soon as you ask it to list them, it will have a stroke and just make up stuff that makes no sense.

    I can’t be arsed to make an account with OpenAI or whatever, so here’s an example with Bing Chat, but this has worked on every single chatbot I’ve tested it on.

    Another fun test (as suggested by @flizzo last time I brought this up) is to have the chatbot attempt to crack a simple code. Just Caesar cipher something and ask it to crack it. Hilarity will ensue. You can even ask it to encrypt it first and then to crack it for extra fun.

    Me: Encrypt the sentence “I am a stupid roided Clippy” with the Caesar cipher.

    Clippy: [After explaining what the Caesar cipher is, which I didn’t ask for, thanks] Convert the sentence to uppercase: “I AM A STUPID ROIED CLIPPY”. Let’s apply the Caesar cipher to your sentence using a shift of 7.

    • ‘I’ becomes ‘P’
    • ‘A’ becomes ‘H’
    • ‘M’ becomes ‘T’
    • ‘Y’ becomes ‘F’

    So the encrypted sentence is: “PH HT H ZBHTLW YVPLUU”. 😊

    Okay, first of all, it dropped a letter. And the ciphertext doesn’t even follow the shift it just listed, lol. It said ‘Y’ becomes ‘F’ and then does whatever the fuck.

    Okay, so let’s give it an easy example, and even tell it the shift. Let’s see how that works.

    This shit doesn’t even produce one correct message. Internal state or not, it should at least be able to read the prompt correctly and then produce an answer based on that. I mean, the DuckDuckGo search field can fucking do it!
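
    For the record, the correct answer is trivial to produce. Here’s a quick Python sketch I threw together (my own throwaway code, not anything the bot wrote — just the textbook shift-by-7, letters only, everything else passed through):

        def caesar(text, shift=7):
            # Shift each letter by `shift` places, wrapping around the alphabet;
            # non-letters (spaces, punctuation) pass through unchanged.
            out = []
            for ch in text:
                if ch.isalpha():
                    base = ord('A') if ch.isupper() else ord('a')
                    out.append(chr((ord(ch) - base + shift) % 26 + base))
                else:
                    out.append(ch)
            return ''.join(out)

        print(caesar("I AM A STUPID ROIDED CLIPPY", 7))    # P HT H ZABWPK YVPKLK JSPWWF
        print(caesar("P HT H ZABWPK YVPKLK JSPWWF", -7))   # decrypts back to the original

    Compare that with the bot’s “PH HT H ZBHTLW YVPLUU”: wrong number of words, wrong letters, wrong everything.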



  • This is brilliant and I’m saving it and will post a link to it the next time someone at work asks why we can’t “just use AI to do it” when a ticket gets rejected for being stupid and/or unreasonable.

    However:

    The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we’re all harvested for our constituent atoms […]. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound.

    Yeah, I gotta admit, I am surprised. Because I have not found a single reasonable argument for this horseshit and the rest of the article (as well as the others I read from their blog) does not read like it’s been written by someone who’d buy into AI foom.



  • Oh look, Elon openly snuggling up to Nazis and “just asking questions”. As if I didn’t hate this clown enough.

    (For anyone out of the loop: the AfD is a far-right political party in Germany and the spiritual successor to the NSDAP. They praise the SS, advocate for legalizing Holocaust denial and historical revisionism, for removing hate crimes from the criminal code, and more. They’re so openly Nazis that they got kicked out of the EU parliament’s far-right ID coalition for being too fucking Nazi. There’s no leeway. They’re literal card-carrying national socialists.)



  • They managed to make this even more stupid than the open letter from last year which had Yud among the signatories. At least that one was consistent in its message, while this one somehow manages to shoehorn in a milquetoast Altman well-akshually about how AI is, like, totes useful and stuff until it’s gonna murder us all.

    Who are they even pandering to here?



  • Many will point out that AI systems are not yet writing award-winning books, […]

    Holy shit, these chucklefucks are so full of themselves. To them, art and expression and invention are really just menial tasks which ought to be automated away, aren’t they? They claim to be so smart but constantly demonstrate they’re too stupid to understand that literature is more than big words on a page, and that all their LLMs need to do to replace artists is to make their autocomplete soup pretentious enough that they can say: This is deep, bro.

    I can’t wait for the first AI-brained litbro trying to sell some LLM’s hallucinations as the Finnegans Wake of our age.





  • The Collinses are atheists; they believe in science and data, studies and research. Their pronatalism is born from the hyper-rational effective altruism movement

    This is just gonna be eugenics, isn’t it?

    Malcolm describes their politics as “the new right – the iteration of conservative thought that Simone and I represent will come to dominate once Trump is gone.”

    What’s that now? Neo-alt-right? You can’t just add another fucking prefix anytime your stupid fascist movement goes off the rails.

    One of the reasons why I chose to only have two children is because I couldn’t afford to give more kids a good life: the bigger home, the holidays, the large car and everything else they would need.

    Yeah, what about giving them love or a warm relationship, or, you know, time?

    And then they wonder why those generations have shitty relationships with their parents when they seriously believe that what they need is a big fucking car, as if that’s the variable that was missing in all of this.

    Excuse me while I go and hug my daughter. I need to de-rationalize myself after reading this.