It’s called Pi, and it’s a conversational AI designed to be more of a personal assistant. In the bit of time I’ve used it, it’s done far better than I expected at reframing and simplifying my thoughts when I’m overwhelmed.

Obviously, talking to a real person is much better if possible, but the reality is some of us don’t have the finances to pay for therapy or other ways to cope with the anxiety/depression that so often comes with ASD. What are your thoughts on this?

  • haui@lemmy.giftedmc.com · 7 points · 9 months ago

    That might be true at the moment (i.e. they haven’t yet had a data breach, or decided that selling your data is more profitable).

    I would absolutely prefer something self-hosted. If it’s small, it can run on a Pi. If it needs GPU power, one could host it for their friend group or family and recoup the cost and effort that way.

    But I honestly don’t think post-training a conversational AI (for one person) should be that demanding. We’d need an ML specialist to confirm that, though. Some really know what they’re talking about.
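    For a rough sense of why personalizing a model for one person can be cheap, here is a back-of-envelope sketch, assuming LoRA-style fine-tuning (the base-model size, dimensions, and rank below are illustrative guesses, not measurements of Pi or any specific model):

```python
# Back-of-envelope: why fine-tuning for one person can be cheap.
# LoRA-style fine-tuning trains small low-rank adapter matrices
# instead of all the base weights. All numbers are illustrative.

def lora_params(d_model: int, n_layers: int, rank: int,
                matrices_per_layer: int = 4) -> int:
    """Trainable parameters for LoRA adapters: each adapted weight
    matrix (assumed square, d_model x d_model) gets two low-rank
    factors of shape d_model x rank."""
    return n_layers * matrices_per_layer * 2 * d_model * rank

full_model = 7_000_000_000                     # a 7B-parameter base model
adapter = lora_params(d_model=4096, n_layers=32, rank=8)

print(f"Adapter params: {adapter:,}")          # 8,388,608
print(f"Fraction of base model: {adapter / full_model:.4%}")
# The adapter is ~0.12% of the base model, which is why personalization
# is far cheaper than training (or even fully fine-tuning) the model.
```

    Actually serving the base model still needs the usual inference hardware; only the training cost shrinks.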

    • shootwhatsmyname@lemm.ee (OP) · 4 points · 9 months ago

      Agreed. I’ve dabbled in it some, but I’m no expert; maybe someone else could chime in. I just haven’t found anything that works quite as well as Pi yet, and it was really intriguing, to say the least. You can even talk to it verbally, back and forth like a phone call.

      • TheBluePillock@lemmy.world · 3 points · 9 months ago

        I would love to be corrected, but when I looked into it, it sounded like you’d probably want 32 GB of VRAM or better for actual chat ability. You have to have enough memory to load the model, and anything not handled by your GPU takes a major performance hit. Then, you probably want to aim for a 72-billion-parameter model. That’s a decently conversational level and maybe close to the one you’re using (but it’s possible they’re higher? I’m just guessing). I think 34B models are comparatively more prone to hallucination and inaccuracy. It sounded like 32 GB of VRAM was kind of the entry point for the 72B models, so I stopped looking, because I can’t afford that.

        So somebody with more experience or knowledge can hopefully correct me or give a better explanation, but just in case, maybe this is a helpful starting point for someone.

        You can download models on huggingface.co and interact with them through a web-ui like this one.
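        Those VRAM figures can be sanity-checked with simple arithmetic — a rough sketch, assuming common quantization levels (the bytes-per-parameter values are standard, but real loaders need extra memory for the KV cache and activations on top of this):

```python
# Rough VRAM estimate: space for the model weights alone,
# ignoring KV cache and activation overhead.

def weight_gb(params_billions: float, bits_per_param: int) -> float:
    """Gigabytes needed just to hold the weights:
    1B params at 8 bits/param = 1 GB."""
    return params_billions * bits_per_param / 8

for bits in (16, 8, 4):
    print(f"72B @ {bits}-bit: {weight_gb(72, bits):.0f} GB")
# 16-bit: 144 GB, 8-bit: 72 GB, 4-bit: 36 GB — which is why ~32 GB of
# VRAM is roughly the entry point for a heavily quantized 72B model,
# and why 7B-13B models are the usual choice for consumer GPUs.
```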

      • Nerd02@lemmy.basedcount.com · 2 points · 9 months ago

        I am no expert either, but I once trained and ran an AI chatbot of my own. With a decently powerful Nvidia GPU, it could output a message every 20 seconds or so (which is still too slow if you want to keep the conversation at a decent pace). I also tried it without a GPU, just running on my CPU (on a PC that had an AMD GPU, which is about the same as not having one for ML applications), and it was of course noticeably slower: about 3 minutes per message, give or take.

        And bear in mind, this was with an old and comparatively tiny model; something like Pi would be much more demanding. The replies my model produced hardly made any sense most of the time.
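        Those latencies translate into token throughput with simple arithmetic — a rough sketch, assuming a ~100-token reply (the token count is an illustrative guess, not a measurement):

```python
# Rough throughput implied by the latencies above, assuming a
# ~100-token reply (illustrative guess).
reply_tokens = 100

gpu_latency_s = 20    # ~20 s per message on a decent Nvidia GPU
cpu_latency_s = 180   # ~3 min per message on CPU only

print(f"GPU: {reply_tokens / gpu_latency_s:.1f} tokens/s")  # 5.0
print(f"CPU: {reply_tokens / cpu_latency_s:.1f} tokens/s")  # 0.6
# A chat feels responsive at several tokens/s (roughly reading speed),
# so CPU-only inference becomes impractical as models grow.
```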

      • haui@lemmy.giftedmc.com · 2 points · 9 months ago

        That’s pretty awesome. I only know of Mycroft, which is an assistant like Siri and also only partially self-hosted. I haven’t had the patience to dabble with this yet. My forte is Fediverse instances, a Raspberry Pi smart TV, and home automation. I have used AI for image recognition in Nextcloud (both self-hosted, obviously), but that’s it.