Let’s talk about our experiences working with different models, whether well-known or lesser-known.

Which locally run language models have you tried out? Share your insights, challenges, or anything you found interesting during your encounters with those models.

  • actually-a-cat@sh.itjust.works · 1 year ago

    The wizard-vicuna family is my favorite; they successfully combine lucidity with creativity. Wizard-vicuna-30b is competitive with guanaco-65b in most cases while being subjectively more fun. I hope we get a 65b version, or a Falcon 40B one.

    I’ve been generally unimpressed with models advertised as good for storytelling or roleplay; they tend to be incoherent. It’s much easier to get wizard-vicuna to write fluent prose than it is to get one of those to stop mixing up characters or rules. I think there might be some sort of poison pill in the Pygmalion dataset; it’s the common factor in all the models that didn’t work well for me.

      • actually-a-cat@sh.itjust.works · 1 year ago

        W-V is supposedly trained for “USER:/ASSISTANT:”, but I’ve found it flexible and able to work with anything that’s consistent. For creative writing I’ll often do “USER:/STORY:”. More than two such tags also work, e.g. I did an RPG-style thing with three characters plus an omniscient narrator, just by describing each of them with their tag in the prompt, and it worked nearly flawlessly. Very impressive, actually.
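The multi-tag prompting described above is just a consistent “TAG: text” transcript. A minimal sketch of how such a prompt could be assembled (the characters, tag names, and helper function here are hypothetical illustrations, not from the original comment):

```python
# Sketch of a multi-tag prompt in the "TAG:" style described above.
# All names (build_prompt, ALICE, BOB, NARRATOR) are made up for illustration.

def build_prompt(system: str, turns: list[tuple[str, str]], next_tag: str) -> str:
    """Assemble a prompt where each turn is 'TAG: text', ending with the
    bare tag the model should continue as."""
    lines = [system]
    for tag, text in turns:
        lines.append(f"{tag}: {text}")
    lines.append(f"{next_tag}:")  # the model completes from here
    return "\n".join(lines)

prompt = build_prompt(
    "A fantasy RPG. ALICE and BOB are adventurers; NARRATOR describes the scene.",
    [
        ("NARRATOR", "The two stand at the mouth of a dark cave."),
        ("ALICE", "I light a torch and step inside."),
    ],
    next_tag="BOB",
)
print(prompt)
```

The key point from the comment is only that the tags stay consistent throughout the prompt; the model then keeps continuing whichever tag the prompt ends on.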

  • Yahma@kbin.social · 1 year ago

    Guanaco, WizardLM (uncensored) and Camel-13b have been the best models I’ve tried that are 13b+.

    Surprisingly, LaMini-LM (Flan 3b) and OpenLlama (3b) have performed very well for smaller models.

  • dtlnx@beehaw.org (OP) · 1 year ago

    I’d have to say I’m very impressed with WizardLM 30B (the newer one). I run it in GPT4ALL, and even though it is slow, the results are quite impressive.

    Looking forward to Orca 13b if it ever releases!

    • micheal65536@lemmy.micheal65536.duckdns.org · 1 year ago

      Which one is the “newer” one? Looking at the quantised releases by TheBloke, I only see one version of 30B WizardLM (in multiple formats/quantisation sizes, plus the unofficial uncensored version).

  • Kerfuffle@sh.itjust.works · 1 year ago

    guanaco-65B is my favorite. It’s pretty hard to go back to 33B models after you’ve tried a 65B.

    It’s slow and requires a lot of resources to run, though. Also, it’s not like there are many 65B models to choose from.

      • Kerfuffle@sh.itjust.works · 1 year ago

        With a quantized GGML version you can just run it on CPU if you have 64GB of RAM. It is fairly slow, though; I get about 800 ms/token on a 5900X. Basically you start it generating something and come back in 30 minutes or so. You can’t really carry on a conversation.
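The 800 ms/token figure lines up with the “come back in 30 minutes” remark. A quick back-of-the-envelope check, assuming generation speed stays constant (the 2000-token reply length is an illustrative assumption, not from the comment):

```python
# Back-of-the-envelope: how long does a reply take at 800 ms/token?
ms_per_token = 800  # ~800 ms/token on a Ryzen 5900X, per the comment above

tokens_per_minute = 60_000 / ms_per_token  # 60,000 ms in a minute -> 75 tokens/min

reply_tokens = 2000  # hypothetical long-ish reply, for illustration
minutes = reply_tokens * ms_per_token / 60_000  # ~26.7 minutes

print(f"{tokens_per_minute:.0f} tokens/min; {reply_tokens} tokens takes ~{minutes:.1f} min")
```

At 75 tokens per minute, even a moderately long generation runs into tens of minutes, which is why interactive back-and-forth isn’t practical at that speed.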

        • planish@sh.itjust.works · 1 year ago

          Is it smart enough to pick up the thread of what you’re looking for without as much rerolling or handholding, so that the result comes out better overall?