OpenAI announced these API updates 3 days ago:

  • new function calling capability in the Chat Completions API
  • updated and more steerable versions of gpt-4 and gpt-3.5-turbo
  • new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
  • 75% cost reduction on our state-of-the-art embeddings model
  • 25% cost reduction on input tokens for gpt-3.5-turbo
  • announcing the deprecation timeline for the gpt-3.5-turbo-0301 and gpt-4-0314 models
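The function calling item means you can describe callable functions to the model as JSON schemas, and the model may answer with a structured call instead of free text. A minimal sketch, assuming the `openai` Python package of that era; the model snapshot name and the `get_current_weather` function are illustrative, not from the announcement:

```python
# Sketch of the new function calling flow in the Chat Completions API.
# The schema below is a JSON-schema description of a function the model
# may choose to "call" by returning its name plus JSON arguments.
get_weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Paris"},
        },
        "required": ["city"],
    },
}

def build_request(user_message: str) -> dict:
    """Build the request body; pass it as openai.ChatCompletion.create(**body)."""
    return {
        "model": "gpt-3.5-turbo-0613",  # assumed snapshot name, for illustration
        "messages": [{"role": "user", "content": user_message}],
        "functions": [get_weather_function],
        "function_call": "auto",  # let the model decide whether to call
    }

# If the model decides to call, the response message carries a
# function_call with a name and a JSON string of arguments, roughly:
# {"name": "get_current_weather", "arguments": "{\"city\": \"Paris\"}"}
# Your code then runs the real function and sends the result back in a
# follow-up message for the model to phrase as an answer.
```

The point of the schema-based design is that argument parsing stays on the model side: you validate the returned JSON against your schema instead of regex-scraping free text.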
  • Sparking@lemm.ee · 1 year ago

    The only thing is, haven’t we learned our lesson with reddit? These proprietary APIs are not to be trusted. I don’t think I would ever build anything, even at a hobby or experimental level, that relied on this.

    • Denaton@programming.dev · edited · 1 year ago

      Until someone trains a model (and it will happen) that matches or outperforms GPT-4 and that I can run locally, I will use this to experiment and prototype random stuff that I find interesting ^^

      Edit: a big difference here too is that Reddit just fetches data from a database, so its API is roughly as cheap to serve as the main site and app, while GPT is generative and uses quite a lot of RAM and VRAM per request.

      • Sparking@lemm.ee · 1 year ago

        There are open source LLMs. I am not saying it is wrong to consume LLMs as a service; the issue is that OpenAI seems intent on not being very open.

        • Denaton@programming.dev · edited · 1 year ago

          Ah, I think there is a misunderstanding of their name: it’s not open as in open source, it’s open as in open research. They publish all their research for others to duplicate. And yes, there are other models out there, but none as good as GPT-4. Unless you have a computer with 640 GB of RAM, you can’t run it. So yeah, compared to fetching data from a database, which could be done on a Raspberry Pi, generating data requires a monster computer, and I understand that they wanna put a price on the API.