The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months.”
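To make that quoted range concrete, the arithmetic works out as follows: doubling every 14 months compounds to roughly a 1.8× gain per year, while doubling every 5 months compounds to roughly 5.3× per year.

\[
2^{12/14} \approx 1.8, \qquad 2^{12/5} \approx 5.3
\]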