yeah it’s been absolutely hilarious to watch this play out in LLM space. so many prompt configurations and model deployments rely on string-based rule inputs that are meant to configure inviolable behaviour, and they still get egregiously broken
and afaict none of the dipshits have really seemed to internalise that just maybe their approach isn’t working