yeah it’s been absolutely hilarious to watch this play out in LLM space: so many prompt configurations and model deployments, with so very many string-based rule inputs meant to configure inviolable behaviour, that still get egregiously broken
and afaict none of the dipshits have really internalised that just maybe their approach isn’t working
A good chunk of philosophers do believe there are moral facts, but this is less useful for these purposes than one would think