• zbyte64@awful.systems
    2 months ago

    That’s OpenAI admitting that o1’s “chain of thought” is generated after the fact. The “chain of thought” doesn’t show any of the LLM’s internal processes; o1 just returns something that looks a bit like a logical chain of reasoning.

    I think it’s fake “reasoning”, but I don’t know if (all of) OpenAI thinks that. They probably think hiding this data keeps the CoT training data from being extracted. I just don’t know how deep the stupid runs.