• Rhaedas@kbin.social
    11 months ago

    In the context of LLMs, I think that means giving them access to their own outputs in some way.

    That’s what the AutoGPTs (and the many similar tools now) do: they break the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a single one-time mass input. The biggest advantage, and the main reason these were developed, is keeping the LLM on course without deviation.
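    The loop described above can be sketched roughly like this — a minimal illustration, not AutoGPT’s actual code; `call_llm` is a hypothetical stand-in for a real model call:

    ```python
    def call_llm(prompt: str) -> str:
        # Hypothetical stub: a real agent would call a model API here.
        return f"result({prompt.splitlines()[-1]})"

    def run_agent(goal: str, subtasks: list[str]) -> str:
        # The goal stays in the context for every step, which is what
        # keeps the model on course instead of drifting.
        context = [f"Goal: {goal}"]
        for sub in subtasks:
            # Each smaller step sees the accumulated results so far,
            # i.e. the model's own outputs are fed back in.
            prompt = "\n".join(context + [sub])
            context.append(call_llm(prompt))
        return context[-1]

    print(run_agent("summarize report", ["outline", "draft", "polish"]))
    # → result(polish)
    ```

    Real agent frameworks add planning, tool use, and stopping criteria on top, but the feed-outputs-back-in loop is the core idea.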