• jacksilver@lemmy.world · 3 months ago

    Because all of these models are focused on text prediction/QA, the whole idea of “prompts” grew organically out of that functionality as people tried to make it more useful/powerful. Everything from function calling to agents to now this is just bolted onto the foundation of LLMs.
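
    To make the “bolted on” point concrete, here’s a minimal sketch of how function calling is typically layered on top of plain text completion. The `generate` function is a hypothetical stand-in for any LLM API call, and the tool name and JSON format are illustrative, not any specific vendor’s scheme:

    ```python
    import json

    # Hypothetical stand-in for a text-completion model; in practice this
    # would be an LLM API call. The model only ever sees and emits text.
    def generate(prompt: str) -> str:
        # Pretend the model chose to "call" a tool by emitting JSON.
        return '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'

    # Illustrative tool registry.
    TOOLS = {
        "get_weather": lambda city: f"Sunny in {city}",
    }

    # "Function calling" is layered on top: describe the tools in the
    # prompt, then parse the model's plain-text reply and dispatch it
    # yourself.
    prompt = (
        "You may call one of these tools by replying with JSON "
        '{"tool": ..., "arguments": ...}:\n'
        "- get_weather(city): current weather for a city\n\n"
        "User: what's the weather in Berlin?"
    )

    reply = json.loads(generate(prompt))
    result = TOOLS[reply["tool"]](**reply["arguments"])
    print(result)  # Sunny in Berlin
    ```

    Underneath, it’s still just text prediction; the “function call” is a convention the surrounding code enforces, not something native to the model.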

    It’s why this seems more like a patch than an actual iteration of the technology. They aren’t approaching it at the fundamental level.