Agentic AI is becoming a common way to structure systems around language models — making them appear goal-directed, interactive, and capable of handling complex tasks.
But the models themselves haven’t changed.
Large language models remain static, stateless, and deterministic. What gives rise to agentic behavior comes entirely from how we structure inputs, manage outputs, and coordinate interactions through the application layer.
Understanding that separation is key to building and working effectively with agentic systems.
The Model Behind the Application
At the heart of every LLM-based application like ChatGPT (and it is an application, not just a model) is the model itself. As I’ve explained in previous posts, these models are:
- Static – They don’t learn apart from pretraining and fine-tuning by the developers.
- Stateless – They are event-driven with no internal memory of previous interactions.
- Deterministic – They generate text using well-understood, repeatable algorithms; outputs only vary because the sampling step draws on a random seed that is usually not fixed.
These models are also inherently isolated; they don’t have access to other applications or external sources of information. They are, simply, a read-only collection of learned language patterns that are used to produce (or rather, predict) text based on probability and the user’s provided input.
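That determinism is easy to demonstrate at the sampling step. The sketch below is purely illustrative (the logits and vocabulary are made up): with the same seed, the "generated" token sequence is identical every run, and the variation people observe in practice comes from unseeded sampling.

```python
import math
import random

def sample_next_token(logits, rng):
    # Softmax the logits into probabilities, then draw one token index.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return rng.choices(range(len(logits)), weights=[e / total for e in exps], k=1)[0]

# Toy "model output": fixed logits over a five-token vocabulary.
logits = [2.0, 1.0, 0.5, 0.2, 0.1]

# Two runs with the same seed produce identical sequences every time;
# only an unseeded generator makes outputs appear non-deterministic.
rng_a, rng_b = random.Random(42), random.Random(42)
seq_a = [sample_next_token(logits, rng_a) for _ in range(5)]
seq_b = [sample_next_token(logits, rng_b) for _ in range(5)]
print(seq_a == seq_b)  # True
```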
Why Agentic AI Appears Dynamic
But how can that be true if agentic AI appears dynamic, stateful, and able to access external systems?
First, the model is instructed to produce specific function calls based on the user’s input. This could be as simple as recognizing a request to search the web or identifying a math problem. During generation, the model then emits a function call, such as sending a query to a search engine, writing or executing code, or triggering an API call to solve a problem.
This is where the application layer comes in. The application watches for this specially structured output; when it appears, it is the software, not the model, that pauses generation and carries out the instructions. Once complete, the application passes the newly acquired data back to the model, which continues its output with the benefit of the new information.
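As a rough sketch of that loop (the JSON shape and tool names here are hypothetical; real APIs such as OpenAI’s tool calling differ in detail), the application side boils down to: parse the model’s output, and if it is a structured function call, run the matching tool and hand the result back to the model.

```python
import json

# Hypothetical tool registry; a real application would register actual integrations.
TOOLS = {
    "web_search": lambda query: f"Top results for {query!r}",
}

def handle_model_output(raw_output):
    """If the model emitted a structured function call, run it; else return the text."""
    try:
        message = json.loads(raw_output)
    except json.JSONDecodeError:
        return raw_output  # plain text, nothing to intercept
    if message.get("type") == "function_call":
        result = TOOLS[message["name"]](**message["arguments"])
        # In a real system this result is appended to the conversation and the
        # model is invoked again so it can continue generating with the new data.
        return result
    return raw_output

model_output = json.dumps({
    "type": "function_call",
    "name": "web_search",
    "arguments": {"query": "latest LLM news"},
})
print(handle_model_output(model_output))
```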
Simulating State and Autonomy
So function calling compensates for the model being static and isolated, but what about statelessness? How can LLMs act autonomously?
This too comes from the application layer. Just as Windows-based environments have the Task Scheduler and Linux has cron, agentic applications use a scheduler as well. The interesting part is that this scheduler can be managed through the LLM itself via the same mechanism described above — function calling.
If you ask an LLM to perform a task every morning at 6 a.m., it can emit the function call needed to pause output and register a “job” based on your prompt.
These jobs often include a time parameter, function call, and post-processing instructions that can then run according to schedule. Here’s an example:
- You ask an agentified AI to search for all new LLM-related news and email it to you at 6:00 a.m. each morning.
- The LLM begins processing and extracts the necessary elements (6:00 a.m., scrape news data, email user).
- It builds a job containing the necessary function calls and tasks:
- a) Search web for headlines related to LLMs
- b) Pull full articles and summarize
- c) Email user the LLM-generated summary
- The application adds the job to the scheduler.
- At 6:00 a.m., the job runs. Like a Rube Goldberg machine, the application makes the function calls, passes data to the LLM for processing, and then sends the finished output via email.
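A job like the one above can be represented as plain data that the scheduler stores and replays. The structure below is a hypothetical sketch (the field and function names are invented for illustration, not any particular framework’s format):

```python
from dataclasses import dataclass, field

@dataclass
class ScheduledJob:
    run_at: str                                # e.g. "06:00", parsed from the prompt
    steps: list = field(default_factory=list)  # ordered function calls to execute
    post_processing: str = ""                  # instructions for the final LLM pass

# The job the application might build from the 6 a.m. news example above.
job = ScheduledJob(
    run_at="06:00",
    steps=[
        {"call": "web_search", "args": {"query": "LLM news"}},
        {"call": "fetch_and_summarize", "args": {}},
        {"call": "send_email", "args": {"to": "user@example.com"}},
    ],
    post_processing="Summarize the headlines before emailing.",
)
```

At 6:00 a.m., the scheduler walks `steps` in order, passing each result to the next call, with the model only invoked where summarization is needed.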
Though the difference may seem academic, this is not the LLM remembering or becoming stateful. It is all accomplished through prompting formats designed for function calling, combined with application software that runs these automated tasks on a scheduler.
Why This Matters
The value in understanding agentic systems isn’t just in knowing how they operate, but in seeing what that structure enables. Separating the model from the system clarifies where capability comes from and where control resides. That framing is essential — not just for building these systems, but for using them well.