Yesterday, I shared a high-level overview of agentified AI. Today, let’s look at a simple but practical example. As a quick reminder, an AI agent, at its core, is a system that:
- Runs autonomously (without being manually triggered each time)
- Has a specific task or goal
- Interacts with its environment — files, APIs, tools, or people
In many ways, this is standard automation. What’s changed is that large language models (LLMs) make certain types of agents easier to build and more flexible — especially when tasks involve unstructured data or natural language.
The tooling has improved, but agents themselves aren’t new. Developers have been building these systems for years. What’s new is the accessibility — more people can now create useful, autonomous systems with relatively little code.
That said, understanding the underlying structure still matters — especially if you’re aiming for scalable, reliable, or domain-specific agents. Knowing how to interact with LLMs programmatically, design prompts, and manage inputs/outputs is key.
Here’s a simple example of an agent, written in Python with no frameworks, that demonstrates the core idea:
import os
import time
import requests
import smtplib
from email.message import EmailMessage
# === Configuration ===
# Folder to watch
FOLDER = "./watch"
# LLM settings
API_KEY = ""
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = ""
TEMPERATURE = 0.5
SYSTEM_PROMPT = "You are a helpful assistant that summarizes documents clearly and concisely."
# Email settings
SMTP_SERVER = "smtp.example.com"
SMTP_PORT = 587
EMAIL_ADDRESS = "sender@example.com"
EMAIL_PASSWORD = "email-password"
SEND_TO = "recipient@example.com"
# Internal tracking
PROCESSED = set()
# === Core Functions ===
def summarize(content):
    headers = {"Content-Type": "application/json"}
    if API_KEY:
        headers["Authorization"] = f"Bearer {API_KEY}"
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Summarize this file:\n\n{content}"}
        ],
        "temperature": TEMPERATURE
    }
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

def send_email(subject, body):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = EMAIL_ADDRESS
    msg["To"] = SEND_TO
    msg.set_content(body)
    with smtplib.SMTP(SMTP_SERVER, SMTP_PORT) as smtp:
        smtp.starttls()
        smtp.login(EMAIL_ADDRESS, EMAIL_PASSWORD)
        smtp.send_message(msg)
    print(f"Email sent to {SEND_TO}")

def run_agent():
    os.makedirs(FOLDER, exist_ok=True)
    print(f"Watching folder: {FOLDER}")
    while True:
        files = [f for f in os.listdir(FOLDER) if f.endswith(".txt")]
        for filename in files:
            path = os.path.join(FOLDER, filename)
            if path in PROCESSED:
                continue
            with open(path, "r") as f:
                content = f.read()
            print(f"Processing: {filename}")
            try:
                summary = summarize(content)
                send_email(f"Summary of {filename}", summary)
                PROCESSED.add(path)
            except Exception as e:
                print(f"Error processing {filename}: {e}")
        time.sleep(5)

if __name__ == "__main__":
    run_agent()
This very basic agent watches a folder. When a new .txt file appears, it reads the file, summarizes the content with a call to an LLM through an OpenAI-compatible API endpoint, and emails the result.
It runs autonomously, uses a language model, and interacts with local and remote systems — making it a complete, if simple, agent.
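Because both remote pieces (the LLM endpoint and the SMTP server) can fail transiently, a small retry helper is a common first hardening step. Here's a minimal sketch; the name `with_retries` is mine, not part of the script above:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(); on exception, retry with exponential backoff.

    Re-raises the last error once attempts are exhausted.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

In the loop above, `summary = summarize(content)` would become `summary = with_retries(lambda: summarize(content))`.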
This can be run as a standalone script or adapted for use as a cron job or background process. It’s intentionally simple to show the core pattern.
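To run it from cron rather than as a long-lived loop, one option is to pull the loop body out into a single-pass function that scans the folder once and exits. A minimal sketch, where `handler` stands in for the summarize-and-email step and the names `process_new_files` and `handler` are mine, not from the script above:

```python
import os

def process_new_files(folder, processed, handler, ext=".txt"):
    """Scan folder once; call handler(filename, content) for each unseen file.

    Returns the list of paths handled this pass. `processed` is any set-like
    object tracking already-handled paths; for cron, persisting it to a file
    or small database survives restarts better than an in-memory set.
    """
    handled = []
    for filename in sorted(os.listdir(folder)):
        if not filename.endswith(ext):
            continue
        path = os.path.join(folder, filename)
        if path in processed:
            continue
        with open(path, "r") as f:
            content = f.read()
        handler(filename, content)
        processed.add(path)
        handled.append(path)
    return handled
```

A cron job would then call `process_new_files(FOLDER, processed, lambda name, text: send_email(f"Summary of {name}", summarize(text)))` once per invocation.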
From here, it’s easy to extend: support other file types, batch processing, custom prompts, tool use, logging, etc.
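As one illustration of the file-type extension, the hard-coded `.txt` filter could become a registry mapping extensions to reader functions. A sketch, assuming the names `READERS` and `read_file` (they are illustrative, not part of the script above):

```python
import os

# Map file extensions to reader functions so the agent can handle
# more than plain .txt files.
READERS = {
    ".txt": lambda path: open(path, "r", encoding="utf-8").read(),
    ".md": lambda path: open(path, "r", encoding="utf-8").read(),
    # ".pdf": a PDF text extractor could be plugged in here
}

def read_file(path):
    """Return the file's text, or None if the extension isn't supported."""
    reader = READERS.get(os.path.splitext(path)[1].lower())
    return reader(path) if reader else None
```

The main loop would then skip the `.endswith(".txt")` check and instead call `read_file`, ignoring files for which it returns None.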
The key takeaway: you don’t need a full agent framework to get started — just a basic understanding of Python, API access to an LLM, and the workflow you want to automate.