AI Agents Demystified

by Ai4Value CTO Pasi Karhu

“If you have a ready-made agent that suits us, we’d be happy to buy one.”

This comment from a customer during a sales meeting perfectly captures the current confusion around what AI agents actually are. The widespread hype around agents that began last year and continues into this year has led many to view agents as a solution to everything — which, of course, isn’t true. What our customer actually needed wasn’t a ready-made agent, but ready-made solutions for improving their business operations. In many cases, those can still be built — and it makes more sense to build them — using traditional methods, without agents at all.

The basic definition of an agent is that it operates autonomously to achieve a given goal using the tools it has been provided. Autonomy means the agent makes decisions that haven't been pre-programmed, based on the situation at hand. The agent decides which sub-goals to pursue in order to complete a task, and which tools to use to achieve each one.
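To make this concrete, here is a minimal sketch of such a loop. Everything in it is illustrative: `call_model` is a stub standing in for a real language-model API, and the tools are toy functions. The point is only the shape of the control flow, in which the model, not the programmer, picks the next step.

```python
def call_model(goal, history, tool_names):
    # Stub: a real language model would choose the next action based on
    # the goal, the history of prior steps, and the available tools.
    if not history:
        return {"tool": "search", "input": goal}
    if len(history) == 1:
        return {"tool": "summarize", "input": history[-1]}
    return {"tool": "finish", "input": history[-1]}

# Toy tools; a real agent might have web search, a database, email, etc.
TOOLS = {
    "search": lambda q: f"documents about {q!r}",
    "summarize": lambda text: f"summary of {text}",
}

def run_agent(goal, max_steps=5):
    """The agent decides, step by step, which tool to apply next."""
    history = []
    for _ in range(max_steps):
        action = call_model(goal, history, list(TOOLS))
        if action["tool"] == "finish":
            return action["input"]
        history.append(TOOLS[action["tool"]](action["input"]))
    return history[-1]  # give up after max_steps

print(run_agent("quarterly sales trends"))
```

Note that nothing in `run_agent` hard-codes the order of tool use; the sequence of decisions lives inside the model call, which is exactly where the flexibility, and the unpredictability, comes from.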

Agents are strictly necessary only when a task requires the kind of flexibility and ability to handle unforeseen situations that traditional programming cannot provide. This flexibility comes from the language model serving as the agent’s intelligence, combined with the instructions (prompts) it is given. It’s also worth understanding that software built with traditional techniques can include AI elements and even language model functionality without being an agent.
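For contrast, here is a sketch of traditionally programmed software that uses a language model for a single step without being an agent. The control flow is fixed in advance; the model never chooses sub-goals or tools. `classify_with_llm` is a hypothetical stand-in for a real model call, and the queue names are invented.

```python
def classify_with_llm(ticket_text):
    # Stub for a model call that labels a support ticket.
    return "billing" if "invoice" in ticket_text.lower() else "technical"

def route_ticket(ticket_text):
    # Deterministic, pre-programmed routing around the model's one answer.
    label = classify_with_llm(ticket_text)
    queues = {"billing": "finance-team", "technical": "support-team"}
    return queues[label]

print(route_ticket("My invoice shows the wrong amount"))  # finance-team
```

Here the language model is just one function inside an ordinary program, so the rest of the system stays testable and predictable.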

The trend these days, however, seems to be that agentic solutions are being built even for tasks where they aren't needed. Building agentic solutions has become so easy that no technical knowledge is required; you just need to be able to describe your need in plain language. Over time, though, this comes at a cost: direct monetary expenses, a larger carbon footprint, security risks, and challenges in maintenance and further development.

Even if implementing an agentic solution for a problem that could be solved with traditional programming costs almost nothing upfront, its lifecycle costs can quickly exceed those of traditional development work. Significant cost factors and risks can arise from the following, for example:

  • The language model calls an agent relies on can require over a million times more computational power than equivalent program logic. For frequently repeated tasks, this shows up in the budget, and it has global ecological consequences too.
  • Language model outputs are non-deterministic, meaning an agent won’t perform a task the same way every time. Quality can vary, and even simple tasks can occasionally fail completely. Human oversight is tedious and costly. Without it, quality issues can cause serious problems downstream in a process.
  • Agentic solutions are highly challenging from a security perspective. Autonomous agents are remarkably resourceful, but they also lack “common sense.” They may, for instance, feed sensitive information to questionable destinations if they believe it will help them complete their assignment. They are also vulnerable to attacks, since language models struggle to distinguish between instructions and the content they’re processing. This makes agents easy to manipulate with new instructions if a malicious party manages to inject them into the agent’s input.
  • If a solution’s core architecture is built agentically from the start, it becomes difficult to apply traditional program logic in future development. As the software’s capabilities grow, the cost factors and risks mentioned above can become even more pronounced.
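One common mitigation for the non-determinism risk above is to wrap each model call in deterministic validation with a bounded retry, so malformed output fails loudly instead of flowing downstream. The sketch below assumes a hypothetical `model_extract_amount` call; the stub is rigged to fail on its first attempt so the retry path is exercised.

```python
import re

_calls = {"n": 0}

def model_extract_amount(invoice_text):
    # Stub standing in for a real, non-deterministic model call.
    _calls["n"] += 1
    if _calls["n"] == 1:
        return "sorry, I could not find it"  # simulated bad output
    match = re.search(r"\d+\.\d{2}", invoice_text)
    return match.group(0) if match else "unknown"

def extract_amount_checked(invoice_text, retries=3):
    """Deterministic schema check around a non-deterministic model call."""
    for _ in range(retries):
        raw = model_extract_amount(invoice_text)
        if re.fullmatch(r"\d+\.\d{2}", raw):
            return float(raw)
    raise ValueError("model output failed validation after retries")

print(extract_amount_checked("Total due: 149.90 EUR"))  # 149.9
```

Validation like this does not make the model deterministic, but it turns silent quality drift into an explicit error that a process, or a human, can handle.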

My intention isn’t to argue that agents shouldn’t be used or that they’re too risky. Agentic solutions bring genuine relief to many tasks that previously had no solution — either because the technology simply didn’t exist, or because traditionally programming a “small task” was too expensive. The key is simply to be aware of how agentic implementations differ from traditional ones in critical ways, and what level of expertise is needed to build agentic solutions as safely as possible.

Ai4Value is a strong and capable organization of AI experts, and one of our core strengths is understanding the safe use of AI. We can also advise you on when traditional solutions make more sense than agents.