
Why agentic AI still needs the human touch

Written by Shaha Alam - AI and Agentic Solutions Lead, Nov 3, 2025

When people talk about artificial intelligence (AI) today, the conversation often drifts into mystique. We hear phrases like 'thinking machines' or 'AI replacing humans,' as if a spark of magic has suddenly entered our world. But history shows a different story: every wave of automation, whether industrial, digital, or now intelligent, has been shaped not by magic, but by clear thinking, human analysis and precise understanding of what needed to be achieved.

To see this, it helps to trace automation through three phases: achieve what I achieve, do what I do, and think what I think.

Achieve what I achieve: the age of industrial automation

When the Jacquard loom appeared in the early 1800s, it transformed textile production. This punch-card-controlled loom automated the creation of complex patterns, eliminating the need for manual thread lifting. The outcome was the same: beautiful cloth. But the method was completely different. A weaver might have seen the machine and thought: It doesn’t weave like me at all. And that was the point. The loom wasn’t built to mimic human hand movements; it was built to achieve the same outcome on a massive scale.

This is what industrial automation was about - results. Steam engines, looms, and assembly lines didn’t care about how a human did the job; they cared about producing the same end product, faster and more reliably.

And yet, none of this happened by accident. Engineers had to analyse weaving patterns, identify repeatable designs, and figure out which outcomes were truly worth automating. The loom didn’t appear by magic; it appeared because someone carefully defined the goal.


Do what I do: the age of computerised automation


Fast-forward to the mid-20th century, when computers began entering offices. One of the earliest “killer apps” wasn’t glamorous: it was payroll.

Before computers, clerks sat with ledgers, adding up hours worked, applying tax codes, deducting pensions, and calculating wages. It was a meticulous, step-by-step process. When the first computerised payroll systems came along, they didn’t invent new ways of paying people. They followed the same steps a clerk would, but faster and with fewer mistakes.


This was the hallmark of computerised automation: do what I do. It wasn’t just about achieving the end result (paying people correctly); it was about faithfully replicating the process, step by step.

Again, success depended on analysis. The system was only as good as the people who mapped out the payroll process in detail. Miss a step in the specification, and suddenly, hundreds of workers might find their paychecks wrong.


Think what I think: the age of AI

Now, AI adds a new twist. For the first time, machines are expected not only to achieve outcomes or replicate steps, but to make judgments that resemble human reasoning.

Take medical imaging. A radiologist doesn’t just follow a checklist when reviewing a scan; instead, they weigh context, spot subtle anomalies and make a judgment call. AI systems trained to read X-rays or MRIs are expected to “think like a doctor”, not just count pixels, but evaluate what those pixels might mean.

Or consider online recommendations. When you shop on Amazon or scroll through Netflix, the system doesn’t just show you “products like the last one.” It tries to make a judgment: “Given what you’ve watched or bought, what would you want next?” That’s more than a process; it’s a kind of reasoning.

But here’s the key: AI doesn’t just emerge with these abilities. Analysts still play a critical role in deciding:

  • What kinds of judgments should be imitated?
  • What data reflects those judgments?
  • Where is it acceptable for the machine to be wrong, and where is it not?

Without this clarity, AI can produce outcomes that are irrelevant or dangerously misleading.


Analysts throughout the eras

Across all these ages, industrial, computerised, and now AI, the common thread has been the role of the analyst:

  • In the age of industrial automation, someone had to define the outcome worth achieving.
  • In the age of computerised automation, someone had to map out the process worth repeating.
  • In the age of AI, someone must now articulate the judgments worth imitating.

The analyst’s work, including understanding the domain, capturing requirements, and clarifying intent, remains the bridge between human need and machine capability.


So, what does this mean for the introduction of AI into the workplace?

I’m glad you asked. Regardless of whether you need a solution to do a job, complete a process, or make a decision, the first step is to understand your goals clearly. Clarity of vision has never been more important than it is now, when we’re asking AI to undertake thinking and decision-making tasks.

A good place to start is to:

  • Think clearly about your inputs, outputs, and KPIs.
  • Ask the right questions about who benefits from the AI solution.
  • Consider the risks and necessary guardrails to ensure the solution is doing its job right.
  • Assess who will use, interact with, benefit from, or be impacted by the solution.
  • Consider the modality of thought - from what reasoning context should the AI do its thinking? E.g. ‘as a social worker focusing on risk assessments for vulnerable adults…’ puts the language model into a knowledge frame of reference, from which it can give better responses.

In the world of natural language models, the key is to describe both the problem and the solution. And where complex decision-making takes place, that’s a surprisingly difficult thing to do.
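To make the “modality of thought” idea concrete, here is a minimal sketch of how that reasoning frame might be expressed as a system prompt. The function name, the guardrail wording, and the example questions are all illustrative assumptions, not a prescribed template; the role/content message structure follows the convention used by most LLM chat APIs.

```python
# A minimal sketch: framing an AI's "modality of thought" as a system
# prompt. The helper and its wording are illustrative assumptions; the
# role/content dict structure follows common LLM chat-API conventions.

def build_prompt(frame: str, guardrail: str, question: str) -> list[dict]:
    """Assemble chat messages that put the model into a knowledge frame."""
    system = (
        f"You are acting {frame}. "
        f"{guardrail} "
        "If you are unsure, say so rather than guessing."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Hypothetical example: the social-worker framing from the bullet above.
messages = build_prompt(
    frame="as a social worker focusing on risk assessments for vulnerable adults",
    guardrail="Flag any case that may need human review instead of deciding alone.",
    question="Summarise the key risk factors in the attached case notes.",
)

print(messages[0]["content"])
```

Note that the frame and the guardrail sit together in the system message: the analyst’s decisions about which judgments to imitate, and where the machine must defer to a human, become explicit text the model reasons from.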


Conclusion

It’s important to remember that AI is not magic. It’s the next stage in a long tradition of human ingenuity: designing artefacts that can achieve, do, and now even think in ways that extend our own abilities. But every stage of automation has required human understanding first. The role of the analyst is more vital than ever. To build AI that truly “thinks what we think,” we need people who can ask the right questions, define the right problems, and capture the right requirements. Without them, automation, no matter how advanced, risks missing the point.

At Infomentum, we believe that responsible AI isn’t just about technology. It’s about thoughtful design, clear intent, and human understanding. We’re already helping our customers harness the power of Agentforce AI to build intelligent, ethical, and outcome-driven solutions.

By combining deep business analysis with cutting-edge innovation, we ensure that automation continues to deliver value and improvements. So don’t hesitate to reach out to one of our automation experts if you have any questions!
