
Section 1 - Introduction

First, I would like to introduce myself. Most content these days is AI-written; I would like to keep this piece authentic and raw. My name is Amritpal Chera, and I’ve been trying to understand and break the limits of AI since 2021, mostly in secret in my dark bedroom (lol). The goal is to do more, more efficiently.
I’ve worked with orgs worth billions to understand where the adoption of AI really lies in 2026 and beyond. About 90% of the people I’ve come across are still figuring it out, although everyone now uses it in one form or another. For this article, we will focus strictly on non-technical workers who are using AI to accelerate their work.
New tools are being released every single week, and with that, those who are always seeking something “more” are on a new IDE every other week. And not always for the good. Many of us are driven by two big factors when trying out new software: the first is curiosity, and the second is fear of missing out (FOMO).
A lot of the recent changes in AI, and much of the hype, are built on FOMO. People want to be ahead of other people, move faster than other people, and be better than other people. If they see someone interacting with the “shiny” thing, they want that “shiny” thing, often missing the cracks hidden behind the shine.
In this article, we will explore the gaps in the current state of AI and the new set of tooling that will solve them.
Section 2 - Current Tooling

The latest AI tooling being released follows the “agentic” format. The AI is equipped with a set of “tools” it can use to gain context and better answer questions. After every tool use, the AI determines whether it has enough context to answer; if not, it might try the same tool again, or use a different tool it has access to in order to get to the right information. This repeats until the AI either determines it has a good answer, or concludes it cannot find one.
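For the more curious readers, that loop can be sketched in a few lines of Python. Everything here is illustrative: the stub tools and the hard-coded decision step stand in for a real model call, so treat it as a sketch of the control flow, not a real agent framework.

```python
# Minimal sketch of an "agentic" loop: pick a tool, read the result,
# and stop once there is enough context to answer.
# All names (run_agent, TOOLS, the stub tools) are made up for illustration.

def search_docs(query):
    # Stub tool: pretend to look something up in documentation.
    return f"docs say: '{query}' is covered in section 3"

def read_spreadsheet(name):
    # Stub tool: pretend to pull numbers from a sheet.
    return f"{name}: revenue=120, costs=80"

TOOLS = {"search_docs": search_docs, "read_spreadsheet": read_spreadsheet}

def run_agent(question, max_steps=5):
    """Loop: call a tool, check if we have enough context, repeat or answer."""
    context = []
    for step in range(max_steps):
        # A real agent would ask the model which tool to call next;
        # here the decision is faked so the sketch runs on its own.
        if not context:
            tool, arg = "read_spreadsheet", "Q3 sheet"
        else:
            # Enough context gathered: produce the final answer.
            return f"Answer to '{question}' using: " + "; ".join(context)
        context.append(TOOLS[tool](arg))
    return "Gave up: could not find the answer."

print(run_agent("What was Q3 profit?"))
```

The important part is the shape: tool call, self-check, repeat, with a step limit so the agent eventually gives up instead of looping forever.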
This is powerful because a human would otherwise have to click through 10 different things to get the same context the AI can pull in under a second. In a lot of repeated tasks, this eliminates the boring steps. For example, if you have ever had an Excel sheet open, you have probably wondered: why can’t my computer just take the important information and copy it for me? Well, now it can, kinda.
Let’s look at the top AI agents that are changing the world right now:
Anthropic’s Claude
OpenAI’s Codex
Perplexity Computer
OpenClaw - now a part of OpenAI
The most effective way I’ve seen these tools being used is when executives give them free access to their browser. Since most of us now save our passwords in the browser and are always logged in, this also gives the agent access to the services we use every day. It can use this ability to analyze and complete tasks in real time.
For example, I’ve seen executives use these agents for everything from analyzing their Google Analytics data and putting it in a spreadsheet, all the way to analyzing their complicated business dashboards and spinning up a PowerPoint with the results in seconds. Typically, the analysis, the making sense of the numbers, the putting it all together, and the creation of the PowerPoint asset would take more than a few hours, even though all the data already exists. Now it takes minutes. Think about the time the executive now has to focus on tasks that really drive the business forward, such as talking to customers.
I’ve seen different levels of adoption from startups to enterprises. Startups are much more open and willing to give AI access to their data than enterprises are, and rightfully so: compliance is a serious topic that startups almost always trade for speed. This is not to say enterprises are falling behind in AI adoption; they are just a lot more considerate before jumping on trends. And that brings us to the next section.
Section 3 - The Pitfalls

Most enterprise organizations are unmoved by the wave of AI, since most of their internal operations are built on relationships, not on the ability of their software. Yes, they want to be more efficient and use AI to streamline their internal operations; however, they are not panicking about losing deal flow because of it.
Although we see new “tools” launch every single day with Pixar-quality demos, they still have a lot to prove at scale. The dynamics of every piece of software are different at scale, and those dynamics stretch far beyond compute resources and cost. At scale, systems need to handle intense demand while managing billions of data points all at once. Even the most sophisticated systems struggle with that.
Let’s focus in on agents. Many executives I’ve seen have used agents to analyze their ad account data and report back on the results. Many of them have also tried using agents for market research, to stay updated on the latest market trends. Often, the results they get are of worse quality than what a junior worker would produce: faster, yes, but without much cost benefit, and you have to learn a whole new skill, talking to AI instead of a human.
The thing with agents is, when they have to go out into the real world and analyze all this data, they get expensive. Very expensive. I know people spending thousands on just chat agents, and the return? Now you have to type away at a computer instead of telling your junior worker to complete the tasks and get back with the results. Yes, the agent can run 24/7; however, a lot of the time it lacks the sheer quality that can only be delivered through the handiwork of a human.
The baseline is, AI is still AI, no matter how smart it is and however long it can run. People are spending thousands, yet their work quality is no higher than it was before integrating AI into their workflows. AI in its current state introduces a different kind of busywork.
Section 4 - The New Tooling

Perplexity was one of the first companies to realize the pitfalls of strictly limiting the AI to its own pre-trained knowledge. They realized that if AI had a special set of tooling it could call to enhance its context, it would provide much better results. So they gave AI a browser it could search to get real-time information about world events.
Then OpenAI launched their tooling marketplace, where users could connect specific tools to OpenAI so it gains extra context before answering questions. This was a game changer because now AI didn’t have to rely on its own abilities to answer the question; it could use the abilities of the tools it was provided. And this became the basis for the agents we see today, where developers can build their own tools in code and instantly give AI superpowers. This is what enables Claude, OpenClaw, and OpenAI to control your computer and take actions: it’s just a set of tools.
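To make “it’s just a set of tools” concrete, here is roughly what a tool looks like from the developer’s side: a plain function plus a description the model can read. The schema below follows the commonly documented OpenAI-style function-calling shape, but treat the exact field names as an approximation, and the function itself is a made-up stub.

```python
# Sketch of exposing a "tool" to a model: a function plus a JSON-schema
# description. The model never runs the function itself; it emits a tool
# name and arguments, and our code dispatches the actual call.

def open_url(url: str) -> str:
    # Illustrative stub: a real tool would fetch and return the page text.
    return f"<contents of {url}>"

open_url_schema = {
    "type": "function",
    "function": {
        "name": "open_url",
        "description": "Fetch a web page so the model can read it.",
        "parameters": {
            "type": "object",
            "properties": {
                "url": {"type": "string", "description": "Page to fetch"},
            },
            "required": ["url"],
        },
    },
}

# Pretend the model asked for this tool call; we look it up and run it.
registry = {open_url_schema["function"]["name"]: open_url}
call = {"name": "open_url", "arguments": {"url": "https://example.com"}}
result = registry[call["name"]](**call["arguments"])
print(result)
```

That dispatch step is the whole trick: the “superpower” is just ordinary code the developer wrote, with a description good enough for the model to know when to ask for it.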
However, as the actions get more complex, the AI is forced to analyze more unstructured content, spending billions of tokens repeating the simplest tasks. This is where the new set of tooling will make a difference. The next-gen tools are more sophisticated: instead of just retrieving information, they can also modify and distill it before providing it to the agent. These tools are evolving from “functions” with a single scope into sub-processes that can be plugged in to cut costs and data delays by 10x.
We first saw these sub-processes in the form of workflow builders such as n8n. The reason they work so well is that they have a dedicated purpose: completing one complex task. Instead of just passing raw data to the agent, they complete the task themselves and pass back the results. And since they are self-contained units, they can be independently improved forever.
For example, before, when you asked AI to “give me the stock market trends in the health sector”, the AI would look through an overwhelming amount of information across different articles and try to answer to the best of its ability. Now imagine you give the AI a workflow tool that pulls dedicated information from trusted health websites, checks the stock markets, reads analyst trends, weighs its findings, and reports back to the agent. Instead of researching from scratch and giving you a generic response, the agent just uses the tool and provides data that is on par with a professional human analyst.
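A toy version of such a workflow tool might look like this. The three data sources are fake stubs with made-up confidence scores; the point is the shape of the thing: gather from several places, weigh and filter, and hand the agent one distilled answer instead of a pile of raw articles.

```python
# Sketch of a "workflow tool": a self-contained sub-process that gathers
# findings from several (stubbed) sources, keeps only the strong ones,
# and returns one distilled summary. All data here is invented.

def health_news():
    return [("biotech rally", 0.6), ("FDA approval wave", 0.8)]

def market_data():
    return [("health sector +4% QoQ", 0.9)]

def analyst_notes():
    return [("analysts overweight health", 0.7)]

def health_sector_trends(threshold=0.7):
    """Collect findings, drop weak ones, and summarize, strongest first."""
    findings = health_news() + market_data() + analyst_notes()
    strong = sorted((f for f in findings if f[1] >= threshold),
                    key=lambda f: -f[1])
    return "Health sector trends: " + "; ".join(text for text, _ in strong)

# The agent makes one tool call and receives a distilled result.
print(health_sector_trends())
```

Because the whole pipeline lives inside one function, it can be tuned, tested, and improved on its own, without touching the agent that calls it.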
This is what is needed: better complex tooling and dedicated sub-workflows for AI to better answer questions. So far, we’ve just been brute-forcing the AI into functions where it does not perform best.
Section 5 - The Future

I’m genuinely impressed by the speed at which new tools are coming out, and the speed at which old tools are disappearing. The make-fast-and-break-fast culture seems to be at its peak. I think it will only accelerate from here, and thus the noise drowning out quality data will only increase. This is where the relevance of data will matter most to users. Being able to control the algorithms will shape people’s entire personalities and define their success in life.
We’re already seeing this with the younger generations, whose entire personalities are being built around YouTube, Instagram, and TikTok. There’s certainly a world ahead that places AI as our closest ally, and with that, maybe our closest foe. If we are able to give it the right data, and make sure our AI friend is acting on the right side of ethics, we might save ourselves from the pitfalls of having incomplete or wrong knowledge.
Personalities will be defined, economies will shift, new powers will rise, and it all depends on how well we play, and on the set of tooling we give the AI friend that answers our questions every single day. Regardless, we should always be careful about the kind of information being fed into our brains, no matter where it comes from. Listen to the right things.
This is it for today. Thank you for the read.




