2025.09.14

Artificial Intelligence: What It Is and Why Just a Few Companies Control the Market

“WE BUILT THIS AI!” (BUT DID YOU REALLY?)

How Most AI Tools Run on Just a Few Engines - and Why That Matters

Picture this: you're scrolling through LinkedIn or Twitter (sorry, X), and you see a flashy new AI startup claiming they’ve “built a revolutionary tool that uses cutting-edge AI to [insert exciting task here]”. It sounds impressive: maybe it's an AI that writes copy, automates customer service, or even reads your thoughts (well… almost).

But behind the shiny logos and startup energy, there’s a quiet truth: Most of these “revolutionary” tools are powered by the exact same few engines.

In fact, if you peek under the hood, you’ll often find just one of three names doing all the heavy lifting: OpenAI, Google, or sometimes Anthropic.

That’s right: a huge number of so-called AI products aren’t building the AI. They’re calling it. Specifically, they’re calling an API - a kind of plug that lets their app connect directly to a big company’s AI brain and ask it for help. The startup adds a nice user interface, maybe a bit of automation, and wraps it all up with a subscription plan.

It’s a bit like putting a microwave meal on a fancy plate and calling yourself a chef.

To be fair, some tools do add value - they automate real-world workflows, integrate with other platforms, and help people use AI more efficiently. But the illusion that everyone is “building AI” from scratch? That’s marketing. Not reality.

In this article, we’re going to unpack:

  • What’s actually happening behind the scenes

  • Why almost everyone is using the same tools

  • What it means when just a few companies control the logic of AI

  • And whether there’s still room for real innovation in this space

Because while the interfaces keep changing, the brain behind the curtain often stays the same. And that has some pretty big implications.

 

Ready? Let’s go.

 

Key Terms

Before we dig into how most AI tools actually work, let’s quickly define a few terms you’ll see throughout this article.

AI Model - The “brain” of the operation. An AI model is a trained algorithm that generates responses, makes predictions, or analyses data. Think of it as the smart engine doing all the actual work. OpenAI’s GPT-4 or Google’s Gemini are examples of large, powerful AI models.

API (Application Programming Interface) - The plug-in point. An API is like a waiter at a restaurant: it takes your order (input), passes it to the kitchen (AI model), and brings the food (response) back to you. Most “AI apps” don’t run their own kitchen - they just talk to someone else’s.

Frontend - What you see. The part of the app or tool that you interact with: buttons, chatboxes, nice colours, dashboards. This is what companies often design themselves.

Backend - What’s powering it. The behind-the-scenes logic: where the requests go, which APIs are called, and how data moves around. In AI apps, the backend is usually where the connection to OpenAI or another model happens.

Prompt - The instructions sent to the AI model. When an app “asks” the AI model something, it sends a prompt - a carefully worded request that gets interpreted by the model. Prompt design can dramatically affect how good the AI’s answer is.
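
To see why prompt design matters, compare two hypothetical prompts for the same task (a sketch, not taken from any real product). The second usually produces a far more usable answer because it spells out audience, tone, length, and goal:

    # Two hypothetical prompts for the same task. The wording alone
    # changes the quality of the model's answer dramatically.
    vague_prompt = "Write an email about our coffee."

    specific_prompt = (
        "Write a friendly cold email (under 120 words) introducing our new "
        "single-origin coffee to small cafe owners. Use a warm, informal tone "
        "and end with a clear call to action to book a tasting."
    )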

Wrapper App - An app that puts a custom user interface on top of an existing AI model. Most “AI startups” fall into this category. They don’t build the models; they “wrap” them in a new UI and maybe add automation or integrations. The real intelligence still lives elsewhere.

SaaS (Software as a Service) - A software business model where customers pay a subscription to use a tool, usually in the cloud. Most AI tools today are SaaS: you don’t download them, you just pay monthly to access them in your browser.

LLM (Large Language Model) - A type of AI trained on vast amounts of text to understand and generate human-like language. GPT-4, Claude, Gemini, and Mistral are all LLMs. They’re like extremely smart autocomplete engines that can write essays, answer questions, and summarise text.

Open Source Model - An AI model whose weights (and sometimes code and training data) are openly published and modifiable. These models can often be used free of charge or self-hosted without relying on the big providers. Examples include LLaMA, Mistral, and Mixtral. They’re gaining ground, but still usually require more technical know-how to use.

We’ll use these terms throughout the article; feel free to scroll back here if something sounds fuzzy.

 


 

A Peek Behind the Curtain - How Most AI Tools Actually Work

When a new AI startup pops up with a slick website and bold promises - “Write blog posts 10x faster!” or “Automate your customer support with our AI!” - it’s easy to imagine that they’ve invented some groundbreaking technology.

But here’s the not-so-secret secret: Most AI tools you see online are not building their own AI models. They’re just using someone else’s, usually from OpenAI, Google, or a small handful of other providers.

In other words, they’re building the wrapper, not the engine.

Let’s break down what happens behind the scenes when you use a typical AI tool - say, an “AI copywriter” that helps you write emails or blog posts:

  1. You enter a prompt: “Write a friendly cold email for a coffee brand”.

  2. The app sends your input to an AI model through an API. Usually to OpenAI (GPT-4), Google (Gemini), or maybe Anthropic (Claude). These companies host and train the massive language models behind the scenes.

  3. The AI model returns a response: “Hi there! I wanted to share something fresh and bold from our coffee lineup…”

  4. The app shows you the output in a nice, styled interface. Maybe with options to “regenerate,” “polish tone”, or “export to PDF”. But the actual writing work? That was the model’s doing.

  5. You pay the AI tool a monthly fee - even though the tool itself pays only a small fraction of that to access the API behind the scenes.

That’s the basic cycle. It’s useful, but not as revolutionary as it often sounds.
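
To make that cycle concrete, here is a minimal sketch of what such a wrapper’s backend might look like, assuming the current OpenAI Python SDK. The model name, prompt template, and function are illustrative, not taken from any real product:

    # Minimal sketch of an "AI copywriter" backend: a prompt template plus
    # one API call. The actual "thinking" happens on OpenAI's servers.
    from openai import OpenAI  # assumes the official OpenAI Python SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def write_cold_email(product: str, tone: str = "friendly") -> str:
        # Most of the "product" is this prompt template and the UI around it.
        prompt = (
            f"Write a {tone} cold email for a {product} brand. "
            "Keep it under 120 words and end with a call to action."
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(write_cold_email("coffee"))

Everything else a typical wrapper adds - the dashboard, the “regenerate” button, the export to PDF, the billing - sits around a function roughly like this one.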

Here’s the key point: the “thinking” - the language understanding, creativity, summarisation, logic - all of it comes from the underlying model.

The tool you’re using is just formatting inputs and outputs.

That’s why so many AI tools feel similar under the hood. Whether you’re generating tweets, writing product descriptions, or summarising reports - chances are they’re all talking to the same brain: GPT-4, or Gemini, or Claude.

Some apps are more than just wrappers - they might include automation, memory, or plug into external systems.

But the vast majority? They're just frontends with a few clever prompts and some good marketing.

This centralised setup has big implications:

  • A few companies control the core intelligence. If OpenAI or Google changes their API rules, pricing, or filters, every dependent app must adapt instantly or risk breaking.

  • The real power lies with the model owners. They set the boundaries of what’s possible, what’s allowed, and what kind of data gets used.

  • If the API goes down, so does your product. Many “AI companies” live or die based on someone else’s infrastructure.

It’s like a restaurant that doesn’t cook - they just replate takeout from the same three kitchens.

 


 

Why Everyone’s Building on the Same Few Brains

It might seem like there’s an AI gold rush going on - hundreds of startups and SaaS tools claiming to offer “cutting-edge AI solutions”, from résumé editors to creative writing assistants and SEO content generators. But peek under the hood of most of these tools, and you’ll find the same few names powering everything: OpenAI’s GPT-4, Google’s Gemini, or Anthropic’s Claude.

So why does it feel like everyone is just remixing the same few ingredients?

 

The short answer: because they are. And that’s not necessarily a bad thing, but it does come with consequences.

Let’s start with the obvious: training your own large language model (LLM) from scratch is, for most companies, completely out of reach. It’s not just a matter of hiring smart engineers or throwing some GPUs at the problem. The kind of infrastructure needed to develop something on the scale of GPT-4 is immense - we’re talking tens (or hundreds) of millions of dollars, massive data pipelines, top-tier research talent, and months or even years of fine-tuning and testing. For a typical startup, this level of investment simply isn’t realistic.

But OpenAI and a few others have already done the hard work. And they’ve made their models accessible to the world via APIs - essentially, plug-and-play brains that anyone can rent. With just a bit of code and a credit card, you can hook into these systems and start generating “AI-powered” responses within minutes. This has lowered the barrier to entry so dramatically that almost anyone can build something that looks intelligent, even if the intelligence was borrowed.

For many businesses, this setup is not just easier; it’s smarter. Why reinvent the wheel when someone else is offering a perfectly good one, with global-scale capabilities, at a cost far lower than hiring human workers? The API model means companies can skip the heavy lifting of AI development and focus instead on wrapping it in an appealing interface or tailoring it to a niche use case. That’s where most of the so-called innovation happens - not in the AI itself, but in the user experience around it.

There’s also a kind of psychological benefit to relying on the big players. When you say your app uses GPT-4, it immediately conveys legitimacy. It’s a name users already trust (or at least recognise), and it reassures them that the underlying technology is robust, powerful, and familiar. Branding-wise, it’s a win - even if your app is essentially a stylish coat draped over someone else’s engine.

But all of this convenience comes with strings attached. The moment you tie your product to someone else’s API, you give up a chunk of control. You don’t get to decide how the model behaves, how it evolves, or which inputs are acceptable. If OpenAI changes its pricing, limits certain use cases, or revokes your access, there’s little you can do. Your entire product could grind to a halt overnight. It’s like building a castle on someone else’s land - impressive while it lasts, but vulnerable if the landlord changes their mind.

And then there’s the broader implication: when nearly every AI product depends on the same few underlying systems, it concentrates power in the hands of very few companies. These aren’t just providers - they become gatekeepers. They define the boundaries of what’s possible, permissible, and profitable in the AI world.

That’s the tradeoff. Using someone else’s “brain” gives you speed, ease, and scale, but you don’t get to drive. And as more of the digital world is shaped by AI, the question of who holds the steering wheel becomes increasingly important.

 


 

The Business of Wrapping AI and Why It Works

Imagine going to a grocery store, buying a premade cake mix, adding water and eggs at home, and then opening a bakery with a sign that says “Gourmet Cakes, Handcrafted Daily”. Technically, you did bake the cake, but the mix did most of the work.

That’s the model for a large chunk of today’s AI startups. They take a powerful existing AI model like GPT-4, wrap it in a custom interface or workflow, and market it as an original product. And surprisingly (or perhaps not) - it often works really well.

But why?

First, it solves a very real problem: usability. The raw APIs from OpenAI or Google are powerful, but they’re not user-friendly on their own. Most people don’t want to fiddle with code, manage authentication tokens, or craft the perfect prompt every time. They want simplicity. A button that says “Fix My Writing”, “Summarise This PDF”, or “Generate My Product Descriptions” is far more appealing than a blank prompt window asking, “What would you like to do?”.

That’s where wrapping comes in. These businesses aren’t building new AI models - they’re building interfaces, flows, dashboards, and automations that remove friction. It’s not deep-tech innovation; it’s UX design and convenience. But in practice, those things are often what people actually pay for.

Take content generation tools as an example. Many of them are, in essence, ChatGPT with guardrails - pre-designed prompts for specific content types, integrated tone options, editable templates, and export buttons. All of this could be done manually with GPT-4, but most users don’t want to think about prompt engineering. They want results, fast. And they’re happy to pay for that.
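
As a rough, hypothetical sketch (not any specific product), those guardrails often boil down to little more than a prompt template plus a few user-selectable options:

    # Hypothetical "guardrails": the user picks a content type and a tone,
    # and the app assembles a pre-designed prompt behind the scenes.
    TEMPLATES = {
        "product_description": "Write a product description for {subject}. Highlight benefits, not features.",
        "blog_intro": "Write an engaging introduction for a blog post about {subject}.",
    }
    TONES = {
        "professional": "Use a polished, professional tone.",
        "casual": "Use a light, conversational tone.",
    }

    def build_prompt(content_type: str, tone: str, subject: str) -> str:
        return TEMPLATES[content_type].format(subject=subject) + " " + TONES[tone]

    # This string is what actually gets sent to GPT-4, Gemini, or Claude.
    print(build_prompt("blog_intro", "casual", "cold brew coffee"))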

Then there’s automation. When AI gets bundled into a larger workflow, say, an internal tool that pulls data from a CRM, summarises it with an LLM, and emails a weekly report - it suddenly becomes part of something bigger. You’re not just paying for an API call anymore. You’re paying for time saved, errors avoided, and processes streamlined.
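
A hedged sketch of that kind of pipeline might look like this, with the CRM fetch and mail server reduced to placeholders and the summarisation step again assuming the OpenAI SDK:

    # Hypothetical weekly-report pipeline: fetch CRM data, summarise it
    # with an LLM, and email the result. Only the middle step is "AI".
    import smtplib
    from email.message import EmailMessage
    from openai import OpenAI

    client = OpenAI()

    def fetch_crm_notes() -> str:
        # Placeholder: a real tool would call your CRM's API here.
        return "Deal A moved to negotiation. Deal B stalled. Three new leads from the webinar."

    def summarise(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model name
            messages=[{"role": "user",
                       "content": f"Summarise this week's CRM activity in five bullet points:\n{text}"}],
        )
        return response.choices[0].message.content

    def email_report(body: str) -> None:
        msg = EmailMessage()
        msg["Subject"], msg["From"], msg["To"] = "Weekly CRM summary", "bot@example.com", "team@example.com"
        msg.set_content(body)
        with smtplib.SMTP("localhost") as smtp:  # placeholder mail server
            smtp.send_message(msg)

    email_report(summarise(fetch_crm_notes()))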

This is especially attractive to small and medium businesses. They don’t have the budget or expertise to build custom AI tools from scratch. But a polished app that uses AI to automate a niche task - writing job ads, scanning contracts, summarising calls - can offer real ROI with minimal setup. It doesn’t matter that the underlying AI isn’t original. What matters is that it works, fits into their workflow, and saves them effort.

There’s also a psychological layer at play. When a tool is branded, marketed, and packaged well, people are more likely to trust it, even if it's just a dressed-up API. The wrapping legitimises the experience. It makes the product feel like a finished, intentional service rather than a tech experiment. That alone can justify a monthly subscription.

So while some may roll their eyes at the idea of “just a wrapper”, the business case is solid. AI wrappers work because they deliver value where people feel it: not in the code, but in the experience.

Of course, not all wrappers are equal. Some go far beyond a simple UI and build rich systems of automation, analytics, or integrations around the core model. Others... just repackage prompts and call it a day. But even at the simplest level, there’s a powerful takeaway: you don’t need to invent AI to build something valuable with it.

The business of wrapping AI isn’t just about software. It’s about understanding what people need, where they get stuck, and how to use the brains we’ve already built to help them move faster.

 


 

A Few Companies in Control

It’s easy to believe we’re living in an age of AI abundance. New tools are released every week, promising to do everything from writing essays to building websites to acting as your virtual therapist. But beneath all that noise, most of these tools are built on just a handful of engines - the same few models owned by the same few companies. In other words, there’s a hidden bottleneck at the core of this explosion of creativity.

Right now, OpenAI (with Microsoft behind it), Google, and a few others, like Anthropic or Meta, control the most advanced LLMs in the world. Their APIs power the majority of AI startups, productivity tools, and chatbots you see online. Even many corporate “in-house AI assistants” are just lightly customised wrappers over these existing models.

What this means is simple, but important: a small number of companies act as the gatekeepers of modern AI.

They decide:

  • What models are available to the public

  • How much access costs

  • What types of usage are allowed or banned

  • What content is filtered, restricted, or downranked

  • What kinds of data are collected from users and retained

 

Even when an app looks independent or creative on the surface, the brain underneath might still be the same one used by hundreds of other apps. It’s like a city full of restaurants, all claiming to have secret recipes, but most of them are just rebranding the same pre-cooked meal from a central supplier.

This kind of centralisation isn’t just a technical curiosity - it has real implications.

For one, it concentrates power. If OpenAI, for instance, decided tomorrow to block certain types of political queries or disallow entire industries from using their API, that change would ripple across hundreds (if not thousands) of apps instantly. Companies built entirely on OpenAI's backend would have no recourse except to adapt or shut down.

It also shapes public understanding. If the only models available to the majority of users are fine-tuned and filtered according to one company’s preferences or values, then even something as neutral as “what counts as a good answer” becomes subjective and centralised.

There’s a term for this in software: vendor lock-in. Once you build your tool around a single provider’s model, switching becomes expensive and complicated. Many startups are already deeply tied to OpenAI’s infrastructure, not just technically, but financially and strategically. If prices go up or terms change, they’re stuck.
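
One common way teams soften that lock-in is to hide the provider behind a thin interface of their own, so the rest of the product never talks to a specific vendor directly. A rough sketch, with illustrative class and model names:

    # A thin abstraction layer: the product depends on "Completer",
    # not on any single vendor's SDK, so the backend can be swapped later.
    from typing import Protocol

    class Completer(Protocol):
        def complete(self, prompt: str) -> str: ...

    class OpenAICompleter:
        def __init__(self) -> None:
            from openai import OpenAI  # assumes the official OpenAI SDK
            self._client = OpenAI()

        def complete(self, prompt: str) -> str:
            response = self._client.chat.completions.create(
                model="gpt-4o",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

    class LocalStubCompleter:
        # Stand-in for a self-hosted open model; returns canned text here.
        def complete(self, prompt: str) -> str:
            return f"[a local model would answer: {prompt[:40]}...]"

    def weekly_summary(completer: Completer, notes: str) -> str:
        return completer.complete(f"Summarise these notes in three bullet points:\n{notes}")

It doesn’t remove the dependency, but it turns “rebuild the product” into “write one new adapter”.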

To be fair, OpenAI and others offer excellent tools. They’re fast, stable, and powerful. There’s a reason developers keep using them. But the bigger picture is still worth considering: innovation is happening at the edges, while the core intelligence (the “brains”) remains in a few hands.

This isn't inherently evil or broken - yet. But it is something users, businesses, and even regulators should pay attention to. Because when just a few entities control the intelligence layer of the internet, the stakes are no longer just technical. They’re cultural, economic, and political.

 


 

The Hidden Risks of Over-Reliance

When nearly the entire AI ecosystem is built on the same few foundations, there’s more at stake than just convenience or performance. What we’re witnessing is a quiet, systemic dependency - a digital monoculture.

And monocultures break easily.

Imagine a world where most online tools, services, and even businesses depend on a single species of crop. If that crop fails due to disease, climate, or policy, the entire ecosystem collapses. The same principle applies here: if most AI apps draw intelligence from the same few sources, they all inherit the same limitations, weaknesses, and vulnerabilities.

For users, this means that variety is often an illusion. You might try three different AI tools for writing, customer support, or planning - only to realise they all behave in strikingly similar ways. Not because they’re copying each other, but because they’re pulling from the same model underneath, with the same blind spots and biases.

Over-reliance also stifles creativity at the infrastructure level. Why invest in building your own AI model - a massive, expensive, multi-year effort - when you can just plug into an API and ship a product next month? This creates a feedback loop where foundational innovation slows down, and the market fills with polished interfaces masking the same engine.

There’s also the risk of coordinated failure. If a bug, flaw, or security vulnerability appears in a major model, every dependent product is suddenly at risk - not just technically, but reputationally. The same goes for incorrect or misleading outputs: when they happen at scale, they shape public discourse and decisions in ways we barely understand yet.

And on a deeper level, this dependence reshapes how businesses think. When AI is “just a service” you call in from a few powerful providers, you stop treating intelligence as something you own or shape and start treating it like electricity: something rented, invisible, and taken for granted until the lights go out.

This kind of centralisation also opens the door for quiet control over data, behaviour, and access. While most people think about cookies and tracking in the context of advertising, many AI-powered tools also operate server-side, meaning data flows straight into systems you can’t see - and that cookie banners don’t even cover. The decisions AI makes, the metrics it logs, the feedback it records - all of it lives in the backend, quietly shaping your experience without your direct knowledge or consent.

If that idea sounds familiar, it echoes the same concerns we explored in our article on GDPR and the Illusion of Consent, where user choice often clashes with server-side data collection and tracking. You might want to give that a read if you haven’t already.

The result? A world where the capacity to reason, write, analyse, and persuade is increasingly centralised - not just in terms of who builds the tools, but who owns the means of cognition.

That’s not just a technical issue. It’s a cultural one.

 


 

Is There Real Innovation in AI Anymore?

Yes, but it’s not where most people are looking.

In a world flooded with AI apps, it's tempting to think we're in a golden age of invention. But most of what gets marketed as “innovation” is actually distribution: packaging existing AI in more accessible, marketable, or automated ways. That’s not inherently bad, but it blurs the line between building with AI and building AI itself.

So where is the real innovation?

Model Research and Architecture Design. At the foundational level, there is incredible progress. Open-source communities are racing to develop smaller, faster models that can run locally. Projects like LLaMA, Mistral, and Phi are pushing boundaries - not just copying OpenAI or Google, but experimenting with entirely different training strategies, token systems, and fine-tuning techniques.
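
Running one of these open models locally can be surprisingly short in code, assuming you have the hardware and a library such as Hugging Face’s transformers installed. The model name below is just one example and needs substantial RAM or a GPU:

    # Hedged sketch: generating text with a locally hosted open model via
    # Hugging Face transformers - no external API, no per-call fees.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example open model
    )

    result = generator(
        "Explain in two sentences what an API wrapper is.",
        max_new_tokens=80,
    )
    print(result[0]["generated_text"])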

Specialised Domain Models. Another frontier is narrow intelligence - tools built to deeply understand one field, like medicine, law, chemistry, or code. Instead of trying to answer every question, these models are optimised to think like experts. This kind of focused training is technically challenging, but it's where real utility and real breakthroughs often emerge.

Modularity and Agent Architecture. We’re also seeing early-stage innovation in how AI systems are structured: not as giant all-knowing models, but as modular agents with memory, goals, tools, and autonomy. These multi-agent systems are still clunky, but they hint at the next phase of development - AI that can reason over time, coordinate tasks, and even critique or refine its own output.
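
In spirit, those agent loops are simple: ask the model what to do next, run a tool, feed the result back in as memory, repeat. A toy sketch (not any real framework, with the model call stubbed out):

    # Toy agent loop: the "LLM" decides the next step, tools do the work,
    # and results are appended to a simple memory. Real frameworks add
    # planning, error handling, and far richer tool use.
    def llm(prompt: str) -> str:
        # Placeholder for a call to any model (API-based or local).
        return "FINISH: 42"

    TOOLS = {"calculator": lambda expr: str(eval(expr))}  # unsafe eval, for illustration only

    def run_agent(goal: str, max_steps: int = 5) -> str:
        memory: list[tuple[str, str]] = []
        for _ in range(max_steps):
            decision = llm(
                f"Goal: {goal}\nHistory: {memory}\n"
                "Reply 'TOOL: <name> <input>' or 'FINISH: <answer>'."
            )
            if decision.startswith("FINISH:"):
                return decision.removeprefix("FINISH:").strip()
            name, _, arg = decision.removeprefix("TOOL:").strip().partition(" ")
            memory.append((decision, TOOLS[name](arg)))
        return "No answer within the step limit."

    print(run_agent("What is 6 * 7?"))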

Data Governance and Alignment. Perhaps the least glamorous, but most essential, innovations are happening in how we govern AI: making models safer, more aligned with human values, and less prone to misuse. Techniques like RLHF (reinforcement learning from human feedback) and constitutional AI are attempts to inject judgment into systems that otherwise just mimic patterns.

Infrastructure Innovation. And finally, we shouldn't forget the innovation behind the scenes: better GPUs, smarter inference engines, quantisation techniques, and memory-efficient training methods. These enable everything else to scale and, crucially, allow smaller players to stay in the game without billion-dollar budgets.

So, is AI innovation dead? Not even close.

But it has moved away from flashy consumer apps and toward more fundamental, less visible work. And the more we mistake UX polish for intelligence, the harder it becomes to see where the real breakthroughs are happening.

Let’s not confuse mass production with invention.

 


 

Where Do We Go From Here?

So, now that we’ve peeked behind the curtain - where do we actually go from here?

Most AI tools today may be running on the same few engines, but that doesn’t mean we’re stuck in a loop. Understanding the current landscape gives us an opportunity to rethink how we build, use, and trust AI.

Here are a few key directions worth paying attention to:

  • Real Innovation Might Be Smaller (and Slower)

Not every AI project has to scale globally overnight. Some of the most promising work is happening in local models, domain-specific tools, or open-source projects that prioritise experimentation over monetisation. These may not go viral on launch, but they push the field forward in ways that truly matter.

  • Transparency Will Be a Competitive Advantage

As users become more aware of how AI tools are built, trust will shift. Products that disclose how they work, including what APIs they use, what data they collect, and what control users have, may become more desirable than those that simply promise “AI magic”. We’ve seen this same shift with cookies and data privacy - and it’s coming for AI.

  • The Real Value Lies in Combining Pieces Creatively

Rather than building another wrapper around ChatGPT, the future may lie in connecting AI with other technologies: databases, search, sensors, tools, workflows, and physical devices. The power won’t be in any one model, but in how intelligently and ethically we weave them into our lives.

  • Users Will (Finally) Start Asking Questions

And maybe most importantly, people will get smarter about what they’re using. Instead of asking, “What can AI do?”, more users are starting to ask: “What’s really happening under the hood, and who benefits from it?”. That’s not cynicism - it’s maturity.

 


 

Summary: Look Under the Hood

The AI explosion we’re living through is real, but it’s also a little misleading. The vast majority of tools labelled “AI-powered” are really interfaces over just a few backend engines.

That’s not inherently bad. But it means we’re concentrating power, outsourcing innovation, and often overestimating what’s actually new.

At the same time, real progress is happening - in labs, open-source projects, and quiet corners of the tech world where people are working on alignment, architecture, and real-world applications.

If we want AI to stay useful, fair, and creatively open, we need to keep looking under the hood and not fall for every polished UI that promises “intelligence”.

And if this article made you think a bit differently about AI...

Mission accomplished.
