The Great AI Reckoning: Who Will Own the Future and Why 95% Will Fail
A clear look at AI's harsh economics, the scaling plateau, and the 2027–2030 turning point that will determine who wins and who loses
By Jim Walker, October 2025

Introduction
The numbers are mind-boggling. By 2030, global spending on AI infrastructure will hit $7 trillion[1]—that’s bigger than the yearly economy of Japan and marks the largest technology project in history. Today, the world is already spending close to $200 billion a year on AI[2]. The market is growing at an astonishing 35.9% per year[3]—faster than any tech sector we’ve ever seen. Venture capitalists are pouring money into anything with “AI” on the pitch deck. In some quarters, 70% of all VC deals are going to AI companies[4].
This is our generation's gold rush. Fortunes will be made. Empires will be built. The world will change.
But here's what nobody is telling you: 95% of you will fail[5].
Not "struggle." Not "pivot." Not "take longer than expected." Just fail. Shut down. Acquired for pennies. Equity worthless. Dreams broken.
And what about the 5% who survive? Most won’t end up with the real rewards. Instead, a handful of tech giants are becoming the landlords of the digital age—quietly building their own empires while everyone else fights over scraps.
This isn't pessimism. This is arithmetic. Cold, hard, undeniable math based on data from McKinsey, Goldman Sachs, MIT, and the balance sheets of the companies leading the AI revolution. Don't believe me? Check the reading list below. Even the winners are losing billions.
We need to know where this started in order to know where it will end. This story—of hype, promise, and inevitable consolidation—is as old as technology itself. We've seen this before. We'll see it again. But not on this scale.
The Transformer Revolution and the Cambrian Explosion: How We Got Here
The Ghosts of AI Winters Long Ago
In 1956, a group of researchers gathered at Dartmouth College for a summer workshop[6]. Their goal was bold: to make machines think. They predicted that the major problems of artificial intelligence could be substantially solved within a generation.
They were wrong. Very wrong.
What followed was decades of boom and bust. The 1973 Lighthill Report in Britain concluded that AI had not lived up to its promises, triggering huge funding cuts[7]. Billions of dollars poured into expert systems in the 1980s: rule-based AI that was supposed to transform everything from medical diagnosis to financial trading. By the early 1990s, most of those companies had folded, their investors wiped out, their promises unkept.
These weren't minor setbacks. These were "AI winters," periods when the whole field lost its reputation, funding dried up, and serious researchers avoided the term "AI" altogether. The pattern was always the same: grand promises of transformation, huge amounts of money spent, disappointing results, and a painful collapse.
Every cycle lasted between 10 and 15 years. The believers said "this time is different" every time, but reality had other plans.
But then, in 2017, things really did change.
The Big Breakthrough in Transformers
In a paper titled "Attention Is All You Need," Google researchers introduced a new kind of neural network: the Transformer[8]. It could process information in parallel instead of sequentially, making it far more efficient and scalable than previous architectures.
This wasn't just a small step forward. It was a change of phase.
Within a year, OpenAI released GPT-1, with 117 million parameters[9]. Then came GPT-2, with 1.5 billion. Then GPT-3, with 175 billion. With each leap, the models could do things that seemed almost magical: generate coherent text, answer questions, write code, and produce content that was harder and harder to tell apart from human writing.
But the real turning point was when OpenAI made ChatGPT available to the public in November 2022.
This kind of growth is unprecedented. In just two months, 100 million people tried it out, making it the fastest-growing app in history[10]. By August 2025, 700 million people were using it every week, and it was racking up 5.72 billion visits a month[11]. Annual revenue has already hit $2.7 billion. What started as a research tool has reached the mass market at a speed nobody saw coming.
In some quarters, 70% of all venture capital deals were related to AI by 2024–2025[4]. Ninety-three percent of tech investments in Silicon Valley included an AI component[12]. Venture capitalists handed AI companies more than $100 billion a year[13].
Valuations went wild. AI startups raised seed rounds at valuations 42% higher than non-AI startups, with a median pre-money valuation of $17.9 million[14]. Acquired AI companies sold for an average of 25.8 times revenue[15]. LLM vendors? 54.8 times.
The Hard Truth: Why 95% Will Fail
Most AI startups (85–95%) fail within three years[5].
But it gets worse. MIT research shows that 95% of generative AI pilots in businesses, where the real money is supposed to be, are not giving measurable returns[16]. These aren't little tests. These are important projects with real budgets, dedicated teams, and support from top management. And 95% of the time, they don't work.
Only 1% of companies that have adopted AI say their implementations are mature enough to "make real money" from them[17]. Even in the US, the most advanced AI market in the world, only 23% of AI companies are profitable[18]. The big dogs, OpenAI and Anthropic? Both are losing billions, even as their revenue goes through the roof[19][20].
And it gets even scarier: "LLM wrappers"—the companies that just build an interface on top of a model—are giving up 100-164% of their revenue to the companies that provide the infrastructure[21]. They are already paying more for computing than they are getting from customers. That doesn't add up.
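To see why a 100–164% pass-through rate is fatal, here is a back-of-the-envelope gross margin sketch in Python. The revenue and compute figures are hypothetical, chosen only to illustrate the 164% worst case cited above:

```python
def gross_margin(revenue: float, compute_cost: float) -> float:
    """Gross margin as a fraction of revenue; negative means every sale loses money."""
    return (revenue - compute_cost) / revenue

# Hypothetical wrapper passing 164% of its revenue straight to its model provider
revenue = 1_000_000        # $1M/year charged to customers (illustrative)
compute_cost = 1_640_000   # $1.64M/year paid for the underlying model (164%)

margin = gross_margin(revenue, compute_cost)
print(f"Gross margin: {margin:.0%}")  # negative: growth only deepens the hole
```

At a negative gross margin, scale makes things worse: each new customer adds more compute cost than revenue, which is why these companies cannot grow their way to profitability.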
Why should you care? Because the structure of the market is stacked against you.
There are three layers in the AI market:
- Infrastructure: chips, data centers, and cloud platforms
- Model: teams that train and run the models
- Application: companies that use models to solve problems in the real world
Most of the value goes to the infrastructure layer. A moderate share goes to the model layer. The application layer gets the least.
How can an app business be successful and stay in business? It needs one or more of the following:
- Data that no one else has (not just scraped web data, but proprietary labeled data)
- Network effects (your product gets better as more people and content are added)
- Integration with workflows (part of core processes with high switching costs)
- Profitable unit economics from the start (gross margins that can improve over time and at scale)
- A market that is regulated and needs a lot of special knowledge (to keep big businesses away)
Most AI startups don’t have any real advantage. If your main offering is just a slick interface built on top of GPT-4 or Claude, you don’t have a business—you have a feature that someone else can copy in weeks.
The window for building a defensible AI application business is closing. By 2027 or 2028, the market will have settled on a few winners. The rest will be acquired for pennies or shut down entirely.
The Infrastructure Titans: Who Comes Out on Top
The Truth About the Value Chain
Follow the cash. It always tells the truth.
The infrastructure layer, which includes the companies that make the chips, build the data centers, and offer the cloud platforms, gets 60% of the economic value in the AI value chain[1]. Another 25% goes to people who make models. The layer of applications? Only 15%.
Think about that for a minute. The application layer, where all the new ideas are happening, where all the startups are competing, and where all the venture capital is going, captures 15% of the value.
The infrastructure layer takes up 60%. And it's not even close.
Why? Infrastructure has structural advantages that applications can never match.
- Massive barriers to entry. It costs billions of dollars in R&D and manufacturing capacity to make a competitive AI chip. It costs hundreds of millions of dollars to build a hyperscale data center. You can't start these businesses in your garage.
- Network effects. AWS becomes more valuable as more people use it and it can offer more services. NVIDIA gets more entrenched as more developers build on CUDA.
- Economies of scale. Costs per unit drop sharply as volume goes up. NVIDIA makes more chips than anyone else, so it can make them for less money. AWS can offer computing power at a lower cost because it operates at a scale no one else can match.
- Compounding advantages. Early leaders accumulate data, talent, and resources that make them stronger over time. The rich get richer, and the gap widens.
This isn't a guess. It's observable today. OpenAI and Anthropic are losing billions while NVIDIA's stock soars. AI startups burn through venture capital while hyperscalers post record profits.
The people selling picks and shovels are getting rich. The prospectors are running out of money.
The $7 trillion infrastructure build-out
The amount of money spent on infrastructure is almost too much to understand.
By 2030, McKinsey says that AI data center investments will total between $5.2 and $7 trillion[1]. To put that in perspective, that's roughly Japan's entire annual GDP, poured into data centers over the span of five years.
What happens to that money?
- 60% ($3.1 trillion) goes to chip designers, mostly NVIDIA but also AMD, Intel, and new companies
- 25% ($1.3 trillion) goes to infrastructure for power and cooling
- 15% ($800 billion) goes to construction and site development
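The allocation above is straightforward percentage arithmetic; a quick sketch using the $5.2 trillion lower bound of McKinsey's range (which is how 60% comes out near $3.1 trillion):

```python
TOTAL_CAPEX = 5.2e12  # lower bound of McKinsey's $5.2-7T projection, in dollars

ALLOCATION = {
    "chip designers": 0.60,         # NVIDIA, plus AMD, Intel, and newcomers
    "power and cooling": 0.25,
    "construction and site": 0.15,
}

# Shares must cover the whole budget
assert abs(sum(ALLOCATION.values()) - 1.0) < 1e-9

for segment, share in ALLOCATION.items():
    print(f"{segment:>22}: ${share * TOTAL_CAPEX / 1e12:.1f} trillion")
```

Run against the $7 trillion upper bound instead, and the chip designers' slice alone reaches $4.2 trillion.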
The big tech companies are leading the way with massive commitments[23]:
- Microsoft: $80 billion on AI data centers in 2025
- Google: $75 billion, a 42% increase from last year
- Amazon: More than $100 billion
Combined, the biggest tech companies are spending more than $325 billion a year on AI infrastructure. There is no guesswork here. This isn't a wager. This is the biggest infrastructure build-out in history, and it's happening right now.
Seventy percent of the world's data center capacity will be used for AI workloads by 2030. Hyperscalers will control more than 63% of that capacity. Amazon, Microsoft, and Google will effectively control how AI gets built.
The M&A Wave of Consolidation
Mergers and acquisitions are changing the market in addition to the infrastructure build-out.
In 2024–2025, AI M&A activity went up 32% from the year before[24]. But the real story is in the deal values: they went up 127% from the first half of 2024 to the first half of 2025[25]. There are fewer, bigger deals happening in the market now.
The US is the clear leader, making up 83% of the total value of AI deals even though it only makes up 47% of the total number of deals. In other words, American companies are buying the most valuable AI assets in the world.
Who is doing the buying? The usual people:
- Apple: bought 28 AI companies between 2014 and 2023
- Alphabet: has bought 23 companies
- Microsoft: has bought 18 companies
- Meta: 16 purchases
These are not mere acqui-hires. These are not defensive plays. The companies that already control the technology are systematically consolidating AI capabilities, talent, and intellectual property.
The plan is clear: let a thousand flowers bloom, then buy the ones that don't die. Let startups take the risk, spend the VC money, and find out what works. Then acquire the winners before they become real threats.
Your exit strategy isn't an IPO. It's getting acquired before you run out of money. And the buyers know this. They can wait. They have effectively unlimited capital. You don't.
The market structure will be clear by 2030:
- 3–5 major infrastructure providers (AWS, Azure, Google Cloud, and maybe Oracle and Alibaba)
- 5–10 top model providers (some open source, some proprietary)
- Hundreds of vertical specialists in regulated industries
- A long tail of niche applications
What about everyone else? Acquired or dead.
The Energy Crisis: The Thing That Could Kill AI
The Explosion of Power Demand
You can't scale AI without power. And we're running short of it.
Goldman Sachs projects that global data center power demand will rise 165% by 2030, compared with 2023[26]. US data centers currently use 3–4% of the country's electricity. That could climb to 11–12% by 2030.
Deloitte says that by 2035, the US data center power demand could reach 123 gigawatts, which is 30 times what it is now[27]. The International Energy Agency says that data centers could use 21% of the world's electricity by 2030, including the costs of sending and delivering it[28].
To put this in perspective, a single ChatGPT query uses 2.9 watt-hours of electricity, while a Google search uses only 0.3 watt-hours[29]. That's ten times as much energy. When you multiply that by billions of searches every day, you can start to see the problem.
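Scaled up, the per-query gap turns into power-plant-sized numbers. A rough sketch, assuming a hypothetical one billion queries per day (actual query volumes aren't public):

```python
WH_PER_CHATGPT_QUERY = 2.9  # watt-hours per query (Goldman Sachs estimate)
WH_PER_GOOGLE_SEARCH = 0.3  # watt-hours per traditional search

QUERIES_PER_DAY = 1_000_000_000  # assumed volume, for illustration only

def daily_gigawatt_hours(wh_per_query: float, queries_per_day: int) -> float:
    """Total daily energy in GWh (1 GWh = 1e9 Wh)."""
    return wh_per_query * queries_per_day / 1e9

chatgpt_gwh = daily_gigawatt_hours(WH_PER_CHATGPT_QUERY, QUERIES_PER_DAY)
search_gwh = daily_gigawatt_hours(WH_PER_GOOGLE_SEARCH, QUERIES_PER_DAY)

print(f"AI queries:     {chatgpt_gwh:.1f} GWh/day")
print(f"Search queries: {search_gwh:.1f} GWh/day")
```

At this assumed volume, the AI workload needs roughly ten times the energy of classic search for the same number of queries, which is the multiplier behind the grid projections above.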
AI isn't just hard on computers. It uses a lot of energy in a way that no other technology has before. And we're building it out at a speed that our electrical system was never meant to handle.
The Infrastructure Blockage
Making power is only part of the problem. The other half is getting it there.
In high-demand regions like Northern Virginia, home to the world's largest concentration of data centers, grid connection wait times can run as long as 7 years[27]. It takes 18 to 24 months to build a data center. But you may not be able to plug it in for another 4 to 7 years.
Transformer lead times, the wait for the equipment that steps high-voltage power down for data center use, can run as long as two years. Permitting delays add more time. Supply chain problems compound everything.
72% of data center managers say that grid stress is a big problem. If current plans for infrastructure don't speed up a lot, the US could have a power supply shortfall of more than 15 gigawatts by 2030.
You can have as many GPUs as you want. They're just expensive paperweights without power.
Businesses are getting creative. Microsoft is exploring small modular nuclear reactors. Google is investing in geothermal energy. Amazon is building on-site natural gas generation. But these plans take years and billions of dollars to execute.
The AI revolution is running straight into the limits of our electrical grid. And the grid is losing.
The Cost to the Environment and the Economy
Someone has to pay for all this energy. It's not just tech companies, though.
In 2025, AI data centers will release 50 to 75 million tons of CO2[29]. Goldman Sachs says that the social cost of higher emissions is between $125 and $140 billion. Society has to pay for that, not the companies that make money from AI.
Electricity bills are going up faster than inflation, by as much as 14% in some parts of the US. This is because utilities are passing on the costs of building new infrastructure to all customers. Your electricity bill is going up to help pay for AI data centers. The family that can't pay their electric bill is helping OpenAI pay for its computing costs.
It's ironic that we're burning fossil fuels faster and faster to power the "technology of the future." To meet immediate AI power needs, natural gas generation is being brought online, which goes against decades of efforts to reduce carbon emissions.
When your AI dreams make everyone's power bills go up, who pays? Who pays for the damage to the environment caused by training models that most people will never use? Who suffers when the grid gets unstable because data centers are using up all the available power?
These aren't just questions that could happen. These are costs that are happening right now. And they're only going to get worse.
The Scaling Plateau: Why Bigger Isn't Always Better
The Diminishing Returns
For years, the formula was simple: more compute + more data = better AI.
If you doubled the number of parameters, the amount of training data, and the compute budget, you would get a model that was better on every benchmark.
That is falling apart.
OpenAI's Orion, which was supposed to be the next big thing after GPT-4, is showing smaller improvements even though it uses a lot more resources[30]. Gemini from Google and Claude 3.5 from Anthropic were both impressive, but they didn't give the same huge improvements that earlier generations did.
The scaling laws that governed AI progress from 2018 to 2023 are starting to strain. We're entering an era when doubling the investment no longer doubles the performance. The returns are diminishing.
This isn't just anecdote. In a survey by the Association for the Advancement of Artificial Intelligence, 76% of AI researchers said they doubt that scaling alone will lead to AGI[31]. The people building these systems are telling you that brute force isn't enough anymore.
Academic research is reaching the same conclusions. Toby Ord, an Oxford philosopher, examined "The Scaling Paradox" and found that matching human performance on complex reasoning tasks may demand exponentially growing resources for each linear gain in accuracy[32]. An arXiv paper on efficiency argues that advanced AI performance might require thousands of years of training time, or GPU fleets that are simply infeasible, without major efficiency breakthroughs[33].
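The "exponential cost for linear gains" point can be made concrete with the power-law form of the neural scaling laws, where loss falls roughly as compute raised to a small negative exponent. The sketch below uses an illustrative exponent of 0.05, in the ballpark reported for language models but not tied to any specific published fit:

```python
def compute_multiplier(loss_reduction: float, alpha: float = 0.05) -> float:
    """Factor by which compute must grow to cut loss by `loss_reduction`.

    Assumes a toy scaling law loss(C) = C**-alpha, so requiring
    loss(m * C) = (1 - loss_reduction) * loss(C) gives
    m = (1 - loss_reduction) ** (-1 / alpha).
    """
    return (1 - loss_reduction) ** (-1 / alpha)

# Each successive 10% cut in loss demands the same ~8x jump in compute,
# so cuts compound multiplicatively into hundreds of times more compute.
per_step = compute_multiplier(0.10)
three_steps = per_step ** 3
print(f"Compute for one 10% loss cut:    {per_step:.1f}x")
print(f"Compute for three 10% loss cuts: {three_steps:.0f}x")
```

Under this toy law, the cost of each constant improvement multiplies rather than adds: linear progress on accuracy, exponential growth in compute, which is exactly the dynamic Ord's paradox describes.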
The New Yorker asked the question that everyone is asking: "What if AI doesn't get much better than this?"[34] What if we're getting close to the limits of what current architectures can do? What if the next big improvement needs more than just bigger models?
The business world is changing. OpenAI is working on "test-time compute," which means making models think longer during inference instead of just making them bigger during training. Anthropic is putting safety and dependability ahead of raw power. Google is looking into how to make multimodal systems work better together.
These are not the moves of companies confident that scaling will keep delivering big gains. These are hedges to protect their investments.
The Problem of Not Enough Data
Even if the scaling laws held perfectly, we face another problem: we are running out of data.
Several studies suggest that the high-quality text data needed to train cutting-edge models could be exhausted between 2026 and 2032[35]. Leading companies like Databricks and OpenAI are already paying for proprietary datasets, building synthetic data pipelines, and partnering with content platforms to secure more training data.
Synthetic data will help, but it won't be enough. Synthetic copies of low-quality data produce low-quality output. Worse, models trained on their own outputs can slide into "model collapse," degrading over time.
The data problem is so big that experts now think it will limit how much AI can be scaled before energy or computing costs do. If we keep going the same way we are now, we will run out of good text data in three to five years. That could stop progress before we reach AGI.
The Shift to the Edge and Efficient Computing
We are likely to see a significant architectural shift soon: from huge, centralized models to smaller, parameter-efficient models that run well on edge devices and make inference cheaper.
Meta's Llama family and Google's Gemma models show this is possible, as do Mistral's small models and Microsoft's Phi family. The best path forward may not be building ever-bigger models. It may be smarter to build models that are trained better, are easy to fine-tune, and can be scaled down when needed.
This shift isn't driven only by technology. It's necessary because large models are too slow and expensive for many real-world situations. Companies are already moving toward hybrid designs: intelligent pipelines that combine small, specialized models with larger ones, sometimes called "orchestrators."
Because cloud costs are rising, AI is also moving to the edge. OpenAI's GPT-4o Mini, Google's Gemini Nano, and Meta's on-device models point to a future where edge inference is standard for low-latency workloads. That cuts cloud computing costs, improves data privacy, and speeds up responses.
What the Winners Have in Common
What will the people who make it through the 95% culling have in common?
- Strong point of view + top-notch execution. They don't copy what everyone else does; they tailor their solutions precisely to their market.
- Built-in defensibility. These companies have strong moats from the start, like high switching costs, private data, or network effects.
- Good unit economics. The cost to serve stays low enough that gross margins improve as usage grows, not the other way around.
- Direct ties to real-world outcomes. They sell useful business outcomes (like risk reduction, faster throughput, higher conversion rates), not just "AI features."
If you don't have these, your business will not last.
Winning Strategies for Application Companies
If you work in the application layer, this is where you'll make your money:
- Specialized domain models: Generic base models won't be enough. Own a small, very high-quality model trained on data no one else has, delivering best-in-class results for your niche
- Human-in-the-loop workflows: Not a standalone chatbot, but a redesign of critical workflows that makes them 30–50% more efficient
- Smart agents with accountability: Real agents that can do hard tasks while staying within set goals and rules. These agents will take over processes, not just add to them.
- Unique data: Get access to private data at any cost—through partnerships, purchase, or direct creation—because it will be your long-term defense
Design Principles That Make Them Hard to Copy
- Measurable outcomes: Every claim needs a number to back it up: "Cut claims processing by 38%" or "20% less store shrinkage." No buzzwords.
- Moats that get stronger with use: Products that get better with use by turning real-world usage into training feedback loops that keep you ahead
- Cheap, simple deployment: Works with existing infrastructure, and unit costs don't blow up as usage scales
- Edge-first thinking: Plan for inference moving off the cloud, onto devices and edge nodes
The AI Reckoning Timeline
The Next Five Years:
2025–2026: The Culling. The failure rate spikes. Money gets hard to raise. The weakest companies die first: no revenue, no moat, no path to profitability. Gartner says that by the end of 2025, 30% of GenAI projects will be abandoned[48]. That's just the start.
2027: The Inflection. Goldman Sachs expects a measurable impact on GDP. Successful implementations begin to scale. The productivity gains become undeniable. The gap between leaders and laggards grows into a chasm.
2028–2029: The Consolidation. M&A activity peaks. The IPO window opens briefly for the survivors. Big tech buys up everything worth owning. The market structure becomes clear.
2030: The New Reality. The winners and losers are obvious. 25 to 30 percent of companies have reached AI maturity and transformed how they do business. Everyone else is either struggling to catch up or has given up on relevance.
Right now, it’s like a huge game of musical chairs. Plenty of companies are enjoying the spotlight on the dance floor. But when the music stops in 2027, most won’t have a seat—and only a few will make it through.
Pick a side. Make your moat. Know your chances.
And don't forget: the people who sell picks and shovels get rich in every gold rush. The prospectors? A lot of them go broke.
The AI gold rush is the same.
Reading List
This article draws on McKinsey, Goldman Sachs, BCG, Deloitte, PwC, MIT research, CB Insights, Ropes & Gray, IEA data, EU regulatory materials, peer-reviewed studies, and established tech journalism. All figures are linked to original sources. Dataset reflects information available through October 2025.
About Economic Data - Investment & Market Valuations
[1] McKinsey - The Cost of Compute: $7 Trillion Race
AI-related data center CapEx projected at $5.2-7 trillion by 2030. 60% ($3.1T) to chip designers, 15% ($800B) to builders, 25% ($1.3T) to power/cooling. 60% of economic value accrues to infrastructure layer. 70% of global data center capacity dedicated to AI workloads by 2030. Hyperscalers control over 63% of capacity.
https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers
[2] Goldman Sachs - AI Investment Forecast
Global AI investment projected to approach $200 billion by 2025, with U.S. portion around $100 billion. Projects AI could add 7% to global GDP.
https://www.goldmansachs.com/insights/articles/ai-investment-forecast-to-approach-200-billion-globally-by-2025
[3] Statista - Global AI Market Outlook
AI market growing at 35.9% CAGR through 2030, with comprehensive market forecasts and regional breakdowns.
https://www.statista.com/outlook/tmo/artificial-intelligence/worldwide
[4] EY Global - Venture Capital Investment Trends
Q1 2025 showed $80.1 billion in U.S. VC funding (heavily driven by AI), with AI accounting for 46.4% of total deal value and 28.9% of deal count. In some quarters, 70% of VC deals were AI-related.
https://www.ey.com/en_us/insights/growth/venture-capital-investment-trends
[5] Edge Delta - AI Startup Statistics
85-95% of AI startups fail within three years; while sector attracted $42.5 billion in 2023, most struggle to convert investments into sustainable revenue.
https://edgedelta.com/company/blog/ai-startup-statistics
About Historical AI Developments
[6] The AI Navigator - AI Timeline
Comprehensive timeline from 1950s (Turing Test) to 2025, including Dartmouth Conference (1956), AI winters, and major breakthroughs.
https://www.theainavigator.com/ai-timeline
[7] Wikipedia - AI Winter
Comprehensive account of two major AI winters (1974-1980, late 1980s-early 1990s). Covers ALPAC report (1966), Lighthill Report (1973), DARPA funding cuts, expert systems collapse.
https://en.wikipedia.org/wiki/AI_winter
[8] Wikipedia - Transformer Architecture
2017 "Attention Is All You Need" paper introduction; explains self-attention mechanism, parallel processing advantages over RNNs.
https://en.wikipedia.org/wiki/Transformer_(deep_learning_architecture)
[9] Wikipedia - Generative Pre-trained Transformer
Technical history of GPT series from GPT-1 (117M parameters, 2018) through GPT-4. Covers architectural improvements and training methods.
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer
[10] Exploding Topics - ChatGPT Users
ChatGPT reached 100M monthly users in 2 months (fastest-growing app ever), 700M weekly active users by August 2025.
https://explodingtopics.com/blog/chatgpt-users
[11] DemandSage - ChatGPT Statistics
5.72 billion monthly visits (July 2025), 190.6M daily users, $2.7B annual revenue, 10M+ paid subscribers.
https://www.demandsage.com/chatgpt-statistics/
[12] Crunchbase News - State of Startups Q2/H1 2025
93% of tech investments in Silicon Valley had an AI component. AI startups captured 33-52% of total venture capital deployed globally in 2024.
https://news.crunchbase.com/venture/state-of-startups-q2-h1-2025-ai-ma-charts-data/
[13] Reuters - AI Startups Drive VC Funding Resurgence
AI startups responsible for nearly 30% YoY increase in U.S. VC funding in 2024; total annual VC funding for AI companies exceeded $100 billion.
https://www.reuters.com/technology/artificial-intelligence/ai-startups-drive-vc-funding-resurgence-capturing-record-us-investment-2024-2025-01-07/
[14] Carta - AI Fundraising Trends 2024
Seed-stage AI companies had median pre-money valuation 42% higher than non-AI peers at $17.9M.
https://carta.com/data/ai-fundraising-trends-2024/
[15] FinroFCA - AI M&A Valuation 2025
Analysis of 90+ deals showing average revenue multiple 25.8x. LLM vendors at 54.8x, data intelligence 41.7x.
https://www.finrofca.com/news/ai-mna-valuation-2025
[16] Fortune - 95% of GenAI Pilots Failing
MIT report showing 95% failure rate for generative AI pilots in enterprises, indicating difficulty in scaling AI initiatives.
https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
[17] McKinsey - State of AI 2024
78% of organizations use AI in at least one function; only 1% reach "mature" deployment. 41% of millennials and Gen Z are actively sabotaging AI strategies.
https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
[18] All About AI - AI Statistics/Companies
In the United States, only 23% of AI companies are profitable, despite being the most advanced AI ecosystem in the world.
https://www.allaboutai.com/resources/ai-statistics/companies/
[19] Where's Your Ed At - How Much Money?
OpenAI generated $12 billion in ARR in 2025 but lost $5 billion in 2024. 50-75% of their revenue goes straight to compute costs.
https://www.wheresyoured.at/howmuchmoney/
[20] SaaStr - Anthropic May Never Catch OpenAI
Anthropic projected to hit $4-5 billion in ARR by mid-2025, but they lost $5.3 billion in 2024 despite growing 9x year-over-year.
https://www.saastr.com/anthropic-may-never-catch-openai-but-its-already-40-as-big/
[21] Where's Your Ed At - Why Everybody Is Losing Money on AI
"LLM wrapper" companies are passing 100-164% of their revenue to infrastructure providers—paying more for compute than they charge customers.
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
[22] BCG - AI Adoption 2024
74% of companies struggle to achieve value from AI; 70% of challenges stem from people/process issues rather than technology.
https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value
[23] Aragon Research - AI Data Center Race
Microsoft: $80B in AI data centers in 2025. Google: $75B. Amazon: over $100B. Combined: $325+ billion annually.
https://aragonresearch.com/ai-data-center-race-amazon-vs-google-microsoft/
[24] Aventis Advisors - M&A in AI
AI M&A activity increased 32% year-over-year in 2024-2025.
https://aventis-advisors.com/ma-in-ai/
[25] Ropes & Gray - AI M&A H1 2025 Global Report
127% increase in AI deal value H1 2025 vs. H1 2024. U.S. accounts for 83% of AI deal value.
https://www.ropesgray.com/en/insights/alerts/2025/08/artificial-intelligence-h1-2025-global-report
[26] Goldman Sachs - AI Power Demand Increase
165% increase in global data center power consumption by 2030 vs. 2023.
https://www.goldmansachs.com/insights/articles/ai-to-drive-165-increase-in-data-center-power-demand-by-2030
[27] Deloitte - Data Center Infrastructure for AI
U.S. data center power demand could reach 123 GW by 2035 (30x increase). Grid connection wait times are 4-7 years in high-demand regions.
https://www.deloitte.com/us/en/insights/industry/power-and-utilities/data-center-infrastructure-artificial-intelligence.html
[28] IEA - AI Set to Drive Surging Electricity Demand
Data centers could consume 21% of global electricity by 2030 when including transmission and delivery costs.
https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
[29] Goldman Sachs - AI Poised to Drive 160% Increase
ChatGPT query requires 2.9 watt-hours vs. 0.3 for Google search. AI data centers will emit 50-75 million tonnes of CO2 in 2025.
https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand
[30] TechCrunch - AI Scaling Laws Showing Diminishing Returns
OpenAI's Orion and Google's Gemini are showing smaller improvements despite increased resources; labs are shifting toward "test-time compute".
https://techcrunch.com/2024/11/20/ai-scaling-laws-are-showing-diminishing-returns-forcing-ai-labs-to-change-course/
[31] AIM Multiple - AGI & Singularity Timing
76% of AI researchers surveyed doubt that scaling alone will achieve AGI. Median prediction: 2040.
https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/
[32] Toby Ord - The Scaling Paradox
Reinterprets OpenAI's scaling laws showing exponential resource demands for linear accuracy gains.
https://www.tobyord.com/writing/the-scaling-paradox
[33] arXiv - The Race to Efficiency
Academic paper arguing advanced performance could require millennia of training or infeasible GPU fleets without efficiency gains.
https://arxiv.org/abs/2501.02156
[34] The New Yorker - What If AI Doesn't Get Much Better?
Analysis of GPT-5's underwhelming release and Apple researchers' study showing limitations in extended reasoning.
https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this
[35] Foundation Capitalist - Has AI Scaling Hit a Limit?
High-quality text data could be exhausted by 2026-2032. Achieving advanced capabilities might require 100,000x more data.
https://foundationcapitalist.com/has-ai-scaling-hit-a-limit/
[36] 80,000 Hours - When Do Experts Expect AGI?
AI company leaders predict a 25% chance of AGI by 2026 and a 50% chance by 2031. Sam Altman and Dario Amodei: 2026-2030.
https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/
[37] Forbes - Yearly Path to AGI by 2040
Linear progression scenario to AGI by 2040, emphasizing incremental advancements rather than breakthroughs.
https://www.forbes.com/sites/lanceeliot/2025/06/05/future-forecasting-the-yearly-path-that-will-advance-ai-to-reach-agi-by-2040/
[38] McKinsey - Mind the Gap
For AI leaders who have redesigned workflows, payback periods are 6-12 months.
https://www.mckinsey.com/capabilities/operations/our-insights/mind-the-gap-how-operations-leaders-are-pulling-ahead-using-ai
[39] Goldman Sachs - AI May Start to Boost US GDP in 2027
Projects measurable US GDP impact starting in 2027, when successful implementations start to scale.
https://www.goldmansachs.com/insights/articles/ai-may-start-to-boost-us-gdp-in-2027
[40] PwC - AI Predictions
Top performers achieve 20-30% productivity gains from AI implementation by 2028-2029.
https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html
[41] MIT Sloan - What's Your Company's AI Maturity Level?
By 2030, 25-30% of organizations are projected to achieve AI maturity, up from 1% today.
https://mitsloan.mit.edu/ideas-made-to-matter/whats-your-companys-ai-maturity-level
[42] CB Insights - Tech M&A Predictions 2025
Big tech aggressively acquiring AI capabilities. Predicts major activity in robotics, drug discovery, and AI agents.
https://www.cbinsights.com/research/report/tech-merger-acquisition-predictions-2025/
[43] Artificial Intelligence Act - Implementation Timeline
EU AI Act timeline: Feb 2025 (prohibitions), Aug 2025 (GPAI obligations), Aug 2027 (full compliance for high-risk systems).
https://artificialintelligenceact.eu/implementation-timeline/
[44] EU Digital Strategy - Regulatory Framework for AI
Risk-based approach with fines up to €35M or 7% of global turnover. Extraterritorial reach affecting U.S. companies.
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
[45] Greenberg Traurig - EU AI Act Compliance
Compliance costs estimated at $50-200M for large enterprises. Ongoing monitoring, auditing, documentation required.
https://www.greenbergtraurig.com/en/insights/2025/7/eu-ai-act-key-compliance-considerations-ahead-of-august-2025
[46] TelcoDR - AI Application Layer Blog
Application layer will grow to 25-30% of total value by 2030, up from 15% today.
https://telcodr.com/insights/ai-application-layer-blog/
[47] Goldman Sachs - AI Agents Market Sizing
Application software market could grow to $780B globally by 2030, with AI agents potentially accounting for over 60% of software market share.
https://www.goldmansachs.com/insights/articles/ai-agents-to-boost-productivity-and-size-of-software-market
[48] Gartner - GenAI Project Abandonment Prediction
Gartner predicts 30% of GenAI projects will be abandoned after the proof-of-concept phase by the end of 2025.
https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025
This paper was written from my original research document, AI-Related Forecasts 2025-2030.