What the Current Middle East War Is Already Doing to the Price of AI
AI runs on electricity. That is the quiet dependency underneath every model, every API call, every training run, and every inference request. When energy is cheap and abundant, compute scales. When energy is expensive and constrained, the economics of AI change fast.
As of March 20, 2026, three weeks into the war that began on February 28, this is no longer a hypothetical scenario. The shock is already visible in oil, LNG, freight, and risk pricing. AI has not suddenly repriced everywhere overnight, but the inputs underneath AI compute already have.
The Energy Exposure Is No Longer Hypothetical
The Strait of Hormuz still matters for the same reason it always did: it is one of the most important chokepoints in the global energy system. What changed is that we are now measuring real disruption in Strait of Hormuz shipping movements, not sketching out a future scenario.
The U.S. Energy Information Administration said Brent rose from an average of $71 per barrel on February 27 to $94 on March 9 after the war began on February 28. The same March 10 outlook said the strait was effectively closed to most shipping traffic when the forecast was finalised. AP also reported that tanker traffic dropped sharply as attacks, electronic interference, and war-risk insurance disruption hit shipping through the corridor.
Oil is not the primary fuel for most data centres, but energy markets are linked through fuel substitution, gas pricing, freight, and risk. The transmission is not uniform. The EIA’s March 10 release said Europe and Asia were already seeing higher natural gas prices because of reduced LNG flows through Hormuz, while U.S. natural gas remained relatively insulated for now. That does not remove the problem for AI. It means the cost shock lands unevenly across a global compute market rather than evenly across every region at once.
The honest statement today is not that every AI API now costs 50 percent more. We do not have broad public evidence for that. The honest statement is that the upstream inputs behind AI compute have already been repriced sharply, and that pressure hits new training runs, marginal capacity, and smaller providers first.
What the Market Has Already Repriced
We do not need to guess at a 30 to 50 percent shock in upstream energy inputs. Parts of the market have already moved that far:
- Oil moved first. The EIA said Brent rose from $71 per barrel on February 27 to $94 on March 9 and expected it to remain above $95 per barrel over the following two months.
- Gas and LNG moved harder in import-dependent regions. AP reported that European natural gas prices jumped more than 40 percent on March 2 after QatarEnergy halted LNG production following attacks on its facilities.
- Regional gas benchmarks repriced sharply. Kpler wrote on March 10 that Dutch TTF rose 55 percent week on week and Asian LNG rose to $15.11/MMBtu on March 4 from $10.60/MMBtu a week earlier.
- Shipping and insurance stress amplified the fuel shock. Even where molecules still moved, the cost of moving them rose because freight, insurance, and route risk repriced immediately.
Those are not abstract warning signs. They are the first-order cost inputs that sit underneath electricity markets, industrial operations, and data-centre economics.
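For scale, the percentage moves implied by the cited figures are easy to work out directly. This snippet only restates the numbers already quoted above from the EIA and Kpler; it introduces nothing new.

```python
# Percentage moves implied by the figures cited above.
def pct_change(before: float, after: float) -> float:
    """Simple percent change from a before/after pair."""
    return (after - before) / before * 100

moves = {
    "Brent crude, $/bbl (Feb 27 -> Mar 9)": pct_change(71, 94),
    "Asian spot LNG, $/MMBtu (week to Mar 4)": pct_change(10.60, 15.11),
}
for name, pct in moves.items():
    print(f"{name}: {pct:+.1f}%")
# Brent works out to roughly +32%, Asian LNG to roughly +43%,
# and the reported Dutch TTF move of +55% sits above both.
```

That is why "30 to 50 percent" is not a guess: the cited benchmarks already bracket that range.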

AI Is Uniquely Exposed to Energy Price Shocks
Most software workloads are relatively light on power. A web server, a database, a messaging queue, and a conventional SaaS stack all use compute, but the energy cost per unit of work is modest.
AI is different. Training a large model can consume megawatts sustained over weeks or months. Inference at scale is less intense per request but adds up across billions of daily queries. The major AI labs were already struggling to secure enough power for their planned capacity before this war. Microsoft, Google, and Amazon have all signed long-term energy deals, explored nuclear options, and built dedicated power infrastructure precisely because the energy requirement is so large.
That matters now because energy shocks do not hit AI as a rounding error. They hit a sector whose largest ambition is already constrained by power availability.
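To see the order of magnitude, here is a back-of-the-envelope sketch of the electricity bill for a sustained training run. Every input is an illustrative assumption, not a figure from the article or from any disclosed run; the point is only that at megawatt scale, a percentage shock to power prices translates into millions of dollars per run.

```python
# Hypothetical illustration only: all inputs are assumed, not sourced.
SITE_POWER_MW = 20          # assumed sustained draw for a large run
RUN_DAYS = 90               # assumed run length
BASE_PRICE_PER_MWH = 80.0   # assumed pre-shock wholesale price, $/MWh
SHOCK = 0.40                # assumed 40% power-price increase

energy_mwh = SITE_POWER_MW * 24 * RUN_DAYS       # total energy consumed
base_cost = energy_mwh * BASE_PRICE_PER_MWH      # pre-shock electricity bill
shocked_cost = base_cost * (1 + SHOCK)           # post-shock bill

print(f"Energy consumed: {energy_mwh:,.0f} MWh")
print(f"Electricity bill: ${base_cost/1e6:.1f}M -> ${shocked_cost/1e6:.1f}M")
```

Under these assumptions a single run's power bill moves from about $3.5M to about $4.8M, and the gap scales linearly with both the price shock and the size of the run.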
What This Means for AI Compute Costs So Far
The real effect so far is best understood as margin pressure and capacity risk, not as a single universal percentage increase on every AI bill.
- Training economics worsen first. New frontier-model runs are planned against future power costs, not yesterday’s averages. When the forward view on energy becomes more volatile, some runs get delayed, resized, or scrutinised harder.
- Inference margins compress before list prices necessarily change. Providers can absorb some pressure for a while, but higher power, freight, and hardware risk costs still reduce margin on every GPU-backed workload.
- Marginal capacity gets more expensive than base capacity. Long-term contracts cushion some incumbents, but incremental demand still clears at the market’s more stressed edge.
- Smaller players feel it first. Companies renting GPU time, buying capacity late, or operating in import-exposed regions see the shock earlier and more directly than the hyperscalers.
That distribution matters. A war-driven energy shock does not just make AI more expensive in aggregate. It tends to concentrate advantage around the operators with long-term contracts, better balance sheets, and more regional flexibility.
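The margin-compression mechanism described above can be sketched in a few lines. All of the dollar figures here are assumptions chosen for illustration, not provider data; the structure is what matters: when the list price is sticky and power is a meaningful slice of the cost per GPU-hour, a power shock eats margin before any API price changes.

```python
# Hedged sketch of margin compression. All inputs are illustrative
# assumptions, not actual provider economics.
CAPEX_PER_GPU_HOUR = 2.00   # assumed amortised hardware + facility cost
POWER_PER_GPU_HOUR = 0.60   # assumed electricity cost per GPU-hour
LIST_PRICE = 3.20           # assumed sticky price charged per GPU-hour

def margin(power_multiplier: float) -> float:
    """Gross margin per GPU-hour at a given power-price multiplier."""
    cost = CAPEX_PER_GPU_HOUR + POWER_PER_GPU_HOUR * power_multiplier
    return (LIST_PRICE - cost) / LIST_PRICE

for mult in (1.0, 1.4):
    print(f"power x{mult:.1f}: margin {margin(mult):.1%}")
# With these assumptions, a 40% power-price rise cuts the per-hour
# margin sharply even though the headline price has not moved.
```

An operator with long-term power contracts holds `power_multiplier` near 1.0 while a spot-exposed competitor does not, which is the concentration-of-advantage effect described above in miniature.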

The Strategic Energy Contracts Tell the Story
The energy commitments AI companies were already making look even more rational now.
Microsoft’s nuclear moves, Amazon’s reactor bets, and Google’s geothermal and storage investments were already signals that AI’s biggest constraint was not just chips but power. In the current market they read less like optional hedges and more like strategic necessities.
These projects still take years to deliver. In the short term, even the best-prepared companies remain exposed to grid-level price increases, regional gas stress, and the cost of procuring incremental capacity when the market is under pressure.
The GPU Supply Chain Adds a Second Layer of Risk
Energy is the operating cost, but the hardware supply chain is the capital cost.
Semiconductor manufacturing depends on global logistics, rare earth materials, stable shipping lanes, and predictable lead times. A war that disrupts energy shipping and raises freight and insurance costs does not stay confined to fuel markets. It spills into the movement of GPUs, networking equipment, and the physical infrastructure needed to expand data-centre capacity.
The lead time on high-end AI accelerators was already measured in quarters. A more expensive and less reliable logistics environment pushes that further in the wrong direction. The combination is particularly damaging because it raises both the cost of running AI and the cost of building more of it.
Submarine Cables Add a Third Layer of Vulnerability
There is another piece of infrastructure running through the conflict zone that still gets less attention than it should: submarine communications cables.
Multiple fibre optic cables pass through or near the Strait of Hormuz and surrounding waters, linking Europe, the Middle East, and Asia. These links matter for distributed training, cross-region inference routing, dataset movement, and redundancy between data centres.
The cable risk is still partly contingent rather than broadly realised, but it is no longer theoretical. In an active conflict zone, accidental or deliberate cable damage becomes more plausible, and repair windows become harder to secure. If that layer is hit as well, the problem stops being only energy cost and becomes energy cost plus degraded network reliability.
For AI operators built around cheap, fast, global interconnection, that is a compounding risk rather than a separate one.

Efficiency Becomes an Existential Priority
One likely consequence of the current shock is a harder turn toward efficiency.
If every additional GPU hour carries more energy risk and more capital risk, the incentive to waste compute disappears. That means:
- Smaller, more efficient models gain ground. Distillation, quantisation, and architecture efficiency become more attractive when raw scale gets more expensive.
- Inference optimisation becomes more valuable. Speculative decoding, adaptive compute, batching, and cache-friendly designs move closer to core business strategy.
- On-device and edge inference look stronger. Local inference avoids some of the data-centre energy and network exposure entirely.
- Training efficiency research gets more funding. Anything that reduces the GPU hours required to reach a given capability level becomes economically strategic.
Constraint does not stop the field. It changes what good engineering looks like.
Europe and Asia Are Under Sharper Pressure
The regional split matters more now because the shock is not landing evenly.
The EIA explicitly said Europe and Asia were seeing natural gas pressure from reduced LNG flows while U.S. natural gas remained relatively unaffected in the near term. That means AI operators exposed to European and Asian power markets are likely to feel the cost pressure sooner, while U.S. operators benefit from partial insulation even though they are still exposed through oil, freight, hardware, and global capacity competition.
This creates an uneven playing field. American AI companies were already ahead on capital and infrastructure. A regionally uneven energy shock can widen that advantage.
The Broader Economic Context Still Compounds the Problem
The energy shock does not arrive by itself. It arrives with freight disruption, insurance repricing, supply-chain friction, inflation pressure, and weaker confidence.
That broader environment matters for AI because it can squeeze both sides of the equation at once. Costs rise through energy and logistics while customers become more cautious about discretionary AI spend. Companies built on the assumption that compute only gets cheaper every quarter are the ones most exposed to that reversal.
What This Means for the AI Industry
The AI industry was built during a period that made energy look abundant, logistics look reliable, and scaling look mostly like a capital-allocation problem.
Three weeks into this war, the better framing is no longer “what if energy gets expensive?” It is “what happens when the inputs under compute have already moved and the pass-through is still working its way through the stack?”
The likely direction is clear:
- The race to scale slows at the margin.
- Efficiency matters more than raw capability.
- Well-capitalised incumbents gain relative advantage.
- On-device and edge computing become strategically more attractive.
- The geography of AI capability shifts toward the operators least exposed to imported energy and stressed logistics.
The important update is not that every consequence has fully arrived already. It is that the market evidence is now real, dated, and measurable. The war has already repriced the foundation under AI. The rest of the industry now adjusts around that fact.