Between Doom and Dollars
The grid, the law, and the P&L—not sci‑fi—will decide AI’s payoff.
“There is potential for serious, even catastrophic, harm… stemming from the most significant capabilities of these AI models.”
—The Bletchley Declaration, November 2023.
A chatbot, a bereavement fare, and a legal theory that made a judge blink
On February 14, 2024, the British Columbia Civil Resolution Tribunal ruled that Air Canada owed a passenger money because the airline’s website chatbot gave him bogus advice about a bereavement discount. The customer, Jake Moffatt, produced screenshots; the airline produced a novel defense—that its chatbot was a “separate legal entity” responsible for its own statements. The tribunal was not amused. It ordered Air Canada to pay C$650.88 plus interest and fees and wrote, flatly:
“While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website.”
Lawyers summarized the case the way investors should read it: negligent misrepresentation by an automated system is still negligent misrepresentation by the company. In other words, there is no “the bot did it” defense in commerce.
Reading the fine print | The real risk isn’t sci‑fi. It’s liability.
This small case is a big tell. If you strip away the competing sermons—AI doom on one side, AI euphoria on the other—the most immediate risks look like boring corporate ones: false claims to consumers, security lapses, discrimination allegations, IP fights, and discovery headaches. The upside is equally prosaic: productivity deltas that justify capital outlays, if the P&L closes.
Thesis: AI right now is less an existential roulette wheel than a capital‑intensive, regulation‑sensitive infrastructure buildout colliding with familiar legal duties. Hype exaggerates timelines; doom exaggerates agency. Both underplay the spreadsheet.
How we got here: the boom nobody budgeted for
Start with the hardware. Nvidia’s data‑center revenue went vertical in 2024–2025, as hyperscalers and model labs chased compute. The numbers are not subtle: Q2 FY2025 revenue hit $30.0 billion (data‑center: $26.3 billion). By Q2 FY2026, total revenue was $46.7 billion, with “Blackwell Data Center revenue” up 17% sequentially—and, tellingly, no H20 sales to China that quarter due to export restrictions.
The buyers? Microsoft says it is on track to invest about $80 billion in FY2025 on AI‑enabled data centers. Meta raised its 2025 capex outlook to $70–$72 billion and warns 2026 will be “notably larger.” Amazon is hovering around the $100 billion capex mark across the estate, with AWS as the growth engine. The market is now watching capex almost the way it once watched MAUs.
Banks and brokers finally noticed what the transformers quietly did to utility load curves. UBS now pegs global AI capex at ~$423 billion in 2025 and $1.3 trillion by 2030, with the kicker that power availability—not just chips—will gate winners.
On the demand side, the International Energy Agency expects data‑center electricity use to more than double by 2030 to about 945 TWh in its base case. The U.S. EIA’s November 2025 outlook forecasts U.S. electricity sales rising 2.4% in 2025 and 2.6% in 2026, with data centers singled out as a regional driver. This is not just a chip cycle; it’s an energy story with a grid attached.
Regulators also discovered new levers. Europe adopted the AI Act in 2024; key provisions started phasing in during 2025, including governance structures and penalties of up to €35 million or 7% of global turnover for certain violations. In August 2025, obligations for general‑purpose AI (GPAI) models began to apply; the remainder kicks in through 2026–2027, including regulatory sandboxes and full high‑risk system requirements.
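That penalty ceiling is the greater of a fixed amount and a share of turnover, which matters for how the fine scales with company size. A minimal sketch of the arithmetic, using the figures in the text (the function name is illustrative, not from any statute):

```python
def ai_act_fine_cap(global_turnover_eur: float,
                    fixed_cap_eur: float = 35_000_000,
                    turnover_share: float = 0.07) -> float:
    """Upper bound on a fine for the most serious violations:
    the higher of a fixed amount (EUR 35M) or 7% of global annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# For a firm with EUR 2B turnover, the 7% prong dominates (EUR 140M);
# below EUR 500M turnover, the EUR 35M floor is the binding cap.
print(ai_act_fine_cap(2_000_000_000))
```

The crossover point sits at €500 million in turnover, which is why the same headline number "concentrates minds" very differently at a startup than at a hyperscaler.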
The U.S. took a different path. The October 2023 Biden executive order (EO 14110), which directed NIST to set safety‑testing standards, was rescinded on January 20, 2025, and a new order reframed AI as an innovation‑first agenda. The Federal Register filings and contemporaneous reporting are clear about the change, even if agencies are still sorting out what remains of the prior workstreams.
So, in three paragraphs: compute spending exploded, energy demand followed, and the policy split widened.
A rhyming case: Google’s “we missed the mark” week
In late February 2024, Google paused Gemini’s ability to generate images of people after the model produced historically inaccurate depictions (yes, even Nazi‑era German soldiers with diverse skin tones). The company apologized—“It’s clear that this feature missed the mark”—and suspended the feature while it reworked the system. In August 2024, Alphabet said it would resume the capability with safeguards.
“We recently made the decision to pause Gemini’s image generation of people while we work on improving the accuracy of its responses.” —Prabhakar Raghavan, SVP, Google.
The euphemism of the week was “missed the mark.” The reality was reputational damage—and a free lesson in alignment, prompt steering, and the brittleness of one‑size‑fits‑all safety tuning.
The loudest voices | Doom vs. shrug
On one pole, Geoffrey Hinton has warned that AI may pose a “more urgent” threat than climate change and has put non‑trivial odds on catastrophic failure within a few decades. On the other, Yann LeCun has called existential risk claims “preposterous.” These are not anonymous trolls; they are Turing Award winners. The result is a public debate that toggles between 10‑to‑20% extinction odds and a confidence that intelligence does not imply agency.
Countries tried to split the difference. The U.K.’s Bletchley Declaration—signed by the U.S., EU, and China among others—conceded upfront “serious, even catastrophic, harm” could flow from frontier systems. That sentence is now quoted in every budget slide that wants a line item for an AI safety team.
Policy and politics, not vibes
Europe’s rules with teeth. The AI Act’s staged rollout matters operationally: codes of practice were due in May 2025; GPAI rules, governance, and penalty regimes kicked in August 2025; the remaining obligations, including for high‑risk systems, start August 2026; and providers of GPAI released before August 2025 must comply by August 2027. This is a compliance program, not a press release. If you’re deploying models that touch credit, employment, or critical infrastructure in the EU, the paperwork and audit trail are no longer optional.
America’s forked signals. The U.S. moved from a 2023 EO that nudged NIST and agencies toward pre‑release testing to a 2025 reset focused on “removing barriers.” Whatever your politics, the effect is simple: fewer federal guardrails, more private‑sector discretion, and a larger role for statehouses, courts, and procurement to decide what “safe enough” means. (California’s experiment illustrates the swing: Governor Gavin Newsom vetoed SB 1047 in September 2024, rejecting a “frontier model” oversight board and mandatory safeguards, even as he called for evidence‑based protocols.)
China’s administrative muscle. Beijing’s Interim Measures for Generative AI (effective August 2023) formalized content controls, security assessments, and algorithm filings. The approach is direct: you get a service only after an administrative greenlight, with labeling and “core values” language baked in. Western firms won’t copy that model, but they’ll collide with it in supply chains and market access.
Policy isn’t academic here. When the U.S. tightened export restrictions, Nvidia explicitly reported no sales of its China‑specific H20 GPUs in Q2 FY2026. That’s industrial policy landing on a P&L and a bill of materials.
Capital allocation under constraint
Markets have already made a call: they are financing the AI buildout at eye‑watering scale. Microsoft’s $80 billion datapoint, Meta’s $70–72 billion, Amazon’s $100 billion—these are not venture rounds. They are multi‑year commitments that assume durable demand for inference, coding copilots, enterprise search, and every workflow that can be turned into an autocomplete.
But two constraints are creeping from footnotes to headlines:
Power. The IEA’s doubling scenarios and the EIA’s demand revisions are, for once, on the conservative side of industry talk. Grid‑connection queues, transformer lead times, and water rights are now hard bottlenecks for valuation models. When an investment bank’s CIO note starts talking about gigawatts, you know this is no longer a TAM slide.
Unit economics. Even as revenue soars—OpenAI’s annualized run‑rate reportedly crossed $10 billion by June 2025—the costs of training runs, inference contracts, and stock‑based comp are ravenous. A company can both sell a lot of tokens and burn cash if power, silicon, and debt servicing eat the margin.
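The "sell a lot of tokens and still burn cash" point can be made concrete with a toy margin model. Every figure and name below is hypothetical, chosen only to show how the cost lines named above can swamp growing revenue; this is not any company's actual accounting:

```python
def quarterly_cash_margin(token_revenue: float,
                          power_cost: float,
                          silicon_depreciation: float,
                          debt_service: float,
                          stock_comp: float) -> float:
    """Toy model: revenue minus the cost lines named in the text.
    Growing the first argument does not guarantee a positive result."""
    return token_revenue - (power_cost + silicon_depreciation
                            + debt_service + stock_comp)

# Hypothetical quarter: $2.5B of token revenue, yet the costs total $2.7B,
# leaving a $200M cash shortfall despite record top-line numbers.
print(quarterly_cash_margin(2.5e9, 0.6e9, 1.2e9, 0.4e9, 0.5e9))
```

The design point is that three of the four cost lines (power, silicon depreciation, debt service) scale with capacity, not with usage, so margin only closes if utilization and pricing hold up after the buildout.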
What the Air Canada case actually tells us
Return to the “lying chatbot.” It didn’t end the world, but it did end the patience of a tribunal. That matters, because the first wave of generative deployment has been front‑office: customer support, sales, marketing, and HR. Those are precisely the places where misrepresentation, bias, and recordkeeping are most regulated—and where screenshots live forever.
Three operational takeaways:
You own your model’s mouth. Outsourcing to a vendor or slapping “beta” on a widget won’t immunize you from consumer law. Negligent misrepresentation is still negligent when it’s synthesized.
Policy and PR collide. Google’s “we missed the mark” episode was a content moderation mess wearing an AI badge. Fixing it required model changes and product policy changes—and public contrition.
Regulatory arbitrage is closing. In Europe, penalties are now live. In China, approvals are table stakes. In the U.S., the federal center of gravity shifted, but plaintiffs’ lawyers, state AGs, and procurement rules fill gaps faster than DC pundits think.
Comparative systems | Why rules change investment math
EU vs. U.S. vs. China is not just an ideological debate; it’s a cost curve.
EU: The AI Act will force earlier documentation, risk classification, and, for high‑risk uses, conformity assessments. Expect slower rollouts but lower compliance tail risk once certified. Multinationals will shift model choices (and sometimes prompt interfaces) to fit “high‑risk” definitions and record‑keeping standards. The fines—up to 7% of global revenue—concentrate minds.
U.S. (2025 reset): Faster deployment, more self‑attestation, and bigger swings on capex. Venture and private credit like this; so do hyperscalers. But the litigation surface area grows (consumer protection, discrimination claims, privacy, copyright), with the risk priced after the fact by judges and insurers.
China: Administrative pre‑clearance and labeling make for predictable, if restrictive, deployment. Domestic firms live with it; foreign ones must translate both language and law. The upside is speed once the paperwork is complete. The downside is guardrails that are political, not just technical.
Different rules, different ROIs. If you wonder why a model lab signs a massive long‑term cloud contract or why a hyperscaler builds in specific U.S. states with friendly interconnection queues, read the statutes and the intertie maps.
The dilemma ahead
A handful of near‑term decisions will tell you whether “AI doom vs. AI hype” settles into something economically sustainable, or whether we get a blowoff top followed by a policy backlash.
1) Europe’s enforcement phase (2026). The AI Act’s “remainder” becomes applicable on August 2, 2026, including broad obligations for high‑risk systems and regulatory sandboxes mandated at the national level. The European Commission and the new AI Office must also issue practical guidance under Article 6. The choice is between strict enforcement (with high‑profile penalties) and a softer, sandbox‑heavy ramp. The stakes: legal certainty for investment vs. a chilling effect on smaller deployers.
2) America’s national line. After rescinding EO 14110, the new U.S. orders promised an “AI Action Plan.” The fork here is clear: double down on an innovation‑first line and let courts clean up the harms—or restore some pre‑release testing and incident reporting via agency rulemakings and procurement standards. The stakes: the speed of capex deployment, liability distribution, and whether federal purchasing becomes the de facto safety standard.
3) Power and permitting. EIA’s projections imply new records for U.S. electricity use in 2025 and 2026, with data centers a cited driver. Whether regulators clear gigawatt‑scale projects and transmission upgrades will determine if those capex plans translate to actual inference capacity—or if “out of power” becomes the 2026 earnings‑call euphemism.
Scenarios (with consequences)
If policymakers go “strict Europe”:
What happens: The EU AI Office backs heavier audits and early fines; codes of practice morph into de facto standards; national sandboxes run, but with tight scopes.
Effects: Multinationals prioritize EU‑compliant deployment first (to amortize compliance across regions) and slow‑roll experimental features. Startups spend more on compliance personnel than on marketing. Valuations for audit, monitoring, and model‑risk startups rise.
Macro: Slower feature velocity; lower headline “oops” risk; better paper trails when things go wrong. The risk is freezing smaller players out.
If Washington sticks with “fast U.S.”:
What happens: The federal government avoids prescriptive safety rules; NIST remains influential but voluntary; agencies use procurement clauses rather than regulation.
Effects: Hyperscalers continue $70–$100+ billion annual capex plans with fewer federal brakes; plaintiffs’ attorneys and state AGs test new theories in court (consumer law, unfair practices, bias). Insurance markets widen exclusions.
Macro: Faster deployment and revenue growth—headlines like OpenAI’s $10 billion run‑rate keep coming—but higher tail risk is borne by firms, not licensing authorities. Utilities remain the uncomfortable bottleneck.
If everyone tries “half‑measure harmonization”:
What happens: Brussels softens early enforcement via guidance; Washington re‑animates parts of NIST’s work through soft law; companies converge on ISO‑ish documentation and incident reporting; China keeps its administrative approvals.
Effects: Compliance costs fall relative to “strict Europe” but rise relative to “fast U.S.” Cross‑border deployments get easier for enterprises; open‑source communities still fight over licensing and liability.
Macro: More predictable than pure laissez‑faire, less sclerotic than early GDPR enforcement. This is the “boring stability” most CFOs would pick if asked.
What to watch (without the drama)
Export controls in practice. Nvidia’s disclosure of zero H20 sales to China in Q2 FY2026 tells you chips policy is a live input to model roadmaps and revenue mixes. Expect more “China‑only” SKUs and more legal creativity around where training actually happens.
Energy math. IEA and EIA projections are being revised upward because real projects (and real interconnects) are being signed. When UBS says AI capex depends on power, don’t roll your eyes; call your utility IR desk.
Liability normalization. The Air Canada ruling will be echoed in class actions and agency guidance: if your bot speaks for you, it speaks for you. Boards should ask general counsel for a one‑pager on where chatbots, copilots, and internal tools could create discoverable misstatements.
Regulatory synchronization (or not). Europe’s 2026 milestones, U.S. agency memos, and China’s filing/labeling regime will either cohere into a rough baseline—or split the world into three stacks with different defaults. That’s not “doom.” It’s cost.
The fork in the road
The exaggerated debate—apocalypse vs. utopia—hides the actual decision in front of executives and policymakers.
Likely outcome: A messy half‑measure harmonization. Europe enforces the AI Act but leans on guidance and sandboxes; the U.S. favors speed with pockets of federal procurement‑driven safety; China keeps administrative pre‑clearance. Capital continues to flow—$423 billion this year by UBS’s count—until power, interest rates, and end‑user ROI discipline it. The “AI winter” people keep predicting looks more like a few cold fronts: regional permitting snarls, export‑control whiplash, and lawsuits that make some consumer features too risky to ship.
Most desirable outcome (for long‑term economic and social health): A boring, test‑and‑document baseline: pre‑release evaluations for high‑risk uses (health, finance, critical infra), narrow incident reporting, clear lines on biometric surveillance, and procurement standards that move the market without strangling it. In other words: treat models like systems, not oracles or toys. Pair that with grid investments that make the capex cycle real instead of aspirational.
The Air Canada episode is a hint, not a headline. The next few quarters will not be decided by a killer robot or a glossy keynote. They’ll be decided by power purchase agreements, export licenses, conformity assessments, and the dull work of making sure your bot doesn’t promise refunds it can’t deliver.
LeCun says doomsaying is “preposterous”; Hinton warns the risks are “more urgent” than climate. Both could be wrong about the timeline. What’s certain is the bill—capex, energy, compliance, and, if you’re not careful, damages plus fees. The Bletchley line warned of “catastrophic” harm; the tribunal in Vancouver answered with something more immediate: if the machine speaks for you, you pay for it.
Call it the real midpoint between doom and hype: the expensive, regulated, litigated now.