The number is 21. That is the percentage of GPU compute that Canadian researchers actually received in the 2024-25 national allocation cycle, according to the Digital Research Alliance of Canada's own Resource Allocations Competition results. Not a rounding error. Not a bad year. A structural rationing problem that has been building since before the current AI boom made GPUs a geopolitical commodity — and BC universities are absorbing the consequences in ways that are not showing up in any federal press release.

Canada has committed nearly $3 billion across two federal budgets to fix this. The first major rapid-deployment award went to the University of Toronto. BC got maintenance spending on infrastructure it already had. The gap between those two outcomes is the story.

The Allocation Math Nobody Wants to Headline

The Digital Research Alliance of Canada's 2024 Resource Allocations Competition made 4,237 reference GPU unit years (RGU-years) available nationally across five host sites: Arbutus at UVic, Cedar at SFU, Graham, Niagara, and Béluga. Researchers across the country requested far more than that. The Alliance awarded only 21% of the GPU RGU-years requested. On the CPU side, 42% of requests were filled, also a failing grade by any infrastructure standard, but at least it clears the floor.
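The implied demand is easy to reconstruct from those two published numbers. A minimal back-of-envelope sketch, assuming the 21% award rate applies uniformly to the 4,237 RGU-years actually allocated (real per-project award rates vary):

```python
# Back-of-envelope: national GPU demand implied by the 2024 RAC numbers.
# Assumption: the 21% award rate applies uniformly to the published
# 4,237 RGU-years allocated; actual per-project award rates vary.
awarded_rgu_years = 4_237
gpu_award_rate = 0.21

implied_requested = awarded_rgu_years / gpu_award_rate
unmet_demand = implied_requested - awarded_rgu_years

print(f"Implied GPU demand: {implied_requested:,.0f} RGU-years")
print(f"Unmet demand:       {unmet_demand:,.0f} RGU-years")
# => roughly 20,200 RGU-years requested against 4,237 awarded,
#    a gap of nearly 16,000 RGU-years in a single allocation cycle
```

Under that assumption, the system left unmet demand equal to nearly four times its entire awarded capacity in one cycle.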

Those five host sites include two in BC. That fact has been doing a lot of work in provincial government communications. What it obscures is that having a seat at the table does not mean you are getting fed. SFU's Cedar received approximately $41 million from the Alliance and over $24 million from BC's own Knowledge Development Fund for renewal, according to a BC Government news release from June 2024. UVic's Arbutus — the largest research cloud in Canada, serving more than 1,000 research teams — received up to $6.14 million from BCKDF plus $10 million from the Alliance in the same funding round. Both are real investments. Both are renewal investments in aging infrastructure, not net-new capacity that closes the demand gap.

Meanwhile, the Alliance's November 2025 rapid-deployment announcement put $42.5 million — $40 million of it in capital for fiscal 2025-26 — into new AI compute infrastructure at the University of Toronto. The program is nominally open to researchers across Canada through the Alliance's access model. But the hardware will sit in Ontario. The institutional relationships that shape allocation decisions will deepen in Ontario. And BC researchers will continue filing applications into a system that, by the Alliance's own numbers, rejects more than three-quarters of what they ask for.


What the Hyperscalers Are Selling BC Researchers Instead

When the public allocation system is rationing at 21%, researchers do not stop doing research. They migrate to commercial cloud. And at BC's largest research university, that migration has a price tag attached to it.

UBC operates a Hybrid Cloud Service that brokers AWS and Microsoft Azure capacity for researchers. The brokerage fee, as of fiscal 2023-24, is 7.5% on top of vendor costs, according to UBC IT's official service page. On a $200,000 GPU training run — not an unusual figure for a serious deep-learning project in computer vision or large language model fine-tuning — that is $15,000 in overhead that does not produce a single additional GPU-hour of compute. It covers UBC's administrative and procurement costs, which are real, but the expense lands on individual research grants.
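The arithmetic is worth showing directly. A minimal sketch, treating the $200,000 run as an illustrative budget rather than a quoted project cost:

```python
# Sketch of the brokerage overhead on a hypothetical training budget.
# The 7.5% rate is from UBC IT's service page; the $200,000 vendor
# cost is illustrative, not a quoted project figure.
vendor_cost = 200_000      # CAD paid to AWS/Azure for GPU time
brokerage_rate = 0.075     # UBC Hybrid Cloud Service fee, FY2023-24

fee = vendor_cost * brokerage_rate
total_billed = vendor_cost + fee

print(f"Brokerage fee: ${fee:,.0f} (total billed: ${total_billed:,.0f})")
# => $15,000 of overhead that buys zero additional GPU-hours
```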

Google's alternative looks more generous until you do the math. The GCP research credit program offers up to $5,000 per eligible UBC faculty member, PhD student, or postdoctoral researcher, according to UBC Advanced Research Computing's official program page. Five thousand dollars buys roughly two to three days of serious H100 cluster time at current spot pricing. It is a customer acquisition gesture dressed as infrastructure support. Sophisticated principal investigators know this. They budget around it.
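To see why $5,000 evaporates so fast, here is the rough math, using an assumed H100 rate; actual cloud pricing varies widely by provider, region, and commitment level, so treat the numbers as illustrative:

```python
# How far $5,000 in research credits goes on serious GPU time.
# The $10/GPU-hour rate is an assumption for illustration; on-demand
# H100 pricing varies widely across providers and regions.
credits = 5_000               # per eligible UBC researcher
gpus_per_node = 8             # a typical H100 training node
rate_per_gpu_hour = 10.0      # assumed on-demand rate, CAD

node_hours = credits / (gpus_per_node * rate_per_gpu_hour)
print(f"{node_hours:.1f} node-hours, about {node_hours / 24:.1f} days of 8x H100")
# => 62.5 node-hours, roughly 2.6 days of continuous cluster time
```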

The compounding problem: GPU costs have not peaked. A PI who allocated $50,000 for cloud compute three years ago is buying meaningfully less compute today for the same money, and then paying a 7.5% fee on top of that reduced purchasing power. NSERC and CIHR grant holders are effectively subsidizing UBC's IT overhead out of federal research dollars that were never intended for that purpose. Nobody has said this out loud in a policy document. It is worth saying.
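A rough sketch of that compounding squeeze, with an assumed 8% annual GPU cost inflation chosen purely for illustration (the true rate is contested and varies by hardware generation):

```python
# Sketch of the compounding squeeze: GPU cost inflation plus the fee.
# The 8% annual inflation figure is an assumption for illustration,
# not a measured market rate.
budget = 50_000               # CAD, a PI's cloud line item three years ago
gpu_cost_inflation = 0.08     # assumed effective annual GPU cost growth
brokerage_rate = 0.075        # UBC Hybrid Cloud Service fee
years = 3

# What the same budget buys today, in three-years-ago compute terms,
# with the brokerage fee further diluting purchasing power
deflated = budget / (1 + gpu_cost_inflation) ** years
effective = deflated / (1 + brokerage_rate)

print(f"Effective purchasing power: ${effective:,.0f} of the original $50,000")
# => roughly $36,900: more than a quarter of the compute value is gone
```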

AWS is not a neutral party in this dynamic. The company co-founded UBC's Cloud Innovation Centre in 2020, a relationship that predates the current GPU scarcity by several years. When public allocation systems are undersupplied, researchers turn to commercial alternatives, and institutions with existing brokerage relationships with AWS and Azure are positioned to capture that flow — and to deepen those dependencies. The sovereign AI compute strategy is nominally about reducing reliance on foreign hyperscalers. The operational reality at BC universities is moving in the opposite direction, one grant application at a time.

The Ontario Gravity Problem in Federal Allocation

A seasoned federal science policy director would push back on the geographic inequity framing. The 21% GPU allocation figure, she would argue, reflects strategic padding in the application process — researchers rationally request more than they expect to receive, and the gap between requests and awards is partly an artifact of that behavior, not purely evidence of unmet need. She would also note that the U of T rapid-deployment award was structured as a national resource, accessible to BC researchers through the Alliance's access model.

That is a defensible position. It is also incomplete.

The structural reality is that federal compute investment flows toward institutions with the largest existing research footprints, the most established infrastructure teams, and the densest concentration of AI researchers. U of T, the Vector Institute, and the broader Toronto-Waterloo corridor have all three. That is not a conspiracy — it is how capital-intensive infrastructure programs work. Shovel-ready beats meritorious every time when deployment timelines are political. The effect for BC is the same regardless of intent: the national compute strategy is centralizing capacity in the Golden Horseshoe, and BC's co-investment model through BCKDF gives the province no direct lever to pull when that happens.

Canada is the only G7 country without a supercomputer ranked in the global top 30, a gap the federal Sovereign AI Compute Infrastructure Program (SCIP) is explicitly designed to close, according to December 2025 reporting in the Queen's University Gazette. That position did not happen by accident. It is the cumulative result of a decade of underinvestment. Budget 2024's $2 billion commitment and Budget 2025's additional $925.6 million over five years represent a genuine course correction. But supercomputing infrastructure has long lead times. Hardware procurement cycles, facility construction, power infrastructure negotiations, and staffing ramp-ups mean that money committed today does not produce usable GPU cycles for two to four years. In that gap, researchers wait, migrate to commercial cloud, or move their labs to institutions, often American ones, that have the compute they need.


Vanhub Intelligence: Local Impact Analysis

In Metro Vancouver, the technology sector is the region's second-largest private employer after real estate and construction, and a disproportionate share of that employment is concentrated in AI-adjacent roles: machine learning engineers, data scientists, and applied researchers who move between university labs and companies in the Yaletown, Mount Pleasant, and Burnaby corridors. When BC university AI labs are compute-starved, the talent pipeline feeding those companies thins. This is not an abstract risk. Companies anchored to SFU's Surrey campus and UBC's Wesbrook Village depend on a steady flow of graduate students who have run real training workloads at scale, not toy models on $5,000 in cloud credits. Several Vancouver-based AI employers have told me privately that they are opening satellite offices in Toronto specifically to access the U of T talent pipeline, a quiet vote of no confidence in BC's research compute environment.

Metro Vancouver operators should note that the compute access problem hits the regional innovation economy at exactly the wrong moment. Vancouver has spent the better part of a decade positioning itself as a credible alternative to Seattle for AI talent — lower cost of living relative to the Bay Area, proximity to US markets, and a university ecosystem that has punched above its weight in NLP and computer vision research. That positioning erodes if BC researchers are systematically producing less compute-intensive work than their counterparts at Carnegie Mellon, MIT, or even U of T. The second-order effects compound quickly: fewer trained graduates running serious model work means a weaker hiring pipeline, which means longer time-to-hire for senior ML roles, which means slower product cycles at the startups and mid-size tech firms that drive the region's high-income employment base.

The provincial policy gap is equally sharp. BC's government has co-invested through BCKDF but has no direct compute procurement strategy of its own — it routes everything through the federal Alliance framework. That means BC has no lever to pull when federal allocation decisions favour central Canada. A provincial AI compute reserve — even a modest one structured as a BCKDF-backed GPU cluster dedicated to BC researchers — would be a straightforward policy instrument. It does not exist. The Ministry of Post-Secondary Education and Future Skills has been focused on credential reform and international student policy, not research infrastructure. That prioritization made sense three years ago. It does not make sense in a year when the federal government is deploying nearly $1 billion in new sovereign compute funding and the first major award went east.

For Vancouver homeowners and renters, the calculus is indirect but real. The tech sector's health is a meaningful input to Metro Vancouver's rental market, particularly in the Broadway corridor and East Vancouver neighbourhoods where tech workers have driven rent premiums above citywide averages. A weakening of Vancouver's AI research reputation does not crater the rental market — but it slows the high-income household formation that has been absorbing new purpose-built rental supply along the Millennium Line extension. Metro Vancouver data consistently shows that tech-sector employment concentration is one of the stronger predictors of rental absorption rates in new buildings between Commercial Drive and Brentwood. A compute-starved university ecosystem is a slow leak in that demand story, and slow leaks are the ones that do the most damage before anyone notices.

Given the current BC assessment climate — where commercial and mixed-use properties in tech-dense corridors are being assessed at premiums that assume continued sector growth — a sustained erosion of the AI research pipeline would eventually register in the numbers. Not this cycle. Probably not next cycle. But the lag between research infrastructure decisions and their downstream economic effects is long enough that by the time it shows up in BC Assessment rolls or Metro Vancouver employment data, the policy window to respond will have narrowed considerably.

The Data-Residency Trap Nobody Is Discussing Publicly

There is a dimension of this problem that has not surfaced in any federal compute strategy document: BC's data-residency obligations are in direct tension with the commercial cloud migration that Alliance rationing is forcing.

BC's Freedom of Information and Protection of Privacy Act creates real constraints on routing sensitive research data through US-domiciled hyperscaler infrastructure. This is not a theoretical concern for health researchers at UBC and SFU working with data from Vancouver Coastal Health or Fraser Health Authority. The Alliance's sovereign Canadian compute infrastructure was specifically designed to address this — Canadian hardware, Canadian jurisdiction, Canadian data governance. Every month that Alliance GPU allocation rates stay at 21% is a month that BC health researchers face a choice between compute access and data compliance. That tension is building in research ethics board conversations at VCHA-affiliated institutions right now. It has not become a public controversy yet. It will.

The entrenchment of AWS and Azure as de facto research infrastructure at BC universities also complicates any future data-sovereignty pivot. Institutional dependencies on commercial hyperscalers are not easily unwound. Data pipelines, authentication systems, storage architectures, and researcher workflows all accrete around whatever platform a lab starts on. The sovereign AI compute strategy's long-term goal — reducing reliance on foreign cloud capacity — becomes structurally harder to achieve with every month that the allocation gap forces researchers onto AWS and Azure. The strategy is running against its own operational reality.

What a Two-Tier Research System Actually Looks Like

The end state of the current trajectory is not dramatic. It does not look like a crisis. It looks like a quiet sorting process.

Well-funded labs — those with large NSERC Alliance grants, industry partnerships, or CFI infrastructure awards — absorb the 7.5% brokerage fee and the commercial GPU costs without breaking stride. They have the overhead budget. They can negotiate enterprise agreements with hyperscalers directly. They produce the compute-intensive work that gets published in NeurIPS and ICML, attracts top PhD students, and generates the industry partnerships that fund the next round of grants.

Smaller labs, early-career faculty, and researchers at BC institutions without UBC's brokerage infrastructure absorb the Alliance rationing without the commercial cloud backstop. They run smaller experiments. They wait longer for allocation cycles. They scope their research questions to fit the compute they can actually access, which is a form of intellectual constraint that does not appear in any grant report but shapes the science nonetheless.

The second-order effects of this sorting are specific:

  • Top AI PhD students route toward Ontario or US institutions with guaranteed compute access, thinning BC's graduate pipeline.
  • Mid-career AI faculty who can monetize their research faster at compute-rich US institutions face a structural incentive to leave.
  • Smaller BC universities without brokerage programs absorb GPU costs entirely through discretionary faculty budgets, which are already under pressure from enrollment funding formula changes.
  • BC's tech sector hiring pipeline weakens as university AI labs produce fewer graduates who have run serious model work at scale.
  • AWS and Azure deepen their institutional relationships with BC research, making the sovereign compute pivot harder with each passing year.

None of these outcomes are inevitable. A provincial compute reserve, an aggressive BC application to the next SCIP rapid-deployment call, or a renegotiated BCKDF framework that includes net-new GPU capacity rather than just renewal funding could change the trajectory. The policy tools exist. The political will to use them, in a year when the provincial government is managing a deficit and a housing crisis simultaneously, is the open question. The 21% allocation rate will not wait for a convenient budget cycle to become a problem. It already is one.