Norwegian companies in finance, healthcare, and the public sector are experiencing increasing pressure to run sensitive AI workloads on infrastructure they control themselves. The debate over sovereign cloud and local data centers is no longer just about ideology — it's about money, regulatory compliance, and strategic control over one's own data.
A technical review published on Towards Data Science explores how organizations can build GPU-as-a-Service (GPUaaS) on their own servers using Kubernetes, supporting multi-tenant architecture, workload scheduling, and cost modeling. The article provides a practical framework for those wishing to avoid vendor lock-in with major cloud providers.
What Does It Cost to Own Your Own GPUs?
Capital expenditures are significant. An NVIDIA H100 server with eight GPU units can cost around $250,000 — and that is just the starting point. According to market research referenced by the Towards Data Science article, power, cooling, and maintenance can add another 40–60 percent of the hardware price over its lifetime.
Behind these figures lie a range of operating expenses: specialized cooling infrastructure, high-speed networking such as 100 Gb/s InfiniBand or Ethernet switches, software licenses, and qualified IT personnel. Hardware is typically depreciated over three to five years, which requires planning for replacement and upgrades.
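The arithmetic behind these figures can be made concrete. The sketch below amortizes the quoted server price and overhead over a depreciation window to get a cost per utilized GPU-hour; the overhead fraction and depreciation period are assumptions chosen from the ranges mentioned above, not figures from the source.

```python
# Illustrative sketch: amortized on-prem cost per GPU-hour.
# All constants are assumptions drawn from the ranges quoted in the text.
HARDWARE_COST = 250_000      # 8x H100 server, USD
OVERHEAD_FRACTION = 0.5      # power/cooling/maintenance: midpoint of the 40-60% range
DEPRECIATION_YEARS = 4       # within the typical 3-5 year window
GPUS_PER_SERVER = 8
HOURS_PER_YEAR = 8_760

def on_prem_cost_per_gpu_hour(utilization: float) -> float:
    """Total cost of ownership divided by the GPU-hours actually consumed."""
    tco = HARDWARE_COST * (1 + OVERHEAD_FRACTION)
    used_gpu_hours = GPUS_PER_SERVER * HOURS_PER_YEAR * DEPRECIATION_YEARS * utilization
    return tco / used_gpu_hours

print(f"{on_prem_cost_per_gpu_hour(0.65):.2f}")  # ≈ 2.06 USD per GPU-hour at 65% utilization
```

Note how the per-hour cost is inversely proportional to utilization: idle GPUs still accrue depreciation and overhead, which is why the utilization rate dominates the comparison with cloud pricing.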

The Cloud Is Flexible — But Can Become Expensive
Public cloud GPUaaS competes on availability and a low barrier to entry. On-demand hourly rates for an NVIDIA H100 range from around $1.50 at specialized providers to between $4 and $8 at the major hyperscalers when purchased without a committed-use agreement, according to price data from late 2024 and early 2025.
For experimental phases, variable workloads, and quick ramp-up, this is hard to beat. With sustained high utilization, however, the picture changes quickly.
According to analyses from Accenture cited in the article's source material, local GPU infrastructure becomes cost-competitive with the cloud when utilization consistently exceeds 60–70 percent over the hardware's lifetime. The result can be a 30–50 percent lower total cost of ownership over a three-year period.
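The break-even point can be sketched directly from the price points above: the utilization at which three years of on-prem ownership costs the same as renting the equivalent GPU-hours. The overhead fraction and the three-year horizon are assumptions for illustration.

```python
# Illustrative break-even sketch: at what utilization does owning beat renting?
# Constants are assumptions based on the price points quoted in the text.
HARDWARE_COST = 250_000      # 8x H100 server, USD
OVERHEAD_FRACTION = 0.5      # midpoint of the 40-60% overhead range
YEARS = 3
GPUS = 8
HOURS_PER_YEAR = 8_760

def break_even_utilization(cloud_rate_per_gpu_hour: float) -> float:
    """Utilization above which on-prem TCO undercuts the given cloud rate."""
    tco = HARDWARE_COST * (1 + OVERHEAD_FRACTION)
    max_gpu_hours = GPUS * HOURS_PER_YEAR * YEARS
    return tco / (cloud_rate_per_gpu_hour * max_gpu_hours)

for rate in (1.50, 4.00, 8.00):
    print(f"${rate:.2f}/h -> break-even at {break_even_utilization(rate):.0%}")
```

Under these assumptions, on-prem beats hyperscaler on-demand rates ($4–$8/hour) at moderate utilization, but against $1.50/hour specialist pricing the break-even exceeds 100 percent, meaning ownership never pays off within three years. This is consistent with the 60–70 percent threshold cited above landing between the two pricing tiers.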

GDPR and Data Control: The Norwegian Dimension
For Norwegian companies in regulated industries — especially the healthcare sector, financial institutions, and public agencies — data control and data sovereignty are often as important as the cost issue itself. Personal data and sensitive business data processed by AI systems may be subject to requirements that data not leave the EEA, or in practice, Norwegian jurisdiction.
For organizations subject to GDPR and sector-specific regulations, local infrastructure offers a legal predictability that public cloud can rarely match in full.
Cloud-based solutions from non-European providers have created legal uncertainty, particularly following the Schrems II ruling and the ongoing discussion surrounding the CLOUD Act and US government access to data. Local GPUaaS eliminates this uncertainty — provided the infrastructure is operated correctly.
Kubernetes as an Internal Cloud Platform
The core of the technical approach described in the Towards Data Science article is the use of Kubernetes to abstract GPU resources and offer them as an internal service across different teams and projects. This enables:
- Multi-tenant isolation: Different business units or projects can share GPU capacity without interfering with each other
- Dynamic scheduling: Workloads are prioritized and distributed efficiently based on need
- Cost visibility: Consumption can be tracked per team or application, providing a basis for internal pricing and budget management
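The cost-visibility point can be sketched as a simple chargeback ledger: GPU-hours are recorded per tenant (a team or Kubernetes namespace) and billed at an internal rate. The tenant names and the rate are hypothetical; in a real platform this data would come from the cluster's metrics pipeline rather than manual calls.

```python
# Minimal chargeback sketch for multi-tenant GPU accounting.
# Tenant names and the internal rate are illustrative assumptions.
from collections import defaultdict

INTERNAL_RATE_PER_GPU_HOUR = 2.10  # hypothetical internal price, USD

class GpuUsageLedger:
    """Tracks GPU-hours per tenant and prices them at a flat internal rate."""

    def __init__(self) -> None:
        self._gpu_hours: dict[str, float] = defaultdict(float)

    def record(self, tenant: str, gpus: int, hours: float) -> None:
        """Record a finished workload's GPU consumption for a tenant."""
        self._gpu_hours[tenant] += gpus * hours

    def invoice(self) -> dict[str, float]:
        """Cost per tenant at the internal rate."""
        return {t: h * INTERNAL_RATE_PER_GPU_HOUR for t, h in self._gpu_hours.items()}

ledger = GpuUsageLedger()
ledger.record("ml-research", gpus=4, hours=12.0)     # hypothetical team
ledger.record("fraud-detection", gpus=2, hours=6.5)  # hypothetical team
print(ledger.invoice())
```

A flat internal rate is the simplest design; a real platform might instead derive the rate from the amortized TCO, or vary it by priority class to discourage hoarding of shared capacity.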
Who Should Choose What?
There is no universal answer. Organizations with sporadic AI projects, a need for rapid scaling, or limited IT expertise will still find public cloud most appropriate. For those with stable, intensive workloads — and especially those with strict data handling requirements — the math points toward local or hybrid infrastructure.
The technical complexity, however, is not trivial. Building and operating an internal GPUaaS platform requires deep expertise in both hardware and Kubernetes orchestration — something many Norwegian organizations must either build internally or source externally.
The source for this article is Towards Data Science's technical review of GPUaaS architecture for enterprises, supplemented by independent market data on GPU prices and TCO analyses.
