AI Compute · Scaling Now

Beyond estimation. The AI measurement crisis has a hardware answer.

Mālama Labs is bringing rack-level hardware-signed power attestation to AI data centers. The same verification pipeline that produced 2,786+ on-chain SaveCards from Dallas now extends to the largest unverified emissions source in the world: AI compute.

Get updates → Request a pilot →

DCGM and Kepler tell you what the software says. Mālama proves it at the hardware line.

Watch · 60 seconds

The hardware answer to AI's biggest blind spot.

A 60-second walkthrough of why estimation breaks at AI scale and how rack-level attestation with cryptographic signing closes the gap. Same proven pipeline as Dallas, applied to the largest unmeasured emissions source on the planet.

Press play →
01 · The Crisis

The measurement gap is not a rounding error.

The AI industry has no standardized methodology for measuring its environmental footprint. Companies disclose what they choose, if they disclose at all. Software telemetry runs on the same machines it is measuring, so the trust loop never leaves the operator and the numbers cannot be independently verified.

The Federation of American Scientists concluded that Meta's actual emissions may be up to 19,000× higher than market-based reports suggest. That is the difference between climate disclosure as a marketing exercise and climate disclosure as physical reality.

Every model upgrade, every video generation, every reasoning step compounds the gap. A single 5-second video generation consumes ~944 Wh, a day of laptop power for a single clip. GPT-o3 uses ~39.2 Wh per prompt, roughly 2,500× the energy of a lightweight text classifier.

Voluntary disclosures will not close this gap. Software telemetry measured by the operator cannot close this gap. Hardware-signed measurement at the rack, external to the data plane, will.

02 · The Numbers

What aipower.fyi reveals.

Mālama's AI Energy Impact dashboard tracks 30 AI models with full methodology transparency. Every assumption is published with confidence levels and source citations. The contribution form is open. This is the estimation tier. Hardware sensors are next.

VIDEO GENERATION
944 Wh

Per 5-second clip. Equivalent to a day of laptop power. Facility-average water allocation of up to 1 L per clip.

GPT-o3 REASONING
39.2 Wh

Per prompt. ~2,500× more than lightweight classification (0.016 Wh). Frontier reasoning is energy-intensive by design.

EFFICIENCY GAP
1,888,880×

Between most- and least-efficient AI tasks. Model choice and workload pattern matter enormously.

AGENTIC COMPOUNDING
3 to 10×

Multi-step agent workflows compound cost per task. Next-generation AI is more agentic by default.

Explore Dashboard ↗ | View Methodology →
03 · The Hardware Answer

Directly measured. Cryptographically signed. Independently verifiable.

We deploy a dedicated attestation appliance between the rack PDU and the compute infrastructure. It continuously measures electrical load, pairs it with GPU and BMC telemetry, and signs every record in a tamper-resistant secure element before release. The signed record is anchored to a public ledger. Any auditor, today or a decade from now, can verify any record without trusting the operator.
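As an illustrative sketch of that record flow, under stated assumptions: the field names are hypothetical, and HMAC-SHA256 from the Python standard library stands in for the secure element's asymmetric signature so the example is self-contained.

```python
import hashlib
import hmac
import json

# Illustrative only: in a real appliance the key lives inside a
# tamper-resistant secure element and the scheme would be asymmetric.
DEVICE_KEY = b"secure-element-key-placeholder"

def sign_power_record(record: dict) -> dict:
    """Canonicalize a measurement record, hash it, and bind a signature."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return {
        "record": record,
        "sha256": hashlib.sha256(payload).hexdigest(),   # anchorable digest
        "signature": hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify(signed: dict) -> bool:
    """An auditor re-derives the signature without trusting the operator."""
    payload = json.dumps(signed["record"], sort_keys=True,
                         separators=(",", ":")).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

signed = sign_power_record({
    "rack_id": "rack-07",
    "t": "2026-04-01T12:00:00Z",
    "power_w": 41250.0,
})
```

Any holder of the device's public verification material can rerun `verify` against the anchored digest, which is what makes the record checkable a decade later.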

DIRECT POWER MEASUREMENT

Rack-level, per-workload attribution

High-frequency current and voltage measurement at the PDU feed, correlated with per-GPU telemetry (NVIDIA DCGM / NVML) and request traces emitted by the inference server (vLLM, Triton, TGI, or custom). The result is per-workload energy attribution with published uncertainty bounds — not per-inference point estimates, but defensible envelopes bound to a hardware-signed record.
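A minimal sketch of that attribution step, with illustrative numbers: rack energy measured at the PDU is apportioned to workloads in proportion to their per-GPU counter energy, and the meter's uncertainty class is carried through as an explicit envelope rather than dropped.

```python
# Illustrative sketch only; field names and values are placeholders.
def attribute_energy(rack_wh: float, meter_uncertainty: float,
                     gpu_wh_by_workload: dict) -> dict:
    """Apportion PDU-measured rack energy by per-GPU energy share."""
    total_gpu_wh = sum(gpu_wh_by_workload.values())
    out = {}
    for workload, gpu_wh in gpu_wh_by_workload.items():
        share = gpu_wh / total_gpu_wh     # workload's fraction of GPU energy
        point = rack_wh * share           # scaled to the measured rack total,
                                          # which folds in CPU/fan overhead
        out[workload] = {
            "wh": round(point, 1),
            "wh_low": round(point * (1 - meter_uncertainty), 1),
            "wh_high": round(point * (1 + meter_uncertainty), 1),
        }
    return out

# 10 kWh at the PDU, a ±1 % meter class, two workloads seen by the GPU counters.
report = attribute_energy(10_000.0, 0.01,
                          {"llm-serve": 6_000.0, "video-gen": 2_000.0})
```

Note the rack total exceeds the GPU counter total; the proportional scaling is what distributes non-GPU overhead across workloads instead of losing it.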

WATER & CARBON ATTRIBUTION

Measured at the facility, allocated to the workload

Cooling water and grid carbon are measured at the facility meter and the grid interconnect, not at the workload. Mālama's attestation appliance binds facility meters into the same signed envelope as rack power, producing a transparent allocation from hardware-measured facility metrics to workload-level reports. Allocation methodology is open (WUE × energy × EWIF for water; real-time locational carbon intensity for CO₂). What is measured is labeled measured. What is allocated is labeled allocated.
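The allocation arithmetic can be sketched as follows; the WUE and EWIF factors below are placeholders, not published Mālama or facility numbers, and the measured/allocated labeling mirrors the rule stated above.

```python
# Illustrative factors only (placeholders, not real facility values).
WUE_L_PER_KWH = 0.30    # onsite cooling water, from the facility water meter
EWIF_L_PER_KWH = 1.80   # upstream water embedded in grid electricity

def water_footprint(workload_kwh: float) -> dict:
    """Allocate facility-measured water factors to one workload's energy."""
    return {
        "onsite_L": workload_kwh * WUE_L_PER_KWH,      # measured basis
        "upstream_L": workload_kwh * EWIF_L_PER_KWH,   # allocated basis
        "total_L": workload_kwh * (WUE_L_PER_KWH + EWIF_L_PER_KWH),
    }

fp = water_footprint(1.0)   # one attributed kilowatt-hour
```

The point of keeping the two terms separate is that an auditor can trace `onsite_L` back to a metered quantity while `upstream_L` is explicitly an allocation from a grid-level factor.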

CARBON INTENSITY SYNC

Real-time locational, not annual averages

Every signed energy record is paired with the grid's carbon intensity at the time and location of consumption, not an annual average. The same kilowatt-hour carries radically different emissions weight depending on when and where it was drawn. Market-based reporting is supported with cryptographic receipts for power purchase agreements and guarantees of origin.
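The pairing reduces to a simple sum over timestamped records; this sketch uses hypothetical hourly intensity values to show why the same kilowatt-hours carry different weight by hour.

```python
# Illustrative grid carbon intensities in gCO2 per kWh, keyed by hour.
HOURLY_INTENSITY = {
    "2026-04-01T02:00Z": 180.0,   # overnight, wind-heavy mix
    "2026-04-01T18:00Z": 520.0,   # evening peak, gas-heavy mix
}

def emissions_g(records: list) -> float:
    """Location-based emissions: each (timestamp, kWh) record is weighted
    by the grid intensity in force when the energy was drawn."""
    return sum(kwh * HOURLY_INTENSITY[ts] for ts, kwh in records)

# The same 2 kWh, drawn at different hours:
off_peak = emissions_g([("2026-04-01T02:00Z", 2.0)])
on_peak = emissions_g([("2026-04-01T18:00Z", 2.0)])
```

An annual-average factor would report both draws identically; the timestamped pairing is what makes load-shifting to clean hours visible in the signed record.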

04 · How It Connects

One trust architecture. Two upstream data streams.

The AI compute product line is not a parallel stack. It is the same six-layer Reality Engine architecture that runs in Dallas, with a rack-form-factor attestation appliance as a second class of signing device feeding the same Hex Node validators.

UPSTREAM A · CARBON SAVECARDS

Genesis 300 outdoor nodes. Soil, atmospheric, ERW telemetry from biochar and weathering sites.

UPSTREAM B · AI COMPUTE PACKETS

Mālama Rack Attestation Appliance. PDU + BMC + DCGM telemetry, hardware-signed at the rack.

Converge
VALIDATED BY · HEX NODE NETWORK

Same validators. Same Cardano anchor. Same Proof-of-Truth consensus.

The Dallas Pilot Node #1 (op5pro-field-a) is the technology demonstration that proved Mālama's hardware-signing pipeline end to end. Its 2,786+ on-chain SaveCards establish credibility for the signing architecture itself. The AI compute product line extends that architecture to rack-level deployment.

AI compute pilot deployment targets Q2 2026.
05 · Who It's For

Three buyers and a public-good audience. One verified data stream.

Differentiated disclosure, shared substrate.

01 / DATA CENTER OPERATORS

AI Infrastructure & Sustainability Teams

Hardware-verified electricity consumption with workload-level attribution, formatted for EU CSRD ESRS E1, California SB-253 / SB-261, and SBTi-aligned reporting. Replace software-only telemetry with an independent hardware attestation your auditors can verify without trusting your operations team.

Scope 2 (location-based and market-based): rack-level energy consumption with timestamped grid carbon intensity, signed in silicon and anchored on-chain. Supports GHG Protocol Scope 2 Guidance with cryptographic receipts for PPAs and guarantees of origin.
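The dual-reporting arithmetic can be sketched as below; the grid factor and PPA coverage figures are illustrative placeholders, following the GHG Protocol convention that market-based reporting nets out contracted instruments against the residual grid mix.

```python
def scope2(kwh: float, grid_g_per_kwh: float,
           ppa_covered_kwh: float, ppa_g_per_kwh: float = 0.0) -> dict:
    """Dual Scope 2 report: location-based uses the local grid intensity;
    market-based applies the PPA factor to covered energy and the grid
    factor to the uncovered residual."""
    residual_kwh = max(kwh - ppa_covered_kwh, 0.0)
    return {
        "location_based_kg": kwh * grid_g_per_kwh / 1000,
        "market_based_kg": (ppa_covered_kwh * ppa_g_per_kwh
                            + residual_kwh * grid_g_per_kwh) / 1000,
    }

# 1 MWh of rack energy on a 400 gCO2/kWh grid, 70% covered by a
# zero-carbon PPA (illustrative numbers).
report = scope2(1000.0, 400.0, ppa_covered_kwh=700.0)
```

Both figures are derived from the same signed energy record, which is what lets an auditor reconcile the two accounting bases against one measurement.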

02 / ENTERPRISE PROCUREMENT & ESG

AI Procurement & Corporate Emissions Reporting

You procure AI compute from hyperscalers and inference platforms, and you need defensible Scope 3 Category 1 (purchased goods and services) and Category 11 (use of sold products) attribution per AI workload for corporate emissions reporting. Mālama provides the hardware-signed audit trail your ESG framework and external auditor require.

Scope 3 support: workload-level energy and carbon attribution derived from provider-side hardware measurement, reconciled to your consumption, with selective-disclosure proofs that do not expose provider trade secrets.

03 / HYPERSCALERS & AI PLATFORMS

Workload-Level Reporting Without Exposing Operations

We understand that your per-model, per-customer, per-cluster load curves are competitive information. Mālama's selective-disclosure architecture anchors cryptographic commitments to workload-level records on-chain while keeping the underlying detail in your control. Auditors verify aggregate claims against Merkle roots; zero-knowledge range proofs answer regulatory queries without revealing operational data.
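The Merkle-commitment side of that design can be sketched in a few lines: many workload records collapse to one on-chain root, and an inclusion proof reveals a single record plus sibling hashes, nothing else. (The zero-knowledge range proofs mentioned above are a separate mechanism, not shown here; record contents are illustrative.)

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold hashed records pairwise up to a single anchorable root."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate an odd tail
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes an auditor needs to recompute the root for one leaf."""
    level, proof = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))  # (sibling, on right?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(leaf, proof, root):
    """Recompute the root from one revealed record and its siblings."""
    node = h(leaf)
    for sibling, sib_is_right in proof:
        node = h(node + sibling) if sib_is_right else h(sibling + node)
    return node == root

records = [b"rack-07|llm-serve|7500Wh", b"rack-07|video-gen|2500Wh",
           b"rack-08|training|91000Wh", b"rack-09|idle|400Wh"]
root = merkle_root(records)         # this single hash is what goes on-chain
proof = inclusion_proof(records, 1)
```

The auditor who checks `verify_leaf` learns that the video-gen record is committed under the anchored root without seeing the other racks' load data.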

04 / RESEARCHERS & POLICYMAKERS

Academic, Standards, and Policy Communities

Mālama's open methodology, public dashboard, and verifiable data layer are a public good for the field. aipower.fyi is the contribution tier. The hardware layer is the verification tier. Both are documented, both are published.

06 · Roadmap

From dashboard to deployed sensors.

Milestone · Status
aipower.fyi dashboard — 30 AI models tracked, open methodology, contribution form active · LIVE
Reference attestation appliance — PDU + BMC + DCGM integration reference design, hardware-signed, open methodology note published · Q2 2026
Rack sensor pilot — first attestation appliance deployment in a partner data center facility · Q2 2026 · Partner LOI in place
Hex Node validator integration — AI compute packets validated by the same Hex Node network that handles Carbon SaveCards · Q3 2026
Selective-disclosure module — zero-knowledge range proofs for hyperscaler aggregate reporting · Q3 2026
CSRD / SB-253 integration — pre-built export schemas for EU ESRS E1 and California CARB reporting · Q4 2026
Multi-site deployment program — hyperscaler and colo operator expansion · Q4 2026
Get updates → Talk to the Team →