Building The Internet of GPUs

We are excited to announce our investment in io.net, the leading distributed compute marketplace for AI workloads. We led the seed round and participated in the Series A. In total, io.net has raised $30M from Multicoin, Hack VC, 6th Man Ventures, Modular Capital, and a syndicate of deeply connected angel investors to make on-demand, production-ready compute markets a reality.

I first met io.net’s founder, Ahmad Shadid, at the Austin Solana Hacker House in April 2023, and was immediately drawn to his singular focus on democratizing access to compute resources for ML workloads.

In the time since, the io.net team has executed against the thesis at light speed. Today, the network has aggregated tens of thousands of distributed GPUs, and facilitated more than 57,000 compute hours for AI enterprises. We are thrilled to partner with them as they power the AI renaissance of the coming decade.

The Global Compute Shortage

Demand for AI compute is growing at a breakneck pace and shows no sign of slowing. Datacenter revenues for AI workloads topped $100 billion in 2023, yet demand for AI outstrips chip supply even in the most conservative scenarios.

New data centers capable of housing this class of hardware require massive upfront investments during a time of high interest rates and scarce capital. The crux of the issue lies in production constraints on advanced chips like the NVIDIA A100 and H100. While GPU performance increases and costs fall steadily, the physical manufacturing process cannot accelerate rapidly enough; shortages of raw materials, components, and production capacity limit the pace of growth.

Despite the promise of AI, its physical footprint grows larger by the day, demanding space, electricity, and cutting-edge gear that strain budgets around the world. io.net lays the path for a world in which our efforts to accelerate are not constrained by the limitations of the current supply chain.

io.net is a classic instantiation of the DePIN thesis: using token incentives to structurally lower the cost of acquiring and retaining supply-side resources, and ultimately reducing costs for end-consumers. The network brings together a vast, heterogeneous supply of GPUs into a shared pool for AI developers and companies to tap into — today, the network is powered by thousands of GPUs from datacenters, mining farms, and consumer-grade devices.

Although the aggregation of these resources is valuable, AI workloads do not automatically scale from centralized, enterprise-grade hardware to distributed networks. There have been several attempts at building distributed compute networks in the history of crypto, most of which have generated little-to-no meaningful demand-side volume.

The problem of orchestrating and scheduling workloads across heterogeneous hardware, with different memory, bandwidth, and storage configurations, is nontrivial. We believe the io.net team has the most practical solution in the market today for making this hardware aggregation useful for end customers and economically productive.

Paving a Way Forward with Clustering

In the history of computing, software frameworks and design patterns mold themselves around the hardware configurations available in the market. Most frameworks and libraries for AI development rely heavily on centralized hardware resources, but there has been significant progress in the last decade on distributing these workloads across discrete instances of geographically distributed hardware.

io.net takes the latent hardware that exists in the world, deploys bespoke networking and orchestration layers over it, and brings it online to create a hyper-scalable Internet of GPUs. The network leverages Ray, Ludwig, Kubernetes, and a variety of other open-source distributed computing frameworks to allow machine learning engineering and operations teams to scale their workloads across a network of GPUs with minimal adjustments.

ML teams can parallelize workloads across io.net GPUs by spinning up on-demand clusters and leveraging these libraries to handle orchestration, scheduling, fault tolerance, and scaling. For instance, if a group of motion graphics designers contribute their at-home GPUs to the network, io.net can assemble them into a cluster whose collective compute is accessible to a developer training an image diffusion model anywhere in the world.
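As a rough sketch of what that programming model looks like with Ray, the example below fans a GPU-bound task out across a remote cluster. The cluster address, task body, and prompts are hypothetical placeholders, not io.net's actual API or onboarding flow; they simply show how little code changes when the GPUs behind the cluster are distributed.

```python
# Minimal sketch: parallelizing a GPU-bound task across a remote Ray cluster.
# The address and task body are illustrative placeholders, not io.net's API.
import ray

# Connect to an existing cluster via the Ray client (address is hypothetical).
ray.init(address="ray://<cluster-head-address>:10001")

@ray.remote(num_gpus=1)  # schedule each task on a worker that has a GPU
def run_inference(prompt: str) -> str:
    # Placeholder for loading a model and generating an image on the worker's GPU.
    return f"generated image for: {prompt}"

prompts = [
    "a geographically distributed artificial intelligence supercomputer",
    "an internet of GPUs",
]

# Ray handles scheduling, fault tolerance, and result collection across workers.
results = ray.get([run_inference.remote(p) for p in prompts])
print(results)
```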

BC8.ai, a fine-tuned variant of Stable Diffusion trained entirely on io.net hardware, is one illustration of this. The io.net explorer shows real-time inferences and payouts to contributors to the network.

Prompt: a geographically distributed artificial intelligence supercomputer

Each inference is recorded on-chain to provide provenance. The payment for this particular image generation went to a cluster of 6 RTX 4090s, which are consumer-grade GPUs for gaming.

Today, there are tens of thousands of devices on the network across mining farms, underutilized datacenters, and Render Network consumer nodes. In addition to creating net new GPU supply, io.net is able to compete with traditional cloud providers on cost, often offering resources at lower prices.

They achieve these cost savings by outsourcing GPU coordination and overhead to the protocol. Cloud providers, on the other hand, mark up infrastructure costs to cover employee expenses, hardware maintenance, and datacenter overhead. The opportunity cost for consumer card clusters and mining farms is substantially lower than what hyperscalers are willing to accept, so a structural arbitrage exists that dynamically prices resources on io.net below the ever-increasing cloud rate.

Building the Internet of GPUs

io.net has the unique advantage of remaining asset light and lowering the marginal cost of serving any given customer to practically zero, while simultaneously owning the relationship directly with both the demand and supply sides of the marketplace. They are in a prime position to serve the tens of thousands of new enterprises that need access to GPUs to build competitive products that everyone will one day interact with.

We are excited to partner with Ahmad and the rest of the team as they build out and accelerate the development of AI across the world. If you are building compute intensive applications, you can access resources from io.net today by signing up here. If you have latent GPUs, you can also contribute them to the network today and earn points for doing so.

The team is hiring for sales, engineering, and design roles. Please reach out to careers@io.net.
