NVIDIA & Telecom Giants Partner on AI Grids for Distributed Inference

The world of AI is rapidly expanding beyond centralized data centers, pushing intelligence closer to where data is generated and consumed. In a significant move set to redefine the landscape of distributed computing, NVIDIA is collaborating with leading telecommunications companies such as AT&T, T-Mobile, Comcast, and Spectrum to build powerful AI grids. These partnerships aim to transform existing network infrastructure into sophisticated platforms capable of optimizing AI inference on distributed networks, bringing real-time AI capabilities to a massive scale.

What Happened: Turning Networks into Intelligent Platforms

At NVIDIA GTC 2026, NVIDIA and its telecom partners announced a pivotal shift: telecom operators are no longer just carriers of data, but active participants in the AI revolution. By leveraging their vast network footprints, which include approximately 100,000 distributed network data centers globally, these companies are unlocking a staggering potential of over 100 gigawatts of new AI capacity. This strategic move transforms passive network real estate into an active, geographically distributed computing platform that runs AI inference closer to users, devices, and data, promising lower latency, faster response times, and a lower cost per token for AI applications.
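To make the idea concrete, the core scheduling problem behind an AI grid can be sketched in a few lines: route each inference request to the nearest edge site that still has GPU capacity, falling back to a central data center otherwise. This is an illustrative toy, not NVIDIA's or any operator's actual scheduler; the site names, latency budget, and capacity numbers are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EdgeSite:
    name: str
    latency_ms: float      # estimated round-trip time from the user
    free_gpu_slots: int    # remaining inference capacity at this site

def pick_site(sites, max_latency_ms=10.0):
    """Return the lowest-latency site within budget that has free capacity."""
    candidates = [s for s in sites
                  if s.free_gpu_slots > 0 and s.latency_ms <= max_latency_ms]
    if not candidates:
        return None  # caller falls back to a regional or central data center
    return min(candidates, key=lambda s: s.latency_ms)

sites = [
    EdgeSite("edge-a", 4.2, 0),     # closest, but fully loaded
    EdgeSite("edge-b", 7.8, 3),     # nearby with spare GPUs
    EdgeSite("central", 42.0, 128), # plenty of capacity, too far for the budget
]
print(pick_site(sites).name)  # edge-b
```

The design choice this mirrors is the article's central claim: placing capacity at many small sites lets a request stay under a tight latency budget even when the single closest site is saturated.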

Key Players Driving the AI Grid Revolution

Several major operators are already deploying or expanding their AI grid initiatives:

  • AT&T for Mission-Critical IoT: As a leader in connected IoT, AT&T is teaming up with Cisco and NVIDIA to build an AI grid specifically for IoT applications. This initiative will support mission-critical, real-time public-safety use cases, such as those leveraging Linker Vision, by moving AI inference closer to where data is created, ensuring faster detection, alerting, and response while maintaining data control at the network edge.

  • Comcast for Hyper-Personalized Experiences: Comcast is transforming one of the nation’s largest low-latency broadband footprints into an AI grid for delivering real-time, hyper-personalized experiences. Through a collaboration with NVIDIA, Decart, Personal AI, and HPE, Comcast has validated its AI grid for conversational agents, interactive media, and NVIDIA GeForce NOW cloud gaming. This setup achieves significantly higher throughput and a lower cost per token, even during peak demand, ensuring responsive and economical services.

  • Spectrum’s Edge AI Infrastructure: Spectrum is deploying an AI grid infrastructure that spans over 1,000 edge data centers, boasting hundreds of megawatts of capacity. This extensive network can reach 500 million devices within 10 milliseconds, with an initial focus on high-resolution graphics rendering using remote GPUs embedded across its fiber-powered, low-latency network.

  • Akamai’s Global Inference Cloud: Akamai is significantly expanding its globally distributed Akamai Inference Cloud across more than 4,400 edge locations. This expansion involves deploying thousands of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs to power low-latency, real-time AI experiences for diverse applications in gaming, media, financial services, and retail.

  • Indosat Ooredoo Hutchison in Indonesia: Indosat is building an AI grid across Indonesia, connecting its sovereign AI factory with distributed edge and AI-RAN sites. This enables the provision of localized AI services like Sahabat-AI, bringing culturally relevant and compliant AI closer to millions of Indonesians.

  • T-Mobile’s Edge AI Exploration: T-Mobile is also working with NVIDIA to explore edge AI applications, utilizing the NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs to support emerging AI-RAN and edge inference use cases at distributed network locations.
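The "lower cost per token" claim in the Comcast bullet above is qualitative; a back-of-the-envelope calculation shows the shape of the argument. All dollar figures and throughput numbers below are hypothetical, chosen only to illustrate how amortized GPU cost and throughput combine into a per-token price.

```python
def cost_per_million_tokens(gpu_hour_cost_usd, tokens_per_second):
    """Amortized serving cost in USD per one million generated tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hour_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical: an edge GPU amortized at $2.00/hr serving 5,000 tokens/s.
edge = cost_per_million_tokens(2.00, 5000)

# Hypothetical: a central GPU at $3.50/hr serving 6,000 tokens/s, plus
# an assumed $0.02 per million tokens of wide-area backhaul/egress.
central = cost_per_million_tokens(3.50, 6000) + 0.02

print(f"edge ${edge:.3f}/M tok vs central ${central:.3f}/M tok")
```

Under these made-up inputs the edge site comes out cheaper per token; in practice the comparison depends entirely on real utilization, hardware amortization, and traffic mix, which the article does not disclose.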

Why It Matters: Reshaping AI Delivery

This widespread adoption of AI grids represents more than just an infrastructure upgrade; it's a fundamental shift in how AI is delivered. By positioning AI processing at the network edge, these initiatives promise to unlock a new class of AI-native applications that are real-time, hyper-personalized, and token-intensive. This move significantly reduces latency, improves efficiency, and allows for greater privacy by keeping sensitive data closer to its origin. The collaboration positions telecom networks at the forefront of scaling AI, enabling an ecosystem where intelligence can truly be everywhere.

Read more:

For a deeper dive into how NVIDIA and telecom leaders are building AI grids to optimize inference on distributed networks, visit the official NVIDIA Blog Post.