Intelligent Computing Center for AI Power
What exactly is an intelligent computing center for AI power? An intelligent computing center is a type of data center designed specifically for artificial intelligence (AI) computing tasks.
Broadly, data centers can be divided into three categories:
General computing centers, which handle everyday computing needs.
Supercomputing centers, which focus on extremely high-performance tasks such as scientific research.
Intelligent computing centers, which are optimized for AI-focused workloads.
Since 2023, the rise of large AI-generated content (AIGC) models such as ChatGPT and Sora has triggered a global surge in demand for AI computing power. To stay competitive in this AI-driven era, access to powerful and scalable computing infrastructure has become essential. As a result, intelligent computing centers are emerging as a critical focus of investment and development worldwide.
In China, this growth is especially evident. More than 20 cities, including Wuhan, Chengdu, Changsha, Nanjing, and Hohhot, have already established intelligent computing centers. Projections show that by 2025, this number will exceed 50 nationwide.
Must Read: What Is Computing Power? How Humans and Machines Process Information
These centers rely on specialized AI computing hardware designed to process intensive algorithms efficiently. They support a wide range of applications, including:
Computer vision for image and video recognition
Natural language processing (NLP) for text and speech analysis
Machine learning for model training and predictive analytics
Typical tasks handled in intelligent computing centers include image recognition, speech recognition, text analysis, and large-scale model training and inference.
Intelligent Computing Server: What Is the Difference?
Intelligent computing servers are the main hardware used in intelligent computing centers. The key difference between these and traditional general-purpose servers lies in the computing chips they use.
Traditional general-purpose servers rely mainly on the CPU as the core processor. Some are equipped with GPUs (graphics processing units), while others are not. Even when GPUs are included, there are usually only one or two, and they are used mostly for tasks such as 3D graphics rendering rather than large-scale AI workloads.
By contrast, intelligent computing servers also include CPUs to support the operating system, but they are designed with additional, more powerful chips to handle AI-specific tasks. These typically include 4 to 8 GPUs, NPUs (neural network processing units), or TPUs (tensor processing units). The real strength of these servers comes from the combined computing power of these chips, which are built for intensive AI workloads.
This setup, often called a heterogeneous computing architecture (CPU+GPU or CPU+NPU), allows the server to make the most of each chip’s advantages in terms of performance, cost, and energy efficiency.
Why this matters: chips like GPUs, NPUs, and TPUs contain thousands of cores and are excellent at parallel computing. Since AI algorithms involve countless simple matrix operations, they need this kind of powerful parallel processing.
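To make that parallelism concrete, here is a minimal pure-Python sketch (not from the original article) of the matrix multiplication at the heart of AI workloads. Every output element is an independent dot product, which is exactly the kind of work a GPU's thousands of cores can compute simultaneously.

```python
# Minimal illustration: in C = A x B, every element C[i][j] depends only on
# row i of A and column j of B, so all of them could be computed in parallel.
# A CPU works through these dot products a few at a time; a GPU/NPU/TPU runs
# thousands of them simultaneously.
def matmul(A, B):
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Real AI frameworks hand this same operation to the accelerator boards instead of looping on the CPU, which is where the performance gap comes from.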
In practice, GPUs, NPUs, and TPUs are manufactured as separate boards and slotted into intelligent computing servers. Once powered on, the server's scheduler dispatches AI tasks to these boards for execution.
In addition to using different chips, AI servers are designed with several enhancements to unleash their full performance and ensure stable, long-term operation. These improvements cover areas such as system architecture, storage, heat dissipation, and topology.
For example:
The DRAM capacity of an intelligent computing server is usually eight times greater than that of a traditional server.
The NAND capacity is about three times larger.
Even the PCB circuit boards have far more layers compared to those in general-purpose servers.
Naturally, this scale of components and design comes at a cost. The price of an intelligent computing server can be dozens of times higher than that of a general-purpose server.
To illustrate, China Mobile recently announced the winning bids for its centralized procurement of new intelligent computing centers for 2024–2025. The contract covered 8,054 intelligent computing servers, with a total value of 19.104 billion RMB (before tax). That means the average cost per server was about 2.372 million RMB. By comparison, a traditional general-purpose server usually costs anywhere from 10,000 to 100,000 RMB, depending on brand and configuration.
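As a sanity check, that average can be reproduced in a few lines (illustrative arithmetic only, using the procurement figures quoted above):

```python
# Figures quoted above: 8,054 servers for 19.104 billion RMB (before tax).
total_value_rmb = 19.104e9
num_servers = 8054

avg_cost = total_value_rmb / num_servers
print(f"Average: {avg_cost / 1e6:.3f} million RMB per server")  # ~2.372

# Even against the top of the general-purpose range (100,000 RMB),
# that is a ratio of roughly 24x.
print(f"Ratio: {avg_cost / 100_000:.1f}x")
```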
Another key difference is power consumption. Because of their heavy reliance on computing power boards, AI servers consume significantly more energy than traditional ones. Take NVIDIA GPUs as an example:
A single A100 GPU draws about 400 W.
A single H100 GPU can draw up to 700 W.
So, an AI server equipped with eight GPUs can consume 3.2–5.6 kW of power just for the GPUs alone. By contrast, a typical general-purpose server consumes only around 0.3–0.5 kW in total.
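Those totals are simple multiplication; a short sketch using only the per-GPU figures quoted above makes the gap explicit:

```python
# Per-GPU power draws quoted above; totals cover the GPUs alone.
gpu_power_w = {"A100": 400, "H100": 700}
num_gpus = 8

totals_kw = {model: num_gpus * watts / 1000
             for model, watts in gpu_power_w.items()}
for model, kw in totals_kw.items():
    print(f"8x {model}: {kw} kW for the GPUs alone")
# versus roughly 0.3-0.5 kW for an entire general-purpose server.
```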
In terms of appearance, there isn’t much difference. Both intelligent computing servers and general-purpose servers follow standard form factors and can be installed in a 42U rack. However, AI servers are often taller, especially when packed with additional AI computing boards, reaching 4U, 5U, or even 10U in height.
It’s also worth noting that AI servers are further classified based on workload:
Training servers
Inference servers
Integrated training + inference servers
Each type varies in architecture and size, with training servers usually being the largest because they require more AI boards to handle complex model training.
Will the Intelligent Computing Center Replace the General Computing Center?
With intelligent computing centers becoming increasingly popular, a common question arises: Will they replace general computing centers?
The short answer is no.
Right now, AI-driven intelligent computing is in the spotlight. As a result, construction of intelligent computing centers is expanding rapidly, and the industry is paying close attention. But it’s important to understand that most computing tasks in society are still carried out in traditional general-purpose data centers.
Think about everyday life:
Messaging apps
Watching videos or playing games
Booking taxis, shopping online, or reserving tickets
All of these rely on the computing power of general-purpose data centers, not AI-focused ones.
The same is true for corporate IT systems like OA, CRM, and ERP; financial platforms used by banks, insurers, and securities firms for transactions and data storage; digital systems in hospitals and schools; and government e-services. These core functions depend on general-purpose data centers.
In short, general data centers are the backbone of the national economy, and demand for them will remain strong for the long term.
Even with rapid growth in AI infrastructure, intelligent computing centers account for only a small fraction of total data center capacity.
Looking at computing power tells the same story. The China Academy of Information and Communications Technology estimates that by 2025, China’s total computing power will reach 320 EFLOPS, with intelligent computing accounting for 35% (112 EFLOPS). Based on Hohhot’s data, each rack provides 0.008375 EFLOPS, meaning 112 EFLOPS equates to around 13,373 racks.
So, while intelligent computing delivers immense computing power, the actual number of racks and data centers remains relatively small—likely no more than 10% of the total.
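The estimate behind these numbers can be reproduced directly (the per-rack figure is the Hohhot-based value quoted above):

```python
# CAICT projection: 320 EFLOPS total by 2025, 35% of it intelligent computing.
total_eflops = 320
intelligent_share = 0.35
eflops_per_rack = 0.008375  # per-rack capacity, based on Hohhot's data

intelligent_eflops = total_eflops * intelligent_share  # 112 EFLOPS
racks = intelligent_eflops / eflops_per_rack
print(round(racks))  # 13373
```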
Is It Feasible to Transform a General Computing Center into an Intelligent Computing Center?
Under the guidance of the “dual carbon” policy, the approval process for new data centers has become stricter, and compliant existing data center assets are now scarce. This raises an important question: Is it possible to transform general-purpose data centers into intelligent computing centers?
The answer is yes.
Shared Mission and Core Components
At their core, both general computing centers and intelligent computing centers share the same mission: providing a stable environment for server hosting, including reliable cooling and power supply. Fundamentally, both types of centers rely on the same main components.
Typically, IT equipment such as servers and communication devices (for example, switches) is owned and supplied by customers. Data center service providers build and maintain the underlying infrastructure, also called supporting equipment, which keeps these core systems running continuously.
The underlying infrastructure in a data center is generally categorized into four main areas: air (cooling), fire (fire protection), water (moisture control), and electricity, including mains power, uninterruptible power supply, and diesel generators. This can also be subdivided into power supply and distribution systems, uninterruptible power supply systems, terminal power distribution systems, auxiliary power systems, and air conditioning systems.
Power Density Considerations
Intelligent computing servers consume significantly more power than general-purpose servers. As a result, the power density per cabinet in an intelligent computing center is much higher. Data shows that a single cabinet in an intelligent computing center typically requires more than 30 kW of power, and in some cases can exceed 100 kW. By comparison, cabinet density in traditional data centers generally ranges between 6 kW and 15 kW.
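A quick back-of-the-envelope calculation shows why. Assuming a hypothetical AI server drawing 5.6 kW (the 8x H100 GPU budget from earlier, which ignores CPU, fans, and other components), a traditional cabinet runs out of power after just a couple of servers:

```python
# Hypothetical server draw: the 5.6 kW GPU budget from the 8x H100 example.
# A real server would draw more once CPUs, fans, etc. are included.
ai_server_kw = 5.6
general_cabinet_kw = 15   # top of the traditional 6-15 kW range
ai_cabinet_kw = 30        # typical intelligent-computing cabinet

servers_in_general = int(general_cabinet_kw // ai_server_kw)
servers_in_ai = int(ai_cabinet_kw // ai_server_kw)
print(servers_in_general)  # 2
print(servers_in_ai)       # 5
```

Two servers per cabinet leaves most of the rack space stranded, which is why the power supply must be re-planned rather than reused as-is.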
Transforming a general-purpose data center into an intelligent computing center requires careful recalculation and redesign of the facility’s overall power supply capacity.
If no major capacity expansion is needed, the transformation process is relatively straightforward. It mainly involves replacing traditional servers with intelligent computing servers and associated network equipment, along with rewiring to accommodate the new configuration.
In summary, while general-purpose data centers can be converted into intelligent computing centers, the process hinges on power density, cooling capabilities, and infrastructure readiness. With proper planning and design, such a transformation is entirely feasible.
Capacity Expansion and Renovation
If capacity expansion is required, the power delivered per cabinet must increase within the same floor area. The transformation then involves purchasing and installing supporting equipment for the power supply and cooling systems, which adds work and extends the project timeline.
Expansion and renovation inevitably incur costs. Deciding whether to transform a traditional general computing center into an intelligent computing center depends not only on objective factors, such as restrictions on new construction, but also on the input-output ratio. In other words, the key question is whether the transformed intelligent computing center can generate greater economic returns.
Final Words
Data centers are crucial ICT infrastructure and the foundation of society’s computing power. They continuously deliver computing resources to meet the demands of our digital lives and support the development of numerous industries.
Over time, the AI boom will stabilize, and the pace of intelligent computing center construction will slow. Making full use of existing intelligent computing resources and ensuring AI generates tangible value will become an increasingly important task.
Rational allocation of general computing, intelligent computing, and supercomputing, along with coordinated development of various types of computing power, will lay a solid foundation for the growth of the digital economy and accelerate society’s transition into the intelligent era.



