ASICs vs. GPUs: In today’s article, we will focus on ASICs. Judging by the capital markets, ASICs are rising rapidly and challenging the dominance of GPUs in AI computing. Broadcom, the leading ASIC stock, has seen its share price surge from $180 to $250, pushing its market value beyond $1 trillion. In contrast, Nvidia’s momentum appears to be fading, with its stock price steadily declining to below $130.
So, has the ASIC era truly arrived? Will Broadcom replace Nvidia and become the new king of AI?
ASICs vs. GPUs: What Are ASICs and GPUs?
Both ASICs and GPUs are semiconductor chips designed for computing tasks. Because both can be used for AI workloads, they are often called AI chips.
More precisely, computing chips include the familiar CPU as well as GPUs, FPGAs, and ASICs.
In the semiconductor industry, chips are generally classified into digital and analog. Digital chips comprise most of the market, accounting for approximately 70%.
Digital chips can be categorized into logic, memory, and microcontroller units (MCUs). CPUs, GPUs, FPGAs, and ASICs fall under the logic chip category.
A logic chip is a type of computing chip that contains various logic gate circuits, enabling it to perform calculations and logical operations.
Among the four major chip types, CPUs and GPUs are general-purpose chips capable of handling various tasks. The CPU, in particular, is an all-around performer, featuring a high single-core clock speed that allows it to manage diverse computing tasks. This versatility makes it the main processor in most systems.
Initially designed for graphics processing, the GPU has thousands of cores, making it ideal for parallel computing: performing many simple calculations simultaneously. Graphics processing computes many pixels at once, and AI computing likewise relies on massive parallel computation, so it is a natural fit for GPUs.
AI computing consists of tasks such as matrix multiplication, convolution, recurrent layers, and gradient operations, all of which parallelize well and all of which GPUs excel at. CPUs, by contrast, are poorly suited to these workloads, a weakness that has contributed to Intel’s stock price dropping below $20.
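To make that parallelism concrete, below is a minimal CUDA sketch of a matrix multiply, the workhorse operation of AI computing. It is an illustration rather than production code: the matrix size, the one-thread-per-element mapping, and the use of unified memory are all simplifying assumptions, not details from the article.

```cpp
// matmul.cu: one GPU thread computes one output element, so a 512 x 512
// multiply fans out into 262,144 concurrent threads. Build: nvcc matmul.cu
#include <cstdio>
#include <cuda_runtime.h>

#define N 512  // illustrative size

// Each thread computes C[row][col] = dot(A[row, :], B[:, col]).
__global__ void matmul(const float *A, const float *B, float *C, int n) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

int main() {
    size_t bytes = N * N * sizeof(float);
    float *A, *B, *C;
    // Unified memory keeps the sketch short; real code often stages
    // data explicitly with cudaMemcpy.
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 block(16, 16);                       // 256 threads per block
    dim3 grid((N + 15) / 16, (N + 15) / 16);  // enough blocks to cover C
    matmul<<<grid, block>>>(A, B, C, N);
    cudaDeviceSynchronize();

    // Every element is a dot product of N ones and twos: 2 * N = 1024.
    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * N);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

A CPU would work through the same triple loop largely serially; the GPU dispatches every output element to its own thread, which is exactly the shape of workload that matrix-heavy AI models produce.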
Since 2023, the AI boom has surged, with most companies relying on Nvidia’s GPU clusters for AI training. When optimized properly, a single GPU can deliver computing power equivalent to dozens or even hundreds of CPU servers. This massive demand has driven Nvidia’s stock price up severalfold and led to persistent supply shortages. Now, let’s take a look at ASICs and FPGAs.
ASIC (Application-Specific Integrated Circuit)
An ASIC is a chip designed for a specific task. The formal definition is: an integrated circuit specially designed and manufactured to meet the needs of a particular user or a specific electronic system. Well-known examples of ASICs include Google’s TPU (Tensor Processing Unit), Bitcoin mining chips, Intel’s Gaudi 2, IBM’s AIU, and AWS’s Trainium.
In recent years, DPUs (Data Processing Units) and NPUs (Neural Processing Units) have also gained popularity; both are likewise types of ASICs.
FPGA (Field Programmable Gate Array)
An FPGA is a semi-custom chip, often referred to as a “universal chip”. After manufacturing, an FPGA can be reprogrammed multiple times to perform different digital logic functions based on the user’s needs.
ASIC vs. FPGA
The main difference between ASIC and FPGA lies in customization and flexibility:
- ASICs are fully customized chips with fixed functions that cannot be altered.
- FPGAs are semi-custom chips whose functions can be reconfigured, allowing far greater flexibility.
FPGAs don’t require the expensive tape-out process, but because they are reprogrammable they carry redundant circuitry, which wastes resources when the chip is used for a single fixed purpose. In large-scale production, FPGAs also tend to be more expensive and less energy-efficient than ASICs.
As a result, FPGAs are primarily used for product prototype development, design iterations, and some low-volume specific applications. They are also frequently used in training, teaching, and ASIC verification.
Therefore, while FPGAs are versatile, they are generally unsuitable for large-scale AI computing shipments.
The Battle of AI Chips: GPU vs. ASIC
Ultimately, the battle in AI chips comes down to GPUs vs. ASICs, each with strengths and weaknesses depending on the task.
GPU vs. ASIC: Which One Is Better?
When it comes to AI computing, both ASICs and GPUs offer unique advantages. However, they differ significantly in design and functionality, and the choice between them often depends on the specific use case.
ASIC: Tailored for Efficiency
An ASIC (Application-Specific Integrated Circuit) is a highly specialized chip designed to optimize performance for a specific task. Its computing power and efficiency are finely tuned to the task it is meant to handle. From the number of cores and the ratio of logical computing units to control units, cache, and chip architecture, everything about an ASIC is customized for maximum performance.
This customization means ASICs can beat general-purpose chips like GPUs on size, power consumption, reliability, confidentiality, and energy efficiency. For example, AWS’s Trainium 2 (an ASIC) outperforms Nvidia’s H100 GPU on inference tasks, completing them faster while improving cost-effectiveness by 30-40%. Trainium 3, slated for release next year, is expected to double that performance while improving energy efficiency by 40%.
GPU: Nvidia’s Dominance in AI
GPUs, on the other hand, have enjoyed enormous popularity over the past few years, particularly in AI computing, and Nvidia has capitalized on the boom. Breakthroughs in deep learning, notably by AI pioneer Geoffrey Hinton and his team, spurred Nvidia to focus on AI computing and to develop increasingly powerful GPUs.
Generation after generation, Nvidia increased core counts and operating frequencies, enabling faster training and rapid product releases. This growth in performance also brought rising power consumption, a challenge Nvidia has addressed with active cooling techniques such as liquid cooling.
Nvidia’s success goes beyond hardware: its CUDA software suite, a crucial development toolkit, has solidified its position in AI computing. CUDA lets developers, even beginners, quickly build and optimize AI models, forming a strong ecosystem around Nvidia’s GPUs.
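As a hedged illustration of what that ecosystem buys developers, the sketch below redoes the earlier matrix multiply with cuBLAS, one of CUDA’s standard libraries: the hand-written kernel collapses into a single heavily tuned library call. The sizes and setup are assumptions carried over from the previous example.

```cpp
// gemm_cublas.cu: the same multiply via Nvidia's cuBLAS library.
// Build: nvcc gemm_cublas.cu -lcublas
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

#define N 512  // illustrative size

int main() {
    size_t bytes = N * N * sizeof(float);
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);
    float alpha = 1.0f, beta = 0.0f;
    // cuBLAS is column-major; passing the operands as (B, A) yields the
    // row-major product C = A * B without any explicit transposes.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                N, N, N, &alpha, B, N, A, N, &beta, C, N);
    cudaDeviceSynchronize();

    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * N);
    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

Libraries like this, along with compilers, profilers, and years of community code, are what make the CUDA ecosystem hard to displace: an ASIC vendor must rebuild or bridge that software stack before raw chip efficiency can matter.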
Challenges for ASIC
Despite their many advantages, ASICs face challenges in AI computing: high costs, long development cycles, and substantial development risk. With AI algorithms evolving rapidly, an ASIC’s long development timeline makes it hard to keep up, so GPUs remain the preferred choice for AI training, which demands enormous computational power.
For inference tasks, however, which require less raw power and parallelism, ASICs and FPGAs (Field-Programmable Gate Arrays) become more appealing. Inference means running a trained AI model to make predictions or decisions, and its computational demands are lower than those of training.
Why GPUs Dominate (For Now)
Currently, GPUs dominate the AI chip market with over 70% market share. Nvidia’s early bet on AI and the rise of GPUs for AI training have solidified its position as the industry leader.
However, inference AI computing is on the rise, driven by the increasing demand for cost-effective and power-efficient solutions. This shift in demand is opening opportunities for ASICs to gain more traction. As more companies look to diversify away from Nvidia, ASICs are poised to take on a larger share of the market.
The Future of ASIC in AI
The shift toward inference has begun, and ASIC chips are seeing increased interest due to their cost-effectiveness and energy efficiency in inference tasks. This trend is evident in the rising stock prices of Broadcom and Marvell, as they invest heavily in developing AI ASIC chips.
For instance, Broadcom is working on AI chips for major clients and expects its AI chip business revenue to reach $15-20 billion by 2025. As the demand for inference-based AI grows, ASICs are set to become a more integral part of the AI ecosystem.
Will ASICs Replace GPUs in AI?
So, is it that easy to replace GPUs with ASICs? Will ASICs replace GPUs soon?
Obviously, no.
Given Nvidia’s advantages in performance, ecosystem, and integration, its GPUs will remain the top choice for AI chips in the short to medium term. Nvidia’s end-to-end software, hardware, and networking solutions are very mature, and the company’s technical and financial strength is immense. GPU installed bases and shipment volumes are still high, and Nvidia’s market position is difficult to shake.
Although ASICs are rising rapidly, they still need time to mature. The development of AI ASIC chips also carries significant risks. Even if development is successful, it will take time for users to adopt them.
This means GPUs and ASICs will coexist for the foreseeable future, with users choosing whichever chip best fits their scenario. Developing proprietary ASICs also gives manufacturers leverage when negotiating with Nvidia.
Further out, the picture is harder to predict; the potential impact of quantum computing on the field remains a hot topic of discussion.