4 Types of AI Chips
The differences between traditional servers and AI servers are:
1. Traditional servers rely on the CPU as their main compute engine.
2. In addition to the CPU, AI servers also use GPUs, FPGAs, and ASICs to accelerate computation, as the sketch below shows.
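Below is a minimal sketch of that CPU-plus-accelerator setup, assuming PyTorch is installed (PyTorch, the layer size, and the batch size are illustrative choices, not from the original post): the workload runs on a GPU when one is present and falls back to the CPU otherwise.

```python
import torch

# Prefer the GPU accelerator found in an AI server; fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small dense layer and a batch of inputs, placed on the chosen chip.
model = torch.nn.Linear(1024, 256).to(device)
inputs = torch.randn(32, 1024, device=device)

# The forward pass runs on whichever chip was selected above.
outputs = model(inputs)
print(f"ran on: {outputs.device}")
```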
Generally speaking, traditional servers are cheaper, at roughly 1,500 to 3,000 USD, while AI servers cost more: an AI inference server runs about 3,000 to 20,000 USD, and an AI training server about 150,000 to 300,000 USD.
The following briefly introduces the differences between the various server chips:
1. CPU (Central Processing Unit): designed around executing complex instruction sets; it handles highly repetitive neural-network-style computations.
2. GPU (Graphics Processing Unit): excels at floating-point and parallel computation, which makes it well suited to AI deep learning (see the sketch after this list).
3. FPGA (Field Programmable Gate Array): its hardware configuration can be adjusted to the workload, giving it algorithmic flexibility.
4. ASIC (Application Specific Integrated Circuit): optimized for one specific application to maximize computing performance, reduce power consumption, and shrink the footprint.
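To make the GPU point concrete, here is a rough sketch, assuming PyTorch and a CUDA-capable GPU are available (the matrix size and timing method are illustrative assumptions): it times the same large floating-point matrix multiplication on the CPU and on the GPU, the kind of parallel workload that dominates deep learning.

```python
import time
import torch

N = 4096
a_cpu = torch.randn(N, N)
b_cpu = torch.randn(N, N)

# Time the matrix multiplication on the CPU.
start = time.perf_counter()
torch.matmul(a_cpu, b_cpu)
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu = a_cpu.to("cuda")
    b_gpu = b_cpu.to("cuda")
    torch.matmul(a_gpu, b_gpu)   # warm-up: initializes CUDA and loads kernels
    torch.cuda.synchronize()     # wait for the warm-up to finish

    # Time the same multiplication on the GPU.
    start = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()     # GPU calls are asynchronous; wait before reading the clock
    gpu_time = time.perf_counter() - start

    print(f"CPU: {cpu_time:.3f} s   GPU: {gpu_time:.3f} s")
else:
    print(f"CPU: {cpu_time:.3f} s   (no CUDA GPU detected)")
```

On typical AI-server hardware the GPU finishes this multiplication far faster than the CPU, which is the parallel floating-point advantage described in point 2.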