The generative AI wave sparked by ChatGPT continues to build. Gartner's Top Strategic Technology Trends for 2024 indicate that generative AI will open new possibilities, enabling people to accomplish tasks that were previously out of reach. Following this trend, AI-related technologies and products are drawing growing attention, and AI servers that support large-scale data processing have become especially crucial. Because AI training models require substantial computational resources and data storage to deliver fast, efficient inference, train large models, and process massive volumes of data, an AI server must carry at least 6 to 8 GPUs along with expanded memory capacity. The design of the AI server chassis must therefore be upgraded accordingly, to better integrate server components and fit more of them into a limited space.
Chenbro's SR113 and SR115 tower-convertible rackmount 4U server chassis are designed specifically for AI inference and deep-learning GPGPU servers, supporting up to 5 GPGPU cards. AI inference servers rely on efficient computation to meet large-scale inference demands, and the SR115 LCooling model adds a liquid-cooling module, delivering a tested and validated AI inference server chassis with excellent thermal performance, strong hardware support, and complete electromechanical integration.
#ai #aiserver #chenbro