ASUS Shifts Focus to Enterprise Technology and Cloud Solutions
ASUS, a company known for its consumer electronics, is shifting its focus towards enterprise technology and cloud solutions. The Taiwanese company has been enhancing its tech portfolio and achieving success locally, with plans to scale up its innovations globally.
The Taiwania 2 supercomputer, built by ASUS and Quanta Computer, has made strides in recent years: its nine-petaflop performance ranked 20th on the TOP500 supercomputer list in 2018.
At the recent Computex event in Taiwan, Jackie Hsu, Senior Vice President and co-head of the Open Platform Business and IoT business groups, highlighted ASUS’s involvement in several significant projects, building on previous investments in supercomputing for Taiwan’s National Center for High-Performance Computing (NCHC).
The Taiwania 2 supercomputer has since dropped to 106th in the current ranking as more and more supercomputers enter the arena. However, with demand for compute surging due to AI, the market for such systems is becoming more lucrative than ever.
The Taiwania 3 and Taiwania 4 Supercomputers
The follow-up project, Taiwania 3, achieves only 2.7 petaflops but offers broader capabilities and a more specialized focus, tailored for applications such as biomedical research and climate-change studies. It also features improved networking and more sophisticated security measures, such as requiring one-time passwords (OTP) for system access, which enhances its utility in sensitive research areas.
In May 2023, ASUS won the bid from the National Center for High-Performance Computing (NCHC) to build the Taiwania 4 supercomputer, which includes a data center with a power usage effectiveness (PUE) rating of 1.17—an extraordinary feat in Taiwan’s hot, humid climate. Taiwania 4 will utilize a state-of-the-art AI computing architecture, emphasizing the integration of next-generation CPUs and GPUs to handle even more complex data processing tasks at higher efficiencies. The machine is expected to bolster Taiwan’s capabilities in AI-driven research and industrial applications.
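To put the 1.17 figure in context: PUE is the ratio of total facility power to the power consumed by IT equipment alone, so values closer to 1.0 mean less overhead spent on cooling and power delivery. A minimal sketch, using illustrative load figures (not actual Taiwania 4 numbers):

```python
def power_usage_effectiveness(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; the ideal value is 1.0."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 1,000 kW IT load plus 170 kW of cooling and
# power-delivery overhead gives the 1.17 PUE cited for Taiwania 4.
print(round(power_usage_effectiveness(1170.0, 1000.0), 2))  # 1.17
```

In a tropical climate, the cooling term dominates the overhead, which is why a 1.17 rating in Taiwan is considered notable.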
Artificial Intelligence and the Formosa Foundation Model
ASUS has also made strides in artificial intelligence, creating the Formosa Foundation Model, a large language model (LLM) based on the Bloom and Llama 2 models, which have 176 billion and 70 billion parameters, respectively, supporting various applications and cross-language understanding. The model is engineered to generate Traditional Chinese text, addressing a gap left by AI models typically trained on English.
Engaging the Enterprise Market
The company also offers a range of servers, encompassing standard models, supercomputer nodes, and AI servers. Though not yet dominant in the server market, ASUS has proven its ability to produce energy-efficient servers, drawing attention from large-scale cloud providers.
ASUS integrates a variety of technological solutions to cater to enterprise customers. A senior ASUS executive recently pointed to customer interest in ASUS’s data centers, with their 1.17 PUE, and in the Formosa Foundation Model. ASUS has undertaken several projects involving large AI systems, encompassing both software and hardware.
Even though ASUS operates on a smaller scale compared to its rivals, its comprehensive solutions have garnered clientele willing to invest in superior technology. Hsu sees enterprise technology as a key growth area, particularly with the rising demand for AI and computing power.
NVIDIA GTC 2024 and Advanced Cooling Technologies
At the recent NVIDIA GTC 2024 conference, ASUS introduced new GPU servers built on the NVIDIA MGX architecture, with options ranging from entry-level GPU solutions to high-end liquid-cooled rack systems designed for diverse workloads.
ASUS is also developing advanced liquid-cooling technologies, such as direct-to-chip (D2C) cooling and immersion cooling, which can significantly reduce power usage effectiveness (PUE).
These servers will integrate the NVIDIA Grace Hopper Superchip and NVIDIA Grace CPU Superchip, delivering strong performance with 144 Arm Neoverse V2 CPU cores (based on the Armv9 architecture) and Scalable Vector Extension 2 (SVE2) support. The modular design allows for flexibility and scalability in server configurations.
ASUS also unveiled a no-code AI platform with an integrated software stack. The platform enables businesses to accelerate AI development, supporting LLM pre-training, fine-tuning, and inference. It can handle LLMs ranging from 7B to over 180B parameters with customized software, ensuring efficient data processing and maximizing system performance.