Acer has launched its “aiWorks” solution in India, an artificial intelligence computing platform that delivers streamlined, cost-effective, integrated solutions across servers, workstations, networking, and storage. Acer aiWorks combines the Altos BrainSphere series of computing products (including servers and PC workstations) with the Acer Altos Accelerator Resource Manager (AARM), a smart accelerator computing resource management system. The aiWorks solution also gives customers and developers a choice of AI computing systems, rapid deployment of development environments, and optimized allocation of AI accelerator resources.
On the new launch, Sudhir Goel, Chief Business Officer, Acer India, said, “We are excited to bring the Acer aiWorks solution to India. One of the keystones of a successful business is a robust, resilient, and reliable IT infrastructure. With growing demand comes an increased workload. Acer aims to meet this need by bringing a range of Acer Altos server and workstation solutions built on an AI platform to India, to help our customers be future-ready.”
Built on this combination of software and hardware, Acer aiWorks supports NVIDIA A100 Multi-Instance GPU (MIG) technology. MIG allows each A100 GPU to be partitioned into as many as seven instances, each with its own high-bandwidth memory, cache, and compute cores. Because the GPU can be sliced into multiple configurations, it can take on workloads of any scale while guaranteeing quality of service (QoS), scaling computing resources more readily and maximizing utilization. For Volta and Turing series GPUs, the aiWorks solution also supports NVIDIA CUDA Multi-Process Service (MPS) technology to improve GPU utilization.
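As a rough illustration of what MIG partitioning looks like from software, the sketch below uses the standard NVIDIA Management Library bindings (the nvidia-ml-py/pynvml package, not any Acer-specific API) to check whether MIG is enabled on each GPU and list the instances it exposes. It assumes a MIG-capable GPU and driver; it is not part of the aiWorks product itself.

```python
# Minimal sketch using pynvml to inspect MIG partitioning on each GPU.
# Assumes nvidia-ml-py is installed and the driver supports MIG.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        except pynvml.NVMLError:
            print(f"GPU {i}: MIG not supported")
            continue
        if current != pynvml.NVML_DEVICE_MIG_ENABLE:
            print(f"GPU {i}: MIG disabled")
            continue
        # Each MIG instance shows up as its own device with dedicated memory.
        for j in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, j)
            except pynvml.NVMLError:
                continue  # this MIG slot is not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"GPU {i} / MIG {j}: {mem.total // 2**20} MiB total memory")
finally:
    pynvml.nvmlShutdown()
```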
Acer Altos Accelerator Resource Manager (AARM) uses container technology to manage AI accelerators and system resources. It also incorporates Acer Altos’s own patented algorithms to optimize GPU resources and automate deployment, greatly reducing the complexity and barriers users face when deploying workloads and applications for deep learning and machine learning development. In addition, AARM lets individual developers quickly deploy independent workspaces and development environments on the system, so multiple users can share hardware resources while keeping their development environments isolated from one another, which helps developers focus on researching and building artificial intelligence applications.
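To make the container-based sharing model concrete, here is a generic sketch, not AARM’s actual interface, of how per-developer workspaces can each be pinned to a single GPU (or MIG instance) using the Docker SDK for Python. The image tag, container names, and volume paths are hypothetical; the NVIDIA Container Toolkit is assumed to be installed on the host.

```python
# Generic illustration of container-isolated GPU workspaces (not AARM code).
import docker

client = docker.from_env()

def launch_workspace(user: str, gpu_id: str):
    """Start an isolated development container bound to one GPU device."""
    return client.containers.run(
        image="nvcr.io/nvidia/pytorch:24.01-py3",  # hypothetical base image
        name=f"workspace-{user}",                  # hypothetical naming scheme
        detach=True,
        tty=True,
        device_requests=[
            docker.types.DeviceRequest(device_ids=[gpu_id],
                                       capabilities=[["gpu"]])
        ],
        volumes={f"/data/{user}": {"bind": "/workspace", "mode": "rw"}},
    )

# Two users share the same server but each sees only its assigned accelerator;
# gpu_id could also be a MIG device UUID on a partitioned A100.
launch_workspace("alice", "0")
launch_workspace("bob", "1")
```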