Aetina Launches the First MXM Module Powered by Hailo's AI Inference Processor
Aetina AI-MXM-H84A modules feature four Hailo-8 AI processors, delivering up to 104 tera-operations per second (TOPS) of AI performance with best-in-class power efficiency, enabling AI developers to accelerate the deployment of neural network (NN) and deep learning (DL) workloads on edge devices. The Hailo-8 AI accelerator allows edge devices to run DL applications at full scale with superb efficiency, effectiveness, and sustainability. Thanks to its small form factor, the high-performance MXM 3.1 Type B module can be easily integrated by developers and system integrators into a variety of embedded systems to handle heavy inference workloads with low latency.
Aetina offers technical services and support for users of the AI-MXM-H84A module. Users can also benefit from Hailo's software suite, which helps shorten AI project development cycles.
“By collaborating with Hailo, we are excited to deliver more edge computing solutions, helping our partners create or adopt AI in different verticals and industries by leveraging the ASIC hardware,” said Jackal Chen, Senior Product Manager at Aetina. “Our hardware engineering team is committed to developing more ASIC-based solutions with chip-down and customized design services to smooth the system integration process for developers.”
“Aetina’s MXM modules represent another step forward in making advanced edge AI applications more accessible and easy to use,” said Gary Huang, Hailo’s General Manager of Greater China Business. “We are grateful for the close cooperation between Aetina and Hailo and look forward to creating more innovative products supporting high-performance AI applications.”
Besides the MXM module, Aetina’s ASIC hardware product line includes its DeviceEdge edge computing systems—small embedded AI computers consisting of a CPU, an AI accelerator, memory units, and other essential computing hardware components, as well as I/O connectors and expansion slots—for AI-powered system development.