Mobilint Introduces MLA100 MXM, an 80 TOPS NPU Module for High-Efficiency Embedded AI PCs

Mobilint’s MLA100 MXM NPU module is powered by ARIES, the company’s flagship AI accelerator chip.

The MLA100 MXM, a high-efficiency AI accelerator module targeting embedded AI boxes, is now open for sampling and orders.

The MLA100 MXM delivers 80 TOPS of performance within a 25 W power envelope, enabling advanced AI computing for next-gen robotics, automation, and more.

LAS VEGAS, NV, UNITED STATES, April 29, 2025 /EINPresswire.com/ -- Mobilint, a leading South Korean fabless AI chipmaker, today announced the official launch of MLA100 MXM, a high-performance embedded NPU module designed for ruggedized AI boxes and on-device AI systems. Delivering 80 TOPS (Tera Operations Per Second) of performance within a 25W power envelope, the MLA100 MXM brings powerful AI capabilities to the edge. The MLA100 MXM’s first official live demonstration will take place at the Embedded Vision Summit 2025 during its exhibition days, May 21–22, at the Santa Clara Convention Center in California.

Built on the standardized Mobile PCI Express Module (MXM) interface, the MLA100 MXM targets specialized applications, such as embedded AI systems used in robotics and industrial automation, where space, power, and thermal efficiency are critical.

The MLA100 MXM NPU module is powered by ARIES, Mobilint’s flagship AI accelerator chip. Featuring high compute density with eight cores and broad AI model compatibility, ARIES was launched as a direct alternative to traditional GPUs used in on-premises servers. It supports a wide range of models, including vision models and Transformer architectures such as large language models (LLMs) and vision-language models (VLMs), allowing efficient deployment of complex AI workloads at the edge.

Mobilint’s AI development ecosystem, centered around its proprietary software development kit (SDK), enables seamless integration and optimization of AI models for its hardware, including the MLA100 MXM. The SDK includes an advanced model compiler, an optimized software stack, and developer tools, which enable businesses to accelerate AI adoption and reduce time to market.
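For readers unfamiliar with this class of tooling, the sketch below illustrates the general compile-then-deploy pattern such an SDK supports. It is a minimal, hypothetical Python example only: the names (compile_model, NPURuntime, and so on) are placeholders and are not Mobilint's actual SDK API; they stand in for the generic steps of compiling a trained model offline and running it through an on-device runtime.

```python
# Hypothetical sketch of a compile-then-deploy edge-NPU workflow.
# None of these names come from Mobilint's SDK; they only illustrate the
# pipeline described above: offline model compilation, then on-device inference.
from dataclasses import dataclass
import numpy as np


@dataclass
class CompiledModel:
    """Stand-in for an NPU-specific binary produced by a model compiler."""
    name: str
    input_shape: tuple


def compile_model(onnx_path: str, input_shape: tuple) -> CompiledModel:
    # A real vendor compiler would quantize, schedule, and serialize the
    # network for the accelerator; here we only record metadata.
    return CompiledModel(name=onnx_path, input_shape=input_shape)


class NPURuntime:
    """Stand-in for an on-device runtime that loads a compiled model."""

    def __init__(self, model: CompiledModel):
        self.model = model

    def infer(self, frame: np.ndarray) -> np.ndarray:
        # Placeholder: a real runtime would dispatch the frame to the NPU
        # and return model outputs (e.g. detection scores).
        assert frame.shape == self.model.input_shape
        return np.zeros(10, dtype=np.float32)


if __name__ == "__main__":
    model = compile_model("detector.onnx", input_shape=(1, 3, 640, 640))
    runtime = NPURuntime(model)
    scores = runtime.infer(np.zeros((1, 3, 640, 640), dtype=np.float32))
    print("output vector:", scores.shape)
```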

Industry partners, including a leading Korean edge AI solutions provider and major conglomerates, are conducting proof-of-concept tests, integrating Mobilint’s MXM NPU modules into embedded systems and products.

“Our goal with MLA100 MXM is to bring server-class inference to robotics and other edge devices,” said Dongjoo Shin, CEO and CTO of Mobilint. “Real-world applications require hardware, software, and algorithms to be co-optimized for efficient operation. Our advanced software and algorithmic stack will certainly help embedded systems developers get the most out of their AI models.”

Mobilint’s MXM accelerator module is now available for sampling and orders. Mobilint will be exhibiting at booth #807 during the Embedded Vision Summit 2025.

Bonjoo Koo
Mobilint
bonjoo@mobilint.com
Visit us on social media:
LinkedIn
X
