We’re now several years into the ‘Large Scale Era’ of machine learning, in which AI capability is roughly doubling every 10 months. At MediaTek, we’re helping to accelerate AI adoption at the edge – in devices you own at home, in your smartphone, and soon in your vehicle too – to create smarter everyday things that are aware of, and reactive to, your needs and your environment.

Machine learning has grown enormously, and so have the AI processors that power it. MediaTek has been developing its own AI processors – APUs – for around seven years, and they are already as important as the CPUs and GPUs within our systems-on-chip. This year will see the launch of our 7th-generation AI architecture, which will soon be part of our latest flagship products.

To streamline our partners’ product development, we fully support every generation of APU with our NeuroPilot® SDK, which allows developers to tap the full potential of their devices. Meanwhile, our research teams actively investigate and publish scientific papers, journal articles, and whitepapers on what the next generation of AI could bring.

[Figure: Hybrid AI – empowering the edge, accelerating the data center]

We’ve reached the point where over 2 billion connected devices are powered by MediaTek every year, many of which contain our APUs, making us one of the world’s leading edge AI suppliers. MediaTek also offers a dedicated Enterprise ASIC service for companies looking to build ultra-high-performance, customized Data Center AI solutions. Our flexible Deep Learning Accelerator (DLA) architecture and leading-edge SerDes networking technology put us in a unique position to offer it all in a single, compact platform.

Evolution of Edge AI Applications

Since 2014, we’ve been at the forefront of edge AI applications with voice assistants and smart speakers. This has evolved through computer vision and image processing in photography, then videography, media playback, and gaming, into VR and AR, and now into generative AI.

Traditional AI – AI applied to pre-existing content – can run on devices with tens of TOPS of computational performance and is generally limited by the capabilities of the AI processor. Generative AI, by contrast, requires 10X to 100X more performance and is currently constrained more by available memory capacity and bandwidth.
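To see why memory, not compute, is the bottleneck for on-device generative AI, consider a back-of-envelope estimate: generating each token of an LLM requires streaming (roughly) every model weight from memory once, so memory bandwidth caps the token rate. The figures below are purely illustrative assumptions, not MediaTek specifications:

```python
# Back-of-envelope: why on-device generative AI is memory-bound.
# All numbers here are hypothetical, for illustration only.

def max_tokens_per_sec(params_billion, bytes_per_param, bandwidth_gbs):
    """Upper bound on token generation rate: each generated token
    streams roughly every weight from memory once."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / model_bytes

# Example: a 7-billion-parameter LLM quantized to 4 bits (0.5 bytes/weight)
# on a device with ~50 GB/s of memory bandwidth:
print(max_tokens_per_sec(7, 0.5, 50))  # roughly a 14 tokens/s ceiling
```

Doubling compute does nothing to raise this ceiling; only more bandwidth or a smaller in-memory footprint (e.g. heavier quantization) does – which is why optimizing LLMs for edge devices matters so much.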

However, keeping generative AI services locked to the Cloud is costly and limits their application scope. Bringing new technologies into the hands of people and businesses has historically delivered greater benefits to society and global economies. Efforts are now under way to optimize large language models (LLMs) for the specific tasks and devices we use every day.

The pace of progress is such that this shouldn’t be far off.

Brilliant at the Edge

Edge AI benefits users for several key reasons:

  1. More efficient overall power consumption and data transmission: purpose-built edge devices filter out unneeded data at the earliest opportunity. If necessary, they can store the pre-processed data and wait for periods of low network use or low energy costs before transmitting it to the central control hub.
  2. For Human-Machine Interface (HMI) devices in particular, on-device processing reduces latency and improves the user experience, because interactions no longer incur a network round trip to Cloud services.
  3. With the data kept on-device, edge devices are inherently more secure and can more easily meet data privacy requirements.
  4. Each intelligent device can be aware of the real-time environment it operates in, enhancing local personalization.
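The first advantage above – discarding unneeded data at the earliest opportunity and queuing the rest for a cheap transmission window – can be sketched in a few lines. The threshold, function names, and data here are hypothetical, not a real MediaTek API:

```python
# Minimal sketch of edge-side filtering (advantage 1 above):
# only readings that cross a confidence threshold are queued for upload;
# everything else is discarded on-device, saving power and bandwidth.
# All names and values are illustrative assumptions.

from collections import deque

THRESHOLD = 0.8          # e.g. a motion-detection confidence cutoff
upload_queue = deque()   # held until a low-cost transmission window opens

def on_sensor_reading(confidence, frame_id):
    """Keep only frames worth sending to the central control hub."""
    if confidence >= THRESHOLD:
        upload_queue.append(frame_id)

# Four simulated readings; only frames 1 and 3 clear the threshold.
for fid, conf in enumerate([0.2, 0.95, 0.5, 0.88]):
    on_sensor_reading(conf, fid)

print(list(upload_queue))  # [1, 3]
```

Half the frames never leave the device, which is exactly the power and data-transmission saving the list describes.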

Data Center AI will Continue to Lead

It’s clear that AI in the Data Center, accessible via Cloud services, offers the most powerful performance for leading-edge applications such as Generative AI. Striking a balance between Cloud and Edge devices that takes advantage of their respective strengths will give the best user experiences.

Empowering Gen-AI in Data Centers with Extreme Performance AI and Networking ASICs

MediaTek is ready to create Data Center-grade AI modules to serve the Generative era. Our Enterprise ASIC services combine the advantages of our deep IP portfolio – including our industry-leading Deep Learning Accelerator (DLA) processor, SerDes technologies, world-class IC design expertise on leading-edge process nodes, and chip-to-chip 2.5D/3D advanced packaging – with our operational scale, assured supply chain, and global reach. All of this benefits hyperscale AI providers and tier-1 OEMs as they look to create next-generation AI as a service.

The Best Partner for Hybrid AI – Edge and Cloud

MediaTek is uniquely positioned to drive and support the adoption of AI applications and technology, whether deployed on the cloud, at the edge in devices, or as a hybrid synergy that takes advantage of both the cloud and the edge together.
