
MediaTek Outlines a Three-Layer Plan for Cross-Device Agentic AI Across Phones, Cars, and Edge Devices

At its Dimensity Developer Conference 2026 in Shanghai, MediaTek used the stage to do more than introduce updated tools. The company rolled out the Dimensity AI Agentic Engine 2.0, refreshed its Dimensity AI Development Kit 3.0, and highlighted new system-level work with partners including OPPO, Xiaomi, and Transsion. In the follow-up media session, executives made it clear that the broader goal is to push Agentic AI beyond any single device and toward seamless movement across phones, vehicles, and edge hardware.

During the interview, MediaTek was asked how it plans to connect agent capabilities across multiple product lines while dealing with familiar pain points such as latency, uneven compute scheduling, and ecosystem fragmentation. The company said its answer starts with a three-layer strategy. First, it is designing IP with reuse in mind from the start, applying a more unified hardware and software standard to its NPU architecture so the same core technology can scale across different power envelopes and performance tiers. MediaTek argues that this can lower migration costs when moving AI features from smartphones into car platforms and other device categories.

The second layer is software. According to the company, its unified NeuroPilot development platform is meant to let developers build once on the Dimensity stack and then move those applications more quickly to tablets, cars, and other terminals once phone-side testing is complete. That kind of reuse matters because many AI products lose momentum when developers have to rebuild the same workflows for every device class.

The third and hardest layer is the ecosystem itself. MediaTek acknowledged that differences between operating systems and device environments remain the biggest barrier to true cross-terminal coordination. Even so, the company believes large-model generalization and a shared natural language command layer can help lower those walls over time. Executives pointed to the recently discussed ‘Lobster’ framework as an example of how cross-system instruction interoperability can make agent workflows more portable.

MediaTek said it is using events like MDDC to encourage partners to align around more unified standards, with the long-term aim of letting basic services such as payments, navigation, and social functions plug into the full Dimensity product family with less friction. At the same time, the company does not expect every device to behave the same way. Instead, it described a differentiated collaboration model: AI glasses would focus more on sensing while heavier processing stays on the phone, and car systems would handle context transfer, such as itinerary data, music preferences, and other user habits, after someone gets into the vehicle.
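The handover the company describes, with itinerary data, music state, and preferences following the user into the car, can be pictured as a small, serializable context payload. The sketch below is purely illustrative; the field names and JSON wire format are assumptions, not a MediaTek protocol.

```python
import json
import time
from dataclasses import dataclass, asdict, field

@dataclass
class HandoverContext:
    """Hypothetical phone-to-car handover payload.

    Field names are illustrative, not any vendor's format."""
    user_id: str
    itinerary: list          # upcoming destinations from the phone
    now_playing: dict        # track + position so the car can resume audio
    preferences: dict = field(default_factory=dict)
    issued_at: float = field(default_factory=time.time)

    def to_wire(self) -> bytes:
        # Serialize to JSON bytes for transfer over whatever link exists.
        return json.dumps(asdict(self)).encode()

    @staticmethod
    def from_wire(blob: bytes) -> "HandoverContext":
        return HandoverContext(**json.loads(blob))

ctx = HandoverContext(
    user_id="u-123",
    itinerary=["office", "gym"],
    now_playing={"track": "song.flac", "position_s": 72},
)
restored = HandoverContext.from_wire(ctx.to_wire())
print(restored.itinerary)   # -> ['office', 'gym']
```

A real handover would additionally need authentication and consent checks before the car accepts the payload; the sketch only covers the data-shape side of the idea.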

Other questions in the session focused on cars and the industry’s shift from software-defined vehicles to AI-defined vehicles. MediaTek’s view is that not every in-car module should be treated the same from a safety perspective. It drew a clear line between driving and vehicle-control systems, which still require strict validation and cannot simply speed up because AI has entered the picture, and cockpit assistant functions, which are more about trip planning, information lookup, and entertainment. In that lower-risk layer, the company sees room for much faster innovation, especially as Chinese new-energy automakers experiment with more cabin agent experiences.

Executives also argued that one of MediaTek’s core strengths is that mobile remains one of the fastest-moving markets for on-device AI. The annual pace of flagship smartphone development keeps pushing compute, efficiency, and bandwidth requirements higher, and MediaTek believes that technical base can transfer directly into automotive scenarios. It cited the 400 TOPS compute capability of its Dimensity CX-1 cockpit platform, along with the low-bit compression and memory optimization techniques that were first proven in the phone market.

Where the company says it still has work to do is the application experience layer. Phones and cars are used in very different contexts, so making an AI assistant feel natural in both spaces will require deeper collaboration with automakers and app developers. MediaTek described that as a major future investment area rather than a solved problem.

On the Agentic AI roadmap itself, executives stressed that the main challenge is no longer raw compute alone. In their view, the real issue is turning that compute into experiences people can actually notice. That is why MediaTek has been investing in always-on sensing, lower-power perception, and system-level optimization that can keep multiple apps from fighting over NPU resources at the same time. It also described a full-stack approach that spans dual-NPU and memory technologies at the hardware layer, migration tools for bringing large models onto devices, and a unified engine for brands building system-native agent experiences.
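The contention problem mentioned here, multiple apps competing for the NPU at once, is classically handled by an arbiter that serializes requests by priority. The following is a minimal toy sketch of that idea, not MediaTek's scheduler; the class name, the lower-number-wins priority convention, and the single-device assumption are all illustrative.

```python
import heapq
import itertools
import threading

class NpuArbiter:
    """Toy single-NPU arbiter: concurrent callers queue by priority
    (lower number = more urgent) instead of contending for the device."""

    def __init__(self):
        self._cv = threading.Condition()
        self._waiting = []                  # heap of (priority, arrival seq)
        self._counter = itertools.count()   # tie-breaker preserving FIFO order
        self._busy = False

    def run(self, priority, task):
        """Block until this request is first in priority order and the
        device is free, then run `task` while holding the device."""
        ticket = (priority, next(self._counter))
        with self._cv:
            heapq.heappush(self._waiting, ticket)
            while self._busy or self._waiting[0] != ticket:
                self._cv.wait()
            heapq.heappop(self._waiting)
            self._busy = True
        try:
            return task()                   # stand-in for an NPU inference call
        finally:
            with self._cv:
                self._busy = False
                self._cv.notify_all()       # wake waiters to re-check the queue

arb = NpuArbiter()
print(arb.run(0, lambda: "inference done"))   # -> inference done
```

A production scheduler would also need preemption, per-app quotas, and timeout handling; the sketch only shows the serialization-by-priority core.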

Memory pricing came up as well. MediaTek acknowledged that memory capacity and bandwidth are becoming major bottlenecks for on-device AI, even when compute is already sufficient for many use cases. To address that, the company promoted its low-bit compression toolkit, saying it can improve compression efficiency by as much as 58 percent at similar model quality, while dynamic model loading and memory-side compression help reduce footprint and bandwidth pressure. Executives argued that rising memory costs may actually push the industry toward more rational decisions about what belongs on device and what should stay in the cloud.
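The 58 percent figure is MediaTek's own claim, but the general mechanics behind low-bit compression are easy to illustrate. The sketch below shows plain symmetric 4-bit weight quantization, a generic textbook technique rather than the company's toolkit, which by itself cuts weight storage 8x versus float32 before any further compression.

```python
import numpy as np

def quantize_int4(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric 4-bit quantization: map float32 weights to ints in [-8, 7]."""
    scale = np.abs(weights).max() / 7.0 if weights.size else 1.0
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# float32 costs 4 bytes per weight; 4-bit storage needs 0.5 bytes
# (two values packed per byte), an 8x reduction in footprint.
w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int4(w)
fp32_bytes = w.size * 4
int4_bytes = w.size // 2
print(fp32_bytes // int4_bytes)   # -> 8
```

Real toolkits layer per-channel scales, outlier handling, and entropy coding on top of this, which is where quality-preserving gains beyond the naive ratio come from.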

The discussion also touched on how large models fit into AI phones. MediaTek sees two broad directions: operating system vendors moving downward from the system layer, and app companies moving upward from the application layer. The company said it is working with both camps. It also noted that newer frameworks that separate the harness layer from the model layer may speed up personal AI by letting devices use CPUs for memory and context handling before every part of the stack is fully ready on device. That, in turn, is influencing future chip planning, with more emphasis on tighter CPU and NPU coordination across the next generation of Dimensity platforms.
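The harness-versus-model split described here can be sketched as a thin wrapper that keeps conversation memory in ordinary CPU-side code and treats the model as a swappable callable. Everything below, including the stub model, is a hypothetical illustration rather than any vendor's framework.

```python
class Harness:
    """Illustrative split: the harness holds memory/context on the CPU side,
    while the model call (a stub here) could run on an NPU when available."""

    def __init__(self, model):
        self.model = model
        self.memory = []          # rolling conversation context, plain CPU data

    def ask(self, user_msg: str) -> str:
        self.memory.append(("user", user_msg))
        # Context assembly happens in the harness, independent of the model.
        prompt = "\n".join(f"{role}: {text}" for role, text in self.memory)
        reply = self.model(prompt)          # device-side inference boundary
        self.memory.append(("assistant", reply))
        return reply

def echo_model(prompt: str) -> str:
    # Stand-in for an on-device model: acknowledges the last line of context.
    return "ack: " + prompt.splitlines()[-1]

h = Harness(echo_model)
print(h.ask("plan my trip"))   # -> ack: user: plan my trip
```

Because the harness owns the memory, the model can later be swapped from the CPU stub to an accelerator-backed one without touching the context-handling code, which is the portability argument the executives were making.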

About cizchu

Senior Technology Editor with 10 years of experience covering mobile technology.
