Memory - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/technologies/memory/

The Forest Listener: Where edge AI meets the wild
https://www.edge-ai-vision.com/2026/02/the-forest-listener-where-edge-ai-meets-the-wild/
Mon, 23 Feb 2026

The post The Forest Listener: Where edge AI meets the wild appeared first on Edge AI and Vision Alliance.

This blog post was originally published at Micron’s website. It is reprinted here with the permission of Micron.

Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability and resilience across industries. By supporting diverse hardware, software and connectivity standards, organizations can accelerate product development, reduce costs and enhance user experiences. A broad ecosystem encourages collaboration among manufacturers, developers and service providers, helping to drive interoperability. An enabled ecosystem adds value to your product in any market, but in a market that spans many applications it is paramount, because it lets your customers get to market quickly. Micron has a diverse set of ecosystem partners for broad applications like microprocessors, including STMicroelectronics (STM). We have collaborated with STM for years, matching our memory solutions to their products. Ultimately, these partnerships empower our mutual businesses to deliver smarter, more connected solutions that meet the evolving needs of consumers and enterprises alike.

The platform and the kit

There’s something uniquely satisfying about peeling back the anti-static bag and revealing the STM32MP257F-DK dev board brimming with potential. As an embedded developer, I am excited when new silicon lands on my desk, especially when it promises to redefine what’s possible at the edge. The STM32MP257F-DK from STMicroelectronics is one of those launches that truly innovates. The STM32MP257F-DK Discovery Kit is a compact, developer-friendly platform designed to bring edge AI to life. And in my case, to the forest. It became the heart of one of my most exciting projects yet: the Forest Listener, a solar-powered, AI-enabled bird-watching companion that blends embedded engineering with natural exploration.

A new kind of birdwatcher

After a few weeks of development and testing, my daughter and I headed into the woods just after sunrise — as usual, binoculars around our necks, a thermos of tea in the backpack and a quiet excitement in the air. But this time, we brought along a new companion. The Forest Listener is a smart birdwatcher, an AI-powered system that sees and hears the forest just like we do. Using a lightweight model trained with STM32’s model zoo, it identifies bird species on the spot. No cloud, no latency, just real-time inference at the edge. My daughter mounts the device on a tripod, connects the camera and powers it on. The screen lights up. It’s ready! Suddenly, a bird flutters into view. The camera captures the moment. Within milliseconds, the 1.35 TOPS neural processing unit (NPU) kicks in, optimized for object detection. The Cortex-A35 logs the sighting (image, species, timestamp), while the Cortex-M33 manages sensors and power. My daughter, watching on a connected tablet, lights up: “Look, Dad! It found another one!” A Eurasian jay, this time.
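In software terms, the capture-infer-log path described above reduces to something like the sketch below. The function and path names here are hypothetical illustrations, not the STM32 model zoo API:

```python
# Illustrative sketch of the Forest Listener logging path (hypothetical names;
# the real device uses STM32 model-zoo tooling and platform drivers).
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Sighting:
    species: str
    confidence: float
    image_path: str
    timestamp: str


def log_sighting(species, confidence, image_path, min_confidence=0.6):
    """Record a detection the way the Cortex-A35 side might: image, species, timestamp."""
    if confidence < min_confidence:
        return None  # ignore low-confidence detections
    return Sighting(species, confidence, image_path,
                    datetime.now(timezone.utc).isoformat())


# Example: a frame in which the NPU reported a Eurasian jay at 0.91 confidence.
entry = log_sighting("Eurasian jay", 0.91, "/media/sd/2026-02-23/frame_0142.jpg")
print(entry.species)  # prints "Eurasian jay"
```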

Built for the edge … and the outdoors

Later, at home, we scroll through the logs saved on the memory cards. The system can also upload sightings via Ethernet. She’s now learning names, songs and patterns. It’s a beautiful bridge between nature and curiosity. At the core of this seamless experience is Micron LPDDR4 memory. It delivers the high bandwidth needed for AI inference and multimedia processing, while maintaining ultra-low power consumption, critical for our solar-powered setup. Performance is only part of the story: What truly sets Micron LPDDR4 apart is its long-term reliability and support. Validated by STM for use with the STM32MP257F-DK, this memory is manufactured at Micron’s dedicated longevity fab, ensuring a more stable, multiyear supply chain. That’s a game-changer for developers building solutions that need to last — not just in home appliances, but in harsh field environments. Whether you’re deploying an AI app in remote forests, industrial plants or smart homes, you need components that are not only fast and efficient but also built to endure. Micron LPDDR4 is engineered to meet the stringent requirements of embedded and industrial markets, with a commitment to support and availability that gives manufacturers peace of mind.

Beyond bird-watching

The Forest Listener is just one example of what the STM32MP257F-DK and Micron LPDDR4 can enable. In factories, the same edge-AI capabilities can monitor machines, detect anomalies, and reduce downtime. In smart homes, they can power face recognition, voice control and energy monitoring — making homes more intelligent, responsive and private, all without relying on the cloud.

For more information about Micron solutions that are enabling AI at the edge, visit micron.com and check out our industrial solutions and LPDDR4/4X product insights.

Donato Bianco, Senior Ecosystem Enablement Manager, Micron Technology

 

January 2026 DRAM Market Update
https://www.edge-ai-vision.com/2026/02/january-2026-dram-market-update/
Sun, 15 Feb 2026

Why DRAM Prices Keep Rising in the Age of AI
https://www.edge-ai-vision.com/2026/01/why-dram-prices-keep-rising-in-the-age-of-ai/
Fri, 23 Jan 2026

The post Why DRAM Prices Keep Rising in the Age of AI appeared first on Edge AI and Vision Alliance.

This market analysis was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

 

As hyperscale data centers rewrite the rules of the memory market, shortages could persist until 2027.

Strong server DRAM demand for AI data centers is driving memory prices higher throughout the market, as customers scramble to secure supply for their production needs amid fears of future shortages.

The DRAM market is in an AI-driven upcycle, with hyperscale data centers soaking up supply and pushing prices higher since Q3 2025. Because AI servers require far more DDR5 (and HBM) per system than traditional servers, availability is tightening across PCs, smartphones, and other end markets.

In this context, John Lorenz, Director, Memory & Computing activities at Yole Group, highlights a key driver of today’s price dynamics: fear of future scarcity. As DRAM manufacturers prioritize higher-margin HBM and server-grade DDR5, other segments react defensively, often buying ahead, amplifying shortages and pushing spot prices higher.

At Yole Group, memory activity tracks these structural changes across the value chain, from technology roadmaps including DDR5, LPDDR, HBM and more to supply capacity, pricing mechanisms and end-market demand. Drawing on perspectives from leading memory experts, Yole Group’s related analyses quantify how hyperscaler behavior, manufacturing constraints and long fab lead times could keep market tightness and elevated pricing, an important theme well into 2027. Enjoy reading this snapshot!

The latest price upswing started during the third quarter of 2025, when DRAM prices climbed by 13.5% quarter over quarter. While the DRAM market can be volatile, with price changes of 15-20% in the past, the rally came on top of a strong rebound from 2023 through late 2024 and early 2025. That suggested the market had reached a cyclical peak and was poised for a downturn. Instead, early signals from company earnings suggest prices may have jumped a further 30% in the fourth quarter.

Spot prices for DDR5 used in servers have surged by as much as 100% in some cases. PC makers are already feeling the impact: Hewlett Packard and Dell have warned they may remove certain laptop models from their line-ups next year, either because DRAM has become too expensive or because they are concerned they will not be able to procure enough.

AI infrastructure is redrawing the DRAM demand curve

At the heart of the imbalance is the AI infrastructure buildout. Data center operators are buying AI accelerators at scale, along with the general-purpose servers needed to run them. AI accelerators rely on high-bandwidth memory (HBM), while the host servers consume large volumes of standard DDR5.

A single AI server configured with eight accelerators, each with 200GB of HBM, contains around 1.6TB of HBM and roughly 3TB of DDR5. By comparison, a typical non-AI server built in 2025 uses less than 1TB of DRAM in total. This rapid increase in memory content per system is outpacing supply.
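A quick back-of-envelope check of the figures above:

```python
# Arithmetic behind the per-system memory figures cited in the text:
# 8 accelerators x 200 GB of HBM each, plus ~3 TB of host DDR5.
ACCELERATORS_PER_SERVER = 8
HBM_PER_ACCELERATOR_GB = 200
HOST_DDR5_TB = 3.0

hbm_total_tb = ACCELERATORS_PER_SERVER * HBM_PER_ACCELERATOR_GB / 1000
total_dram_tb = hbm_total_tb + HOST_DDR5_TB

print(f"HBM per AI server:  {hbm_total_tb:.1f} TB")   # 1.6 TB
print(f"Total DRAM content: {total_dram_tb:.1f} TB")  # ~4.6 TB, vs <1 TB for a typical non-AI server
```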

HBM further distorts the market, commanding far higher prices and margins than DDR5, so manufacturers have strong incentives to prioritize it. Producing HBM can take up to four times as many wafers per gigabyte as DDR5, meaning that shifting capacity to increase HBM output reduces the capacity available for conventional server memory.

The effects are rippling into other end markets. Automotive applications typically use LPDDR4 and LPDDR5, the same memory found in smartphones, tablets and laptops. Because automotive remains a strategic play for memory suppliers, particularly with the growth of self-driving cars, which require more memory, suppliers are unlikely to cut the industry off. They do, however, have the leverage to charge automotive customers more in exchange for continued supply.

That dynamic helps explain strategic moves such as Micron’s decision to wind down its Crucial consumer business, reflecting a focus on higher-margin, AI-driven demand rather than direct-to-consumer products.

Outside the data center, smartphones account for around 25% of global DRAM bit demand, while PCs represent roughly 10–11%. Consumer electronics, beyond phones and PCs, including gaming devices and wearables, add another 6%. Automotive accounts for about 5%, and industrial, medical and military uses combined roughly 4%.

Data centers dominate, representing around 50% of total DRAM bit demand. AI workloads alone account for roughly 30% of that total (HBM and non-HBM) giving them outsized influence over pricing.

Hyperscaler demand increasingly sets DRAM pricing

History shows how quickly DRAM cycles can turn. Between 2014 and 2016, prices fell in response to flat demand, prompting Android-based smartphone manufacturers, especially in China, to compete by increasing memory content. That additional demand absorbed excess supply and pushed prices higher, until costs squeezed margins and vendors paused content growth or shifted toward lower-spec models.

This time, the usual self-correcting mechanism, where high prices trigger pullbacks in demand, has not yet materialized. Hyperscalers and server manufacturers are far less price-sensitive than consumer device makers and are willing to pay up to secure DRAM supply to remain competitive in the AI race, keeping prices elevated for everyone else.

On the supply side, relief is structurally constrained by long lead times. Building or expanding a DRAM fab typically takes 2-3 years to reach volume production. Some incremental supply is expected in 2026, but much of it is limited.

China’s CXMT is adding capacity but mainly serves domestic customers and has yet to meet the requirements of leading global buyers. Samsung is adding equipment at its P4 facility but is prioritizing HBM rather than broader DRAM supply. SK hynix’s M15X fab should begin contributing output in the second half of 2026, with more meaningful volumes in 2027, while Micron’s new Boise fab is also expected to add supply in 2027.

Until then, it would take smartphone and PC makers slowing memory content growth or AI infrastructure spending moderating to ease pricing pressure ahead of large-scale capacity additions.

As AI infrastructure continues to reshape memory demand, DRAM pricing will remain a key watchpoint for the entire electronics ecosystem, well beyond the data center. Understanding how technology transitions, supply allocation, and hyperscaler procurement strategies interact is essential to anticipate risk and opportunity across markets.

To stay ahead, follow Yole Group and explore the memory-focused products and analyses for data-driven perspectives on pricing, capacity, and end-market impacts. And stay tuned throughout 2026: analysts will be sharing fresh insights via Yole Group’s events program, new articles, and expert webinars, bringing you timely updates, deep dives, and actionable takeaways as the market evolves!

About the author

John Lorenz is Director, Memory & Computing at Yole Group.

He leads the growth of the team’s technical expertise and market intelligence, while managing key business relationships with industry leaders. John also drives the development of Yole Group’s market research and strategy consulting activities focused on memory and computing technologies and markets.

Having joined Yole Group’s computing team in 2019, John brings deep insight into leading-edge semiconductor manufacturing to the division, which has been responsible for over 100 marketing and technology analyses delivered for industrial groups, start-ups, and research institutes.

Before joining Yole Group, John spent 15 years at Micron Technology in R&D/manufacturing, engineering, and strategic planning roles gaining experience across the memory and computing industries.

He holds a Bachelor of Science in Mechanical Engineering from the University of Illinois Urbana-Champaign (USA), where he specialized in MEMS devices.

When DRAM Becomes the Bottleneck (Again): What the 2026 Memory Squeeze Means for Edge AI
https://www.edge-ai-vision.com/2026/01/when-dram-becomes-the-bottleneck-again-what-the-2026-memory-squeeze-means-for-edge-ai/
Mon, 12 Jan 2026

The post When DRAM Becomes the Bottleneck (Again): What the 2026 Memory Squeeze Means for Edge AI appeared first on Edge AI and Vision Alliance.

A funny thing is happening in the edge AI world: some of the most important product decisions you’ll make this year won’t be about TOPS, sensor resolution, or which transformer variant to deploy. They’ll be about memory—how much you can get, how much it costs, and whether you can ship the exact part you designed around.

If that sounds abstract, here’s a very concrete, engineer-facing signal: on December 1, 2025, Raspberry Pi raised prices on several Pi 4 and Pi 5 SKUs explicitly citing an “unprecedented rise in the cost of LPDDR4 memory,” and said the increases help secure memory supply in a constrained 2026 market. For many teams, Pis aren’t “consumer gadgets”—they’re prototyping platforms, lab fixtures, vision pipeline testbeds, and quick-turn demos. When the cost of your dev fleet and internal tooling moves like this, it’s a canary.

Zoom out and the picture gets sharper: the memory market is splitting into “AI infrastructure gets what it needs” and “everyone else adapts.” EE Times calls this the “Great Memory Pivot,” and—crucially—it’s being amplified by stockpiling behavior. Major OEMs are buffering memory inventory to reduce risk, which in turn worsens shortages and pushes prices higher.

For edge AI and computer vision teams, the takeaway isn’t “PCs are expensive.” It’s that we’re heading into a period where memory behaves less like a commodity and more like a capacity-allocated input—and edge products sit uncomfortably close to the blast radius.

The two forces that matter most to edge teams

1) AI infrastructure is crowding out conventional DRAM/LPDDR

The clearest near-term data point comes from TrendForce: conventional DRAM contract prices for 1Q26 are forecast to rise ~55–60% QoQ, driven by DRAM suppliers reallocating advanced nodes and capacity toward server and HBM products to support AI server demand. TrendForce also says server DRAM contract prices could surge by more than 60% QoQ.

Edge implication: even if you never touch HBM, the market dynamics around HBM and server DRAM pull the entire supply chain toward higher-margin, AI-driven segments, tightening availability and raising prices for the memory your edge designs actually use. And in practice, edge teams don’t just experience “higher price”; they experience allocation, lead-time uncertainty, and last-minute substitutions that turn into board spins and slipped launches.

2) LPDDR is explicitly called out as staying undersupplied

TrendForce doesn’t just talk about servers. It says LPDDR4X and LPDDR5X are expected to stay undersupplied, with uneven resource distribution supporting higher prices.

That’s directly relevant to edge AI and vision because LPDDR is everywhere in the edge stack: smart cameras and NVRs, robotics compute modules, industrial gateways, in-cabin systems, drones, and many “embedded Linux + NPU” boxes. LPDDR constraints hit you on three fronts:

  • Capacity: can you get the density you want?
  • Cost: can you afford it at scale?
  • SKU fragility: can you swap without a redesign if allocation tightens?

Again, the Raspberry Pi move is the engineer-friendly example: they directly attribute price changes to LPDDR4 costs and explicitly mention AI infrastructure competition. 

Why edge AI is more sensitive than typical embedded systems

Edge AI and computer vision systems are in the middle of a structural shift: workloads are getting wider and more concurrent, not just more accurate.

A 2022-ish camera pipeline might have been: ISP → detection → tracking. A 2026 product pipeline often includes some mix of: detection + tracking + re-ID + segmentation + multi-camera fusion + privacy filtering + local search/embedding + event summarization. Even when models are “small,” the system-level reality is that you’re holding more intermediate state, more queues, more buffers, and more simultaneous streams.

Three practical reasons memory becomes the choke point:

  1. Bandwidth limits show up before compute limits. Many edge systems are memory-traffic-bound long before the NPU saturates. “More TOPS” doesn’t help if tensors are waiting on memory.
  2. Concurrency drives peak usage. You can optimize average footprint and still lose to peak bursts: a model swap, two video streams, a backlog spike, a logging burst—and suddenly you’re in the danger zone (OOM resets, frame drops, tail-latency explosions).
  3. Soldered-memory designs reduce escape routes. If you ship soldered LPDDR, you can’t treat memory like a field-upgradable afterthought. You either got the config right—or you’re spinning hardware.
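The first point can be made concrete with a roofline-style estimate: a layer is memory-bound whenever moving its tensors takes longer than computing them. The numbers below are illustrative, not any specific part's datasheet:

```python
# Roofline-style sketch: a layer is memory-bound when data movement takes
# longer than computation. All numbers below are illustrative.
def layer_time_s(macs, bytes_moved, peak_macs_per_s, bandwidth_bytes_per_s):
    compute_t = macs / peak_macs_per_s
    memory_t = bytes_moved / bandwidth_bytes_per_s
    bound = "memory" if memory_t > compute_t else "compute"
    return max(compute_t, memory_t), bound


# Hypothetical edge NPU: 0.5 T MAC/s effective, 8 GB/s LPDDR bandwidth.
PEAK_MACS = 0.5e12
BANDWIDTH = 8e9

# A layer with few MACs per byte moved ends up memory-bound:
t, bound = layer_time_s(macs=50e6, bytes_moved=40e6,
                        peak_macs_per_s=PEAK_MACS,
                        bandwidth_bytes_per_s=BANDWIDTH)
print(f"{bound}-bound, {t * 1e3:.2f} ms")  # prints "memory-bound, 5.00 ms"
# Doubling TOPS would not make this layer faster; halving bytes moved would.
```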

Stockpiling changes the rules for edge product planning

One of the most important new themes in the last two weeks of reporting is that the shortage is being amplified by behavior, not just fundamentals. EE Times describes large OEMs stockpiling critical components (including memory) to buffer shortages—and explicitly notes that this stockpiling makes shortages worse and pushes prices higher.

This matters for edge companies because stockpiling is a competitive weapon:

  • Big buyers secure allocation and smooth out volatility.
  • Smaller and mid-sized edge OEMs/ODMs get pushed toward spot markets, last-minute substitutions, and uncomfortable BOM surprises.
  • Product teams end up redesigning around what’s available rather than what’s optimal.

In other words: forecasting discipline and supplier relationships start to determine product viability, not just product-market fit.

What this changes in edge AI product decisions

1) “Memory optionality” becomes a design requirement

If you can credibly support multiple densities (or multiple qualified parts) without a full board spin, you reduce existential risk.

Practical patterns:

  • PCB/layout options that support more than one density or vendor part
  • Firmware that can adapt model scheduling to available RAM
  • Feature flags / “degrade gracefully” modes that reduce peak memory without breaking core value
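As a sketch of what the firmware-side pattern can look like, the snippet below selects a pipeline configuration from the RAM actually present instead of hard-coding one density. Config names and RAM budgets are hypothetical:

```python
# Sketch of firmware-side "memory optionality": pick a pipeline configuration
# based on available RAM rather than a single hard-coded density.
# Model names and budgets below are hypothetical.
CONFIGS = [
    # (min_free_ram_mb, model, concurrent_streams)
    (1536, "detector_large_int8", 4),
    (768, "detector_medium_int8", 2),
    (256, "detector_small_int8", 1),  # "degrade gracefully" floor
]


def select_config(free_ram_mb):
    """Return the richest configuration that fits the measured free RAM."""
    for min_ram, model, streams in CONFIGS:
        if free_ram_mb >= min_ram:
            return model, streams
    raise MemoryError("below minimum supported RAM")


print(select_config(2048))  # ('detector_large_int8', 4)
print(select_config(512))   # ('detector_small_int8', 1)
```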

2) Your AI strategy becomes a supply-chain strategy

Teams will increasingly win by shipping memory-efficient capability, not just higher accuracy.

Engineering investments that suddenly have real business leverage:

  • Activation-aware quantization and buffer reuse (not just weight compression)
  • Streaming/tiled vision pipelines that avoid large live tensors
  • Smarter scheduling to prevent worst-case concurrency peaks
  • Bandwidth reduction techniques (operator fusion, lower-resolution intermediate features, fewer full-frame copies)
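The tiled-pipeline idea can be illustrated in a few lines. The pure-Python stand-in below caps peak live memory at one strip of the frame; a real pipeline would do this through the NPU toolchain's tiling support:

```python
# Tiled vision pass sketch: process a frame strip-by-strip so peak live
# memory is one tile plus its output, not the whole intermediate tensor.
def process_tiled(frame_rows, tile_rows, op):
    """Apply `op` row-wise in `tile_rows`-row strips; one strip live at a time."""
    out, peak_live_rows = [], 0
    for i in range(0, len(frame_rows), tile_rows):
        tile = frame_rows[i:i + tile_rows]  # the only large live buffer
        peak_live_rows = max(peak_live_rows, len(tile))
        out.extend(op(row) for row in tile)
    return out, peak_live_rows


frame = [[1] * 8 for _ in range(32)]  # toy 32-row "frame"
result, peak = process_tiled(frame, tile_rows=4, op=lambda r: sum(r))
print(peak)  # 4 rows live at peak, vs 32 if processed whole-frame
```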

3) SKU strategy will simplify (whether you like it or not)

In a tight allocation market, too many SKUs becomes self-inflicted pain: each memory configuration increases planning complexity, qualification cost, and the probability that one SKU becomes unbuildable.

Many edge companies will converge toward:

  • Fewer memory configurations
  • Clear “base” and “pro” SKUs
  • Longer pricing windows (or more frequent repricing)

4) Prototyping and internal infrastructure costs rise

This is the “engineer tax” that’s easy to miss. If Raspberry Pi prices move because LPDDR moves, your dev boards, test rigs, and in-house tooling budgets are likely to move too. That can slow iteration velocity precisely when teams are trying to ship more complex, more AI-forward products.

The realistic timeline: don’t bet on a quick snap-back

One reason this cycle feels different is that multiple credible sources are describing tightness persisting and prices moving sharply.

Micron’s fiscal Q1 2026 earnings call prepared remarks argue that aggregate industry supply will remain substantially short “for the foreseeable future,” that HBM demand strains supply due to a 3:1 trade ratio with DDR5, and that tightness is expected to persist “through and beyond calendar 2026.” Reuters reporting similarly frames this as more than a one-quarter wobble, describing an AI-driven supply crunch and quoting major players calling the shortage “unprecedented.”

Edge takeaway: plan like this is a multi-quarter design and sourcing constraint, not a temporary annoyance you can outwait.

A pragmatic playbook for edge AI and vision teams

For engineering leads

  • Instrument peak memory, not just average. Treat worst-case bursts as first-class test cases.
  • Make bandwidth visible. Profile memory traffic and copy counts; optimize data movement early.
  • Build a “ship mode.” Define what features can drop (or run less frequently) when memory is constrained.
  • Treat memory as a product KPI. Publish memory budgets alongside latency and accuracy.
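On host-side prototypes (a Pi-class dev fleet, for instance), the first bullet can start as simply as wrapping suspect code paths with an allocation tracer. The sketch below uses Python's tracemalloc; on an embedded Linux target you would watch cgroup memory.peak or /proc/&lt;pid&gt;/status VmHWM instead:

```python
# Instrumenting peak (not just average) allocation on a host-side prototype.
# tracemalloc traces Python-level allocations only; embedded targets need
# platform counters (cgroup memory.peak, VmHWM, RTOS heap watermarks).
import tracemalloc


def burst_workload():
    # Simulated worst case: two "streams" plus a logging backlog, live at once.
    stream_a = bytearray(2_000_000)
    stream_b = bytearray(2_000_000)
    log_backlog = bytearray(500_000)
    return len(stream_a) + len(stream_b) + len(log_backlog)


tracemalloc.start()
burst_workload()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"peak traced allocation: {peak / 1e6:.1f} MB")
```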

For product and business leads

  • Tie roadmap bets to buildability. A feature that requires an unavailable memory configuration is not a feature—it’s a slip.
  • Reduce SKU sprawl. Fewer configurations means fewer ways supply can break you.
  • Qualify alternates on purpose. Make multi-sourcing part of the schedule, not an emergency scramble.
  • Treat allocation like GTM. Your launch plan should include supply assurance milestones, not just marketing milestones.

The punchline

Edge AI is getting smarter, more multimodal, and more “always on.” But the industry is also learning—again—that the constraint that matters is often the one you don’t put on the slide.

In 2026, the teams that win won’t just have better models. They’ll have better memory discipline: designs that tolerate volatility, software that respects bandwidth, and product plans that assume supply constraints are real.

 

Disclosure: Micron Technology is a member of the Edge AI and Vision Alliance. The company is cited here as one of several sources for public market and supply commentary.

Further Reading:

1GB Raspberry Pi 5 now available at $45, and memory-driven price rises – Raspberry Pi press release, December 2025.

The Great Memory Stockpile – EE Times, January 2026.

Chip shortages threaten 20% rise in consumer electronics prices – Financial Times, January 2026.

Memory Makers Prioritize Server Applications, Driving Across-the-Board Price Increases in 1Q26, Says TrendForce – TrendForce, January 2026.

Micron Technology Fiscal Q1 2026 Earnings Call Prepared Remarks – Micron Technology investor filings, December 2025.

Micron HBM Designed into Leading AMD AI Platform – Micron Technology press release, June 2025.

AI Sets the Price: Why DRAM Shortages Are Rewriting Memory Market Economics – Fusion WorldWide, November 2025.

Samsung likely to flag 160% jump in Q4 profit as AI boom stokes chip prices – Reuters, January 2026.

Memory chipmakers rise as global supply shortage whets investor appetite – Reuters, January 2026.

MemryX Unveils MX4 Roadmap: Enabling Distributed, Asynchronous Dataflow for Highly Efficient Data Center AI
https://www.edge-ai-vision.com/2026/01/memryx-unveils-mx4-roadmap-enabling-distributed-asynchronous-dataflow-for-highly-efficient-data-center-ai/
Thu, 08 Jan 2026

The post MemryX Unveils MX4 Roadmap: Enabling Distributed, Asynchronous Dataflow for Highly Efficient Data Center AI appeared first on Edge AI and Vision Alliance.

ANN ARBOR, Mich., Dec. 26, 2025 (PRNewswire) — MemryX Inc., a company delivering production AI inference acceleration, today announced its strategic roadmap for the MX4. The next-generation accelerator is engineered to scale the company’s “at-memory” dataflow architecture from edge deployments into the data center, leveraging 3D hybrid-bonded memory to eliminate the industry’s most pressing bottleneck: the “memory wall.”

MemryX is currently in production with its MX3 silicon, delivering >20× better performance per watt than mainstream GPUs for targeted AI inference applications. With MX4, MemryX is extending that production-proven foundation to address data center workloads increasingly constrained not by compute, but by memory capacity, bandwidth, and energy efficiency.

MemryX has now signed an agreement with a next-generation 3D memory partner to execute a dedicated 2026 test chip program, validating a targeted ~5µm-class hybrid-bonded interface and direct-to-tile memory integration. The partner is not disclosed at this time.

The announcement comes as the semiconductor industry increasingly prioritizes deterministic inference architectures for the next era of AI processing, reinforced by recent multibillion-dollar licensing and investment activity across AI hardware—such as Nvidia’s $20B deal with Groq, which underscores the massive strategic value of efficient inference solutions. While the first generation of dataflow solutions proved the efficiency of 2D SRAM, MemryX is moving into the third dimension to address the power, cost, and complexity constraints of frontier AI workloads.

Software Continuity: Leveraging the MX3 Compiler Foundation

MemryX plans to leverage its mature, production-proven MX3 software stack — including its compiler and runtime — as the foundation for MX4. While MX4 introduces new capabilities to support larger memory footprints and data center-scale configurations, the roadmap is designed to preserve key elements of the MX3 programming model and toolchain to accelerate adoption and shorten time-to-deployment for existing and new customers.

Beyond LLMs: Powering Frontier Inference

While Large Language Models (LLMs) remain a priority, the data center is rapidly evolving toward Large Action Models (LAMs), high-resolution multimodal vision, and real-time recommendation engines. These “frontier workloads” require massive memory capacity and predictable throughput that traditional 2.5D HBM-based architectures struggle to provide efficiently.

The MX4 addresses this by physically bonding high-bandwidth memory directly to compute tiles, shifting the focus from data movement back to high-efficiency computation.

The Asynchronous Advantage: Scalability Without Bottlenecks

The MX4 represents a fundamental departure from synchronous chip designs. Many current accelerators rely on a global synchronous clock, which can introduce clock skew and thermal challenges as designs scale using 3D stacks.

Like the MX3, the MX4 utilizes a data-driven producer/consumer flow-control model and avoids the centralized memory bottlenecks common in traditional architectures by enabling direct interfaces from 3D memory to compute tiles. However, rather than using 2D embedded SRAM like the MX3, the MX4 connects compute tiles directly to 3D memories without a single shared controller.

  • Asynchronous Scaling: Tiles operate independently, processing only when data is available and downstream consumers are ready. This naturally manages backpressure and reduces the switching overhead and clocking complexities inherent in synchronous architectures.
  • Direct-to-Tile 3D Interface: By targeting a ~5µm-class hybrid bonding pitch, MX4 enables a distributed vertical interconnect in which individual compute engines access memory layers directly—without relying on a single shared memory controller used by today’s HBM-based designs.
  • Technology Agnostic: The architecture is designed to support multiple 3D direct to memory formats, including today’s stacked DRAM and emerging FeRAM-class technologies.
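The producer/consumer flow-control idea is easy to model in a few lines. The sketch below is a toy illustration of data-driven firing with backpressure and no global scheduler; it is not MemryX's implementation:

```python
# Toy model of data-driven flow control: a tile fires only when input is
# available AND its outbox has room (backpressure). Illustrative only.
from collections import deque


class Tile:
    def __init__(self, op, out_capacity=2):
        self.op = op
        self.outbox = deque()
        self.out_capacity = out_capacity

    def ready(self, inbox):
        # Fire condition: data present upstream, room downstream.
        return bool(inbox) and len(self.outbox) < self.out_capacity

    def fire(self, inbox):
        self.outbox.append(self.op(inbox.popleft()))


source = deque(range(6))
t1 = Tile(lambda x: x * 2)                    # producer tile
t2 = Tile(lambda x: x + 1, out_capacity=16)   # consumer tile

# Event loop: each tile fires independently whenever its conditions hold;
# there is no central scheduler deciding the order.
while source or t1.outbox:
    if t1.ready(source):
        t1.fire(source)
    if t2.ready(t1.outbox):
        t2.fire(t1.outbox)

print(list(t2.outbox))  # [1, 3, 5, 7, 9, 11]
```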

Roadmap to Production

  • 2026: Dedicated test chip (in partnership with a 3D memory provider) to validate ~5µm-class hybrid bonding interface and direct-to-tile 3D memory integration
  • 2027: First MX4 customer sampling
  • 2028: Production release, scaling from single-chip systems to multi-chip data center arrays supporting >1TB memory configurations

“The industry has recognized that deterministic dataflow is a compelling path forward for AI inference, but both efficiency and scale are critical,” said Keith Kressin, CEO of MemryX. “By combining our production-proven architecture—including an asynchronous flow model—with 3D hybrid bonding, we are removing the physical barriers to power-efficient trillion-parameter scalability. We aren’t just building a faster chip; we are building a more practical roadmap for the future of AI.”

Learn More

To review the architectural foundation of the MX4, visit the MemryX MX3 Architecture Overview: https://developer.memryx.com/architecture/architecture.html

Specifications, partners, and timelines are targets and subject to change.

About MemryX Inc.

MemryX Inc. is a fabless semiconductor company focused on AI inference acceleration, with a production-proven “at-memory” dataflow architecture that delivers superior efficiency for edge and upcoming data center applications. Backed by $44M in Series B funding from investors including HarbourVest, NEOM Investment Fund (NIF), Arm IoT Fund, eLab Ventures, M Ventures, and Motus Ventures, MemryX is driving the next wave of AI hardware innovation from its headquarters in Ann Arbor, Michigan.

Media and Investor Contact:

Roger Peene, VP Marketing
Email: roger.peene@memryx.com
Website: www.memryx.com

SOURCE MemryX

The post MemryX Unveils MX4 Roadmap: Enabling Distributed, Asynchronous Dataflow for Highly Efficient Data Center AI appeared first on Edge AI and Vision Alliance.

AMD Spartan UltraScale+ FPGA Kit Adds Proven Infineon HyperRAM Support for Edge AI Designs https://www.edge-ai-vision.com/2025/12/amd-spartan-ultrascale-fpga-kit-adds-proven-infineon-hyperram-support-for-edge-ai-designs/ Tue, 02 Dec 2025 15:00:14 +0000

The post AMD Spartan UltraScale+ FPGA Kit Adds Proven Infineon HyperRAM Support for Edge AI Designs appeared first on Edge AI and Vision Alliance.

In news somewhat eclipsed by last week’s announcement that the AMD Spartan™ UltraScale+™ FPGA SCU35 Evaluation Kit is now available, AMD and Infineon have disclosed successful validation of Infineon’s 64-Mb HYPERRAM™ memory and HYPERRAM controller IP on the platform. This collaboration expands the kit’s value for engineers building edge AI and computer vision systems.

The SCU35 kit, built around the Spartan UltraScale+ SU35P device, is marketed as an affordable platform for industrial, medical, and data center designs that need I/O expansion and board-management capabilities—making it a solid fit for cost-sensitive, low-power edge systems that demand high I/O density and robust security features.

By qualifying Infineon’s 64-Mb HYPERRAM and controller IP with the SCU35 kit and AMD’s MicroBlaze™ V soft-core RISC-V processor, AMD effectively adds a pre-integrated, high-bandwidth external memory option that requires minimal pins and board resources. The MicroBlaze V core is a 32-bit RISC-V soft processor IP fully integrated into Vivado™ and Vitis™ flows, and the HYPERRAM controller IP provides a verified host interface, simplifying integration.

For edge vision and AI workloads, this combination addresses several common design constraints:

  • Memory bandwidth with low pin count: HYPERRAM, accessed over the HYPERBUS™ interface, delivers DRAM-class bandwidth via a very small pin footprint (HyperBus devices use a narrow bus with only a handful of signals), which is valuable when the FPGA I/O budget is already consumed by image sensors, high-speed interfaces, or control signals.

  • Power- and cost-efficient buffering: A 64-Mb (8-MB) HYPERRAM device provides ample external storage for frame buffers, intermediate feature maps, or scratchpad memory for pre/post-processing, without the power, complexity, or BOM impact of traditional LPDDR solutions—key for battery-powered cameras, smart sensors, and distributed industrial nodes.

  • Faster path from prototype to production: Infineon’s HYPERBUS memory controller IP is delivered as fully implemented and verified RTL with documentation, providing a proven memory interface for the SCU35 kit. Combined with AMD’s Vivado and Vitis tools and MicroBlaze V support, this reduces the effort required to bring up external memory, allowing teams to focus on vision pipelines, ML kernels, and system-level optimization.
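
The buffering point above can be made concrete with a quick back-of-the-envelope calculation of what an 8-MB device can hold (the resolutions and pixel formats below are illustrative choices, not figures from the announcement):

```python
# Back-of-the-envelope sizing for a 64-Mb (8-MB) HYPERRAM used as a
# frame buffer. Resolutions and pixel formats are illustrative.
HYPERRAM_BYTES = 8 * 1024 * 1024  # 64 Mb = 8 MB

def frames_that_fit(width, height, bytes_per_pixel):
    frame_bytes = width * height * bytes_per_pixel
    return HYPERRAM_BYTES // frame_bytes

# VGA grayscale (1 byte/pixel): 640 * 480 = 307,200 bytes per frame
print(frames_that_fit(640, 480, 1))   # 27 frames
# 720p RGB565 (2 bytes/pixel): 1280 * 720 * 2 = 1,843,200 bytes per frame
print(frames_that_fit(1280, 720, 2))  # 4 frames
```

Even a double-buffered 720p RGB565 pipeline fits comfortably, which is why a single low-pin-count device can stand in for LPDDR in many camera and sensor designs.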

Typical edge AI and vision applications that can benefit include multi-sensor IoT gateways, smart cameras, access-control terminals, industrial monitoring nodes, and other embedded systems that need modest AI acceleration, secure connectivity, and reliable control-plane processing rather than large, data-center-scale models.

Infineon’s 64-Mb HYPERRAM devices and corresponding controller IP are supported and licensed for use with the SCU35 platform, and the AMD Spartan UltraScale+ FPGA SCU35 Evaluation Kit is available for order today, with shipments scheduled to begin in January 2026. For members of the engineering community, the combination of an affordable, globally available FPGA kit and a validated, low-pin-count external memory solution provides a practical, production-relevant starting point for next-generation edge AI and computer vision designs.


Micron Ships Automotive UFS 4.1, Designed to Unlock Intelligent Mobility With Speed, Safety and Reliability https://www.edge-ai-vision.com/2025/11/micron-ships-automotive-ufs-4-1-designed-to-unlock-intelligent-mobility-with-speed-safety-and-reliability/ Thu, 20 Nov 2025 17:00:42 +0000

The post Micron Ships Automotive UFS 4.1, Designed to Unlock Intelligent Mobility With Speed, Safety and Reliability appeared first on Edge AI and Vision Alliance.

Architected to power AI workloads, Micron’s latest automotive solution, built with G9 NAND, equips the industry to create safer, smarter, more connected driver experiences

MUNICH, Nov. 13, 2025 (GLOBE NEWSWIRE) — Automotive Computing Conference — Micron Technology, Inc. (Nasdaq: MU), today announced shipping of qualification samples of its automotive universal flash storage (UFS) 4.1 solution to customers worldwide, enabling rapid data access, robust reliability and enhanced safety and security for next-generation vehicles. Delivering bandwidth of 4.2 gigabytes per second (GB/s) — double that of its predecessor — Micron’s automotive UFS 4.1 accelerates data access for AI models, enriching the in-cabin experience by powering features such as voice assistants, personalized infotainment and advanced safety alerts. In advanced driver assistance systems (ADAS) and autonomous vehicles, this bandwidth also enables rich data capture from cameras, lidar and radar sensors, which can be uploaded to feed AI model retraining and enhancement in the data center.

Micron’s automotive UFS 4.1 is built with the company’s sophisticated 9th-generation (G9) 3D NAND flash memory technology, delivering high performance and capacity and supplying the market with the latest process technology to accelerate AI. With this rollout, Micron G9 NAND is the most advanced NAND in the industry to be qualified for rigorous automotive standards such as the AEC-Q104¹ — enabling Micron’s UFS 4.1 to meet the high bar required for automotive quality, safety and reliability.

“As the automotive industry shifts toward greater autonomy and more intelligent in-cabin experiences, robust high-performance storage is foundational to enabling the next generation of intelligent vehicles,” said Kris Baxter, corporate vice president and general manager of Micron’s Automotive and Embedded Business Unit. “Micron’s automotive UFS 4.1 is engineered to deliver exceptional safety, reliability and performance, enabling the automotive industry to advance intelligent mobility and unlock AI at the edge.”

Delivering a leap forward in automotive storage performance

As vehicles evolve into intelligent platforms, capabilities such as autonomous driving, enriched cabins and real-time AI applications require the bandwidth to boot quickly at ignition, instantly access and swap large language models for generative AI interactions, and log massive volumes of sensor data. High-performance solutions like Micron’s UFS 4.1 are essential to accelerating this intelligence at the source.

Micron’s UFS 4.1 delivers:

  • Turbocharged read and write speeds: UFS 4.1’s bandwidth offers accelerated sequential read speeds for rapid data access for AI. Enhanced write speeds enable ultra-fast data logging for ADAS models, supporting refinement of perception and decision-making algorithms. UFS 4.1’s high read performance can enable use cases such as rapid switching between generative AI models for in-cabin experiences, allowing system designers to store multiple models while reducing memory requirements — all while providing a low-latency user experience and optimizing costs.
  • Fast boot times: Thanks to Micron’s G9 technology and proprietary firmware, Micron’s UFS 4.1 offers 30% faster device boot and 18% faster system boot.² These boot times enable intelligent systems to rapidly come online when the ignition is engaged, delivering more responsive cockpit experiences.
  • Ultra-high endurance: Micron’s automotive UFS 4.1 offers up to 100,000 program/erase (P/E) cycles for single-level cell mode and 3,000 program/erase cycles for triple-level cell mode, providing the necessary endurance for automotive applications where vehicles may operate for years with millions of write operations from lidar, radar and cameras.
  • Extended thermal protection: Engineered for harsh vehicle environments, UFS 4.1 provides thermal protection and consistent high performance from -40°C up to 115°C case temperature — extending beyond JEDEC’s standard 105°C to provide manufacturers the opportunity to reduce thermal cooling footprints without compromising reliability for mission-critical autonomous driving.
  • Advanced host features: The solution offers advanced UFS 4.1 features, including host-initiated defragmentation, which leverages advanced algorithms to reorganize data and sustain fast performance, especially during high-demand periods.
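
The endurance figures above translate into total write volume roughly as follows. The device capacity and write-amplification factor in this sketch are illustrative assumptions, not Micron specifications:

```python
# Rough endurance estimate from P/E cycles: total bytes writable is
# approximately capacity * P/E cycles / write amplification.
# The 256-GB capacity and write-amplification factor of 2.0 are
# illustrative assumptions, not Micron specifications.
def total_writes_tb(capacity_gb, pe_cycles, write_amp=2.0):
    return capacity_gb * pe_cycles / write_amp / 1000  # in TB

# TLC mode at 3,000 P/E cycles on a hypothetical 256-GB device:
tlc_tb = total_writes_tb(256, 3000)
print(f"{tlc_tb:.0f} TB")      # 384 TB

# At a sustained 100 GB/day of sensor logging, lifetime in years:
years = tlc_tb * 1000 / 100 / 365
print(f"{years:.1f} years")    # ~10.5 years
```

Under these assumed numbers a vehicle logging 100 GB of sensor data daily would stay within the TLC endurance budget for roughly a decade, which is why P/E-cycle ratings matter for ADAS data-recording designs.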

Micron’s UFS 4.1 enables real-time telemetry, intelligent health, safety and security

Micron’s automotive UFS 4.1 is engineered to meet the highest standards for automotive applications. The solution achieves Automotive Safety Integrity Level B compliance (ISO 26262) for functional safety. Software development aligned with ASPICE³ Level 3 and comprehensive product security engineering practices based on ISO/SAE 21434⁴ further strengthen quality and safeguard data to meet the rigorous demands of modern vehicles.

The solution provides comprehensive real-time telemetry with advanced health monitoring and device-level exception notifications, enabling automotive platforms to proactively detect potential issues before they impact vehicle performance. This capability supports predictive maintenance and fleet management while minimizing the risk of failures on the road.

By delivering reliable, high-speed storage, Micron’s latest automotive storage solution enables manufacturers to unlock new AI horizons and accelerate the development of next-generation vehicles, while end users benefit from enhanced safety, smarter in-cabin features and seamless connectivity on the road.

For more information on Micron’s automotive UFS 4.1 solution, visit here.
Additional Resources

About Micron Technology, Inc.

We are an industry leader in innovative memory and storage solutions, transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence (AI) and compute-intensive applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more about Micron Technology, Inc. (Nasdaq: MU), visit micron.com.

© 2025 Micron Technology, Inc. All rights reserved. Information, products, and/or specifications are subject to change without notice. Micron, the Micron logo, and all other Micron trademarks are the property of Micron Technology, Inc. All other trademarks are the property of their respective owners.

1 Claim based on Micron market research finding that Micron G9 NAND is the highest-layered NAND in the industry to be qualified for automotive standards such as Automotive Electronics Council-Q104 (AEC-Q104): Automotive Electronics Council – Qualification standard for multi-chip modules (Q104)
2 Based on Micron’s internal testing, comparing Micron’s automotive UFS 4.1 operated in UFS 3.1 mode to Micron’s UFS 3.1 predecessor devices on an external UFS 3.1-compatible platform
3 Automotive SPICE (Software Process Improvement and Capability Determination) – Level 3 process maturity
4 International Organization for Standardization / Society of Automotive Engineers – Road vehicles cybersecurity engineering standard


Trends in Embedded AI: Designing Hardware for Machine Learning on the Edge https://www.edge-ai-vision.com/2025/11/trends-in-embedded-ai-designing-hardware-for-machine-learning-on-the-edge/ Wed, 19 Nov 2025 09:00:14 +0000

The post Trends in Embedded AI: Designing Hardware for Machine Learning on the Edge appeared first on Edge AI and Vision Alliance.

This blog post was originally published at Tessolve’s website. It is reprinted here with the permission of Tessolve.

The world is increasingly becoming connected, intelligent, and autonomous. At the core of this transformation is Artificial Intelligence (AI), which is swiftly transitioning from the cloud to the edge, nearer to where data is generated and actions are performed. This shift, often referred to as Embedded AI, is redefining how we design embedded systems. It is creating a demand for specialized hardware capable of performing complex Machine Learning (ML) tasks with efficiency and low latency.

The proliferation of IoT devices, autonomous vehicles, smart cities, and industrial automation demands real-time processing and decision-making capabilities that cannot always rely on constant cloud connectivity. Latency, bandwidth limitations, and privacy concerns are driving the imperative to integrate AI directly into embedded devices. This presents a unique set of challenges and opportunities for hardware architects and embedded system designers.

The Rise of Edge AI: Why Now?

Several factors are driving the rapid adoption of Edge AI today:

  • Improved Hardware Efficiency: Advanced semiconductors and AI accelerators deliver high-performance inference within limited power budgets.
  • Optimized AI Models: Techniques like quantization, pruning, and knowledge distillation enable smaller, more efficient models for constrained devices.
  • Connectivity Limits: While 5G reduces latency, sending raw data to the cloud is costly; edge AI ensures local, immediate decision-making.
  • Privacy & Security: On-device processing safeguards sensitive information, reducing risks tied to cloud transmission.

Key Hardware Trends for Embedded AI

Designing hardware for machine learning on the edge requires a holistic approach, considering not just processing power but also memory, power consumption, thermal management, and robust packaging.

1. Specialized AI Accelerators

General-purpose CPUs are not optimized for the parallel computations inherent in ML algorithms. This has led to the rise of dedicated AI accelerators:

  • Neural Processing Units (NPUs): These are specifically designed to speed up neural network operations, offering high performance per watt. They often feature large numbers of MAC (multiply-accumulate) units.
  • GPUs (Graphics Processing Units): While traditionally used for graphics, GPUs’ parallel architecture makes them highly effective for ML training and, increasingly, inference in more powerful edge devices.
  • FPGAs (Field-Programmable Gate Arrays): FPGAs offer flexibility and reconfigurability, allowing developers to customize hardware logic to precisely match the requirements of specific ML models. This can lead to highly optimized performance for certain applications.
  • ASICs (Application-Specific Integrated Circuits): For high-volume applications where performance and power efficiency are paramount, custom ASICs offer the ultimate optimization, albeit with higher upfront development costs.

Understand More: Accelerating Edge Intelligence with FPGA Co-Processors: Best Practices for Real-Time Analytics

2. Heterogeneous Computing Architectures

The most effective embedded system solutions often combine different types of processing units. A typical edge AI system might integrate a CPU for control tasks, an NPU for ML inference, and a GPU for more intensive parallel processing or image pre-processing. Orchestrating these diverse units efficiently is a key design challenge.

3. Memory Optimization

ML models can be memory-intensive. Edge devices often have limited on-chip or external memory. Strategies include:

  • High-Bandwidth Memory (HBM): For higher-performance edge devices, HBM can provide the necessary data throughput.
  • On-chip Memory Hierarchies: Carefully managing caches and local memories is crucial to minimizing off-chip memory access, which is power-intensive and slower.
  • Quantization: Reducing the precision of model weights and activations (e.g., from 32-bit floating point to 8-bit integers) significantly reduces memory footprint and computational requirements.
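
The quantization technique above can be sketched in a few lines. This is a minimal per-tensor symmetric scheme for illustration; production toolchains add per-channel scales, zero-points, and calibration:

```python
import numpy as np

# Minimal sketch of symmetric post-training quantization: map float32
# weights to int8 with a single per-tensor scale, cutting the memory
# footprint by 4x. Real toolchains add per-channel scales, zero-points,
# and calibration data.
def quantize_int8(weights):
    scale = np.abs(weights).max() / 127.0   # largest magnitude maps to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
print(q)                      # int8 values: [  50 -127    2  100]
print(dequantize(q, scale))   # close to the original weights
```

The reconstructed values differ from the originals by at most half a quantization step, which is why 8-bit inference usually costs only a small accuracy drop while quartering weight storage.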

4. Power Efficiency

Edge devices are often battery-powered or operate within strict power budgets. Low-power design techniques are critical:

  • Dynamic Voltage and Frequency Scaling (DVFS): Adjusting power and clock speed based on workload.
  • Power Gating: Shutting down unused parts of the chip.
  • Specialized Low-Power Modes: Components designed for ultra-low power consumption during idle or low-activity periods.
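
The DVFS idea above amounts to a simple feedback loop: raise the clock when the workload is heavy, lower it when it is light. The thresholds and frequency table below are illustrative, not taken from any specific SoC:

```python
# Toy utilization-driven DVFS governor. The frequency table and the
# 80% / 30% thresholds are illustrative assumptions.
FREQS_MHZ = [200, 400, 800, 1200]

def next_freq(current_idx, utilization):
    """Step frequency up above 80% utilization, down below 30%."""
    if utilization > 0.80 and current_idx < len(FREQS_MHZ) - 1:
        return current_idx + 1
    if utilization < 0.30 and current_idx > 0:
        return current_idx - 1
    return current_idx

idx = 0
for util in [0.9, 0.95, 0.5, 0.1, 0.05]:
    idx = next_freq(idx, util)
    print(FREQS_MHZ[idx], "MHz")   # 400, 800, 800, 400, 200
```

Stepping one frequency level at a time, rather than jumping straight to the extremes, damps oscillation when utilization hovers near a threshold; real governors (such as Linux cpufreq's schedutil) apply more sophisticated filtering on top of the same principle.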

5. Robustness and Reliability

Embedded AI hardware must operate reliably in diverse and often harsh environments. This includes considerations for:

  • Thermal Management: Efficient heat dissipation is essential for maintaining performance and device longevity.
  • Electromagnetic Compatibility (EMC): Ensuring the device doesn’t interfere with or is not affected by other electronic systems.
  • Shock and Vibration Resistance: Especially important for industrial and automotive applications.

6. Security at the Edge

As devices become more intelligent and connected, they become attractive targets for attacks. Hardware-level security features are becoming indispensable:

  • Secure Boot: Ensuring only trusted software can run on the device.
  • Hardware Root of Trust: A secure foundation for cryptographic operations.
  • Memory Protection Units (MPUs): Preventing unauthorized access to critical memory regions.
  • Trusted Execution Environments (TEEs): Isolated environments for secure processing of sensitive data and ML models.

Learn More: Edge AI in Embedded Systems: Bringing AI Close to Devices

Challenges and Future Directions

While the promise of Embedded AI is immense, several challenges remain. Integrating diverse hardware components and optimizing software stacks for various accelerators is highly complex. Additionally, managing the entire lifecycle of an AI model on an edge device, from training to deployment and updates, requires sophisticated embedded system design expertise.

Future trends will likely include:

  • Even Greater Specialization: Further tailored AI accelerators for specific ML tasks (e.g., vision, natural language processing).
  • Neuromorphic Computing: Hardware that mimics the human brain’s structure and function, offering ultra-low power and event-driven processing.
  • On-Device Learning: Edge devices can train and learn from new data on their own. This allows them to adapt to different environments without needing to frequently retrain in the cloud.
  • Standardization: Efforts to create more unified software frameworks and hardware interfaces to simplify the development and deployment of Edge AI solutions.

The evolution of Embedded AI is a testament to human ingenuity. By effectively blending hardware innovation with intelligent software, we are pushing the boundaries of what is possible at the very edge of our digital world. The demand for skilled engineers capable of navigating the intricacies of designing embedded system solutions for this new paradigm will only continue to grow.

Empowering Edge AI Innovation with Tessolve

At Tessolve, we understand the critical importance of robust and efficient hardware for the success of Embedded AI. As a leading engineering solution provider, we empower our clients to bring their cutting-edge Edge AI visions to life. From initial concept to silicon realization and beyond, our comprehensive suite of services covers every aspect of embedded system design. We specialize in crafting optimized silicon for machine learning workloads, including verification and validation of complex heterogeneous architectures, ensuring your AI accelerators perform flawlessly. With our deep expertise in advanced testing methodologies and our commitment to an advanced design solution approach, Tessolve helps you navigate the complexities of power optimization, memory management, and security. This accelerates your time to market with reliable, high-performance Edge AI products. Let us be your trusted partner in shaping the future of intelligent devices.


AI Drives the Wheel: How Computing Power is Reshaping the Automotive Industry https://www.edge-ai-vision.com/2025/10/ai-drives-the-wheel-how-computing-power-is-reshaping-the-automotive-industry/ Tue, 21 Oct 2025 14:46:32 +0000

The post AI Drives the Wheel: How Computing Power is Reshaping the Automotive Industry appeared first on Edge AI and Vision Alliance.

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

In its new report, Automotive Computing and AI 2025, Yole Group analyzes the technological and market forces redefining vehicle intelligence, safety, and connectivity.

The automotive industry is accelerating into a new era of intelligence, and semiconductors are at the heart of this revolution. Yole Group’s latest report, Automotive Computing and AI 2025, provides a comprehensive overview of how computing and artificial intelligence are transforming vehicles into smarter, safer, and more connected platforms.

Yole Group’s experts dive deep into the technical and strategic transformations shaping automotive computing, examining how supply chain disruptions, centralized architectures, chiplet processors, and the rise of AI are redefining the competitive landscape. Automotive Computing and AI 2025 explores the evolution of ADAS and Infotainment platforms, sensor fusion, and the integration of AI accelerators across the vehicle. It details market shares, design strategies, and technological shifts among leading players, from NVIDIA, Mobileye, and Qualcomm to Huawei, Horizon Robotics, and NXP. It also offers an insightful mapping of the emergence of new semiconductor alliances.

“In our 2025 report, we noticed that centralized processors are no longer just a niche market limited to a few vehicles and OEMs but the main growth driver of the entire ADAS market, and that is even more true in China.”
Hugo Antoine
Technology and Market Analyst, Yole Group

By connecting market data, technology roadmaps, and ecosystem analysis, Automotive Computing and AI 2025 offers a comprehensive view of how AI-driven computing will shape the future of mobility.

“Designing ADAS processors in-house by OEMs is costly, but at around 3 million units in total it becomes a real advantage, and a growing number of Chinese OEMs are daring to take that risk.”
Adrien Sanchez
Senior Technology and Market Analyst, Yole Group

Key insights from the 2025 report

  • The automotive processor market reached $8.9 billion in 2024, split between ADAS at about $4 billion and infotainment at almost $5 billion.
  • ADAS and safety applications are projected to grow at a 19% CAGR between 2024 and 2030, reaching $12.3 billion by 2030.
  • Infotainment and telematics are expected to grow at an 8% CAGR over the same period.
  • The move toward centralized architecture is pushing OEMs to design their own chips, while leaders like NVIDIA, Qualcomm, and Huawei redefine competitive dynamics.
  • Mobileye remains dominant in ADAS front cameras with 36% of the total ADAS market, but faces new challenges from NVIDIA, Qualcomm, Horizon Robotics, and Huawei HiSilicon.
  • The need for more powerful, better optimized, safer, and more modular processors is driving the emergence of chiplets in the automotive industry.

“China’s approach to processors is to pursue independence and autonomy, free from constraints. Even if its processors lag behind the West, it doesn’t matter—the key is to have its own processors. As vehicles evolve into software-defined platforms, the ability to integrate AI at every level, from safety to infotainment, will define the winners of the next decade.”
Daniel Niu
Market Researcher, Yole Group

With the Automotive Computing and AI 2025 report, Yole Group reinforces its leading position in semiconductor and AI market intelligence. The report helps industry stakeholders understand emerging architectures, assess competitive landscapes, and anticipate the technologies shaping tomorrow’s mobility.

Stay tuned on Yole Group’s website!

Automotive White Paper – Vol. 2

With this new Automotive White Paper, Yole Group takes a closer look at the fast-evolving automotive industry, where semiconductors are driving a new era of innovation. As vehicles become smarter, safer, and more autonomous, the demand for advanced chips is accelerating, reshaping the strategies of leading semiconductor players and challengers alike.

In Automotive White Paper, Vol.2, Yole Group’s experts explore key technological breakthroughs, market challenges, and future opportunities, offering a comprehensive view of how semiconductors are powering the next generation of vehicles. Access the Automotive White Paper, Vol. 2 and don’t miss out on Yole Group’s latest investigations!


“Three Big Topics in Autonomous Driving and ADAS,” an Interview with Valeo https://www.edge-ai-vision.com/2025/10/three-big-topics-in-autonomous-driving-and-adas-an-interview-with-valeo/ Mon, 20 Oct 2025 08:00:52 +0000

The post “Three Big Topics in Autonomous Driving and ADAS,” an Interview with Valeo appeared first on Edge AI and Vision Alliance.

Frank Moesle, Software Department Manager at Valeo, talks with Independent Journalist Junko Yoshida for the “Three Big Topics in Autonomous Driving and ADAS” interview at the May 2025 Embedded Vision Summit. In this on-stage interview, Moesle and Yoshida focus on trends and challenges in automotive technology, autonomous driving and ADAS.…

“Three Big Topics in Autonomous Driving and ADAS,” an Interview with Valeo


