Micron Technology - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/provider/micron-technology/
Designing machines that perceive and understand.

The Forest Listener: Where edge AI meets the wild
https://www.edge-ai-vision.com/2026/02/the-forest-listener-where-edge-ai-meets-the-wild/
Mon, 23 Feb 2026

This blog post was originally published at Micron’s website. It is reprinted here with the permission of Micron.

Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability and resilience across industries. By supporting diverse hardware, software and connectivity standards, organizations can accelerate product development, reduce costs and enhance user experiences. A broad ecosystem encourages collaboration among manufacturers, developers and service providers, helping to drive interoperability. Enabling an ecosystem for your customers is a huge value for your product in any market, but for a market that spans many applications, it’s paramount for allowing your customers to get to the market quickly. Micron has a diverse set of ecosystem partners for broad applications like microprocessors, including STMicroelectronics (STM). We have collaborated with STM for years, matching our memory solutions to their products. Ultimately, these partnerships empower our mutual businesses to deliver smarter, more connected solutions that meet the evolving needs of consumers and enterprises alike.

The platform and the kit

There’s something uniquely satisfying about peeling back the anti-static bag and revealing the STM32MP257F-DK dev board brimming with potential. As an embedded developer, I am excited when new silicon lands on my desk, especially when it promises to redefine what’s possible at the edge. The STM32MP257F-DK from STMicroelectronics is one of those launches that truly innovates. The STM32MP257F-DK Discovery Kit is a compact, developer-friendly platform designed to bring edge AI to life. And in my case, to the forest. It became the heart of one of my most exciting projects yet: the Forest Listener, a solar-powered, AI-enabled bird-watching companion that blends embedded engineering with natural exploration.

A new kind of birdwatcher

After a few weeks of development and testing, my daughter and I headed into the woods just after sunrise — as usual, binoculars around our necks, a thermos of tea in the backpack and a quiet excitement in the air. But this time, we brought along a new companion. The Forest Listener is a smart birdwatcher, an AI-powered system that sees and hears the forest just like we do. Using a lightweight model trained with STM32’s model zoo, it identifies bird species on the spot. No cloud, no latency, just real-time inference at the edge. My daughter mounts the device on a tripod, connects the camera and powers it on. The screen lights up. It’s ready! Suddenly, a bird flutters into view. The camera captures the moment. Within milliseconds, the 1.35-TOPS neural processing unit (NPU), optimized for object detection, kicks in. The Cortex-A35 logs the sighting (image, species, timestamp), while the Cortex-M33 manages sensors and power. My daughter, watching on a connected tablet, lights up: “Look, Dad! It found another one!” A Eurasian jay, this time.
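The on-device flow described above (capture, NPU inference, confidence filtering, logging on the Cortex-A35) can be sketched in a few lines of Python. The `detect` stub and every name here are illustrative stand-ins, not the STM32 model-zoo API:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Sighting:
    species: str
    confidence: float
    timestamp: float
    image_path: str

def detect(frame) -> list[tuple[str, float]]:
    """Stub inference: a real build would invoke the NPU runtime here."""
    return [("Eurasian jay", 0.91)]

def log_sightings(frame, image_path: str, threshold: float = 0.5) -> list[Sighting]:
    """Run detection and keep only confident sightings, as the
    logging loop described above would."""
    now = time.time()
    return [
        Sighting(species, conf, now, image_path)
        for species, conf in detect(frame)
        if conf >= threshold
    ]

records = log_sightings(frame=None, image_path="sighting_0001.jpg")
print(json.dumps([asdict(r) for r in records], indent=2))
```

In a real deployment the log records would land on local storage and optionally sync over Ethernet, as the article describes.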

Built for the edge … and the outdoors

Later, at home, we scroll through the logs saved on the memory card. The system can also upload sightings via Ethernet. She’s now learning names, songs and patterns. It’s a beautiful bridge between nature and curiosity. At the core of this seamless experience is Micron LPDDR4 memory. It delivers the high bandwidth needed for AI inference and multimedia processing, while maintaining ultra-low power consumption, critical for our solar-powered setup. Performance is only part of the story: What truly sets Micron LPDDR4 apart is its long-term reliability and support. Validated by STM for use with the STM32MP257F-DK, this memory is manufactured at Micron’s dedicated longevity fab, ensuring a more stable, multiyear supply chain. That’s a game-changer for developers building solutions that need to last — not just in home appliances, but in harsh field environments. Whether you’re deploying an AI app in remote forests, industrial plants or smart homes, you need components that are not only fast and efficient but also built to endure. Micron LPDDR4 is engineered to meet the stringent requirements of embedded and industrial markets, with a commitment to support and availability that gives manufacturers peace of mind.

Beyond bird-watching

The Forest Listener is just one example of what the STM32MP257F-DK and Micron LPDDR4 can enable. In factories, the same edge-AI capabilities can monitor machines, detect anomalies, and reduce downtime. In smart homes, they can power face recognition, voice control and energy monitoring — making homes more intelligent, responsive and private, all without relying on the cloud.

For more information about Micron solutions that are enabling AI at the edge, visit micron.com and check out our industrial solutions and LPDDR4/4X product insights.

Donato Bianco, Senior Ecosystem Enablement Manager, Micron Technology

 

When DRAM Becomes the Bottleneck (Again): What the 2026 Memory Squeeze Means for Edge AI
https://www.edge-ai-vision.com/2026/01/when-dram-becomes-the-bottleneck-again-what-the-2026-memory-squeeze-means-for-edge-ai/
Mon, 12 Jan 2026

A funny thing is happening in the edge AI world: some of the most important product decisions you’ll make this year won’t be about TOPS, sensor resolution, or which transformer variant to deploy. They’ll be about memory—how much you can get, how much it costs, and whether you can ship the exact part you designed around.

If that sounds abstract, here’s a very concrete, engineer-facing signal: on December 1, 2025, Raspberry Pi raised prices on several Pi 4 and Pi 5 SKUs explicitly citing an “unprecedented rise in the cost of LPDDR4 memory,” and said the increases help secure memory supply in a constrained 2026 market. For many teams, Pis aren’t “consumer gadgets”—they’re prototyping platforms, lab fixtures, vision pipeline testbeds, and quick-turn demos. When the cost of your dev fleet and internal tooling moves like this, it’s a canary.

Zoom out and the picture gets sharper: the memory market is splitting into “AI infrastructure gets what it needs” and “everyone else adapts.” EE Times calls this the “Great Memory Pivot,” and—crucially—it’s being amplified by stockpiling behavior. Major OEMs are buffering memory inventory to reduce risk, which in turn worsens shortages and pushes prices higher.

For edge AI and computer vision teams, the takeaway isn’t “PCs are expensive.” It’s that we’re heading into a period where memory behaves less like a commodity and more like a capacity-allocated input—and edge products sit uncomfortably close to the blast radius.

The two forces that matter most to edge teams

1) AI infrastructure is crowding out conventional DRAM/LPDDR

The clearest near-term data point comes from TrendForce: conventional DRAM contract prices for 1Q26 are forecast to rise ~55–60% QoQ, driven by DRAM suppliers reallocating advanced nodes and capacity toward server and HBM products to support AI server demand. TrendForce also says server DRAM contract prices could surge by more than 60% QoQ.

Edge implication: even if you never touch HBM, the market dynamics around HBM and server DRAM pull the entire supply chain toward higher-margin, AI-driven segments, tightening availability and raising prices for the memory your edge designs actually use. And in practice, edge teams don’t just experience “higher price”; they experience allocation, lead-time uncertainty, and last-minute substitutions that turn into board spins and slipped launches.

2) LPDDR is explicitly called out as staying undersupplied

TrendForce doesn’t just talk about servers. It says LPDDR4X and LPDDR5X are expected to stay undersupplied, with uneven resource distribution supporting higher prices.

That’s directly relevant to edge AI and vision because LPDDR is everywhere in the edge stack: smart cameras and NVRs, robotics compute modules, industrial gateways, in-cabin systems, drones, and many “embedded Linux + NPU” boxes. LPDDR constraints hit you in three ways:

  • Capacity: can you get the density you want?
  • Cost: can you afford it at scale?
  • SKU fragility: can you swap without a redesign if allocation tightens?

Again, the Raspberry Pi move is the engineer-friendly example: they directly attribute price changes to LPDDR4 costs and explicitly mention AI infrastructure competition. 

Why edge AI is more sensitive than typical embedded systems

Edge AI and computer vision systems are in the middle of a structural shift: workloads are getting wider and more concurrent, not just more accurate.

A 2022-ish camera pipeline might have been: ISP → detection → tracking. A 2026 product pipeline often includes some mix of: detection + tracking + re-ID + segmentation + multi-camera fusion + privacy filtering + local search/embedding + event summarization. Even when models are “small,” the system-level reality is that you’re holding more intermediate state, more queues, more buffers, and more simultaneous streams.

Three practical reasons memory becomes the choke point:

  1. Bandwidth limits show up before compute limits. Many edge systems are memory-traffic-bound long before the NPU saturates. “More TOPS” doesn’t help if tensors are waiting on memory.
  2. Concurrency drives peak usage. You can optimize average footprint and still lose to peak bursts: a model swap, two video streams, a backlog spike, a logging burst—and suddenly you’re in the danger zone (OOM resets, frame drops, tail-latency explosions).
  3. Soldered-memory designs reduce escape routes. If you ship soldered LPDDR, you can’t treat memory like a field-upgradable afterthought. You either got the config right—or you’re spinning hardware.
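The first point can be made concrete with a roofline-style sanity check. The peak-TOPS and bandwidth numbers below are illustrative placeholders, not figures for any specific part:

```python
# Roofline model: attainable throughput is capped by either peak compute
# or memory bandwidth times arithmetic intensity (ops per byte moved).
# Both hardware numbers are invented for illustration.

PEAK_TOPS = 4.0     # rated NPU throughput, in TOPS (1e12 ops/s)
MEM_BW_GBPS = 8.5   # LPDDR bandwidth, in GB/s (1e9 bytes/s)

def attainable_tops(ops_per_byte: float) -> float:
    """min(peak compute, bandwidth * arithmetic intensity), in TOPS."""
    return min(PEAK_TOPS, MEM_BW_GBPS * ops_per_byte / 1000.0)

# A layer doing 50 ops per byte moved is bandwidth-bound,
# reaching only a fraction of the rated throughput:
print(attainable_tops(50))
# Roughly 471 ops/byte are needed to saturate this NPU;
# at 480 ops/byte the layer is compute-bound:
print(attainable_tops(480))
```

The takeaway matches the text: below the crossover intensity, buying more TOPS changes nothing, because the tensors are waiting on memory.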

Stockpiling changes the rules for edge product planning

One of the most important new themes in the last two weeks of reporting is that the shortage is being amplified by behavior, not just fundamentals. EE Times describes large OEMs stockpiling critical components (including memory) to buffer shortages—and explicitly notes that this stockpiling makes shortages worse and pushes prices higher.

This matters for edge companies because stockpiling is a competitive weapon:

  • Big buyers secure allocation and smooth out volatility.
  • Smaller and mid-sized edge OEMs/ODMs get pushed toward spot markets, last-minute substitutions, and uncomfortable BOM surprises.
  • Product teams end up redesigning around what’s available rather than what’s optimal.

In other words: forecasting discipline and supplier relationships start to determine product viability, not just product-market fit.

What this changes in edge AI product decisions

1) “Memory optionality” becomes a design requirement

If you can credibly support multiple densities (or multiple qualified parts) without a full board spin, you reduce existential risk.

Practical patterns:

  • PCB/layout options that support more than one density or vendor part
  • Firmware that can adapt model scheduling to available RAM
  • Feature flags / “degrade gracefully” modes that reduce peak memory without breaking core value
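A minimal sketch of the firmware pattern above, assuming a hypothetical three-tier model lineup (the model names and RAM thresholds are invented for illustration):

```python
# "Memory optionality" in firmware: pick a model configuration based on
# the RAM actually present at boot, rather than assuming the density
# you originally designed for. All tiers here are hypothetical.

MODEL_TIERS = [
    # (min_free_ram_mb, model_name, max_concurrent_streams)
    (2048, "detector_large_int8", 4),
    (1024, "detector_medium_int8", 2),
    (512, "detector_small_int8", 1),
]

def select_tier(free_ram_mb: int) -> tuple[str, int]:
    """Return the richest configuration that fits, degrading gracefully."""
    for min_ram, model, streams in MODEL_TIERS:
        if free_ram_mb >= min_ram:
            return model, streams
    raise RuntimeError("not enough RAM for any supported configuration")

# A board that shipped with a substituted 1 GB LPDDR part still boots
# and runs, just with a smaller model and fewer streams:
print(select_tier(free_ram_mb=1024))
```

The point is architectural, not the ten lines themselves: if the scheduler, feature flags, and model assets all key off a detected tier, a last-minute density substitution becomes a configuration change instead of a board spin.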

2) Your AI strategy becomes a supply-chain strategy

Teams will increasingly win by shipping memory-efficient capability, not just higher accuracy.

Engineering investments that suddenly have real business leverage:

  • Activation-aware quantization and buffer reuse (not just weight compression)
  • Streaming/tiled vision pipelines that avoid large live tensors
  • Smarter scheduling to prevent worst-case concurrency peaks
  • Bandwidth reduction techniques (operator fusion, lower-resolution intermediate features, fewer full-frame copies)
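As one illustration of the streaming/tiled idea, the sketch below (with an assumed tile size and a trivial per-tile operator standing in for real work) keeps only one tile live in float32 instead of a full-frame copy:

```python
import numpy as np

def tiled_feature_map(frame: np.ndarray, tile: int = 128) -> np.ndarray:
    """Process a frame tile by tile so the expensive float32
    intermediate is one tile, never the whole frame."""
    h, w = frame.shape[:2]
    fh, fw = -(-h // tile), -(-w // tile)  # ceil division
    feats = np.empty((fh, fw), dtype=np.float32)
    for ty, y in enumerate(range(0, h, tile)):
        for tx, x in enumerate(range(0, w, tile)):
            # Only this tile exists in float32; the full-frame float
            # copy (h * w * 3 * 4 bytes) is never allocated.
            t = frame[y:y + tile, x:x + tile].astype(np.float32) / 255.0
            feats[ty, tx] = t.mean()  # stand-in for a real per-tile operator
    return feats

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
feats = tiled_feature_map(frame)
print(feats.shape)  # (4, 5)
```

The same shape of loop applies when the per-tile operator is an NPU kernel rather than a mean: peak live-tensor footprint scales with the tile, not the sensor resolution.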

3) SKU strategy will simplify (whether you like it or not)

In a tight allocation market, a sprawling SKU lineup becomes self-inflicted pain: each memory configuration increases planning complexity, qualification cost, and the probability that one SKU becomes unbuildable.

Many edge companies will converge toward:

  • Fewer memory configurations
  • Clear “base” and “pro” SKUs
  • Longer pricing windows (or more frequent repricing)

4) Prototyping and internal infrastructure costs rise

This is the “engineer tax” that’s easy to miss. If Raspberry Pi prices move because LPDDR moves, your dev boards, test rigs, and in-house tooling budgets are likely to move too. That can slow iteration velocity precisely when teams are trying to ship more complex, more AI-forward products.

The realistic timeline: don’t bet on a quick snap-back

One reason this cycle feels different is that multiple credible sources are describing tightness persisting and prices moving sharply.

Micron’s fiscal Q1 2026 earnings call prepared remarks argue that aggregate industry supply will remain substantially short “for the foreseeable future,” that HBM demand strains supply due to a 3:1 trade ratio with DDR5, and that tightness is expected to persist “through and beyond calendar 2026.” Reuters reporting similarly frames this as more than a one-quarter wobble, describing an AI-driven supply crunch and quoting major players calling the shortage “unprecedented.”

Edge takeaway: plan like this is a multi-quarter design and sourcing constraint, not a temporary annoyance you can outwait.

A pragmatic playbook for edge AI and vision teams

For engineering leads

  • Instrument peak memory, not just average. Treat worst-case bursts as first-class test cases.
  • Make bandwidth visible. Profile memory traffic and copy counts; optimize data movement early.
  • Build a “ship mode.” Define what features can drop (or run less frequently) when memory is constrained.
  • Treat memory as a product KPI. Publish memory budgets alongside latency and accuracy.
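The first bullet can be wired into a test harness in a few lines. This sketch uses Python's `tracemalloc` as a stand-in for whatever peak-memory probe your target supports (on embedded Linux you might sample `VmHWM` from `/proc/self/status` instead); the workload and budget are placeholders:

```python
import tracemalloc

def bursty_workload() -> int:
    """Stand-in for a worst-case burst: e.g. a model swap plus a
    logging spike. Allocates a ~1 MB transient."""
    buf = [bytearray(1024) for _ in range(1000)]
    return len(buf)

tracemalloc.start()
bursty_workload()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Publish the budget alongside latency and accuracy; fail CI on peak,
# not average. The budget value here is an arbitrary example.
PEAK_BUDGET_BYTES = 4 * 1024 * 1024
assert peak <= PEAK_BUDGET_BYTES, f"peak {peak} bytes exceeds budget"
print(f"peak traced allocation: {peak} bytes")
```

Average footprint would look fine for this workload; only the peak measurement catches the burst that triggers OOM resets in the field.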

For product and business leads

  • Tie roadmap bets to buildability. A feature that requires an unavailable memory configuration is not a feature—it’s a slip.
  • Reduce SKU sprawl. Fewer configurations mean fewer ways supply can break you.
  • Qualify alternates on purpose. Make multi-sourcing part of the schedule, not an emergency scramble.
  • Treat allocation like GTM. Your launch plan should include supply assurance milestones, not just marketing milestones.

The punchline

Edge AI is getting smarter, more multimodal, and more “always on.” But the industry is also learning—again—that the constraint that matters is often the one you don’t put on the slide.

In 2026, the teams that win won’t just have better models. They’ll have better memory discipline: designs that tolerate volatility, software that respects bandwidth, and product plans that assume supply constraints are real.

 

Disclosure: Micron Technology is a member of the Edge AI and Vision Alliance. The company is cited here as one of several sources for public market and supply commentary.

Further Reading:

1GB Raspberry Pi 5 now available at $45, and memory-driven price rises – Raspberry Pi press release, December 2025.

The Great Memory Stockpile – EE Times, January 2026.

Chip shortages threaten 20% rise in consumer electronics prices – Financial Times, January 2026.

Memory Makers Prioritize Server Applications, Driving Across-the-Board Price Increases in 1Q26, Says TrendForce – TrendForce, January 2026.

Micron Technology Fiscal Q1 2026 Earnings Call Prepared Remarks – Micron Technology investor filings, December 2025.

Micron HBM Designed into Leading AMD AI Platform – Micron Technology press release, June 2025.

AI Sets the Price: Why DRAM Shortages Are Rewriting Memory Market Economics – Fusion WorldWide, November 2025.

Samsung likely to flag 160% jump in Q4 profit as AI boom stokes chip prices – Reuters, January 2026.

Memory chipmakers rise as global supply shortage whets investor appetite – Reuters, January 2026.

Micron Ships Automotive UFS 4.1, Designed to Unlock Intelligent Mobility With Speed, Safety and Reliability
https://www.edge-ai-vision.com/2025/11/micron-ships-automotive-ufs-4-1-designed-to-unlock-intelligent-mobility-with-speed-safety-and-reliability/
Thu, 20 Nov 2025

Architected to power AI workloads, Micron’s latest automotive solution, built with G9 NAND, equips the industry to create safer, smarter, more connected driver experiences

MUNICH, Nov. 13, 2025 (GLOBE NEWSWIRE) — Automotive Computing Conference — Micron Technology, Inc. (Nasdaq: MU), today announced shipping of qualification samples of its automotive universal flash storage (UFS) 4.1 solution to customers worldwide, enabling rapid data access, robust reliability and enhanced safety and security for next-generation vehicles. Delivering bandwidth of 4.2 gigabytes per second (GB/s) — double that of its predecessor — Micron’s automotive UFS 4.1 accelerates data access for AI models, enriching the in-cabin experience by powering features such as voice assistants, personalized infotainment and advanced safety alerts. This bandwidth in advanced driver assistance systems (ADAS) and autonomous vehicles also enables rich data capture from cameras, lidar and radar sensors to upload and feed AI model retraining and enhancement in the data center.

Micron’s automotive UFS 4.1 is built with the company’s sophisticated 9th-generation (G9) 3D NAND flash memory technology, delivering high performance and capacity and supplying the market with the latest process technology to accelerate AI. With this rollout, Micron G9 NAND is the most advanced NAND in the industry to be qualified for rigorous automotive standards such as AEC-Q104 [1] — enabling Micron’s UFS 4.1 to meet the high bar required for automotive quality, safety and reliability.

“As the automotive industry shifts toward greater autonomy and more intelligent in-cabin experiences, robust high-performance storage is foundational to enabling the next generation of intelligent vehicles,” said Kris Baxter, corporate vice president and general manager of Micron’s Automotive and Embedded Business Unit. “Micron’s automotive UFS 4.1 is engineered to deliver exceptional safety, reliability and performance, enabling the automotive industry to advance intelligent mobility and unlock AI at the edge.”

Delivering a leap forward in automotive storage performance

As vehicles evolve into intelligent platforms, capabilities such as autonomous driving, enriched cabins and real-time AI applications require bandwidth to start up quickly from ignition, instantly access and swap large language models for generative AI interactions, and log massive volumes of sensor data. High-performance solutions like Micron’s UFS 4.1 are essential to accelerating this intelligence at the source.

Micron’s UFS 4.1 delivers:

  • Turbocharged read and write speeds: UFS 4.1’s bandwidth offers accelerated sequential read speeds for rapid data access for AI. Enhanced write speeds enable ultra-fast data logging for ADAS models, supporting refinement of perception and decision-making algorithms. UFS 4.1’s high read performance can enable use cases such as rapid switching between generative AI models for in-cabin experiences, allowing system designers to store multiple models while reducing memory requirements — all while providing a low-latency user experience and optimizing costs.
  • Fast boot times: Thanks to Micron’s G9 technology and proprietary firmware, Micron’s UFS 4.1 offers 30% faster device boot and 18% faster system boot [2]. These boot times enable intelligent systems to rapidly come online when the ignition is engaged, delivering more responsive cockpit experiences.
  • Ultra-high endurance: Micron’s automotive UFS 4.1 offers up to 100,000 program/erase (P/E) cycles for single-level cell mode and 3,000 program/erase cycles for triple-level cell mode, providing the necessary endurance for automotive applications where vehicles may operate for years with millions of write operations from lidar, radar and cameras.
  • Extended thermal protection: Engineered for harsh vehicle environments, UFS 4.1 provides thermal protection and consistent high performance from -40°C up to 115°C case temperature — extending beyond JEDEC’s standard 105°C to provide manufacturers the opportunity to reduce thermal cooling footprints without compromising reliability for mission-critical autonomous driving.
  • Advanced host features: The solution offers advanced UFS 4.1 features, including a host-initiated defragmentation feature that leverages advanced algorithms to optimize data workloads for defragmentation to provide fast performance, especially during high-demand periods.
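As a worked illustration of what those P/E figures imply, the arithmetic below estimates device lifetime under invented workload assumptions. The capacity, daily write volume, and write-amplification factor are placeholders chosen to show the calculation, not Micron figures:

```python
# Back-of-the-envelope endurance check: a 256 GB device in TLC mode
# (3,000 P/E cycles, per the figure above), a sensor suite logging
# 200 GB/day, and a write-amplification factor of 2. All workload
# numbers are illustrative assumptions.

CAPACITY_GB = 256
PE_CYCLES_TLC = 3_000
DAILY_WRITES_GB = 200
WRITE_AMPLIFICATION = 2.0

total_endurance_gb = CAPACITY_GB * PE_CYCLES_TLC            # total P/E budget in GB written
effective_daily_gb = DAILY_WRITES_GB * WRITE_AMPLIFICATION  # GB that actually hit the NAND per day
lifetime_years = total_endurance_gb / effective_daily_gb / 365

print(f"estimated lifetime: {lifetime_years:.1f} years")  # estimated lifetime: 5.3 years
```

Running the same numbers against the 100,000-cycle SLC-mode figure shows why partitioning hot logging regions into SLC mode is a common endurance lever.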

Micron’s UFS 4.1 enables real-time telemetry, intelligent health, safety and security

Micron’s automotive UFS 4.1 is engineered to meet the highest standards for automotive applications. The solution achieves Automotive Safety Integrity Level B compliance (ISO 26262) for functional safety. Software development aligned with ASPICE [3] Level 3 and comprehensive product security engineering practices based on ISO/SAE 21434 [4] further strengthen quality and safeguard data to meet the rigorous demands of modern vehicles.

The solution provides comprehensive real-time telemetry with advanced health monitoring and device-level exception notifications, enabling automotive platforms to proactively detect potential issues before they impact vehicle performance. This capability supports predictive maintenance and fleet management while minimizing the risk of failures on the road.

By delivering reliable, high-speed storage, Micron’s latest automotive storage solution enables manufacturers to unlock new AI horizons and accelerate the development of next-generation vehicles, while end users benefit from enhanced safety, smarter in-cabin features and seamless connectivity on the road.

For more information on Micron’s automotive UFS 4.1 solution, visit here.

About Micron Technology, Inc.

We are an industry leader in innovative memory and storage solutions, transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence (AI) and compute-intensive applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more about Micron Technology, Inc. (Nasdaq: MU), visit micron.com.

© 2025 Micron Technology, Inc. All rights reserved. Information, products, and/or specifications are subject to change without notice. Micron, the Micron logo, and all other Micron trademarks are the property of Micron Technology, Inc. All other trademarks are the property of their respective owners.

[1] Claim based on Micron market research finding that Micron G9 NAND is the highest-layered NAND in the industry to be qualified for automotive standards such as Automotive Electronics Council-Q104 (AEC-Q104): Automotive Electronics Council – Qualification standard for multi-chip modules (Q104)
[2] Based on Micron’s internal testing, comparing Micron’s automotive UFS 4.1 operated in UFS 3.1 mode to Micron’s UFS 3.1 predecessor devices on an external UFS 3.1-compatible platform
[3] Automotive SPICE (Software Process Improvement and Capability Determination) – Level 3 process maturity
[4] International Organization for Standardization / Society of Automotive Engineers – Road vehicles cybersecurity engineering standard

Micron Demonstration of Its Key Partner Enablement, Driving Solutions for AI and VLMs at the Edge
https://www.edge-ai-vision.com/2025/07/micron-demonstration-of-its-key-partner-enablement-driving-solutions-for-ai-and-vlms-at-the-edge/
Tue, 15 Jul 2025

Wil Florentino, Senior Strategic Marketing Manager for Industrial at Micron, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Florentino demonstrates examples of its extensive partnerships and customers enabling artificial intelligence (AI) for the global market.

Florentino showcases two specific demos with different use cases. The first is a DEEPX accelerator with Micron LPDDR5X connected to a Raspberry Pi module, also with Micron memory onboard. The second is with Axelera and their Metis AI platform, populated with Micron LPDDR4 and running four simultaneous vision language models (VLMs).

Micron Demonstration of Memory in Automotive: Great Things Come in Small Packages
https://www.edge-ai-vision.com/2025/07/micron-demonstration-of-memory-in-automotive-great-things-come-in-small-packages/
Tue, 15 Jul 2025

Bill Stafford, Marketing Solutions Director at Micron, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Stafford demonstrates his company’s collaboration with Infineon on the TRAVEO T2G CYT4EN graphics microcontroller teamed with Micron LPDDR4 and e.MMC, which give automotive clusters media-rich content, high performance and low power in a compact, cost-effective system.

Micron Demonstration of Memory in Artificial Intelligence: Leading from Performance to Safety
https://www.edge-ai-vision.com/2025/07/micron-demonstration-of-memory-in-artificial-intelligence-leading-from-performance-to-safety/
Mon, 14 Jul 2025

Bill Stafford, Marketing Solutions Director at Micron, demonstrates the company’s latest edge AI and vision technologies and products at the 2025 Embedded Vision Summit. Specifically, Stafford demonstrates his company’s key products in the artificial intelligence (AI) and computer vision markets.

Micron manufactures one of the broadest portfolios of memory and storage solutions in the world.

“Addressing Evolving AI Model Challenges Through Memory and Storage,” a Presentation from Micron
https://www.edge-ai-vision.com/2025/06/addressing-evolving-ai-model-challenges-through-memory-and-storage-a-presentation-from-micron/
Tue, 10 Jun 2025

The post “Addressing Evolving AI Model Challenges Through Memory and Storage,” a Presentation from Micron appeared first on Edge AI and Vision Alliance.

]]>
Wil Florentino, Senior Segment Marketing Manager at Micron, presents the “Addressing Evolving AI Model Challenges Through Memory and Storage” tutorial at the May 2025 Embedded Vision Summit.

In the fast-changing world of artificial intelligence, the industry is deploying more AI compute at the edge. But the growing diversity and data footprint of transformers and models such as large language models and large multimodal models puts a spotlight on memory performance and data storage capacity as key bottlenecks. Enabling the full potential of AI in industries such as manufacturing, automotive, robotics and transportation will require us to find efficient ways to deploy this new generation of complex models.

In this presentation, Florentino explores how memory and storage are responding to this need and solving complex issues in the AI market. He examines the storage capacity and memory bandwidth requirements of edge AI use cases ranging from tiny devices with severe cost and power constraints to edge servers, and he explains how new memory technologies such as LPDDR5, LPCAMM2 and multi-port SSDs are helping system developers to meet these challenges.
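The storage-capacity pressure described above can be illustrated with a quick back-of-envelope calculation of a model's weight footprint at different precisions. The sketch below is illustrative only; the 7-billion-parameter model and the precision choices are hypothetical examples, not figures from the presentation.

```python
# Back-of-envelope estimate of a model's weight-storage footprint.
# Illustrative only: parameter count and precisions are assumptions.

def model_footprint_gb(num_params: float, bytes_per_weight: float) -> float:
    """Approximate memory needed to hold the weights, in gigabytes."""
    return num_params * bytes_per_weight / 1e9

# A hypothetical 7-billion-parameter language model:
params = 7e9
for label, bpw in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    print(f"{label}: {model_footprint_gb(params, bpw):.1f} GB")
# FP16: 14.0 GB, INT8: 7.0 GB, INT4: 3.5 GB
```

Even at aggressive quantization, multi-billion-parameter models quickly exceed the memory budgets of small edge devices, which is why memory and storage choices become a first-order design decision.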

See here for a PDF of the slides.

Micron Technology Demonstration of Its DRAM and Flash Memory Product Lines https://www.edge-ai-vision.com/2024/10/micron-technology-demonstration-of-its-dram-and-flash-memory-product-lines/ Mon, 14 Oct 2024 14:03:07 +0000 https://www.edge-ai-vision.com/?p=50586 David Henderson, Industrial Segment Director at Micron Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Henderson demonstrates his company’s latest DRAM and flash memory products. Demonstrations include: Image recognition and matching on an i.MX 8 Plus-based platform containing Micron’s DRAM, eMMC and serial NOR […]

The post Micron Technology Demonstration of Its DRAM and Flash Memory Product Lines appeared first on Edge AI and Vision Alliance.

David Henderson, Industrial Segment Director at Micron Technology, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Henderson demonstrates his company’s latest DRAM and flash memory products.

Demonstrations include:

  • Image recognition and matching on an i.MX 8 Plus-based platform containing Micron’s DRAM, eMMC and serial NOR flash memories
  • The automotive-qualified quad-port 4150 SSD implementing secure multi-camera footage capture, processing and storage, and
  • Showcases of SSDs based on 176- and 232-layer NAND flash memory and DRAM in CXL, RDIMM and LPCAMM form factors.

“The Importance of Memory for Breaking the Edge AI Performance Bottleneck,” a Presentation from Micron Technology https://www.edge-ai-vision.com/2024/07/the-importance-of-memory-for-breaking-the-edge-ai-performance-bottleneck-a-presentation-from-micron-technology/ Tue, 02 Jul 2024 08:00:03 +0000 https://www.edge-ai-vision.com/?p=48619 Wil Florentino, Senior Marketing Manager for Industrial/IIoT at Micron Technology, presents the “Importance of Memory for Breaking the Edge AI Performance Bottleneck” tutorial at the May 2024 Embedded Vision Summit. In recent years there’s been tremendous focus on designing next-generation AI chipsets to improve neural network inference performance. As higher performance processors are called upon […]

The post “The Importance of Memory for Breaking the Edge AI Performance Bottleneck,” a Presentation from Micron Technology appeared first on Edge AI and Vision Alliance.

Wil Florentino, Senior Marketing Manager for Industrial/IIoT at Micron Technology, presents the “Importance of Memory for Breaking the Edge AI Performance Bottleneck” tutorial at the May 2024 Embedded Vision Summit.

In recent years there’s been tremendous focus on designing next-generation AI chipsets to improve neural network inference performance. As higher performance processors are called upon to execute ever-larger models—from vision transformers to LLMs—memory bandwidth is frequently the key performance bottleneck. With the demands for memory bandwidth and storage capacity varying across applications, it is critical to identify the right memory technologies that match the complexity and performance needs of your application.
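Why memory bandwidth, rather than compute, so often caps inference performance can be sketched with a simple upper bound: in token-by-token LLM decoding, every generated token must stream roughly all of the model's weights through memory once. The figures below (a 3-billion-parameter INT8 model, 68 GB/s of bandwidth) are hypothetical values chosen for illustration, not numbers from the talk.

```python
# Rough upper bound on decode throughput for a memory-bandwidth-bound
# LLM: each generated token reads approximately all weights once.
# Model size and bandwidth below are illustrative assumptions.

def max_tokens_per_sec(num_params: float, bytes_per_weight: float,
                       bandwidth_gb_s: float) -> float:
    """Bandwidth-limited ceiling on tokens generated per second."""
    weight_bytes = num_params * bytes_per_weight
    return bandwidth_gb_s * 1e9 / weight_bytes

# Hypothetical 3B-parameter INT8 model with ~68 GB/s of memory bandwidth:
print(f"{max_tokens_per_sec(3e9, 1.0, 68.0):.1f} tokens/s")  # roughly 22.7
```

Doubling compute does nothing to this ceiling; only more bandwidth or a smaller weight footprint raises it, which is the core argument for matching memory technology to the model.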

In this talk, Florentino explores how to choose the right memory to break the performance bottleneck in edge AI systems. He also highlights recent memory technology developments that are enabling higher memory performance and capacity at the edge.

See here for a PDF of the slides.

Navigating the Future: How Avnet is Addressing Challenges in AMR Design https://www.edge-ai-vision.com/2024/04/navigating-the-future-how-avnet-is-addressing-challenges-in-amr-design/ Tue, 30 Apr 2024 12:50:43 +0000 https://www.edge-ai-vision.com/?p=47656 This blog post was originally published at Avnet’s website. It is reprinted here with the permission of Avnet. Autonomous mobile robots (AMRs) are revolutionizing industries such as manufacturing, logistics, agriculture, and healthcare by performing tasks that are too dangerous, tedious, or costly for humans. AMRs can navigate complex and dynamic environments, communicate with other devices […]

The post Navigating the Future: How Avnet is Addressing Challenges in AMR Design appeared first on Edge AI and Vision Alliance.

This blog post was originally published at Avnet’s website. It is reprinted here with the permission of Avnet.

Autonomous mobile robots (AMRs) are revolutionizing industries such as manufacturing, logistics, agriculture, and healthcare by performing tasks that are too dangerous, tedious, or costly for humans. AMRs can navigate complex and dynamic environments, communicate with other devices and systems, and make intelligent decisions based on sensor data and machine learning.

At the heart of these capabilities lie sophisticated electronic components. High-performance processors, such as microcontrollers and microprocessors, provide the computational power necessary for real-time decision-making and navigation. Advanced sensors and image processing enable perception and environmental awareness, allowing AMRs to detect obstacles, navigate paths, and interact safely with their surroundings. High-performance memory ensures that critical data is collected and stored quickly and securely.

Additionally, antennas for wireless communication, such as Wi-Fi, Bluetooth, and cellular connectivity, facilitate seamless interaction with other devices, systems, and the cloud, enabling remote monitoring, control, and fleet management.

However, designing AMRs is not an easy feat, as it involves many technical and business challenges that require a holistic and integrated approach. In this article, we will discuss some of the key challenges of designing AMRs and how Avnet, a leading global technology solutions provider, can help you overcome them with its end-to-end ecosystem of products and services.

Connectivity/Interoperability

One of the main challenges of designing AMRs is ensuring that they can connect and interoperate with other devices and systems, such as smart home appliances, electric vehicles, cloud platforms, and industrial networks. This requires adopting common standards and protocols, ensuring compatibility and compliance, and managing security and privacy risks. Autonomous mobile robots use wireless communication technologies such as Wi-Fi, Bluetooth, LoRa, Zigbee, and cellular networks to communicate with each other and with other devices. These technologies enable robots to share data, coordinate their activities, and respond to changing conditions. Avnet brings the hardware components such as antennas, connectors, sensors, memory and of course processors that provide the connectivity you need. Avnet can also help you with testing and certification services and cloud and edge computing platforms, to ensure that your AMRs can seamlessly integrate with different ecosystems and applications.

Sensor networks

Autonomous mobile robots rely heavily on sensor networks to collect data about their environment and share it with other robots and devices.

These uses include localization and mapping: AMRs need to know their precise position within their environment, and Simultaneous Localization and Mapping (SLAM) algorithms use sensor data to build maps of the robot’s surroundings.

They also include obstacle detection and avoidance, in which vision-, light- and sound-based sensors analyze their readings to recognize obstacles, lane markings and other relevant features, keeping the robot on course and operating safely. Sensor networks further support environmental sensing, such as temperature and chemical monitoring, along with motion and speed monitoring and control.
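The obstacle-avoidance behavior described above can be sketched in a few lines: compare forward-facing range readings against a safety threshold and steer toward the side with more clearance. This is a deliberately minimal illustration; the sensor layout, threshold, and command names are made up for the sketch and bear no relation to any particular AMR stack.

```python
# Minimal range-based obstacle avoidance: stop going forward when any
# reading falls below a safety threshold, then turn toward the side
# with more clearance. Threshold and readings are illustrative.
SAFETY_DISTANCE_M = 0.5

def avoidance_command(ranges_m: list) -> str:
    """ranges_m: distance readings sweeping left-to-right ahead of the AMR."""
    if min(ranges_m) >= SAFETY_DISTANCE_M:
        return "forward"
    mid = len(ranges_m) // 2
    left, right = ranges_m[:mid], ranges_m[mid:]
    # Turn away from the more crowded side.
    return "turn_right" if sum(left) < sum(right) else "turn_left"

print(avoidance_command([1.2, 0.9, 0.3, 1.0]))  # obstacle right of center -> turn_left
```

A real system would fuse multiple sensor modalities and plan paths rather than react reading-by-reading, but the same data flow, from sensor network to decision, underlies both.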

Power Density

Space is often limited in robotic applications, but careful component selection, design and layout can make all the difference. Efficient power conversion, storage and delivery help ensure that higher power requirements can be met while reducing the overall size of the product.

Whether the design includes wireless charging, efficient motor control, or higher-power interconnect solutions, power density must be an early consideration in the overall design.

Reduced Cost

Cost reduction can take different forms, including lowering the overall system cost or the cost of ownership.

Cost of ownership shapes AMR designs through not only the initial purchase price but also ongoing costs such as maintenance, repairs, upgrades and energy consumption. Designers need to strike a balance between incorporating advanced features and maintaining affordability.

Overall system cost behaves much the same, because each component and layout decision affects the total.

In the early design phase, engineers must make trade-offs between cost and performance. For example, using high-end sensors and processors can enhance capabilities, but it will also drive up cost. Finding cost-effective components and using efficient design processes leads to lower cost of ownership as well as lower system cost.

Conclusion

Autonomous mobile robots are playing an ever-growing part in our daily lives, whether in inventory transportation, sorting and loading in factory and warehouse settings, cleaning and sanitation for medical applications, or package delivery and agricultural equipment. As these robots advance with more features and capabilities, they require higher power, faster processing, more accurate sensors and deeper data storage. Designing for these trends can be a challenge, and Avnet is uniquely positioned to meet it head-on with a deep line card of industry-leading suppliers, design support from factory-trained engineers, and a deep library of reference designs.

Jamie Pederson
Technical Content Manager, Avnet


On June 11 and 13, 2024, Alliance Member company Avnet, along with partner technology suppliers such as fellow Alliance Members Advantech, Microchip Technology, Micron Technology and Renesas, will deliver the webinar series “The Future of Mobile Autonomous Robots“. For more information and to register for one, multiple or all of the webinars, visit the main event page.
