Processors - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/technologies/processors/
Designing machines that perceive and understand.

How Lenovo is scaling Level 4 autonomous robotaxis on Arm
https://www.edge-ai-vision.com/2026/02/how-lenovo-is-scaling-level-4-autonomous-robotaxis-on-arm/
Fri, 20 Feb 2026 09:00:53 +0000

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm.

As L4 robotaxis shift from pilot to production, Arm offers the compute foundation needed to deliver end-to-end physical AI that scales across vehicle fleets.

After years of autonomous driving pilots and controlled trials, the automotive industry is moving toward the production-scale deployment of Level 4 (L4) robotaxis. This marks a significant moment for artificial intelligence (AI), as it moves from advising humans on recommended actions to enabling vehicles that perceive their environment and act on it autonomously, although it comes with a steep increase in technical demands.

Compared with today’s advanced L2++ vehicles, L4 systems typically require a broader sensor stack, including LiDAR, cameras, and radar, which drives data processing requirements from roughly 25GB per hour to as much as 19TB per hour. This has forced a fundamental rethink of compute for physical AI.
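
As a rough sanity check on those figures, the hourly volumes translate to sustained rates as follows (a back-of-the-envelope sketch; the 25GB and 19TB figures are from the text above, and the per-second arithmetic is our own):

```python
# Back-of-the-envelope conversion of the article's hourly data volumes into
# sustained per-second rates. The 25 GB/hour and 19 TB/hour inputs come from
# the text; everything else is simple unit arithmetic.

def per_second_rate(bytes_per_hour: float) -> float:
    """Convert an hourly data volume to a per-second rate in MB/s."""
    return bytes_per_hour / 3600 / 1e6

L2_BYTES_PER_HOUR = 25e9    # 25 GB/hour (advanced L2++ vehicle, per the article)
L4_BYTES_PER_HOUR = 19e12   # 19 TB/hour (L4 robotaxi, per the article)

l2_rate = per_second_rate(L2_BYTES_PER_HOUR)      # ~6.9 MB/s sustained
l4_rate = per_second_rate(L4_BYTES_PER_HOUR)      # ~5278 MB/s (~5.3 GB/s)
growth = L4_BYTES_PER_HOUR / L2_BYTES_PER_HOUR    # 760x increase

print(f"L2++: {l2_rate:.1f} MB/s, L4: {l4_rate:.0f} MB/s ({growth:.0f}x)")
```

The jump from megabytes to gigabytes per second of sustained sensor data is what makes the centralized, high-bandwidth compute architecture described below necessary.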

To that end, Lenovo has developed the L4 Autonomous Driving Domain Controller AD1, a production-ready autonomous driving computing platform powered by dual Arm-based NVIDIA DRIVE AGX Thor chips. WeRide is deploying the platform in its GXR Robotaxi, the world’s first mass-produced L4 autonomous vehicle.

Inside Lenovo AD1

The Lenovo AD1 serves as the central brain inside the GXR Robotaxi, managing functions ranging from perception, prediction, and trajectory planning to real-time motion control and safety monitoring. The platform is designed for production-grade L4 autonomy in robotaxis and other autonomous vehicles. Supporting over 2,000 TOPS of AI capacity, it enables dense perception, prediction, and planning models to run simultaneously for faster, better decision-making on the roads.

For robotaxis, a collection of loosely coupled electronic control units (ECUs) cannot deliver the latency, safety, or scalability L4 requires; centralized, high-performance compute platforms are needed instead. AD1 is therefore powered by NVIDIA DRIVE AGX Thor, a centralized car computer built on the Arm Neoverse V3AE CPU, which brings previously separate driving, parking, cockpit, and monitoring functions into one compute domain.

Efficiency, safety, and foundation for physical AI

Arm serves as the foundational compute architecture of the NVIDIA DRIVE AGX Thor platform, enabling advanced computing capabilities that power Lenovo’s AD1 platform.

  1. Performance per watt for fleet economics: As robotaxis operate for extended hours in demanding, dense urban environments, the Arm compute platform delivers server-class performance within a highly efficient power envelope, enabling large AI workloads without compromising vehicle battery life or thermal design.
  2. A safety-ready architecture: The Arm ecosystem – including functional-safety-capable technologies, toolchains, software solutions, and long-established automotive partners – supports platforms designed to meet ASIL-D and other global safety requirements, a critical factor for long-lived commercial deployments.
  3. A mature, scalable software ecosystem: Because Arm provides a unified architecture across cloud, edge, and physical environments, developers can build, optimize, and scale AI models using widely available software tools and frameworks.
  4. A roadmap aligned with future AI workloads: As physical AI models continue to grow in size and complexity, compute efficiency and architectural stability become increasingly important. By building on Arm, automakers gain a consistent architectural foundation with a long-term roadmap, avoiding future redesigns and keeping their compute strategy stable even as AI evolves.

The road to autonomy is being built on Arm

The deployment of Lenovo AD1 in WeRide’s GXR Robotaxis shows how physical AI in autonomous driving systems is moving beyond controlled pilots and into real, complex urban environments. As autonomous capabilities advance through L4 robotaxis and other autonomous vehicles, the industry is converging on platforms that deliver high performance, safety, and power-efficiency through a centralized architecture.

Arm sits at the core of this shift, providing the foundation that enables companies like Lenovo and WeRide to run dense AI workloads continuously, adapt to rapidly evolving models, and support fleets that must operate reliably for years. As robotaxis expand into new cities and global markets, the Arm compute platform – built for safety and engineered to meet the real-world demands of physical AI at scale – is a critical part of the road ahead.

What Does a GPU Have to Do With Automotive Security?
https://www.edge-ai-vision.com/2026/02/what-does-a-gpu-have-to-do-with-automotive-security/
Thu, 19 Feb 2026 09:00:05 +0000

This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies.

The automotive industry is undergoing the most significant transformation since the advent of electronics in cars. Vehicles are becoming software-defined, connected, AI-driven, and continuously updated. This evolution brings extraordinary new capability – but it also brings greater levels of cybersecurity and functional-safety risks.

The GPU, once only a graphics accelerator for infotainment screens, is now also a primary compute engine for safety-critical tasks like vehicle perception, driver monitoring, and camera stitching. The modern GPU is no longer a passive block in the SoC that you simply provision for; it is cyber-relevant, safety-relevant, and increasingly a point of focus for OEMs, Tier-1s, and safety assessors.

At Imagination Technologies, we believe customer-trusted platforms start with evidence-based, secure IP, certified to the relevant standards, enabling apples-to-apples comparisons with other products in the market. In this article we explore why GPUs have become relevant to automotive cybersecurity and the dual role that they play.

Cybersecurity and GPUs – who cares?

As vehicles converge with cloud services, AI, and IoT ecosystems, the attack surface grows significantly. Automotive platforms have evolved from isolated ECUs to domain and zonal controllers interconnected over high-bandwidth networks, running mixed-criticality workloads, and increasingly reliant on GPU-accelerated compute.

Today you’ll find automotive GPUs involved in AI perception and sensor-fusion workloads, neural-network inference, complex 3D interfaces, and real-time visualisation tools like surround-view cameras. The common theme across all of these is data, with different levels of value and sensitivity. And where valuable data flows, attackers normally follow.

The GPU as Both an Attack Surface—and a Defensive Asset

The duality of the GPU is one of the most important shifts in automotive compute.

The GPU as an Attack Surface

Increasingly, GPUs deal with challenges such as:

  • Side-channel leakage from massively parallel compute
  • Privilege escalation through GPU memory or scheduling
  • Manipulation of GPU-processed AI inputs
  • Fault injection or data corruption
  • Malicious workloads exploiting shared GPU pipelines

This is why any automotive GPU requires secure memory boundaries, robust virtualisation, privilege levels, and fault detection engineered directly into the architecture.

The GPU as a Security Accelerator

At the same time, GPUs are extremely efficient at handling a variety of algorithms for encryption and decryption, hashing, digital signing, key generation, and post-quantum cryptography. By offloading these tasks, GPUs can reduce CPU load and preserve the tight real-time constraints that are essential in modern automotive systems.
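
To make that offload concrete, here is a minimal CPU-side sketch of one such workload, hashing and verifying over-the-air update chunks, using only Python's standard library. On a real platform the digest computation is what would move to the GPU (via vendor compute APIs, not shown), and the manifest format here is purely hypothetical:

```python
# CPU-side sketch of a hashing workload of the kind described above. On a
# vehicle platform, the per-chunk digests would be computed on the GPU (or a
# dedicated crypto block) so the CPU stays free for real-time control tasks.
# The "manifest" here is a hypothetical stand-in for a signed update manifest.
import hashlib

def verify_update_chunk(chunk: bytes, expected_digest: str) -> bool:
    """Hash one update chunk and compare against the manifest's digest."""
    return hashlib.sha256(chunk).hexdigest() == expected_digest

chunk = b"firmware-segment-0001"
manifest_digest = hashlib.sha256(chunk).hexdigest()

assert verify_update_chunk(chunk, manifest_digest)             # intact chunk
assert not verify_update_chunk(chunk + b"x", manifest_digest)  # tampered chunk
print("chunk verification OK")
```

Each chunk verification is independent of the others, which is exactly why the workload parallelises well across a GPU's many cores.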

Functional Safety and Cybersecurity: Interlinked, Not Identical

Because it handles perception data, model execution, and visual outputs, a compromised GPU can indirectly influence safety-critical behaviour. For example, tampering with perception inputs can mislead ADAS decision-making.

Cybersecurity and functional safety reinforce each other, but they serve different purposes. All safety-critical functions rely on cybersecurity, because a cyber attack can force a system into a hazardous state. But not all cybersecurity events create immediate safety hazards: personal-data leakage, for example, is a serious security incident without a direct safety impact.

However, a compromised GPU can indirectly influence safety logic—especially in AI-based perception and decision-making systems. This makes it essential that ISO 26262 (functional safety) and ISO 21434 (cybersecurity) objectives are addressed together from concept through deployment.

Security as a Lifecycle Discipline: Imagination’s CSMS

Cybersecurity is not a bolt-on feature. It is a continuous discipline governed by a Cybersecurity Management System (CSMS) that spans threat analysis and risk assessment, secure design and architecture, secure coding and verification, vulnerability monitoring, incident response and supply-chain assurance. Imagination operates an externally certified CSMS, enabling our partners to build compliance arguments on top of a robust, audited foundation.

PowerVR GPU Security & Safety Features

Across our BXS and DXS GPU families, Imagination integrates a comprehensive set of hardware and architectural protections, including:

  • Memory protection and integrity checking
  • Hardware-based virtualisation for domain isolation
  • Privilege boundaries and secure task separation
  • Deterministic compute paths for safety-critical workloads
  • Fault detection and diagnostics, such as Tile Region Protection or Idle Cycle Stealing
  • Secure-boot integration and alignment with system-wide trust anchors

These features are backed by ISO 26262-certified safety documentation and – for future functionally safe products – by security documentation that accelerates customer development activities and assessments.

Importantly, some of our safety mechanisms also reinforce cybersecurity. For example, Tile Region Protection, originally designed to detect accidental data corruption in safety contexts, can also reveal abnormal access patterns characteristic of fault-injection or data-manipulation attacks. By monitoring unexpected behaviour at the hardware level, the GPU raises the difficulty of successfully executing low-level tampering attacks.
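
Conceptually, tile-level integrity checking works like the sketch below. This is an illustration of the general idea only, not Imagination's Tile Region Protection design; the CRC-based signature and the helper names are our own assumptions:

```python
# Conceptual sketch of per-tile integrity checking: a signature is recorded
# when each tile is produced, and any later mismatch flags the tile as
# corrupted, whether by a random fault or by deliberate tampering.
# NOT Imagination's actual design; purely illustrative.
import zlib

def tile_signature(tile: bytes) -> int:
    """CRC32 over a tile's pixel data, recorded when the tile is rendered."""
    return zlib.crc32(tile)

def check_tiles(tiles, expected_sigs):
    """Return indices of tiles whose contents no longer match their signature."""
    return [i for i, t in enumerate(tiles)
            if zlib.crc32(t) != expected_sigs[i]]

# "Render" three tiles and record their signatures.
tiles = [bytes([i] * 64) for i in range(3)]
sigs = [tile_signature(t) for t in tiles]

# Simulate corruption (a fault injection or a data-manipulation attack) of tile 1.
tiles[1] = bytes([0xFF] * 64)

print("corrupted tiles:", check_tiles(tiles, sigs))  # [1]
```

The same mechanism catches accidental bit-flips and malicious modification alike, which is the dual safety/security benefit described above.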

This dual benefit follows the duality explained earlier. That safety mechanisms strengthen the cybersecurity pedigree is a key advantage of integrating protection directly into the architecture rather than relying on external layers.

Conclusion

GPUs now sit at the heart of automotive compute—and therefore at the heart of automotive safety and cybersecurity. As perception, AI, and real-time visualisation become central to vehicle behaviour and driver interfaces, the GPU must evolve from a performance component into a certifiable, cyber-resilient compute engine.

At Imagination Technologies, we embed safety, security, lifecycle engineering, and certified processes directly into our GPU IP—providing OEMs and Tier-1s with the foundation to build secure, high-performance, real-time systems. To find out more about our solutions, reach out to the team and book a meeting.

Antonio Priore, Senior Director, Engineering – Product Safety and Security, Imagination Technologies

Ambarella to Showcase “The Ambarella Edge: From Agentic to Physical AI” at Embedded World 2026
https://www.edge-ai-vision.com/2026/02/ambarella-to-showcase-the-ambarella-edge-from-agentic-to-physical-ai-at-embedded-world-2026/
Wed, 18 Feb 2026 21:29:00 +0000

Enabling developers to build, integrate, and deploy edge AI solutions at scale

SANTA CLARA, Calif. — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced that it will exhibit at Embedded World 2026, taking place March 10-12 in Nuremberg, Germany. At the show, Ambarella’s theme, “The Ambarella Edge: From Agentic to Physical AI,” will anchor live demonstrations that highlight how Ambarella’s AI SoCs, software stack, and developer tools deliver a competitive advantage across a wide range of AI applications—from agentic automation and orchestration to physical AI systems deployed in real-world environments.

Ambarella’s exhibit will showcase a scalable AI SoC portfolio providing high AI performance per watt, complemented by a software platform that supports rapid development across diverse edge AI workloads, consistent performance characteristics, and efficient deployment at the edge. Live demos will feature differentiation at the stack-level, partner solutions, and developer workflows across robotics, industrial automation, automotive, edge infrastructure, security, and AIoT use cases.

“Developers are increasingly building AI applications that must operate under strict power, latency, and reliability constraints, while still delivering high levels of performance,” said Muneyb Minhazuddin, Customer Growth Officer at Ambarella. “Here, we are showing how Ambarella’s ecosystem—bringing together performance-efficient AI SoCs with a robust software stack, sample workflows, and engineering resources—accelerates the development of edge AI solutions for a wide range of vertical industry segments.”

Ambarella will also present its Developer Zone (DevZone), giving developers, partners, independent software vendors (ISVs), module builders, and system integrators hands-on access to software tools, optimized models, and agentic blueprints. Together, these elements make it easier for teams to integrate more efficiently and deploy at scale using Ambarella’s technology.

Ambarella’s exhibit will be located in Hall 5, Booth 5-355 at Embedded World 2026. To schedule a guided tour, please contact your Ambarella representative.

About Ambarella
Ambarella’s products are used in a wide variety of edge AI and human vision applications, including video security, advanced driver assistance systems (ADAS), electronic mirrors, telematics, driver/cabin monitoring, autonomous driving, edge infrastructure, drones and other robotics applications. Ambarella’s low-power systems-on-chip (SoCs) offer high-resolution video compression, advanced image and radar processing, and powerful deep neural network processing to enable intelligent perception, sensor fusion and planning. For more information, please visit www.ambarella.com.

Ambarella Contacts

  • Media contact: Molly McCarthy, mmccarthy@ambarella.com, +1 408-400-1466
  • Investor contact: Louis Gerhardy, lgerhardy@ambarella.com, +1 408-636-2310
  • Sales contact: https://www.ambarella.com/contact-us/

Right Sizing AI for Embedded Applications
https://www.edge-ai-vision.com/2026/02/right-sizing-ai-for-embedded-applications/
Tue, 03 Feb 2026 09:00:51 +0000

This blog post was originally published at BrainChip’s website. It is reprinted here with the permission of BrainChip.

We all know the AI revolution train is heading straight for the Embedded Station. Some of us are already in the driver’s seat, while others are waiting for the first movers to pave the way so we can become fast adopters. No matter where you are on this journey, one thing becomes clear: AI must adapt to the embedded application sandbox—not the other way around.

Embedded applications typically operate within a power envelope ranging from milliwatts to around 10 watts. For AI to be effective in many embedded markets, it must respect the power-performance boundaries of the application. Imagine your favorite device that you charge once a day. If adding embedded AI to a product means you now need to charge it every four hours, you are likely to stop using the product altogether.
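
The battery arithmetic behind that scenario is simple. In this sketch the capacity and power figures are hypothetical, chosen only to reproduce the "once a day" versus "every four hours" contrast described above:

```python
# Illustrative only: how an always-on AI workload can shrink charge intervals.
# The battery capacity and power figures below are hypothetical, chosen to
# mirror the "once a day" vs. "every four hours" scenario in the text.

def runtime_hours(battery_wh: float, avg_power_w: float) -> float:
    """Hours of operation for a given battery capacity and average draw."""
    return battery_wh / avg_power_w

BATTERY_WH = 4.8     # hypothetical wearable-class battery (4.8 Wh)
BASE_POWER_W = 0.2   # hypothetical baseline draw -> roughly a day per charge
AI_POWER_W = 1.0     # hypothetical extra draw from an always-on AI feature

before = runtime_hours(BATTERY_WH, BASE_POWER_W)              # ~24 h
after = runtime_hours(BATTERY_WH, BASE_POWER_W + AI_POWER_W)  # ~4 h

print(f"Without AI: {before:.0f} h per charge; with AI: {after:.0f} h")
```

A one-watt AI feature on a sub-watt device dominates the power budget, which is why the AI compute must fit the application's envelope rather than the other way around.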

This is where embedded AI fundamentally differs from cloud AI. In the cloud, adding more computations is often the default solution. But in embedded systems, the level of AI compute must be dictated by what the overall power and performance constraints allow. You can’t just throw more compute silicon at the problem.

There are two key approaches to scaling AI effectively for embedded applications:

1. Process Technology

At the foundational level, advanced process technologies like GlobalFoundries’ 22FDX+ with Adaptive Body Biasing offer a compelling solution. These transistors can deliver high performance during compute-intensive tasks while maintaining low leakage during idle or always-on modes. This dynamic adaptability ensures that the overall power-performance integrity of the application is preserved.

2. Alternative Compute Architectures

Emerging architectures like neuromorphic computing are gaining attention for their ability to run inference at a fraction of the power—and with lower latency—compared to traditional models. These ultra-low-power solutions are particularly promising for applications where energy efficiency is paramount and real-time response is also important.

BrainChip’s AKD1500 Edge AI co-processor, built on the GlobalFoundries 22FDX platform, demonstrates how neuromorphic design can make AI practical for the smallest and most power-sensitive devices. Powered by the company’s Akida™ technology, the chip uses an event-based approach, processing only when there is information, thereby avoiding the constant compute cycles that waste energy reading and writing to on-chip SRAM or off-chip DRAM in traditional AI systems. The co-processor performs event-based convolutions that leverage sparsity throughout the whole network, in both activation maps and kernels, significantly reducing computation power and latency by running as many layers as possible on the Akida™ fabric. The diagram below shows all the interfaces, as well as the 8-node Akida IP as the centerpiece of the AI co-processor.
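
The power advantage of event-based processing comes from skipping zero activations entirely. The sketch below illustrates the principle in plain Python; it is a conceptual model of sparsity-aware compute, not BrainChip's implementation, which performs this in silicon:

```python
# Conceptual sketch (not BrainChip's implementation): an event-based layer
# skips zero activations entirely, so compute scales with the number of
# events rather than with tensor size, as a dense matrix multiply does.

def dense_layer(acts, weights):
    """Dense pass: every activation/weight pair costs one multiply-add."""
    ops = 0
    out = [0.0] * len(weights[0])
    for i, a in enumerate(acts):
        for j, w in enumerate(weights[i]):
            out[j] += a * w
            ops += 1
    return out, ops

def event_layer(acts, weights):
    """Event-based pass: only nonzero activations trigger any work."""
    ops = 0
    out = [0.0] * len(weights[0])
    for i, a in enumerate(acts):
        if a == 0.0:   # no event -> no compute: the key to low power
            continue
        for j, w in enumerate(weights[i]):
            out[j] += a * w
            ops += 1
    return out, ops

# 90% sparse input, typical of event-based sensors and ReLU activations.
acts = [0.0] * 9 + [1.0]
weights = [[0.5, -0.5] for _ in range(10)]

dense_out, dense_ops = dense_layer(acts, weights)
event_out, event_ops = event_layer(acts, weights)
assert dense_out == event_out  # identical result, far fewer operations
print(f"dense: {dense_ops} ops, event-based: {event_ops} ops")
```

With 90% sparsity the event-based pass does a tenth of the work for the same output, and in hardware the skipped operations translate directly into energy and latency savings.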

The design further improves efficiency by handling data locally and using operations that cut power consumption dramatically. The result is a chip that delivers real-time intelligence while operating within just a few hundred milliwatts, making it possible to add AI features to wearables, sensors, and other AIoT devices that previously relied on the cloud for such capability.

The Akida low-cost, low-power AI co-processor solution offers a silicon-proven design that has already demonstrated critical performance metrics, substantially reducing risk for developers. With fully functional interfaces tested at operational speeds and proven interoperability across multiple MCU and MPU boards, the platform ensures seamless integration. The AKD1500 co-processor supports both power-conscious MCUs via SPI4 and high-performance MPUs through M.2 and PCIe interfaces, providing flexibility across many configurations. Enabling software development early with silicon prototypes accelerates time to market. Several customers have already advanced to prototype stages, validating the design’s maturity and readiness for deployment. As an example, Onsor Technologies’ Nexa smart glasses utilize the AKD1500 for low power inference to predict epileptic seizures, providing quality-of-life benefits for those suffering from epilepsy.

Best of all, the AKD1500 can be used with any existing low-cost MCU that has an SPI interface, or with an applications processor over PCIe where higher performance is needed. With such MCUs available today, adding the AKD1500 AI co-processor makes for a very short time to market.

Final Thoughts

As AI sweeps across the length and breadth of the embedded space, right-sizing becomes not just a technical necessity but a strategic imperative. The goal isn’t to fit the biggest model into the smallest device – it’s to fit the right model into the right device, with the right balance of performance, power, and user experience.


Anand Rangarajan
Director, End Markets, GlobalFoundries

Todd Vierra
Vice President, Customer Engagement, BrainChip

Robotics Builders Forum offers Hardware, Know-How and Networking to Developers
https://www.edge-ai-vision.com/2026/01/robotics-day-offers-hardware-know-how-and-networking-to-developers/
Thu, 29 Jan 2026 14:00:56 +0000

On February 25, 2026 from 8:30 am to 5:30 pm ET, Advantech, Qualcomm, and Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network, will present the Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, while Edge Impulse is a subsidiary of Qualcomm.

Here’s the description, from the event registration page:

Overview

Exclusive in-person event: get practical guidance, platform roadmap & hands-on experience to accelerate compute & AI choices for your robot

Join us for an exclusive, in-person Robotics Day/ Builders Forum built for engineers and product teams developing AMRs, humanoids, and industrial robotics applications. Co-hosted with Arrow, Qualcomm, Edge Impulse and Advantech, and supported by ecosystem partners, the event delivers practical guidance on choosing compute platforms, integrating vision and sensors, and accelerating AI development from prototype to deployment.

What to expect

  • Expert keynotes on robotics platform trends, roadmap considerations, and rugged edge deployment
  • Live demo showcase with real hardware and end-to-end solution workflows you can evaluate firsthand
  • Three technical breakout tracks with deep dives on compute, vision and perception, and AI software optimization
  • High-value networking with peer robotics builders, plus direct access to industry leaders, solution architects, and partner technical teams

You’ll leave with clearer platform direction, implementation best practices, and trusted connections for follow-up technical discussions and next-step evaluations. Attendance is limited to keep conversations focused and interactive.

To close the day, we will host a Connections Mixer at the Sky Lounge featuring a brief wrap-up and a raffle. This casual networking hour is designed to help attendees connect with peers, speakers, and solution teams in a relaxed setting. Sponsored by D3 Embedded.

This event is free and designed for professionals building or evaluating robotics and AMR solutions, including robotics and AMR product managers, system architects and embedded engineers, industrial automation R&D leaders, perception and vision engineers, and operations and engineering directors. We also welcome professionals tracking the latest robotics trends and platform direction.

Invitation-only access

Click Get ticket and complete the Event Registration form to apply for a free ticket. Event hosts will review submissions and email confirmed invitations (with an event code) to qualified attendees. Please present your ticket at reception to receive your full-day conference badge.

Location

Wyndham Grand Pittsburgh Downtown
600 Commonwealth Place
Pittsburgh, PA 15222

Agenda

08:30 AM – 09:00 AM – Breakfast & Connections Kickoff

09:00 AM – 09:15 AM – Opening Remarks & Day Overview 

09:15 AM – 09:45 AM – Keynote 1: Global Robotics Trends and How You Can Take Advantage (sponsored by Arrow) 

09:45 AM – 10:30 AM – Keynote 2: Utilizing Dragonwing for Industrial Arm-Based Robotics Solutions (sponsored by Qualcomm, Edge Impulse)

10:30 AM – 11:00 AM – Keynote 3: Ruggedizing Robotics Solutions for Mobility and Harsh Environments (sponsored by Advantech) 

11:00 AM – Break 

11:15 AM – 11:45 AM – Keynote 4: Selecting the Proper Cameras and Sensors for AI-Assisted Perception (sponsored by D3 Embedded) 

11:45 AM – 12:45 PM – Lunch 

12:45 PM – 03:30 PM – Three Breakout Rotations (45 min each with breaks) 

Track A: Building Out a Full-Scale Humanoid Robot from a Hardware Perspective
Track B: Leveraging Software Solutions to Get the Most Out of Your Processor
Track C: Designing and Integrating Machine Vision Solutions for AMRs and Humanoids

03:30 PM – 05:30 PM – Connections Mixer at Sky Lounge (sponsored by D3 Embedded)

To register for this free event, please see the event page.

NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions
https://www.edge-ai-vision.com/2026/01/nanoxplore-and-stmicroelectronics-deliver-european-fpga-for-space-missions/
Wed, 28 Jan 2026 17:00:04 +0000

Key Takeaways:
  • NanoXplore’s NG-ULTRA FPGA becomes the first product qualified to new European ESCC 9030 standard for space applications
  • The product leverages a supply chain fully based in the European Union, from design to manufacturing and test, and delivered by ST
  • Its advanced digital capability enables European customers to develop higher performance, more competitive satellites and space missions

NanoXplore, the European leader in the design of SoC FPGA and radiation-hardened FPGA technologies, and STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, announce the qualification of NG-ULTRA for space applications. This radiation-hardened SoC FPGA has been designed specifically for space applications, including low- and medium-earth orbit constellations, and is set to be used in numerous satellite equipment systems, including flagship missions such as Galileo, Copernicus, and potentially IRIS².

First product certified to ESCC 9030 for the European New Space industry

This qualification marks a major industrial and technological milestone for the European space ecosystem: NG-ULTRA is the first product qualified to ESCC 9030, a new European standard dedicated to high-performance microcircuits flip-chipped on organic substrates or in plastic packages. This standard delivers the reliability required for space applications while enabling a transition away from traditional ceramic-packaged solutions – well suited for deep space but heavier and more expensive – marking a key step forward for constellations and higher-volume missions.

The “new space” dynamic (constellations, Low and Medium Earth Orbits, higher volumes) is transforming requirements for onboard digital equipment and driving a shift in scale: there is a simultaneous need for greater computing power, controlled power consumption, and contained costs compatible with large-scale deployments. NG-ULTRA addresses this challenge by enabling more data to be processed directly in orbit (edge computing), thereby limiting transmission bottlenecks between space and ground.
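
The downlink arithmetic behind in-orbit edge computing can be illustrated with a toy example. Real payloads use domain-specific image and video codecs; zlib on synthetic, highly redundant data simply shows how onboard compression trades compute for transmission bandwidth:

```python
# Toy illustration of processing data in orbit before downlink. Real missions
# use domain-specific codecs on real imagery (which compresses far less than
# this synthetic frame); the point is only the bandwidth arithmetic.
import zlib

# Synthetic, highly redundant sensor frame: 16 KiB of a repeating pattern.
raw_frame = b"\x00\x01\x02\x03" * 4096
compressed = zlib.compress(raw_frame, 6)  # onboard compression step

ratio = len(raw_frame) / len(compressed)
saved = 1 - len(compressed) / len(raw_frame)
print(f"raw {len(raw_frame)} B -> {len(compressed)} B "
      f"({ratio:.0f}:1, {saved:.0%} less downlink)")
```

Every byte removed in orbit is a byte that never contends for the space-to-ground link, which is the transmission bottleneck the article describes.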

NG-ULTRA targets strategic functions such as on-board computers, data management and routing between sub-systems, image and video processing (real-time compression and encoding), Software Defined Radio (SDR) – enabling remote evolution of communication modes, and onboard autonomy (detection, recognition, supervision).

A secure, European supply chain

Beyond performance, this program embodies a strategic ambition to secure a sovereign and sustainable European supply chain for long-duration missions by reducing critical dependencies. For NG-ULTRA, the industrial framework combines design, manufacturing, assembly, and testing capabilities across European sites, with the aim of reconciling competitiveness, volume production, and space-grade reliability.

In addition to its own R&D and design centers in Paris, Grenoble, and Montpellier, NanoXplore leverages various STMicroelectronics facilities in Europe, including the Grenoble R&D and design center, the 300mm digital fab in Crolles, the space-specialist packaging facility in Rennes (France), the test and reliability sites in Grenoble (France) and Agrate (Italy), and additional redundant qualified sites in Europe.

Technical specifications

With an “all-in-one” SoC (System on Chip) architecture designed specifically for platform and onboard computing applications, NG-ULTRA combines a multi-core processor with programmable hardware on a single chip. This architecture allows for greater design agility, reduces electronic board complexity and component count, and optimizes latency, mass, and power consumption.

NG-ULTRA is built on STMicroelectronics’ 28nm FD-SOI digital technology platform, recognized for its energy efficiency, resistance to space radiation, and advanced architecture features. Combined with a unique radiation-hardening technology, NG-ULTRA is built to survive the thermal cycles, shocks, and vibrations of launch and long-term orbital life, ensuring best-in-class performance and durability in the harsh space environment throughout the mission lifetime.

The NG-ULTRA has been designed to operate reliably in harsh radiation environments, offering a Total Ionizing Dose (TID) tolerance of up to 50 krad (Si) to ensure long-term performance. It also demonstrates strong resilience to single-event effects, with Single Event Latch-up (SEL) immunity tested up to 65 MeV·cm²/mg and Single Event Upset (SEU) immunity validated for Linear Energy Transfer (LET) levels exceeding 60 MeV·cm²/mg.

NG-ULTRA integrates a full SoC based on a quad-core Arm® Cortex®-R52 and provides high computational capability (537k LUTs + 32 Mb RAM) to address the most complex onboard computer requirements.

Its streamlined architecture drastically reduces PCB complexity and system mass—two of the most critical constraints in space design. By minimizing the component count, the NG-ULTRA simultaneously lowers total power consumption and project costs while increasing overall system reliability.

In addition, the SRAM-based architecture of the NG-ULTRA enables an adaptive hardware approach, allowing for unlimited on-orbit reconfiguration. This “hardware-as-software” flexibility allows operators to update functionality post-launch, adapt to evolving communication standards, or optimize the chip for different mission phases. The NG-ULTRA thus provides a future-proof platform that extends the operational relevance of assets long after they leave the launchpad.

To facilitate adoption, NG-ULTRA is also available as an evaluation kit – a complete prototyping platform that allows developers to rapidly validate performance and interfaces, reduce integration risks, and accelerate software and onboard logic development prior to flight-board production.

About NanoXplore

NanoXplore is a French fabless company designing radiation-hardened FPGA components for high-reliability environments, specifically space and avionics. The company recently launched the NG-ULTRA, the world’s most advanced radiation-hardened FPGA SoC. With an international presence, NanoXplore is the European leader in the design and development of SoC FPGA technologies and a key partner to the major players in the aerospace sector.

About STMicroelectronics

At ST, we are 50,000 creators and makers of semiconductor technologies mastering the semiconductor supply chain with state-of-the-art manufacturing facilities. An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of cloud-connected autonomous things. We are on track to be carbon neutral in all direct and indirect emissions (scopes 1 and 2), product transportation, business travel, and employee commuting emissions (our scope 3 focus), and to achieve our 100% renewable electricity sourcing goal by the end of 2027. Further information can be found at www.st.com.

The post NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions appeared first on Edge AI and Vision Alliance.

Voyager SDK v1.5.3 is Live, and That Means Ultralytics YOLO26 Support https://www.edge-ai-vision.com/2026/01/voyager-sdk-v1-5-3-is-live-and-that-means-ultralytics-yolo26-support/ Tue, 27 Jan 2026 21:32:41 +0000 https://www.edge-ai-vision.com/?p=56648 Voyager v1.5.3 dropped, and Ultralytics YOLO26 support is the big headline here. If you’ve been following Ultralytics’ releases, you’ll know Ultralytics YOLO26 is specifically engineered for edge devices like Axelera’s Metis hardware. Why Ultralytics YOLO26 matters for your projects: The architecture is designed end-to-end, which means no more NMS (non-maximum suppression) post-processing. That translates to simpler deployment and […]

The post Voyager SDK v1.5.3 is Live, and That Means Ultralytics YOLO26 Support appeared first on Edge AI and Vision Alliance.

Voyager v1.5.3 dropped, and Ultralytics YOLO26 support is the big headline here. If you’ve been following Ultralytics’ releases, you’ll know Ultralytics YOLO26 is specifically engineered for edge devices like Axelera’s Metis hardware.

Why Ultralytics YOLO26 matters for your projects:

The architecture is designed end-to-end, which means no more NMS (non-maximum suppression) post-processing. That translates to simpler deployment and genuinely faster inference – Ultralytics reports up to 43% speed improvements on CPUs compared to previous versions. For anyone running projects on Orange Pi, Raspberry Pi, or similar setups, that’s a nice boost.
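For context, here’s a minimal pure-Python sketch of the classic greedy NMS post-processing step that end-to-end models like YOLO26 are designed to eliminate (the boxes and scores below are made-up illustration data, not real model output):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

# Two overlapping detections of one object, plus one distinct detection.
boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]: the near-duplicate box 1 is suppressed
```

Every detection an NMS-based model emits has to pass through a loop like this on the host CPU; dropping it is part of why an end-to-end design simplifies deployment on small boards.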

Small object detection also gets a nice bump thanks to ProgLoss and STAL improvements. If you’re working on anything that needs to catch smaller details (maybe retail analytics, inspection systems, drone footage analysis), this should be super interesting.

Ultralytics YOLO26 comes in n/s/m/l flavours across all the usual tasks: detection, segmentation, pose estimation, oriented bounding boxes, and classification. Good options for the speed vs. accuracy tradeoff based on your hardware and use case.

Bug fixes and stability improvements:

Beyond Ultralytics YOLO26, this release cleans up several issues from v1.5.2. Resource leaks in GStreamer and AxInferenceNet pipelines are fixed, segmentation faults when recreating pipelines with trackers are sorted, and there’s better performance for cascaded pipelines with secondary models.

If you’ve got systems with multiple Metis devices, there’s also a deadlock fix for setups with more than eight of them.

Get it now:

Head over to the usual spots to grab v1.5.3. If you’re already running projects on earlier versions, the stability fixes alone make this a welcome update.

Free Webinar Highlights Compelling Advantages of FPGAs https://www.edge-ai-vision.com/2026/01/free-webinar-highlights-compelling-advantages-of-fpgas/ Mon, 26 Jan 2026 22:36:11 +0000 https://www.edge-ai-vision.com/?p=56570 On March 17, 2026 at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free hour webinar “Why your Next AI Accelerator Should Be an FPGA,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page: Edge AI system developers often […]

The post Free Webinar Highlights Compelling Advantages of FPGAs appeared first on Edge AI and Vision Alliance.

On March 17, 2026 at 9 am PT (noon ET), Efinix’s Mark Oliver, VP of Marketing and Business Development, will present the free one-hour webinar “Why your Next AI Accelerator Should Be an FPGA,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page:

Edge AI system developers often assume that AI workloads require a GPU or NPU. But when cost, latency, complex I/O or tight power budgets dominate, FPGAs offer compelling advantages.

In this talk we’ll explore how FPGAs serve not just as a compute block, but as a system-integration and acceleration platform that can combine tailored sensor I/O, signal processing, pre/post-processing and neural inference on one device.

We’ll also show how to map AI models onto FPGAs without doing custom hardware design, using two practical on-ramps: (1) a software-first flow that generates custom instructions callable from C, and (2) a turnkey CNN acceleration block.

Using representative embedded-vision workloads, we’ll show apples-to-apples benchmarks. Attendees will leave with a decision checklist and a concrete “first experiment” plan.

Mark Oliver is an industry veteran with extensive experience in engineering, applications, and marketing. A native of the UK, Mark gained a degree in Electrical and Electronic Engineering from the University of Leeds. During a ten-year tenure with Hewlett Packard, he managed Engineering and Manufacturing functions in HP divisions in both Europe and the US before heading up Product Marketing and Applications Engineering at a series of video-related startups. Prior to joining Efinix, Mark was Director of Worldwide Storage Accounts at Marvell, heading up Marketing and Business Development activities.

To register for this free webinar, please see the event page. For more information, please email webinars@edge-ai-vision.com.

Meet MIPS S8200: Real-Time, On-Device AI for the Physical World https://www.edge-ai-vision.com/2026/01/meet-mips-s8200-real-time-on-device-ai-for-the-physical-world/ Mon, 26 Jan 2026 14:00:17 +0000 https://www.edge-ai-vision.com/?p=56621 This blog post was originally published at MIPS’s website. It is reprinted here with the permission of MIPS. Physical AI is the ability for machines to sense their environment, think locally, act safely, and communicate quickly without waiting on the cloud. In safety-critical scenarios like driver assistance or industrial robotics, milliseconds matter. That’s why MIPS’ […]

The post Meet MIPS S8200: Real-Time, On-Device AI for the Physical World appeared first on Edge AI and Vision Alliance.

This blog post was originally published at MIPS’s website. It is reprinted here with the permission of MIPS.

Physical AI is the ability for machines to sense their environment, think locally, act safely, and communicate quickly without waiting on the cloud. In safety-critical scenarios like driver assistance or industrial robotics, milliseconds matter. That’s why MIPS’ edge-first approach focuses on ultra-low latency, low power, and cost-efficient inference delivered by its Atlas portfolio—and specifically the S8200 “Think” subsystem.

What is the MIPS S8200 software-first neural processing unit?

MIPS S8200 is a scalable, RISC-V–based NPU designed for autonomous edge platforms. It combines tightly coupled AI engines with RISC-V application cores to accelerate both vector and matrix workloads, supports modern frameworks (PyTorch, TensorFlow), and scales from tens to hundreds of TOPS via coherent cluster tiling, while targeting higher TOPS/W efficiency than legacy architectures for edge deployments. In the MIPS Atlas portfolio, MIPS S8200 is the decision engine that enables multi-modal inference on device. MIPS positions S8200 under the “Think” pillar of the “Sense, Think, Act, Communicate” workload model, so customers can build complete physical-AI stacks with predictable latency and safety.

Why on-device AI at the edge?

Sending sensor data to the cloud and waiting for inference increases latency, risks privacy, and consumes power, which is unacceptable when a vehicle must brake now, or a robot must intercept a falling object with human-like (or better) reflexes. On-device AI lets platforms react in milliseconds under tight thermal and battery constraints. From a systems perspective, dedicated NPUs deliver inference far more power-efficiently than GPUs while freeing general purpose processors for other tasks, ideal for battery or thermally-limited endpoints.

Key Use Cases Enabled by MIPS S8200

1) Automotive ADAS & Autonomous Perception (Front Camera + 360°)

Modern vehicles aggregate feeds from multiple cameras to build a bird’s-eye view (BEV) around the car. Leading models like BEVFormer1 fuse spatial and temporal cues with transformer architectures, enabling robust perception for lane structures, vehicles, and pedestrians—even in low visibility. S8200’s transformer-friendly design and vector/matrix acceleration help run BEVFormer-class workloads and concurrent tasks (e.g., drive policy) in parallel, meeting stringent latency budgets.

  • Front-camera ADAS: rapid detection/classification for forward collision warning, lane keeping, and traffic-signal understanding.
  • Full-surround perception: camera fusion to detect adjacent vehicles/pedestrians with faster-than-human reaction times.
  • Concurrent decision-making: drive policy modules run alongside perception to determine acceleration, braking, and lane changes.

2) Industrial Robotics & AMRs

Factories, warehouses, and mobile robots are evolving beyond fixed paths to human-interactive, task-adaptive behavior. These systems use vision-language-action (VLA) models: listening to natural language, understanding intent, locating the target, safely manipulating it with appropriate force or speed, and planning paths in real time. MIPS S8200 brings multi-modal inference to the edge so robots can operate autonomously without cloud round-trips, preserving privacy and uptime.

3) Healthcare, Agriculture, and Smart Manufacturing

MIPS S8200’s multi-modal capabilities enable diverse edge scenarios: predictive maintenance & quality control in smart factories; medical imaging assistance and monitoring at the point of care; precision farming (pest detection, crop monitoring) and autonomous implements. These are among the target verticals MIPS highlights for physical AI at the edge.

Open & Modular: Built for “Any Model, Past, Present, and Future”

Teams need freedom to optimize their models, and MIPS’ open approach leans on RISC-V (an open, extensible, instruction set architecture) so implementers can add custom instructions to benefit the workload (e.g., accelerating softmax in transformer attention) and co-design the software and hardware together. On the software side, MIPS embraces MLIR and the IREE ecosystem to modularize the compiler/runtime via dialects, making it easier to plug in optimizations, target diverse accelerators, and keep the toolchain transparent. MIPS Atlas Explorer lets teams model workloads, predict performance, and identify bottlenecks before hardware is fixed, allowing designers to prioritize use-case performance over raw TOPS.
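To illustrate the kind of hot spot such a custom instruction might target, here is a numerically stable softmax in plain Python: in transformer attention this small function runs once per attention row, which is why it is an attractive candidate for a fused hardware instruction. The example is generic, not a MIPS API.

```python
import math

def softmax(xs):
    """Numerically stable softmax: subtract the row max before
    exponentiating so large logits cannot overflow."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# One row of attention logits -> a probability distribution over keys.
row = [2.0, 1.0, 0.1]
probs = softmax(row)
print(sum(probs))  # the outputs always sum to 1
```

A custom instruction could fuse the max, exponentiation, and normalization passes into one operation, avoiding three trips over the row in software, which is exactly the kind of hardware/software co-design the RISC-V extension mechanism permits.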

Why S8200 for Product & Engineering Teams

  • Edge-first performance: deterministic latency for safety-critical actions in vehicles and robots.
  • Scalable efficiency: coherent cluster tiling from 10 TOPS to hundreds of TOPS.
  • Future-proof: designed to run convolutional and transformer workloads, including BEVFormer-class perception and VLA models without locking into proprietary stacks.
  • Open ecosystem: RISC-V + MLIR/IREE for customizable, transparent optimization pipelines.
  • Faster decisions: Atlas Explorer to de-risk design choices before tape-out and/or platform freeze.

The Bottom Line

As AI moves from cloud demos to real machines that navigate streets and factory floors, the winners will be platforms that sense-think-act at the edge. MIPS S8200 gives teams a practical path to deploy multi-modal, transformer-class AI locally—with the open tooling and simulation-first workflow engineers need to hit their latency, power, and safety targets. This shift also addresses a looming labor gap: U.S. manufacturing could face ~2.0–2.1M unfilled jobs2 by ~2030, increasing the need for automation that is safe, flexible, and easy to deploy – the autonomous edge with Physical AI built on MIPS.

Footnotes

1 – BEVFormer (ECCV 2022) arXiv: https://arxiv.org/abs/2203.17270

2 – Manufacturing labor gap (NAM/Deloitte): https://nam.org/2-1-million-manufacturing-jobs-could-go-unfilled-by-2030-13743/

The Next Platform Shift: Physical and Edge AI, Powered by Arm https://www.edge-ai-vision.com/2026/01/the-next-platform-shift-physical-and-edge-ai-powered-by-arm/ Mon, 26 Jan 2026 09:00:15 +0000 https://www.edge-ai-vision.com/?p=56597 This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm. The Arm ecosystem is taking AI beyond the cloud and into the real-world As CES 2026 opens, a common thread quickly emerges across the show floor: most of what people are seeing, touching, and experiencing is already built on Arm. Arm-based […]

The post The Next Platform Shift: Physical and Edge AI, Powered by Arm appeared first on Edge AI and Vision Alliance.

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm.

The Arm ecosystem is taking AI beyond the cloud and into the real world

As CES 2026 opens, a common thread quickly emerges across the show floor: most of what people are seeing, touching, and experiencing is already built on Arm. Arm-based platforms power the devices and systems behind the product and technology demos, including intelligent vehicles navigating complex environments, robots interacting with humans, and immersive XR devices blending the digital and physical worlds.

These demos mark a broader inflection point for AI as it becomes increasingly sophisticated, moving from perception to action in the real world. As NVIDIA CEO Jensen Huang put it in his CES 2026 keynote, “the ChatGPT moment for physical AI is here.” And it’s happening on Arm.

Built for the real world: Edge-first design and proven software ecosystem

As AI moves into the physical world it must operate under real-world constraints. This next phase is defined by systems that can respond instantly, run efficiently, and operate reliably in the physical world. That transition demands compute that is designed for predictable, low-latency performance, extreme power and thermal efficiency, and continuous local inference. Just as critical, safety and security must be foundational, not layered on after deployment.

This is where edge-first platforms become essential, with Arm uniquely positioned. Arm delivers both unmatched energy efficiency and the world’s largest software developer base, making it the natural platform for building and scaling physical and edge AI systems globally. From operating systems and middleware to AI frameworks and developer tools, partners like NVIDIA and Qualcomm have developed their technologies on Arm over decades. That maturity means innovation can move faster, scale more broadly, and deploy more safely as AI transitions from digital intelligence to physical intelligence in the real world.

The next frontier: AI that moves

At CES 2026, NVIDIA outlined its vision for robotics, with on-stage demos of robots powered by its new physical AI stack. NVIDIA unveiled open robot foundation models, simulation tools, and edge hardware – including Jetson Thor, which is built on Arm Neoverse – to accelerate AI that can reason, plan, and adapt in dynamic environments. Partners including Boston Dynamics, Caterpillar, LG Electronics, and NEURA Robotics showcased robots trained on NVIDIA’s full physical AI stack, which leverages the Arm compute platform and its deeply established software ecosystem spanning automotive, autonomous systems, and robotics.

Qualcomm is further advancing its robotics portfolio with the new Dragonwing IQ10 robotics processor for advanced use cases like industrial robots, autonomous mobile robots (AMRs), and humanoid systems. Qualcomm’s robotics portfolio runs on the Arm compute platform, delivering energy-efficient robots and physical AI at the edge.

These robotics announcements build on technologies pioneered across automotive, an industry that Arm has enabled for decades. Much like robots, AI systems in vehicles already sense their environment, make split-second decisions, and act safely in the physical world. As robotics evolves, it will increasingly mirror the complexity, safety requirements, and system architecture of modern vehicles. Many of the companies shaping the future of automotive – Rivian among them – will also design the robots of tomorrow. With the entire automotive industry already building on Arm, the transition from cars to robots is a natural one.

In automotive at CES 2026, NVIDIA debuted its DRIVE AV software in the all-new Mercedes-Benz CLA. The AV stack’s in-vehicle compute and Hyperion architecture are powered by the Arm Neoverse-based NVIDIA DRIVE AGX Thor. Meanwhile, Qualcomm’s Snapdragon Digital Chassis continues to expand and is now adopted by global automakers transitioning to AI-defined vehicles. These platforms are built on Arm’s compute efficiency and consistent software ecosystem across infotainment, advanced driver assistance systems (ADAS), and in-vehicle AI.

Scaling intelligence from edge to cloud

Beyond robotics and automotive, we’re continuing to see momentum for Arm-based platforms both in the cloud and at the edge.

NVIDIA’s new Vera Rubin AI platform includes six new chips, two of which – Vera and BlueField-4 – are built on Arm. BlueField-4, a DPU powered by the Arm Neoverse V2-based Grace CPU, delivers up to six times the compute performance of its predecessor, transforming the DPU’s role in rack-scale inference and enabling new optimizations such as an AI-inference-specific storage solution.

At the developer level, NVIDIA is pushing the frontier with powerful local AI systems. Developers can take advantage of the latest open and frontier AI models on a local deskside system, from 100-billion-parameter models on DGX Spark to 1-trillion-parameter models on DGX Station. Both platforms are powered by the Arm-based Grace Blackwell architecture, delivering petaflop-class performance and enabling seamless development that can scale from desk to data center.

On the personal computing front, the Windows on Arm AI PC portfolio is expanding into the mainstream, enabling OEMs to scale solutions to the mass market, extend battery life, and close the gap with legacy x86 systems.

Arm is the compute foundation powering CES 2026

What connects NVIDIA, Qualcomm, and a global ecosystem of innovators? Arm’s scalable, energy-efficient architecture.

CES 2026 is already demonstrating that the Arm compute platform powers data centers, robots, vehicles and countless edge devices, including:

  • NVIDIA’s accelerated platforms, from cloud to edge;
  • Qualcomm’s mobile, AI PC, XR/Wearables, and automotive systems; and
  • Nuro’s driverless fleets and Uber’s cloud infrastructure.

A prime example is the Nuro-Lucid-Uber partnership. Nuro’s latest driverless platform, built on the Arm Neoverse platform, enables efficient, real-time edge AI in autonomous Lucid Gravity SUVs. These vehicles, featuring NVIDIA DRIVE Thor and Arm Neoverse V3AE, deliver Level 4 autonomy with safety-critical reliability. Uber, meanwhile, is scaling on Arm-based Ampere servers to lower power use while increasing cloud density, illustrating Arm’s pivotal role from cloud to car.

Why ecosystem scale wins

CES 2026 sends a clear message: AI is now becoming embedded in the world around us. Making the physical and edge AI era a reality isn’t about individual chips or product launches; it requires full-stack ecosystem scale. This means:

  • Software portability across devices;
  • Developer familiarity and productivity;
  • Long product lifecycles with stable platforms; and
  • Standards-based innovation across industries.

The next platform shift isn’t defined by model size, but by intelligence that can operate autonomously, adapt in real time, and scale efficiently from cloud to edge. It’s about systems that are designed from day one to learn continuously, distribute decision-making, and perform within real-world constraints.

Arm provides the common compute foundation that makes this possible – trusted, scalable, and optimized for efficiency. That’s why Arm shows up everywhere at CES 2026 and wherever physical AI is taking shape.
