Tools - Edge AI and Vision Alliance

Ambarella to Showcase “The Ambarella Edge: From Agentic to Physical AI” at Embedded World 2026

Enabling developers to build, integrate, and deploy edge AI solutions at scale

SANTA CLARA, Calif. — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced that it will exhibit at Embedded World 2026, taking place March 10-12 in Nuremberg, Germany. At the show, Ambarella’s theme, “The Ambarella Edge: From Agentic to Physical AI,” will anchor live demonstrations that highlight how Ambarella’s AI SoCs, software stack, and developer tools deliver a competitive advantage across a wide range of AI applications—from agentic automation and orchestration to physical AI systems deployed in real-world environments.

Ambarella’s exhibit will showcase a scalable AI SoC portfolio providing high AI performance per watt, complemented by a software platform that supports rapid development across diverse edge AI workloads, consistent performance characteristics, and efficient deployment at the edge. Live demos will feature stack-level differentiation, partner solutions, and developer workflows across robotics, industrial automation, automotive, edge infrastructure, security, and AIoT use cases.

“Developers are increasingly building AI applications that must operate under strict power, latency, and reliability constraints, while still delivering high levels of performance,” said Muneyb Minhazuddin, Customer Growth Officer at Ambarella. “Here, we are showing how Ambarella’s ecosystem—bringing together performance-efficient AI SoCs with a robust software stack, sample workflows, and engineering resources—accelerates the development of edge AI solutions for a wide range of vertical industry segments.”

Ambarella will also present its Developer Zone (DevZone), giving developers, partners, independent software vendors (ISVs), module builders, and system integrators hands-on access to software tools, optimized models, and agentic blueprints. Together, these elements make it easier for teams to integrate more efficiently and deploy at scale using Ambarella’s technology.

Ambarella’s exhibit will be located in Hall 5, Booth 5-355 at Embedded World 2026. To schedule a guided tour, please contact your Ambarella representative.

About Ambarella
Ambarella’s products are used in a wide variety of edge AI and human vision applications, including video security, advanced driver assistance systems (ADAS), electronic mirrors, telematics, driver/cabin monitoring, autonomous driving, edge infrastructure, drones and other robotics applications. Ambarella’s low-power systems-on-chip (SoCs) offer high-resolution video compression, advanced image and radar processing, and powerful deep neural network processing to enable intelligent perception, sensor fusion and planning. For more information, please visit www.ambarella.com.

Ambarella Contacts

  • Media contact: Molly McCarthy, mmccarthy@ambarella.com, +1 408-400-1466
  • Investor contact: Louis Gerhardy, lgerhardy@ambarella.com, +1 408-636-2310
  • Sales contact: https://www.ambarella.com/contact-us/

Upcoming Webinar on CSI-2 over D-PHY & C-PHY

On February 24, 2026, at 9:00 am PST (12:00 pm EST), the MIPI Alliance will deliver the webinar “MIPI CSI-2 over D-PHY & C-PHY: Advancing Imaging Conduit Solutions.” From the event page:

MIPI CSI-2®, together with the MIPI D-PHY™ and C-PHY™ physical layers, forms the foundation of image sensor solutions across a wide range of markets, including smartphones, computing, automotive, robotics and beyond. This webinar will explore the latest CSI-2 feature developments and the continued evolution of MIPI’s low-energy, high-performance physical layer transport solutions, D-PHY and C-PHY, which leverage differential and ternary signaling, respectively.

Attendees will gain insight into recently adopted capabilities such as event-based sensing and processing, as well as D‑PHY embedded clock mode. The session will also cover near-term enhancements, including dual-PHY macro support and multi-drop bus capability, along with a forward-looking view of longer-term feature developments. By the close of the webinar, attendees will understand how MIPI imaging solutions are enabling next-generation computer and machine vision applications across a wide range of product ecosystems.

Featured Speakers:

  • Haran Thanigasalam, Chair of the MIPI Camera Working Group and Camera Interest Group
  • Raj Kumar Nagpal, Chair of the MIPI D-PHY Working Group
  • George Wiley, Chair of the MIPI C-PHY Working Group

For more information and to register, visit the event page.

What’s New in MIPI Security: MIPI CCISE and Security for Debug

This blog post was originally published at MIPI Alliance’s website. It is reprinted here with the permission of MIPI Alliance.

As the need for security becomes increasingly critical, MIPI Alliance has continued to broaden its portfolio of standardized solutions, adding two more specifications in late 2025 and continuing work on significant updates to the MIPI Camera Security Framework specifications slated for completion in mid-2026.

Read on to learn more about the newly released specifications and what lies ahead for the MIPI Camera Security Framework.

MIPI CCISE: Protecting Camera Command and Control Interfaces

The new MIPI Command and Control Interface Service Extensions (MIPI CCISE™) v1.0, released in December 2025, defines a set of security service extensions that can apply data integrity protection and optional encryption to the MIPI CSI-2® camera control interface based on the I2C transport interface. The protection is provided end-to-end between the image sensor and its associated SoC or electronic control unit (ECU).

MIPI CCISE rounds out the existing MIPI Camera Security Framework, which includes MIPI Camera Security v1.0, MIPI Camera Security Profiles v1.0 and MIPI Camera Service Extensions (MIPI CSE™) v2.0. Together, the specifications define a flexible approach to add end-to-end security to image sensor applications that leverage MIPI CSI-2, enabling authentication of image system components, data integrity protection, optional data encryption, and protection of image sensor command and control channels. The specifications provide implementers with a choice of protocols, cryptographic algorithms, integrity tag modes and security protection levels to offer a solution that is uniquely effective in both its security extent and implementation flexibility.

Use of MIPI camera security specifications enables an automotive system to fulfill advanced driver-assistance systems (ADAS) safety goals up to ASIL D level (per ISO 26262:2018) and supports functional safety and security mechanisms, including end-to-end protection as recommended for high diagnostic coverage of the data communication bus.

While the initial focus of the camera security framework was on securing long-reach, wired in-vehicle network connections between CSI-2 based image sensors and their related processing ECUs, the specifications are also highly relevant to non-automotive machine vision applications that leverage CSI-2-based image sensors.

A downloadable white paper, A Guide to the MIPI Camera Security Framework for Automotive Applications, provides a detailed explanation of how these specifications work together to provide application layer end-to-end data protection.

MIPI Security Specification for Debug: Enabling Remote Debug of Systems in the Field

The recently adopted MIPI Security Specification for Debug defines a standardized method for establishing secure, authenticated debug sessions between a debug and test system and a target system.

Designed to enable remote debugging in potentially hostile real-world locations outside of a test lab, the specification allows secure remote debugging of production devices without relying solely on traditional physical protections such as buried traces or restricted access to debug ports. Instead, it introduces a trusted, cryptographically protected communication path that spans end-to-end, from the physical debug tool to the target device’s package pins, through all connectors, cabling, routing and bridges.

The new specification adds a secure messaging layer to the existing MIPI debug architecture, wrapping debug traffic in encrypted, authenticated messages while remaining interface-agnostic. Core components include a secure communications manager responsible for the security protocol, data model processing and key generation; cryptographic message-protection functions; and secure communication management paths. To accomplish this, the specification leverages the DMTF Security Protocol and Data Model (SPDM) industry standard for platform security.

This approach ensures authenticity, confidentiality and integrity for all debug communications, regardless of the underlying transport interface, whether MIPI I3C®, USB, PCIe or others. Debugger behavior remains consistent across interfaces, simplifying implementation and validation.

The specification complements the broader MIPI debug ecosystem.

 

Coming in 2026: New “Fast Boot” Options for MIPI Camera Security

Enhancements to the suite of MIPI camera security specifications are being developed to enable faster boot times for imaging systems, minimizing the time taken from power-on to streaming of secure video data.

These enhancements will continue to leverage the DMTF SPDM framework and message formats, but will introduce an optional new security mode that halves the number of security handshake operations required to establish a secure video streaming channel compared with currently defined security modes. Image sensors will be able to implement both current and new modes of operation to provide backward compatibility, and SoCs may only require software updates to implement the new mode of operation.

Both the MIPI Camera Security and the MIPI Camera Security Profiles specifications are scheduled to be updated to v1.1 in mid-2026. However, the companion specifications that will fully enable the enhancements, MIPI CSE v2.1 and the new CSE Exchange Format (EF) v1.0, will follow later this year.

All security specifications are currently available only to MIPI Alliance members.

 

Ian Smith
MIPI Alliance Technical Content Consultant

Production-Ready, Full-Stack Edge AI Solutions Turn Microchip’s MCUs and MPUs Into Catalysts for Intelligent Real-Time Decision-Making

Chandler, Ariz., February 10, 2026 — A major next step for artificial intelligence (AI) and machine learning (ML) innovation is moving ML models from the cloud to the edge for real-time inferencing and decision-making applications in today’s industrial, automotive, data center and consumer Internet of Things (IoT) networks. Microchip Technology (Nasdaq: MCHP) has extended its edge AI offering with full-stack solutions that streamline development of production-ready applications using its microcontrollers (MCUs) and microprocessors (MPUs) – the devices that are located closest to the many sensors at the edge that gather sensor data, control motors, trigger alarms and actuators, and more.

Microchip’s products are long-time embedded-design workhorses, and the new solutions turn its MCUs and MPUs into complete platforms for bringing secure, efficient and scalable intelligence to the edge. The company has rapidly built and expanded its growing, full-stack portfolio of silicon, software and tools that solve edge AI performance, power consumption and security challenges while simplifying implementation.

“AI at the edge is no longer experimental—it’s expected, because of its many advantages over cloud implementations,” said Mark Reiten, corporate vice president of Microchip’s Edge AI business unit. “We created our Edge AI business unit to combine our MCUs, MPUs and FPGAs with optimized ML models plus model acceleration and robust development tools. Now, the addition of the first in our planned family of application solutions accelerates the design of secure and efficient intelligent systems that are ready to deploy in demanding markets.”

Microchip’s new full-stack application solutions for its MCUs and MPUs encompass pre-trained and deployable models as well as application code that can be modified, enhanced and applied to different environments. This can be done either through Microchip’s embedded software and ML development tools or those from Microchip partners. The new solutions include:

  • Detection and classification of dangerous electrical arc faults using AI-based signal analysis
  • Condition monitoring and equipment health assessment for predictive maintenance
  • Facial recognition with liveness detection supporting secure, on-device identity verification
  • Keyword spotting for consumer, industrial and automotive command-and-control interfaces

Development Tools for AI at the Edge
Engineers can leverage familiar Microchip development platforms to rapidly prototype and deploy AI models, reducing complexity and accelerating design cycles. The company’s MPLAB® X Integrated Development Environment (IDE) with its MPLAB Harmony software framework and MPLAB ML Development Suite plug-in provides a unified and scalable approach for supporting embedded AI model integration through optimized libraries. Developers can, for example, start with simple proof-of-concept tasks on 8-bit MCUs and move them to production-ready high-performance applications on Microchip’s 16- or 32-bit MCUs.

For its FPGAs, Microchip’s VectorBlox™ Accelerator SDK 2.0 AI/ML inference platform accelerates vision, Human-Machine Interface (HMI), sensor analytics and other computationally intensive workloads at the edge while also enabling training, simulation and model optimization within a consistent workflow.

Other support includes training and enablement tools like the company’s motor control reference design featuring its dsPIC® DSCs for data extraction in a real-time edge AI data pipeline, and others for load disaggregation in smart e-metering, object detection and counting, and motion surveillance. Microchip also helps solve edge AI challenges through complementary components that are required for product design and development. These include PCIe® devices that connect embedded compute at the edge and high-density power modules that enable edge AI in industrial automation and data center applications.

The analyst firm IoT Analytics stated in its October 2025 market report that embedding edge AI capabilities directly into MCUs is among the top four industry trends, enabling AI-driven applications “…that reduce latency, enhance data privacy, and lower dependency on cloud infrastructure.” Microchip’s AI initiative reinforces this trend with its MCU and MPU platform, as well as its FPGAs. Edge AI ecosystems increasingly require support for both software AI accelerators and integrated hardware acceleration on multiple devices across a range of memory configurations.

Availability
Microchip is actively working with customers of its full-stack application solutions, providing a variety of model training and other workflow support. The company is also working with multiple partners whose software provides developers with additional deployment-ready options. To learn more about Microchip’s edge AI offering and new full-stack solutions, visit www.microchip.com/EdgeAI. Additional information on each solution can be found at Microchip’s on-demand Edge AI Webinar Series, starting February 17.

About Microchip
Microchip Technology Inc. is a broadline supplier of semiconductors committed to making innovative design easier through total system solutions that address critical challenges at the intersection of emerging technologies and durable end markets. Its easy-to-use development tools and comprehensive product portfolio support customers throughout the design process, from concept to completion. Headquartered in Chandler, Arizona, Microchip offers outstanding technical support and delivers solutions across the industrial, automotive, consumer, aerospace and defense, communications and computing markets. For more information, visit the Microchip website at www.microchip.com.

Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit

This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments.

Introduction

Continuous innovation in high-performance, power-efficient systems-on-a-chip (SoCs) is enabling safer, smarter and more autonomous driving experiences in even more vehicles.

As another big step forward, Texas Instruments and Synopsys developed a Virtualizer Development Kit™ (VDK) for the TDA5 high-performance compute SoC family, which includes the TDA54-Q1. The TDA5 VDK enables developers to evaluate, develop and test devices in the TDA5 family ahead of initial silicon samples, providing a seamless development cycle with one software development kit (SDK) for both physical and virtual SoCs. Each device in the TDA5 family has a corresponding VDK to enable a common virtualization design and consistent user experience.

Along with the VDK, TI and Synopsys are providing additional components to create the full virtual development environment. Figure 1 provides an overview of available resources, which include:

  • The virtual prototype, which is the simulated model of a TDA5 SoC.
  • Deployment services from Synopsys, which are add-ons and interfaces that enable developers to integrate the VDK with other virtual components or tools.
  • Documentation for the TDA5 and the TDA54-Q1 software development kit.
  • Reference software examples for each TDA5 VDK and SDK to help developers get started.

Figure 1 Block diagram showing components provided by TI and Synopsys to get started with development on the VDK.

Why virtualization matters

Virtualization designs greatly reduce automotive development cycles by enabling software development without physical hardware. This allows developers to accelerate or “shift-left” development by starting software work earlier and then migrating to physical hardware once it becomes available (as shown in Figure 2). Additionally, earlier software development extends to ecosystem partners, enabling key third-party software components to be available earlier.

Figure 2 Visualization of how software can be migrated from VDK to SoC.

Accelerating development with virtualization

The TDA5 VDK helps software developers work more effectively and efficiently, allowing them to use software-in-the-loop testing, so they can test and validate virtually without needing costly on-the-road testing.

Developers can use the TDA5 VDK to enhance debugging capabilities with deeper insights into internal device operations than what is typically exposed through the physical SoC pins. The TDA5 VDK also provides fault injection capabilities, enabling developers to simulate failures inside the device to get better information on how the software behaves when something goes wrong.

Scalability of virtualization

Scalability is another key benefit of the TDA5 VDK because virtualization platforms don’t require shipping, allowing development teams to ramp faster and be more responsive with resource allocation for ongoing projects. The TDA5 VDK also enables automated test environments, since development teams can replace traditional “board farms” with virtual environments running on remote computers. This helps automakers streamline continuous integration/continuous deployment (CI/CD) workflows to accomplish testing more efficiently and effectively.

Since the TDA5 VDK is also available for future TDA5 SoCs, developers can scale work across multiple projects. If a developer is using the VDK for a specific TDA5 device (for example, TDA54), they can explore other products in the TDA5 family in a virtual environment without needing to change hardware configurations.

System integration

Virtualization designs such as the TDA5 VDK serve as the foundation for developers to build complete digital twins for their designs. By virtualizing the SoC, it can be integrated with other virtual components and tools to create larger simulated systems such as full ECU networks. Figure 3 shows how developers can leverage the capabilities of the Synopsys platform to integrate the VDK with other virtual components and simulate complete designs.


Figure 3 Diagram showing how the VDK can integrate with other virtual components and simulate complete designs.

 

Digital environment simulation tools can also be integrated with the TDA5 VDK to enable virtual testing in simulated driving scenarios, allowing developers to quickly perform reproducible testing. The TDA5 VDK also allows developers to leverage the broad ecosystem of tools and partners from Synopsys to get the most out of their virtual development experience.

Getting started with the TDA54 VDK

The TDA54 SDK is now available on TI.com to help engineers get started with the TDA54 virtual development kit. The TDA54-Q1 SoC, the first device in the TDA5 family, will begin sampling to select automotive customers by the end of 2026. Contact TI for more information about the TDA5 VDK and how to get started.

Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

NVIDIA Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.

New NVIDIA safety frameworks and technologies are advancing how developers build safe physical AI.

Physical AI is moving from research labs into the real world, powering intelligent robots and autonomous vehicles (AVs) — such as robotaxis — that must reliably sense, reason and act amid unpredictable conditions.

To safely scale these systems, developers need workflows that connect real-world data, high-fidelity simulation and robust AI models atop the common foundation provided by the OpenUSD framework.

With the recently published OpenUSD Core Specification 1.0, OpenUSD — aka Universal Scene Description — now defines standard data types, file formats and composition behaviors, giving developers predictable, interoperable USD pipelines as they scale autonomous systems.

Powered by OpenUSD, NVIDIA Omniverse libraries combine NVIDIA RTX rendering, physics simulation and efficient runtimes to create digital twins and simulation-ready (SimReady) assets that accurately reflect real-world environments for synthetic data generation and testing.

NVIDIA Cosmos world foundation models can run on top of these simulations to amplify data variation, generating new weather, lighting and terrain conditions from the same scenes so teams can safely cover rare and challenging edge cases.

 

In addition, advancements in synthetic data generation, multimodal datasets and SimReady workflows are now converging with the NVIDIA Halos framework for AV safety, creating a standards-based path to safer, faster, more cost-effective deployment of next-generation autonomous machines.

Building the Foundation for Safe Physical AI

Open Standards and SimReady Assets

The OpenUSD Core Specification 1.0 establishes the standard data models and behaviors that underpin SimReady assets, enabling developers to build interoperable simulation pipelines for AI factories and robotics on OpenUSD.

Built on this foundation, SimReady 3D assets can be reused across tools and teams and loaded directly into NVIDIA Isaac Sim, where USDPhysics colliders, rigid body dynamics and composition-arc–based variants let teams test robots in virtual facilities that closely mirror real operations.

Open-Source Learning 

The Learn OpenUSD curriculum is now open source and available on GitHub, enabling contributors to localize and adapt templates, exercises and content for different audiences, languages and use cases. This gives educators a ready-made foundation to onboard new teams into OpenUSD-centric simulation workflows.​

Generative Worlds as Safety Multiplier

Gaussian splatting — a technique that uses editable 3D elements to render environments quickly and with high fidelity — and world models are accelerating simulation pipelines for safe robotics testing and validation.

At SIGGRAPH Asia, the NVIDIA Research team introduced Play4D, a streaming pipeline that enables 4D Gaussian splatting to accurately render dynamic scenes and improve realism.

Spatial intelligence company World Labs is using its Marble generative world model with NVIDIA Isaac Sim and Omniverse NuRec so researchers can turn text prompts and sample images into photorealistic, Gaussian-based physics-ready 3D environments in hours instead of weeks.

Those worlds can then be used for physical AI training, testing and sim-to-real transfer. This high-fidelity simulation workflow expands the range of scenarios robots can practice in while keeping experimentation safely in simulation.

Lightwheel Helps Teams Scale Robot Training With SimReady Assets

Powered by OpenUSD, Lightwheel’s SimReady asset library includes a common scene description layer, making it easy to assemble high-fidelity digital twins for robots. The SimReady assets are embedded with precise geometry, materials and validated physical properties, which can be loaded directly into NVIDIA Isaac Sim and Isaac Lab for robot training. This allows robots to experience realistic contacts, dynamics and sensor feedback as they learn.

End-to-End Autonomous Vehicle Safety

End-to-end autonomous vehicle safety advancements are accelerating with new research, open frameworks and inspection services that make validation more rigorous and scalable.

NVIDIA researchers, with collaborators at Harvard University and Stanford University, recently introduced the Sim2Val framework to statistically combine real-world and simulated test results, reducing AV developers’ need for costly physical mileage while demonstrating how robotaxis and AVs can behave safely across rare and safety-critical scenarios.

Learn more by watching NVIDIA’s “Safety in the Loop” livestream:

 

These innovations are complemented by a new, open-source NVIDIA Omniverse NuRec Fixer, a Cosmos-based model trained on AV data that removes artifacts in neural reconstructions to produce higher-quality SimReady assets.

To align these advances with rigorous global standards, the NVIDIA Halos AI Systems Inspection Lab — accredited by ANAB — provides impartial inspection and certification of Halos elements across robotaxi fleets, AV stacks, sensors and manufacturer platforms through the Halos Certification Program.

AV Ecosystem Leaders Putting Physical AI Safety to Work

Bosch, Nuro and Wayve are among the first participants in the NVIDIA Halos AI Systems Inspection Lab, which aims to accelerate the safe, large-scale deployment of robotaxi fleets. Onsemi, which makes sensor systems for AVs, industrial automation and medical applications, has recently become the first company to pass inspection for the NVIDIA Halos AI Systems Inspection Lab.

 

The open-source CARLA simulator integrates NVIDIA NuRec and Cosmos Transfer to generate reconstructed drives and diverse scenario variations, while Voxel51’s FiftyOne engine, linked to Cosmos Dataset Search, NuRec and Cosmos Transfer, helps teams curate, annotate and evaluate multimodal datasets across the AV pipeline.​

 

Mcity at the University of Michigan is enhancing the digital twin of its 32-acre AV test facility using Omniverse libraries and technologies. The team is integrating the NVIDIA Blueprint for AV simulation and Omniverse Sensor RTX application programming interfaces to create physics-based models of camera, lidar, radar and ultrasonic sensors.

By aligning real sensor recordings with high-fidelity simulated data and sharing assets openly, Mcity enables safe, repeatable testing of rare and hazardous driving scenarios before vehicles operate on public roads.

Get Plugged Into the World of OpenUSD and Physical AI Safety

Learn more about OpenUSD, NVIDIA Halos and physical AI safety by exploring these resources:

 

Katie Washabaugh, Product Marketing Manager for Autonomous Vehicle Simulation, NVIDIA

Enhancing Images: Adaptive Shadow Correction Using OpenCV

This blog post was originally published at OpenCV’s website. It is reprinted here with the permission of OpenCV.

Imagine capturing the perfect landscape photo on a sunny day, only to find harsh shadows obscuring key details and distorting colors. Similarly, in computer vision projects, shadows can interfere with object detection algorithms, leading to inaccurate results. Shadows are a common nuisance in image processing, introducing uneven illumination that compromises both aesthetic quality and functional analysis.

In this blog post, we’ll tackle this challenge head-on with a practical approach to shadow correction using OpenCV. Our method leverages Multi-Scale Retinex (MSR) for illumination normalization, combined with adaptive shadow masking in LAB and HSV color spaces. This technique not only removes shadows effectively but also preserves natural colors and textures.

We’ll provide a complete Python script that includes interactive trackbars for real-time parameter tuning, making it easy to adapt to different images. Whether you’re a photographer, a developer working on augmented reality, or just curious about image enhancement, this guide will equip you with the tools to banish shadows from your images.

How Shadows Affect Image Appearance

Before diving into solutions, let’s understand shadows and their challenges in image processing. A shadow forms when an object blocks light, reducing illumination on a surface. This dims the area but doesn’t alter the object’s inherent properties.

Key points to consider:

  • Shadows impact illumination, not reflectance (the object’s true color and material).
  • The same object may look dark in shadow and bright in light, confusing viewers and algorithms.
  • Shadows vary: soft (smooth transitions) or hard (sharp edges), needing precise detection to prevent artifacts.

Simply brightening an image won’t fix shadows; it can overexpose highlights or skew colors. Instead, effective correction separates illumination from reflectance. The image model is I = R × L, where I denotes the observed image, R denotes reflectance, and L denotes illumination. To recover R, estimate and normalize L, often using logs for stability.
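
As a minimal sketch of this idea (using a single scale rather than the multi-scale version developed later, and a placeholder image path), a large Gaussian blur can stand in for the illumination estimate:

import cv2 as cv
import numpy as np

gray = cv.imread("image.jpg", cv.IMREAD_GRAYSCALE)  # placeholder path
if gray is None:
    raise IOError("Image not found")
gray = gray.astype(np.float32)
illumination = cv.GaussianBlur(gray, (101, 101), 0)              # rough estimate of L
log_R = np.log(gray + 1) - np.log(illumination + 1)              # log I - log L, an estimate of log R
reflectance = cv.normalize(log_R, None, 0, 255, cv.NORM_MINMAX)  # rescale for display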

Real-world examples show how shadows cause uneven lighting, which our method corrects by isolating and adjusting these components.

These visuals illustrate uneven lighting from shadows, guiding our approach to preserve true colors.

Understanding the Fundamentals

Before diving into the code, let’s build a solid foundation on the key concepts.

Color Spaces Explained

Images are typically represented in RGB (Red, Green, Blue), but for shadow removal, other color spaces are more suitable because they separate luminance (brightness) from chrominance (color).

  • LAB Color Space: This is a perceptually uniform color space where L represents lightness (0-100), A the green-red axis, and B the blue-yellow axis. It’s ideal for shadow correction because we can manipulate the L channel independently without affecting colors. In OpenCV, we convert using cv.cvtColor(img, cv.COLOR_BGR2LAB).

Fig: LAB Color Space
  • HSV Color Space: Hue (H), Saturation (S), and Value (V). Shadows often appear as areas with low saturation and value. We use the S channel to help identify shadows, as they tend to desaturate colors.

Fig: HSV Color Space

 

Switching to these spaces allows us to target shadows more precisely.
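
A short sketch of the channel extraction this implies, mirroring the conversions used in the full script later in this post (the image path is a placeholder):

import cv2 as cv
import numpy as np

img = cv.imread("image.jpg")  # placeholder path
if img is None:
    raise IOError("Image not found")
lab = cv.cvtColor(img, cv.COLOR_BGR2LAB).astype(np.float32)
L, A, B = cv.split(lab)                                      # lightness plus the two color axes
hsv = cv.cvtColor(img, cv.COLOR_BGR2HSV).astype(np.float32)
S = hsv[:, :, 1] / 255.0                                     # saturation scaled to 0-1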

Retinex Theory Basics

Retinex theory, proposed by Edwin Land in the 1970s, models how the human visual system achieves color constancy, perceiving colors consistently under varying illumination, much like how our eyes adapt to different lighting without changing perceived object colors. The core idea is that an image can be decomposed into reflectance (intrinsic object properties, like surface material) and illumination (lighting variations, such as shadows or highlights).

Multi-Scale Retinex (MSR) extends this by applying Gaussian blurs at multiple scales to estimate illumination, inspired by the multi-resolution processing in human vision. For each scale:

  1. Blur the image to approximate the illumination component and smooth out local variations.
  2. Subtract the log of the blurred image from the log of the original (to handle the multiplicative nature of illumination effects, as log transforms multiplication to addition for easier separation).
  3. Average across scales for a robust estimate, balancing local and global corrections.

This results in an enhanced image with reduced shadows, improved dynamic range, and better contrast in low-light areas. In our code, we apply MSR only to the L channel for efficiency, focusing on luminance where shadows primarily affect brightness.

Fig: The structure of multi-scale retinex (MSR)

Shadow Detection Challenges

Simple thresholding on brightness fails because shadows vary in intensity (from subtle gradients to deep darkness) and can blend seamlessly with inherently dark objects, leading to false positives or missed areas. We need an adaptive approach that considers context:

  • Combine low luminance (L < threshold) with low saturation (S < threshold), as shadows not only darken but also desaturate colors by reducing light intensity without adding new hues.
  • Use morphological operations, such as closing to fill small gaps in the mask and opening to remove isolated noise specks, to refine the mask for better accuracy and continuity.
  • Smooth the mask with a Gaussian blur to achieve seamless blending and prevent visible edges or halos in the corrected image.

This ensures we correct only shadowed areas without over-processing the rest of the image, maintaining natural transitions and avoiding artifacts.

Overview of the Shadow Removal Pipeline

Our pipeline processes the image step-by-step for effective shadow correction:

  1. Load and Preprocess: Read the image and resize for faster preview (e.g., 50% scale).
  2. Color Space Conversion: Convert to LAB (for luminance/chrominance) and HSV (for saturation).
  3. Compute Retinex: Apply Multi-Scale Retinex on the L channel to create an illumination-normalized version.
  4. Generate Shadow Mask: Use adaptive conditions on normalized L and S, then blur for softness.
  5. Remove Shadows: Blend the original L with Retinex L in shadowed areas. For A/B channels, blend with estimated background colors to avoid color shifts.
  6. Interactive Tuning: Use OpenCV trackbars to adjust strength, sensitivity, and blur in real-time.
  7. Display Results: Show original, mask, and corrected image side-by-side.

This approach is adaptive, meaning it responds to image content, and the parameters allow customization for various lighting conditions.

Diving into the Code: Step-by-Step Breakdown

Let’s dissect the Python script. We’ll assume you have OpenCV and NumPy installed (pip install opencv-python numpy).

Prerequisites

  • Python 3.x
  • OpenCV (cv2)
  • NumPy (np)

Core Functions

Multi-Scale Illumination Normalization (Retinex Processing)

This function computes the Multi-Scale Retinex on the lightness channel.

def multiscale_retinex(L):
    scales = [31, 101, 301]  # Small, medium, large scales for different illumination sizes
    retinex = np.zeros_like(L, dtype=np.float32)
    for k in scales:
        blur = cv.GaussianBlur(L, (k, k), 0)  # Blur to estimate illumination
        retinex += np.log(L + 1) - np.log(blur + 1)  # Log subtraction for reflectance
    retinex /= len(scales)  # Average across scales
    retinex = cv.normalize(retinex, None, 0, 255, cv.NORM_MINMAX)  # Scale to 0-255
    return retinex

Why these scales? Smaller kernels capture fine details, larger ones handle broad shadows. The +1 avoids log(0) issues. Normalization ensures the output matches the input range.
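
In use, the function is called once on the lightness channel and the result is reused for every correction pass; this mirrors the setup in the full script further down:

lab = cv.cvtColor(img_preview, cv.COLOR_BGR2LAB).astype(np.float32)
L, A, B = cv.split(lab)
L_retinex = multiscale_retinex(L)  # computed once, reused for every trackbar update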

Adaptive Shadow Detection and Mask Generation

Creates a binary shadow mask and softens it.

def compute_shadow_mask_adaptive(L, S, sensitivity=1.0, mask_blur=21):
    shadow_cond = (L < 0.5 * sensitivity) & (S < 0.5)  # Low brightness and saturation
    mask = shadow_cond.astype(np.float32)  # 0 or 1 float
    mask_blur = mask_blur if mask_blur % 2 == 1 else mask_blur + 1  # Ensure odd for Gaussian
    mask = cv.GaussianBlur(mask, (mask_blur, mask_blur), 0)  # Soften edges
    return mask

Sensitivity scales the luminance threshold, allowing tuning for faint or dark shadows. The blur prevents harsh transitions.

Mask-Guided Shadow Removal and Color Preservation

The heart of the correction: refines the mask and blends channels.

def remove_shadows_adaptive_v3(L, A, B, L_retinex, strength=0.9, mask=None, mask_blur=31):
    kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (7, 7))  # Elliptical kernel for morphology
    shadow_mask = cv.morphologyEx(mask, cv.MORPH_CLOSE, kernel)  # Close gaps
    shadow_mask = cv.morphologyEx(shadow_mask, cv.MORPH_OPEN, kernel)  # Remove noise
    shadow_mask = cv.dilate(shadow_mask, kernel, iterations=1)  # Expand slightly
    shadow_mask = cv.GaussianBlur(shadow_mask, (mask_blur, mask_blur), 0)  # Smooth
    mask_smooth = np.power(shadow_mask, 1.5)  # Non-linear for stronger effect in core shadows

    L_final = (1 - strength * mask_smooth) * L + (strength * mask_smooth) * L_retinex  # Blend L
    L_final = np.clip(L_final, 0, 255)  # Prevent overflow

    mask_inv = 1 - mask_smooth  # Non-shadow areas
    A_bg = np.sum(A * mask_inv) / (np.sum(mask_inv) + 1e-6)  # Average A in non-shadows
    B_bg = np.sum(B * mask_inv) / (np.sum(mask_inv) + 1e-6)  # Average B

    A_final = (1 - strength * mask_smooth) * A + (strength * mask_smooth) * A_bg  # Blend A/B
    B_final = (1 - strength * mask_smooth) * B + (strength * mask_smooth) * B_bg

    return L_final, A_final, B_final

Morphological ops refine the mask: closing fills holes, opening removes specks, dilation ensures coverage. The power function makes blending more aggressive in deep shadows. Background color estimation for A/B preserves chromaticity.

Trackbar Callback Utility

A placeholder for trackbar callbacks, as required by OpenCV.

def nothing(x):
    pass

Full Code:
The entry point handles image loading, setup, and the interactive loop.

import cv2 as cv
import numpy as np

# Retinex (compute once)
def multiscale_retinex(L):
    scales = [31, 101, 301]
    retinex = np.zeros_like(L, dtype=np.float32)
    for k in scales:
        blur = cv.GaussianBlur(L, (k, k), 0)
        retinex += np.log(L + 1) - np.log(blur + 1)
    retinex /= len(scales)
    retinex = cv.normalize(retinex, None, 0, 255, cv.NORM_MINMAX)
    return retinex

# Adaptive Shadow Mask
def compute_shadow_mask_adaptive(L, S, sensitivity=1.0, mask_blur=21):
    shadow_cond = (L < 0.5 * sensitivity) & (S < 0.5)
    mask = shadow_cond.astype(np.float32)
    mask_blur = mask_blur if mask_blur % 2 == 1 else mask_blur + 1
    mask = cv.GaussianBlur(mask, (mask_blur, mask_blur), 0)
    return mask

# Shadow Removal
def remove_shadows_adaptive_v3(L, A, B, L_retinex, strength=0.9, mask=None, mask_blur=31):
    kernel = cv.getStructuringElement(cv.MORPH_ELLIPSE, (7, 7))
    shadow_mask = cv.morphologyEx(mask, cv.MORPH_CLOSE, kernel)
    shadow_mask = cv.morphologyEx(shadow_mask, cv.MORPH_OPEN, kernel)
    shadow_mask = cv.dilate(shadow_mask, kernel, iterations=1)
    shadow_mask = cv.GaussianBlur(shadow_mask, (mask_blur, mask_blur), 0)
    mask_smooth = np.power(shadow_mask, 1.5)

    L_final = (1 - strength * mask_smooth) * L + (strength * mask_smooth) * L_retinex
    L_final = np.clip(L_final, 0, 255)

    mask_inv = 1 - mask_smooth
    A_bg = np.sum(A * mask_inv) / (np.sum(mask_inv) + 1e-6)
    B_bg = np.sum(B * mask_inv) / (np.sum(mask_inv) + 1e-6)

    A_final = (1 - strength * mask_smooth) * A + (strength * mask_smooth) * A_bg
    B_final = (1 - strength * mask_smooth) * B + (strength * mask_smooth) * B_bg

    return L_final, A_final, B_final

def nothing(x):
    pass

# Main
if __name__ == "__main__":
    img = cv.imread("image.jpg")
    if img is None:
        raise IOError("Image not found")

    scale = 0.5
    img_preview = cv.resize(img, None, fx=scale, fy=scale, interpolation=cv.INTER_AREA)

    lab = cv.cvtColor(img_preview, cv.COLOR_BGR2LAB).astype(np.float32)
    L, A, B = cv.split(lab)
    L_retinex = multiscale_retinex(L)

    hsv = cv.cvtColor(img_preview, cv.COLOR_BGR2HSV).astype(np.float32)
    S = hsv[:, :, 1] / 255.0

    cv.namedWindow("Shadow Removal", cv.WINDOW_NORMAL)
    cv.createTrackbar("Strength", "Shadow Removal", 90, 200, nothing)
    cv.createTrackbar("Sensitivity", "Shadow Removal", 90, 200, nothing)
    cv.createTrackbar("MaskBlur", "Shadow Removal", 31, 101, nothing)

    while True:
        strength = cv.getTrackbarPos("Strength", "Shadow Removal") / 100.0
        sensitivity = cv.getTrackbarPos("Sensitivity", "Shadow Removal") / 100.0
        mask_blur = cv.getTrackbarPos("MaskBlur", "Shadow Removal")
        mask_blur = max(3, mask_blur)
        mask_blur = mask_blur if mask_blur % 2 == 1 else mask_blur + 1

        mask = compute_shadow_mask_adaptive(L / 255.0, S, sensitivity, mask_blur)

        L_final, A_final, B_final = remove_shadows_adaptive_v3(
            L, A, B, L_retinex, strength, mask, mask_blur
        )

        lab_out = cv.merge([L_final, A_final, B_final]).astype(np.uint8)
        result = cv.cvtColor(lab_out, cv.COLOR_LAB2BGR)

        # BUILD RGB VIEW
        orig_rgb = cv.cvtColor(img_preview, cv.COLOR_BGR2RGB)
        mask_rgb = cv.cvtColor((mask * 255).astype(np.uint8), cv.COLOR_GRAY2RGB)
        result_rgb = cv.cvtColor(result, cv.COLOR_BGR2RGB)

        combined_rgb = np.hstack([orig_rgb, mask_rgb, result_rgb])

        # Convert back so OpenCV shows correct colors
        combined_bgr = cv.cvtColor(combined_rgb, cv.COLOR_RGB2BGR)

        cv.imshow("Shadow Removal", combined_bgr)

        key = cv.waitKey(30) & 0xFF
        if key == 27 or cv.getWindowProperty("Shadow Removal", cv.WND_PROP_VISIBLE) < 1:
            break

    cv.destroyAllWindows()

Key points:

  • Resizing speeds up processing for previews.
  • Retinex is computed once outside the loop for efficiency.
  • The loop updates on trackbar changes, recomputing the mask and correction.
  • Display stacks original, mask (grayscale as RGB), and result for comparison.

Running the Code and Tuning Parameters

Setup Instructions

  1. Save the code as a .py file (for example, shadow_removal.py).
  2. Replace “image.jpg” with your image path (JPEG, PNG, etc.).
  3. Run: python shadow_removal.py.

A window will appear with trackbars and a side-by-side view.

Output:

Interactive Demo

  • Strength (0-2.0): Controls blending intensity. Higher values apply more correction but increase the risk of artifacts.
  • Sensitivity (0-2.0): Adjusts shadow detection threshold. Lower for detecting subtle shadows, higher for aggressive ones.
  • MaskBlur (3-101, odd): Softens mask edges. Larger values for smoother transitions in large shadows.

For outdoor scenes with cast shadows, increase sensitivity. For indoor low-light, reduce the strength to avoid over-brightening.

Potential Improvements and Limitations

Enhancements

  • Batch Processing: Extend the pipeline to process multiple images or video frames, enabling use in real-time or large-scale applications (a minimal sketch follows this list).
  • ML Integration: Incorporate deep learning models (such as U-Net) to generate more accurate, semantic shadow masks using datasets like ISTD.
  • Colored Shadow Handling: Improve robustness by detecting and correcting color shifts caused by colored or indirect lighting.
  • Performance Optimization: Speed up processing for large images by parallelizing Retinex scales or working on downsampled inputs.
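
As a rough sketch of the batch-processing idea (the input and output folder names and the fixed parameter values are assumptions; the three functions are the ones defined earlier in this post):

import glob
import os
import cv2 as cv

os.makedirs("output", exist_ok=True)                       # hypothetical output folder
for path in glob.glob("input/*.jpg"):                      # hypothetical input folder
    img = cv.imread(path)
    if img is None:
        continue
    lab = cv.cvtColor(img, cv.COLOR_BGR2LAB).astype(np.float32)
    L, A, B = cv.split(lab)
    S = cv.cvtColor(img, cv.COLOR_BGR2HSV).astype(np.float32)[:, :, 1] / 255.0
    L_retinex = multiscale_retinex(L)
    mask = compute_shadow_mask_adaptive(L / 255.0, S, sensitivity=0.9, mask_blur=31)
    L_f, A_f, B_f = remove_shadows_adaptive_v3(L, A, B, L_retinex, 0.9, mask, 31)
    result = cv.cvtColor(cv.merge([L_f, A_f, B_f]).astype(np.uint8), cv.COLOR_LAB2BGR)
    cv.imwrite(os.path.join("output", os.path.basename(path)), result)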

Limitations

  • Visual Artifacts: In textured regions or near shadow boundaries, blending can introduce halos or inconsistencies, requiring more refined masks.
  • Computational Cost: Multi-Scale Retinex with large kernels can be slow on high-resolution images; preprocessing steps like downsampling are often necessary.
  • Lighting Assumptions: The method works best for neutral (achromatic) shadows and may struggle under colored or complex illumination conditions.
  • Low-Light Noise Amplification: Shadow enhancement can amplify image noise in dark areas; denoising may be needed beforehand.
  • Compared to Deep Learning: OpenCV methods don’t match deep learning for complex shadow removal, and images with heavy shadowing can be tough to fully correct.

Overall, this is a solid baseline for many scenarios, and performance can be improved by tuning parameters to the specific image and lighting conditions.

Conclusion

Shadows pose a challenge in image enhancement because they affect illumination without changing object properties. This blog presented an adaptive shadow-correction pipeline using OpenCV that combines Multi-Scale Retinex with color-space–based shadow detection to reduce shadows while preserving natural colors. Interactive parameter tuning makes the method flexible across different images. Although it cannot fully match deep learning approaches for complex scenes, it provides a lightweight and effective baseline that can be further improved or extended.

References

Image Shadow Removal Method Based on LAB Space

Shadow Detection and Removal

Image Shadow Remover

 

Frequently Asked Questions

Why not simply increase the brightness to remove shadows?

Increasing brightness affects the entire image and can wash out highlights or distort colors. Shadow removal requires separating illumination from reflectance to selectively correct shadowed regions.

Why are LAB and HSV color spaces used instead of RGB?

LAB and HSV separate brightness from color information, making it easier to detect and correct shadows without introducing color shifts.

 

Sanjana Bhat
OpenCV

Production Software Meets Production Hardware: Jetson Provisioning Now Available with Avocado OS

This blog post was originally published at Peridio’s website. It is reprinted here with the permission of Peridio.

The gap between robotics prototypes and production deployments has always been an infrastructure problem disguised as a hardware problem. Teams build incredible computer vision models and robotic control systems on NVIDIA Jetson developer kits, only to hit a wall when scaling to production fleets. The bottleneck isn’t the AI or the algorithms—it’s the months spent building custom Linux systems, provisioning infrastructure, and OTA mechanisms that should have been solved problems.

Today, we’re announcing native provisioning support for NVIDIA Jetson Orin Nano, Orin NX and AGX Orin in Avocado OS. This completes our production software stack for the industry’s leading AI edge hardware, delivering deterministic Linux, secure OTA updates, and fleet management from day one.

What We’ve Learned About Production Jetson Deployments

Through partnerships with companies like RoboFlow and SoloTech, and conversations with teams building everything from autonomous mobile robots to industrial smart cameras, a clear pattern emerged. The technical challenges weren’t about AI models or robotic control algorithms—teams had those figured out. The bottleneck was infrastructure.

Teams consistently hit the same obstacles:

  • Custom Yocto BSP builds consuming 3-6 months of engineering time
  • RTC configuration issues causing timestamp failures in vision pipelines
  • Fragile update mechanisms that break when scaling beyond dozens of devices
  • Manual provisioning workflows that don’t translate to manufacturing partnerships
  • Security compliance requirements eating bandwidth from core product development

These aren’t edge cases. This is the standard experience of taking Jetson from prototype to production. And it’s exactly backward—teams solving hard problems in robotics and computer vision shouldn’t be rebuilding the same embedded Linux infrastructure.

Premium Hardware Deserves Production-Ready Software

NVIDIA Jetson Orin Nano delivers 67 TOPS of AI performance with exceptional power efficiency. It’s the computational foundation for modern edge AI—supporting everything from multi-camera vision systems to real-time SLAM processing to local LLM inference. The hardware is production-ready.

The software needs to match.

What “production-grade” actually means:

Stable Base OS: Deterministic Linux that supports robust solutions. Not Ubuntu images that drift with package updates. Reproducible, image-based systems where every device runs identical, validated software.

Full NVIDIA Tool Suite: CUDA, TensorRT, OpenCV—pre-integrated and production-tested. Not reference implementations that require months of BSP work. The complete NVIDIA stack, ready to support inference solutions from partners like RoboFlow and SoloTech.

Day One Provisioning: Factory-ready deployment without custom scripts and USB ceremonies. Cryptographically verified images, hardware-backed credentials, and deterministic flashing workflows that integrate with manufacturing partners.

Fleet-Scale Operations: Atomic OTA updates with automatic rollback. Phased releases with cohort targeting. Air-gapped update delivery for secure environments. Infrastructure that works reliably across thousands of devices.

This is what we mean by production-ready hardware meeting production-grade software. Jetson provides the computational horsepower. Avocado OS and Peridio Core provide the operational infrastructure to actually ship products.

Complete Stack: From Build to Fleet

With Jetson provisioning now available, teams get the complete deployment pipeline:

Build Phase

  • Pre-integrated NVIDIA BSPs with validated hardware support
  • Modular system composition using declarative configuration
  • Reproducible builds with cryptographic verification
  • CUDA, TensorRT, ROS2, OpenCV—all validated and integrated

Provisioning Phase

  • Native Jetson flashing via tegraflash profile
  • Automated partition layout and bootloader configuration
  • Factory credential injection for fleet registration
  • Deterministic provisioning from Linux host environments

Deployment Phase

  • Atomic, image-based OTA updates with automatic rollback
  • Phased releases with cohort targeting
  • SBOM generation and CVE tracking
  • Air-gapped update delivery for secure environments

Fleet Operations

  • Centralized device management via Peridio Console
  • Real-time telemetry and health monitoring
  • Remote access for debugging and diagnostics
  • 10+ year support lifecycle matching industrial hardware

This isn’t a reference design or example code. It’s production infrastructure that scales from 10 devices to 10,000 and beyond.

Why This Matters: Robotics is Moving Faster Than Expected

The robotics industry is accelerating at an unprecedented pace. The foundational layer—perception—is rapidly maturing, unlocking capabilities that seemed years away just months ago. Vision language models (VLMs) and vision-language-action models (VLAs) are fundamentally changing how robots understand and interact with their environments. Engineers who once relied entirely on deterministic control systems are now integrating fine-tuned AI models that can handle ambiguity and adapt to novel situations. The innovation happening right now suggests 2026 will be a breakout year for practical robotics deployment.

Last week at Circuit Launch’s Robotics Week in the Valley, we saw this firsthand. Teams that aren’t roboticists or computer vision experts were training models with RoboFlow, integrating VLA platforms like SoloTech, and building working demonstrations in hours—not weeks.

The AI tooling has advanced exponentially. Inference frameworks are mature. Hardware platforms like Jetson deliver exceptional performance. But embedded Linux infrastructure has been the persistent bottleneck preventing teams from shipping at the pace they’re prototyping.

This matters because:

When prototyping velocity increases 10x, production infrastructure can’t remain a 6-month investment. Teams building breakthrough applications need to move from working demo to deployed fleet at the same pace they move from idea to working demo.

The companies winning in robotics will be the ones focused on their core innovation—better vision algorithms, more sophisticated manipulation, smarter navigation. Not the ones rebuilding Yocto layers and debugging RTC drivers.

Technical Foundation: Why Provisioning is Hard

The challenge with Jetson provisioning isn’t technical complexity—it’s reproducibility at scale. Most teams start by configuring their development board manually: installing packages, setting up environments, tweaking configurations until everything works. Then they try to capture those steps in scripts to replicate the setup on the next device.

This manual-to-scripted approach falls apart quickly. What runs perfectly on your desk becomes unpredictable in production. By the time you’re managing even a handful of devices, you’re troubleshooting subtle environment differences, dealing with drift from package updates, and questioning whether any two devices are truly running the same stack.

Production provisioning solves this fundamentally differently. Instead of scripting manual steps, you’re building reproducible system images where every device boots into an identical, validated environment. The OS becomes a clean foundation—deterministic, verifiable, and ready to run whatever AI toolchain your application requires. No configuration drift. No “it works on my machine” surprises.

This is where Avocado OS and NVIDIA’s tegraflash tooling come together. We’ve integrated deeply with NVIDIA’s BSP to automate the entire provisioning workflow—partition layouts, bootloader configuration, cryptographic verification, hardware initialization sequences. The complexity is still there, but it’s handled systematically rather than cobbled together through scripts.
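
To make the cryptographic-verification step concrete, here is a small, generic sketch that checks a release manifest's Ed25519 signature and then each artifact's digest before anything is flashed. It is not Avocado OS's or tegraflash's actual signing format; the manifest layout, file names, and key handling are assumptions for illustration, and it relies on the pyca/cryptography package.

```python
# pip install cryptography   (illustration only; real signing schemes differ)
import hashlib
import json
from pathlib import Path

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_release(manifest_path: Path, signature_path: Path,
                   public_key_bytes: bytes) -> dict:
    """Verify the manifest signature, then every artifact digest it lists.

    Assumes a JSON manifest of the form:
    {"artifacts": {"rootfs.img": "<sha256 hex>", "boot.img": "..."}}
    """
    manifest_bytes = manifest_path.read_bytes()

    # 1. The manifest itself must be signed by the release key.
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature_path.read_bytes(), manifest_bytes)
    except InvalidSignature:
        raise RuntimeError("manifest signature check failed; refusing to flash")

    # 2. Every artifact must match the digest recorded in the manifest.
    manifest = json.loads(manifest_bytes)
    for name, expected_digest in manifest["artifacts"].items():
        digest = hashlib.sha256()
        with (manifest_path.parent / name).open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        if digest.hexdigest() != expected_digest:
            raise RuntimeError(f"digest mismatch for {name}; refusing to flash")
    return manifest
```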

We document the Linux host requirement explicitly because it matters. Provisioning workflows require reliable hardware enumeration and direct device access. macOS and Windows introduce VM-in-VM architectures that create timing issues and device passthrough complexity. Native Linux (Ubuntu 22.04+, Fedora 39+) ensures consistent, reliable provisioning.
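
As a small illustration of the direct device access involved, the snippet below uses pyusb on a Linux host to check whether a Jetson module is visible in USB recovery mode before provisioning starts. The vendor ID 0x0955 is NVIDIA's USB vendor ID, but treat this check, and the error handling around it, as an assumption to adapt rather than part of the Avocado OS tooling.

```python
# pip install pyusb   (a libusb backend must also be installed on the host)
import sys

import usb.core

NVIDIA_VENDOR_ID = 0x0955  # NVIDIA Corp.; Jetsons in recovery mode enumerate under it

def find_recovery_devices():
    """Return every NVIDIA USB device currently visible to this host."""
    return list(usb.core.find(find_all=True, idVendor=NVIDIA_VENDOR_ID))

if __name__ == "__main__":
    devices = find_recovery_devices()
    if not devices:
        sys.exit("No Jetson found in recovery mode; check the USB cable and "
                 "hold the recovery button while applying power.")
    for dev in devices:
        print(f"Found {dev.idVendor:04x}:{dev.idProduct:04x} "
              f"on bus {dev.bus}, address {dev.address}")
```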

For production deployments, this integrates with manufacturing partners. Advantech, Seeed Studio, and other ecosystem partners can run provisioning at end-of-line, delivering pre-configured devices directly to deployment sites. Zero-touch deployment at scale.

Scale Across the Jetson Family

Teams can scale up and down within the Jetson family, with unified toolchains and processes across the whole line:

  • NVIDIA Jetson Orin Nano: 67 TOPS, efficient edge AI for vision and robotics
  • NVIDIA Jetson Orin NX: Up to 157 TOPS of balanced performance for production deployments
  • NVIDIA Jetson AGX Orin: Up to 275 TOPS for demanding AI workloads
  • NVIDIA Jetson Thor (coming soon): Next-generation automotive and robotics platform

One development workflow. Consistent provisioning. Predictable behavior across the product line. This matters when your prototype needs to scale, or when different deployment scenarios require different performance tiers.

Getting Started: Production-Ready in Minutes

For teams ready to move from prototype to production, our provisioning guide walks through the complete workflow—from initializing your project to flashing your first device.

The entire process, from clean hardware to production-ready deployment, takes minutes, not months. The guide covers everything you need: Linux host setup, project initialization, building production images, and first boot configuration.

What’s Next: NVIDIA Momentum

Provisioning is the foundation. What comes next is ecosystem momentum.

We’re working with partners across the robotics and computer vision stack—from inference platforms like RoboFlow and SoloTech to hardware manufacturers like Advantech. The goal is creating a complete solution ecosystem where teams can focus entirely on their application layer while we handle everything below it.

We should talk if you are:

  • Building on Jetson and struggling with the path to production.
  • Evaluating hardware platforms and need production software from day one.
  • Just getting started and want to avoid months of infrastructure work.

Production Software That Matches Production Hardware

Our thesis has always been that embedded engineers should ship applications, not operating systems. The robotics acceleration we’re seeing validates this more than ever. Teams have breakthrough ideas for autonomous systems, vision AI, and robotic manipulation. They shouldn’t spend months on Linux infrastructure.

Jetson provisioning is production-ready today. It’s the result of deep technical work, extensive partner validation, and clear understanding of what teams actually need when taking hardware to production.

Production-ready hardware. Production-grade software. Available now.

 


Ready to deploy production-ready Jetson? Check out our Jetson solution overview, explore the provisioning guide, or request a demo to discuss your use case.

If you’re working with Jetson and want to connect about production deployment challenges, join our Discord or reach out directly—we’d love to learn about your use case and how we can help.

 

Bill Brock
CEO, Peridio

The post Production Software Meets Production Hardware: Jetson Provisioning Now Available with Avocado OS appeared first on Edge AI and Vision Alliance.

]]>
Robotics Builders Forum offers Hardware, Know-How and Networking to Developers https://www.edge-ai-vision.com/2026/01/robotics-day-offers-hardware-know-how-and-networking-to-developers/ Thu, 29 Jan 2026 14:00:56 +0000 https://www.edge-ai-vision.com/?p=56654 On February 25, 2026 from 8:30 am to 5:30 pm ET, Advantech, Qualcomm, Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network will present Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, while Edge Impulse […]

The post Robotics Builders Forum offers Hardware, Know-How and Networking to Developers appeared first on Edge AI and Vision Alliance.

]]>
On February 25, 2026, from 8:30 am to 5:30 pm ET, Advantech, Qualcomm, and Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network, will present the Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, while Edge Impulse is a subsidiary of Qualcomm.

Here’s the description, from the event registration page:

Overview

Exclusive in-person event: get practical guidance, platform roadmap & hands-on experience to accelerate compute & AI choices for your robot

Join us for an exclusive, in-person Robotics Day / Builders Forum built for engineers and product teams developing AMRs, humanoids, and industrial robotics applications. Co-hosted with Arrow, Qualcomm, Edge Impulse and Advantech, and supported by ecosystem partners, the event delivers practical guidance on choosing compute platforms, integrating vision and sensors, and accelerating AI development from prototype to deployment.

What to expect

  • Expert keynotes on robotics platform trends, roadmap considerations, and rugged edge deployment
  • Live demo showcase with real hardware and end-to-end solution workflows you can evaluate firsthand
  • Three technical breakout tracks with deep dives on compute, vision and perception, and AI software optimization
  • High-value networking with peer robotics builders, plus direct access to industry leaders, solution architects, and partner technical teams

You’ll leave with clearer platform direction, implementation best practices, and trusted connections for follow-up technical discussions and next-step evaluations. Attendance is limited to keep conversations focused and interactive.

To close the day, we will host a Connections Mixer at the Sky Lounge featuring a brief wrap-up and a raffle. This casual networking hour is designed to help attendees connect with peers, speakers, and solution teams in a relaxed setting. Sponsored by D3 Embedded.

This event is free and designed for professionals building or evaluating robotics and AMR solutions, including robotics and AMR product managers, system architects and embedded engineers, industrial automation R&D leaders, perception and vision engineers, and operations and engineering directors. We also welcome professionals tracking the latest robotics trends and platform direction.

Invitation-only access

Click Get ticket and complete the Event Registration form to apply for a free ticket. Event hosts will review submissions and email confirmed invitations (with an event code) to qualified attendees. Please present your ticket at reception to receive your full-day conference badge.

Location

Wyndham Grand Pittsburgh Downtown
600 Commonwealth Place
Pittsburgh, PA 15222

Agenda

08:30 AM – 09:00 AM – Breakfast & Connections Kickoff

09:00 AM – 09:15 AM – Opening Remarks & Day Overview 

09:15 AM – 09:45 AM – Keynote 1: Global Robotics Trends and How You Can Take Advantage (sponsored by Arrow) 

09:45 AM – 10:30 AM – Keynote 2: Utilizing Dragonwing for Industrial Arm-Based Robotics Solutions (sponsored by Qualcomm, Edge Impulse)

10:30 AM – 11:00 AM – Keynote 3: Ruggedizing Robotics Solutions for Mobility and Harsh Environments (sponsored by Advantech) 

11:00 AM – Break 

11:15 AM – 11:45 AM – Keynote 4: Selecting the Proper Cameras and Sensors for AI-Assisted Perception (sponsored by D3 Embedded) 

11:45 AM – 12:45 PM – Lunch 

12:45 PM – 03:30 PM – Three Breakout Rotations (45 min each with breaks) 

Track A: Building Out a Full-Scale Humanoid Robot from a Hardware Perspective
Track B: Leveraging Software Solutions to Get the Most Out of Your Processor
Track C: Designing and Integrating Machine Vision Solutions for AMRs and Humanoids

03:30 PM – 05:30 PM – Connections Mixer at Sky Lounge (sponsored by D3 Embedded)

To register for this free event, please see the event page.

The post Robotics Builders Forum offers Hardware, Know-How and Networking to Developers appeared first on Edge AI and Vision Alliance.

]]>
OpenMV’s Latest: Firmware v4.8.1, Multi-sensor Vision, Faster Debug, and What’s Next https://www.edge-ai-vision.com/2026/01/openmvs-latest-firmware-v4-8-1-multi-sensor-vision-faster-debug-and-whats-next/ Thu, 29 Jan 2026 09:00:24 +0000 https://www.edge-ai-vision.com/?p=56604 OpenMV kicked off 2026 with a substantial software update and a clearer look at where the platform is headed next. The headline is OpenMV Firmware v4.8.1 paired with OpenMV IDE v4.8.1, which adds multi-sensor capabilities, expands event-camera support, and lays the groundwork for a major debugging and connectivity upgrade coming with firmware v5. If you’re […]

The post OpenMV’s Latest: Firmware v4.8.1, Multi-sensor Vision, Faster Debug, and What’s Next appeared first on Edge AI and Vision Alliance.

]]>
OpenMV kicked off 2026 with a substantial software update and a clearer look at where the platform is headed next.

The headline is OpenMV Firmware v4.8.1 paired with OpenMV IDE v4.8.1, which adds multi-sensor capabilities, expands event-camera support, and lays the groundwork for a major debugging and connectivity upgrade coming with firmware v5.

If you’re building edge-vision systems on OpenMV Cams, here are the product-focused updates worth knowing.


Firmware + IDE v4.8.1: the biggest changes

OpenMV’s latest release is OpenMV Firmware v4.8.1 with OpenMV IDE v4.8.1:

New CSI module (multi-sensor support)

OpenMV introduced a new, class-based CSI module designed to support multiple camera sensors at the same time. This is now the preferred approach going forward.

The older sensor module is now deprecated. With v4.8.1, OpenMV recommends updating code to use the CSI module; no new features will be added to the legacy sensor module.
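
For a rough sense of what that migration looks like, the sketch below shows the legacy single-sensor style next to a class-based version. The csi constructor and method names are assumptions modelled on the legacy API and OpenMV's description of the new module, not the documented v4.8.1 interface, so check the release docs before porting.

```python
# Legacy, deprecated style: one implicit global sensor.
import sensor

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)
img = sensor.snapshot()

# Class-based sketch of the new csi module. NOTE: these names are
# assumptions, not the documented API; the point is that each sensor
# becomes its own object, so an RGB camera and a second sensor (for
# example a FLIR Lepton) can be configured and read independently.
import csi

rgb = csi.CSI()        # primary RGB sensor (assumed default constructor)
thermal = csi.CSI(1)   # hypothetical index for a second attached sensor

rgb.reset()
thermal.reset()

color = rgb.snapshot()
heat = thermal.snapshot()
```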

This multi-sensor work also enables official support for OpenMV’s multispectral thermal module—using an RGB camera + FLIR® Lepton® together.

OpenMV multispectral thermal camera module (RGB + thermal)

OpenMV also teased what’s next in this direction: dual RGB and RGB + event-vision configurations are planned (only targeted for the N6).

Multi-sensor camera configuration (concept / hardware example)

GENX320: event-camera mode arrives

OpenMV added an event-vision mode for the GENX320 event camera. In this mode, the camera can deliver per-pixel event updates with microsecond timestamps—useful for applications like ultra-fast motion analysis and vibration measurement.

New USB debug protocol (foundation for firmware v5)

Firmware v4.8.1 and IDE v4.8.1 set the stage for a new USB Debug protocol planned for OpenMV firmware v5.0.0. OpenMV’s stated goals are better performance and reliability in the IDE connection—plus significantly more capability than the current link.

The new protocol introduces channels that can be registered in Python, enabling high-throughput data transfer (OpenMV cites >15MB/s over USB on some cameras). It also supports custom transports, making it possible to debug/control a camera over alternative links (UART/serial, Ethernet, Wi-Fi, CAN, SPI, I2C, etc.) depending on your implementation.

Related tooling: OpenMV Python (desktop CLI / tooling) and the OpenMV forums.

Universal TinyUSB support

OpenMV is moving “almost all” camera models to TinyUSB as part of the USB-stack standardization effort. They cite benefits including better behavior in configurations involving the N6’s NPU and Octal SPI flash.

A growing ML library (MediaPipe + YOLO family)

OpenMV says it has worked through much of its plan to support “smartphone-level” AI models on the upcoming N6 and AE3. They highlight support for running models from Google MediaPipe, YOLOv2, YOLOv5, YOLOv8, and more.

OpenMV ML / model support teaser (Kickstarter GIF)

Roboflow integration for training custom models

OpenMV now has a working workflow for training custom models using Roboflow, with an emphasis on custom YOLOv8 models that can run onboard once the N6 and AE3 are on the market.
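
For orientation, here is a minimal desktop-side sketch of that training workflow using the roboflow and ultralytics Python packages. The API key, workspace, project, version, and training settings are placeholders, and the final conversion of the trained model into an OpenMV-deployable format is handled by OpenMV's own tooling, so it is not shown here.

```python
# pip install roboflow ultralytics   (runs on a desktop/server, not on the camera)
from roboflow import Roboflow
from ultralytics import YOLO

# 1. Pull an annotated dataset from Roboflow in YOLOv8 format.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov8")

# 2. Fine-tune a small YOLOv8 model; modest input sizes suit
#    microcontroller-class targets like the N6 and AE3.
model = YOLO("yolov8n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=100, imgsz=256)

# 3. Export for embedded use. INT8 TFLite is shown as one plausible route;
#    the OpenMV-specific conversion/quantization step is not reproduced here.
model.export(format="tflite", int8=True, data=f"{dataset.location}/data.yaml")
```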

 

Other notable improvements

  • Frame buffer management improvements with a new queuing system.
  • Embedded code profiler support in firmware + IDE (requires a profiling build to use).
  • Automated unit testing in GitHub Actions; OpenMV cites testing Cortex-M7 and Cortex-M55 targets using QEMU to catch regressions (including SIMD correctness).
  • Image quality improvements for the PAG7936 and PS5520 sensors, plus numerous bug fixes across the platform.

Kickstarter hardware: N6 and AE3 status

On the hardware front, OpenMV says it is now manufacturing the OpenMV N6 and OpenMV AE3; check out their Kickstarter for ongoing updates.

OpenMV N6 / AE3 manufacturing update (Kickstarter GIF)

 


What to do now

  • If you’re actively developing on OpenMV, consider updating to v4.8.1 and planning your code migration from the deprecated sensor module to the new CSI module.
  • If you’re exploring event-based vision, the new GENX320 event mode is the key software enablement to watch.
  • Keep an eye on firmware v5 for the new debug protocol—especially if you need higher-throughput streaming, custom host/device channels, or alternative debug transports.

The post OpenMV’s Latest: Firmware v4.8.1, Multi-sensor Vision, Faster Debug, and What’s Next appeared first on Edge AI and Vision Alliance.

]]>