Robotics - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/applications/robotics/
Designing machines that perceive and understand.

Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems
https://www.edge-ai-vision.com/2026/02/into-the-omniverse-openusd-and-nvidia-halos-accelerate-safety-for-robotaxis-physical-ai-systems/
Mon, 09 Feb 2026

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

NVIDIA Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.

New NVIDIA safety frameworks and technologies are advancing how developers build safe physical AI.

Physical AI is moving from research labs into the real world, powering intelligent robots and autonomous vehicles (AVs) — such as robotaxis — that must reliably sense, reason and act amid unpredictable conditions.

To safely scale these systems, developers need workflows that connect real-world data, high-fidelity simulation and robust AI models atop the common foundation provided by the OpenUSD framework.

With the recently published OpenUSD Core Specification 1.0, OpenUSD — aka Universal Scene Description — now defines standard data types, file formats and composition behaviors, giving developers predictable, interoperable USD pipelines as they scale autonomous systems.

Powered by OpenUSD, NVIDIA Omniverse libraries combine NVIDIA RTX rendering, physics simulation and efficient runtimes to create digital twins and simulation-ready (SimReady) assets that accurately reflect real-world environments for synthetic data generation and testing.

NVIDIA Cosmos world foundation models can run on top of these simulations to amplify data variation, generating new weather, lighting and terrain conditions from the same scenes so teams can safely cover rare and challenging edge cases.

 

In addition, advancements in synthetic data generation, multimodal datasets and SimReady workflows are now converging with the NVIDIA Halos framework for AV safety, creating a standards-based path to safer, faster, more cost-effective deployment of next-generation autonomous machines.

Building the Foundation for Safe Physical AI

Open Standards and SimReady Assets

The OpenUSD Core Specification 1.0 establishes the standard data models and behaviors that underpin SimReady assets, enabling developers to build interoperable simulation pipelines for AI factories and robotics on OpenUSD.

Built on this foundation, SimReady 3D assets can be reused across tools and teams and loaded directly into NVIDIA Isaac Sim, where USDPhysics colliders, rigid body dynamics and composition-arc–based variants let teams test robots in virtual facilities that closely mirror real operations.
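To make this concrete, here is a minimal, hedged sketch of how a physics-ready asset might be authored with the OpenUSD Python bindings (pxr). The prim paths, variant names and mass value are illustrative assumptions, not part of an official SimReady recipe.

```python
# Hedged sketch: author a simple USD asset with physics schemas so it can be
# loaded into a simulator such as Isaac Sim. Requires the OpenUSD (pxr) bindings.
from pxr import Usd, UsdGeom, UsdPhysics

stage = Usd.Stage.CreateNew("crate_simready.usda")

# Simple placeholder geometry; a real SimReady asset references detailed meshes.
xform = UsdGeom.Xform.Define(stage, "/Crate")
cube = UsdGeom.Cube.Define(stage, "/Crate/Geom")

# USDPhysics schemas: rigid body dynamics plus a collider, as mentioned above.
UsdPhysics.RigidBodyAPI.Apply(xform.GetPrim())
UsdPhysics.CollisionAPI.Apply(cube.GetPrim())
UsdPhysics.MassAPI.Apply(xform.GetPrim()).CreateMassAttr(12.0)  # kg, illustrative

# A composition-arc-based variant set lets one asset carry multiple configurations.
vset = xform.GetPrim().GetVariantSets().AddVariantSet("condition")
for name in ("new", "worn"):
    vset.AddVariant(name)
vset.SetVariantSelection("new")

stage.GetRootLayer().Save()
```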

Open-Source Learning 

The Learn OpenUSD curriculum is now open source and available on GitHub, enabling contributors to localize and adapt templates, exercises and content for different audiences, languages and use cases. This gives educators a ready-made foundation to onboard new teams into OpenUSD-centric simulation workflows.

Generative Worlds as Safety Multiplier

Gaussian splatting — a technique that uses editable 3D elements to render environments quickly and with high fidelity — and world models are accelerating simulation pipelines for safe robotics testing and validation.

At SIGGRAPH Asia, the NVIDIA Research team introduced Play4D, a streaming pipeline that enables 4D Gaussian splatting to accurately render dynamic scenes and improve realism.

Spatial intelligence company World Labs is using its Marble generative world model with NVIDIA Isaac Sim and Omniverse NuRec so researchers can turn text prompts and sample images into photorealistic, Gaussian-based physics-ready 3D environments in hours instead of weeks.

Those worlds can then be used for physical AI training, testing and sim-to-real transfer. This high-fidelity simulation workflow expands the range of scenarios robots can practice in while keeping experimentation safely in simulation.

Lightwheel Helps Teams Scale Robot Training With SimReady Assets

Powered by OpenUSD, Lightwheel’s SimReady asset library includes a common scene description layer, making it easy to assemble high-fidelity digital twins for robots. The SimReady assets are embedded with precise geometry, materials and validated physical properties, which can be loaded directly into NVIDIA Isaac Sim and Isaac Lab for robot training. This allows robots to experience realistic contacts, dynamics and sensor feedback as they learn.

End-to-End Autonomous Vehicle Safety

End-to-end autonomous vehicle safety advancements are accelerating with new research, open frameworks and inspection services that make validation more rigorous and scalable.

NVIDIA researchers, with collaborators at Harvard University and Stanford University, recently introduced the Sim2Val framework to statistically combine real-world and simulated test results, reducing AV developers’ need for costly physical mileage while demonstrating how robotaxis and AVs can behave safely across rare and safety-critical scenarios.

Learn more by watching NVIDIA’s “Safety in the Loop” livestream:

 

These innovations are complemented by a new, open-source NVIDIA Omniverse NuRec Fixer, a Cosmos-based model trained on AV data that removes artifacts in neural reconstructions to produce higher-quality SimReady assets.

To align these advances with rigorous global standards, the NVIDIA Halos AI Systems Inspection Lab — accredited by ANAB — provides impartial inspection and certification of Halos elements across robotaxi fleets, AV stacks, sensors and manufacturer platforms through the Halos Certification Program.

AV Ecosystem Leaders Putting Physical AI Safety to Work

Bosch, Nuro and Wayve are among the first participants in the NVIDIA Halos AI Systems Inspection Lab, which aims to accelerate the safe, large-scale deployment of robotaxi fleets. Onsemi, which makes sensor systems for AVs, industrial automation and medical applications, has recently become the first company to pass inspection for the NVIDIA Halos AI Systems Inspection Lab.

 

The open-source CARLA simulator integrates NVIDIA NuRec and Cosmos Transfer to generate reconstructed drives and diverse scenario variations, while Voxel51’s FiftyOne engine, linked to Cosmos Dataset Search, NuRec and Cosmos Transfer, helps teams curate, annotate and evaluate multimodal datasets across the AV pipeline.

 

Mcity at the University of Michigan is enhancing the digital twin of its 32-acre AV test facility using Omniverse libraries and technologies. The team is integrating the NVIDIA Blueprint for AV simulation and Omniverse Sensor RTX application programming interfaces to create physics-based models of camera, lidar, radar and ultrasonic sensors.

By aligning real sensor recordings with high-fidelity simulated data and sharing assets openly, Mcity enables safe, repeatable testing of rare and hazardous driving scenarios before vehicles operate on public roads.

Get Plugged Into the World of OpenUSD and Physical AI Safety

Learn more about OpenUSD, NVIDIA Halos and physical AI safety by exploring these resources:

 

Katie Washabaugh, Product Marketing Manager for Autonomous Vehicle Simulation, NVIDIA

What Sensor Fusion Architecture Offers for NVIDIA Orin NX-Based Autonomous Vision Systems
https://www.edge-ai-vision.com/2026/02/what-sensor-fusion-architecture-offers-for-nvidia-orin-nx-based-autonomous-vision-systems/
Fri, 06 Feb 2026

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

Key Takeaways

  • Why multi-sensor timing drift weakens edge AI perception
  • How GNSS-disciplined clocks align cameras, LiDAR, radar, and IMUs
  • Role of Orin NX as a central timing authority for sensor fusion
  • Operational gains from unified time-stamping in autonomous vision systems

Autonomous vision systems deployed at the edge depend on seamless fusion of multiple sensor streams (cameras, LiDAR, Radar, IMU, and GNSS) to interpret dynamic environments in real time. For NVIDIA Orin NX-based platforms, the challenge lies in merging all the data types within microseconds to maintain spatial awareness and decision accuracy.

Latency from unsynchronized sensors can break perception continuity in edge AI vision deployments. For instance, a camera might capture a frame before LiDAR delivers its scan, or the IMU might record motion slightly out of phase. Such mismatches produce misaligned depth maps, unreliable object tracking, and degraded AI inference performance. A sensor fusion system anchored on the Orin NX mitigates this issue through GNSS-disciplined synchronization.

In this blog, you’ll learn everything you need to know about the sensor fusion architecture, why the unified time base matters, and how it boosts edge AI vision deployments.

What are the Different Types of Sensors and Interfaces?

| Sensor | Interface | Sync Mechanism | Timing Reference | Notes |
| --- | --- | --- | --- | --- |
| GNSS receiver | UART + PPS | PPS (1 Hz) + NMEA | UTC (GPS time) | Provides absolute time and PPS for system clock discipline |
| Cameras (GMSL) | GMSL (CSI) | Trigger derived from PPS | PPS-aligned frame start | Frames precisely aligned to GNSS time |
| LiDAR | Ethernet (USB NIC) | IEEE 1588 PTP | PTP synchronized to Orin NX | Time-stamped point clouds |
| Radar | Ethernet (USB NIC) | IEEE 1588 PTP | PTP synchronized to Orin NX | Time-stamped detections |
| IMU | I²C | Polled; software time stamp | Orin NX system clock (GNSS-disciplined) | Short-range sensor directly connected to Orin |

Coordinating Multi-Sensor Timing with Orin NX

Edge AI systems rely on timing discipline as much as compute power. The NVIDIA Orin NX acts as the central clock, aligning every connected sensor to a single reference point through GNSS time discipline.

The GNSS receiver sends a Pulse Per Second (PPS) signal and UTC data via NMEA to the Orin NX, which aligns its internal clock with global GPS time. This disciplined clock becomes the authority across all interfaces. From there, synchronization extends through three precise routes (a host-side fusion sketch follows the list):

  1. PTP over Ethernet: The Orin NX functions as a PTP Grandmaster through its USB NIC. LiDAR and radar units operate as PTP slaves, delivering time-stamped point clouds and detections that stay aligned to the GNSS time domain.
  2. PPS-derived camera triggers: Cameras linked via GMSL or MIPI CSI receive frame triggers generated from the PPS signal. This ensures frame start alignment to GNSS time with zero drift between captures.
  3. Timed IMU polling: The IMU connects over I²C and is polled at consistent intervals, typically between 500 Hz and 1 kHz. Software time stamps are derived from the same GNSS-disciplined clock, keeping IMU data in sync with all other sensors.
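Once every stream carries timestamps in this shared time base, host-side fusion largely reduces to grouping samples around each camera frame. The sketch below is a hypothetical illustration of that step: the data layout, rates and tolerance are assumptions for the example, not part of any Orin NX or e-con Systems API.

```python
# Hypothetical sketch of host-side fusion once every stream is time-stamped in
# the same GNSS-disciplined time base. Stream contents and tolerances are
# illustrative assumptions.
import bisect
from dataclasses import dataclass

@dataclass
class Sample:
    t: float      # seconds in the shared GNSS/UTC time base
    data: object  # frame, point cloud, detection list, or IMU reading

def nearest(samples: list[Sample], t: float) -> Sample:
    """Return the sample whose timestamp is closest to t (samples sorted by t)."""
    i = bisect.bisect_left([s.t for s in samples], t)
    candidates = samples[max(i - 1, 0): i + 1]
    return min(candidates, key=lambda s: abs(s.t - t))

def interpolate_imu(imu: list[Sample], t: float):
    """Linearly interpolate a numeric IMU reading (e.g., angular rates) at time t."""
    lo = nearest([s for s in imu if s.t <= t] or imu[:1], t)
    hi = nearest([s for s in imu if s.t >= t] or imu[-1:], t)
    if hi.t == lo.t:
        return lo.data
    w = (t - lo.t) / (hi.t - lo.t)
    return tuple(a + w * (b - a) for a, b in zip(lo.data, hi.data))

def fuse(cameras, lidar, radar, imu, tolerance=0.005):
    """Group sensor data around each PPS-triggered camera frame (±5 ms here)."""
    for frame in cameras:
        scan = nearest(lidar, frame.t)
        det = nearest(radar, frame.t)
        if max(abs(scan.t - frame.t), abs(det.t - frame.t)) > tolerance:
            continue  # drop frames whose companions drifted outside the window
        yield frame.t, frame.data, scan.data, det.data, interpolate_imu(imu, frame.t)
```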

Importance of a Unified Time Base

All sensors share the same GNSS-aligned time domain, enabling precise fusion of LiDAR, radar, camera, and IMU data.

 

Implementation Guidelines for Stable Sensor Fusion

  • USB NIC and PTP configuration: Enable hardware time-stamping (ethtool -T ethX) so Ethernet sensors maintain nanosecond alignment (a helper sketch follows this list).
  • Camera trigger setup: Use a hardware timer or GPIO to generate PPS-derived triggers for consistent frame alignment.
  • IMU polling: Maintain fixed-rate polling within Orin NX to align IMU data with the GNSS-disciplined clock.
  • Clock discipline: Use both PPS and NMEA inputs to keep the Orin NX clock aligned to UTC for accurate fusion timing.
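As a small companion to these guidelines, the following hedged Python helper checks a NIC for hardware time-stamping support and launches ptp4l from the linuxptp package so the Orin NX can serve time to Ethernet sensors. The interface name and config-file path are assumptions, and the grandmaster behavior is expected to come from that config file (e.g., masterOnly/priority settings).

```python
# Hedged helper for the guidelines above. Assumes ethtool and linuxptp (ptp4l)
# are installed; eth0 and the config path are placeholders.
import subprocess

def has_hw_timestamping(iface: str = "eth0") -> bool:
    """Parse `ethtool -T` output for hardware time-stamping capabilities."""
    out = subprocess.run(["ethtool", "-T", iface],
                         capture_output=True, text=True, check=True).stdout
    return "hardware-transmit" in out and "hardware-receive" in out

def start_ptp_service(iface: str = "eth0",
                      config: str = "/etc/ptp4l-grandmaster.conf"):
    """Launch ptp4l on the interface; the config file is expected to make this
    node the time source for the LiDAR and radar PTP slaves on the segment."""
    return subprocess.Popen(["ptp4l", "-i", iface, "-f", config, "-m"])

if __name__ == "__main__":
    if not has_hw_timestamping("eth0"):
        raise SystemExit("eth0 lacks hardware time-stamping; PTP accuracy will suffer")
    print("ptp4l started, PID", start_ptp_service("eth0").pid)
```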

Strengths of Leveraging Sensor Fusion-Based Autonomous Vision

Direct synchronization control

Removing the intermediate MCU lets Orin NX handle timing internally, cutting latency and eliminating cross-processor jitter.

Unified global time-stamping

All sensors operate on GNSS time, ensuring every frame, scan, and motion reading aligns to a single reference.

Sub-microsecond Ethernet alignment

PTP synchronization keeps LiDAR and radar feeds locked to the same temporal window, maintaining accuracy across fast-moving scenes.

Deterministic frame capture

PPS-triggered cameras guarantee frame starts occur exactly on the GNSS second, preventing drift between visual and depth data.

Consistent IMU data

High-frequency IMU polling stays aligned with the master clock, preserving accurate motion tracking for fusion and localization.

e-con Systems Offers Custom Edge AI Vision Boxes

e-con Systems has been designing, developing, and manufacturing OEM camera solutions since 2003. We offer customizable Edge AI Vision Boxes powered by NVIDIA Orin NX and Orin Nano. These boxes bring together multi-camera interfaces, hardware-level synchronization, and AI-ready processing into one cohesive unit for real-time vision tasks.

Our Edge AI Vision Box – Darsi simplifies the adoption of GNSS-disciplined fusion in robotics, autonomous mobility, and industrial vision. It comes with support for PPS-triggered cameras, PTP-synced Ethernet sensors, and flexible connectivity options. It also provides an end-to-end framework where developers can plug in sensors, train models, and run inference directly at the edge (without external synchronization hardware).

Know more -> e-con Systems’ Orin NX/Nano-based Edge AI Vision Box

Use our Camera Selector to find other best-fit cameras for your edge AI vision applications.

If you need expert guidance for selecting the right imaging setup, please reach out to camerasolutions@e-consystems.com.

FAQs

  1. What role does sensor fusion play in edge AI vision systems?
    Sensor fusion aligns data from cameras, LiDAR, radar, and IMU sensors to a common GNSS-disciplined time base. It ensures every frame and data point corresponds to the same moment, thereby improving object detection, 3D reconstruction, and navigation accuracy in edge AI systems.
  2. How does NVIDIA Orin NX handle synchronization across sensors?
    The Orin NX functions as both the compute core and timing master. It receives a PPS signal and UTC data from the GNSS receiver, disciplines its internal clock, and distributes synchronization through PTP for Ethernet sensors, PPS triggers for cameras, and fixed-rate polling for IMUs.
  3. Why is a unified time base critical for reliable fusion?
    When all sensors share a single GNSS-aligned clock, the system eliminates time-stamp drift and timing mismatches. So, fusion algorithms can process coherent multi-sensor data streams, which enable the AI stack to operate with consistent depth, motion, and spatial context.
  4. What are the implementation steps for achieving stable sensor fusion?
    Developers should enable hardware time-stamping for PTP sensors, use PPS-based hardware triggers for cameras, poll IMUs at fixed intervals, and feed both PPS and NMEA inputs into the Orin NX clock. These steps maintain accurate UTC alignment through long runtime cycles.
  5. How does e-con Systems support developers building with Orin NX?
    e-con Systems provides customizable Edge AI Vision Boxes powered by NVIDIA Orin NX and Orin Nano. They are equipped with synchronized camera interfaces, AI-ready processing, and GNSS-disciplined timing. Hence, product developers can deploy real-time vision solutions quickly and with full temporal accuracy.

Prabu Kumar
Chief Technology Officer and Head of Camera Products, e-con Systems

Production Software Meets Production Hardware: Jetson Provisioning Now Available with Avocado OS
https://www.edge-ai-vision.com/2026/02/production-software-meets-production-hardware-jetson-provisioning-now-available-with-avocado-os/
Mon, 02 Feb 2026

This blog post was originally published at Peridio’s website. It is reprinted here with the permission of Peridio.

The gap between robotics prototypes and production deployments has always been an infrastructure problem disguised as a hardware problem. Teams build incredible computer vision models and robotic control systems on NVIDIA Jetson developer kits, only to hit a wall when scaling to production fleets. The bottleneck isn’t the AI or the algorithms—it’s the months spent building custom Linux systems, provisioning infrastructure, and OTA mechanisms that should have been solved problems.

Today, we’re announcing native provisioning support for NVIDIA Jetson Orin Nano, Orin NX and AGX Orin in Avocado OS. This completes our production software stack for the industry’s leading AI edge hardware, delivering deterministic Linux, secure OTA updates, and fleet management from day one.

What We’ve Learned About Production Jetson Deployments

Through partnerships with companies like RoboFlow and SoloTech, and conversations with teams building everything from autonomous mobile robots to industrial smart cameras, a clear pattern emerged. The technical challenges weren’t about AI models or robotic control algorithms—teams had those figured out. The bottleneck was infrastructure.

Teams consistently hit the same obstacles:

  • Custom Yocto BSP builds consuming 3-6 months of engineering time
  • RTC configuration issues causing timestamp failures in vision pipelines
  • Fragile update mechanisms that break when scaling beyond dozens of devices
  • Manual provisioning workflows that don’t translate to manufacturing partnerships
  • Security compliance requirements eating bandwidth from core product development

These aren’t edge cases. This is the standard experience of taking Jetson from prototype to production. And it’s exactly backward—teams solving hard problems in robotics and computer vision shouldn’t be rebuilding the same embedded Linux infrastructure.

Premium Hardware Deserves Production-Ready Software

NVIDIA Jetson Orin Nano delivers 67 TOPS of AI performance with exceptional power efficiency. It’s the computational foundation for modern edge AI—supporting everything from multi-camera vision systems to real-time SLAM processing to local LLM inference. The hardware is production-ready.

The software needs to match.

What “production-grade” actually means:

Stable Base OS: Deterministic Linux that supports robust solutions. Not Ubuntu images that drift with package updates. Reproducible, image-based systems where every device runs identical, validated software.

Full NVIDIA Tool Suite: CUDA, TensorRT, OpenCV—pre-integrated and production-tested. Not reference implementations that require months of BSP work. The complete NVIDIA stack, ready to support inference solutions from partners like RoboFlow and SoloTech.

Day One Provisioning: Factory-ready deployment without custom scripts and USB ceremonies. Cryptographically verified images, hardware-backed credentials, and deterministic flashing workflows that integrate with manufacturing partners.

Fleet-Scale Operations: Atomic OTA updates with automatic rollback. Phased releases with cohort targeting. Air-gapped update delivery for secure environments. Infrastructure that works reliably across thousands of devices.

This is what we mean by production-ready hardware meeting production-grade software. Jetson provides the computational horsepower. Avocado OS and Peridio Core provide the operational infrastructure to actually ship products.

Complete Stack: From Build to Fleet

With Jetson provisioning now available, teams get the complete deployment pipeline:

Build Phase

  • Pre-integrated NVIDIA BSPs with validated hardware support
  • Modular system composition using declarative configuration
  • Reproducible builds with cryptographic verification
  • CUDA, TensorRT, ROS2, OpenCV—all validated and integrated

Provisioning Phase

  • Native Jetson flashing via tegraflash profile
  • Automated partition layout and bootloader configuration
  • Factory credential injection for fleet registration
  • Deterministic provisioning from Linux host environments

Deployment Phase

  • Atomic, image-based OTA updates with automatic rollback (a generic sketch of this pattern follows this list)
  • Phased releases with cohort targeting
  • SBOM generation and CVE tracking
  • Air-gapped update delivery for secure environments
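To illustrate the atomic-update-with-rollback pattern referenced above, here is a generic, hedged sketch in Python. The slot device paths, the dd-based write, and the callbacks are assumptions for the example; they do not describe Avocado OS or Peridio Core internals.

```python
# Generic A/B update pattern: verify the image, write it to the inactive slot,
# atomically switch the boot target, and roll back if the device is not healthy.
# Paths and callbacks below are illustrative, not a vendor API.
import hashlib
import subprocess

SLOTS = {"A": "/dev/disk/by-partlabel/rootfs_a",
         "B": "/dev/disk/by-partlabel/rootfs_b"}

def verify(image_path: str, expected_sha256: str) -> bool:
    """Cryptographically verify the downloaded image before touching any slot."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

def write_inactive_slot(image_path: str, active: str) -> str:
    """Write the new image to whichever slot is NOT currently booted."""
    target = "B" if active == "A" else "A"
    subprocess.run(["dd", f"if={image_path}", f"of={SLOTS[target]}",
                    "bs=4M", "conv=fsync"], check=True)
    return target

def apply_update(image_path, sha256, active_slot, set_boot_slot, healthy):
    """set_boot_slot flips the bootloader's slot flag (the single atomic step);
    healthy() stands in for a post-reboot health check -- a real update agent
    persists this state across the reboot instead of calling it inline."""
    if not verify(image_path, sha256):
        raise RuntimeError("image failed verification; update aborted")
    new_slot = write_inactive_slot(image_path, active_slot)
    set_boot_slot(new_slot)
    if not healthy():
        set_boot_slot(active_slot)   # automatic rollback to the known-good slot
        raise RuntimeError("health check failed; rolled back")
```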

Fleet Operations

  • Centralized device management via Peridio Console
  • Real-time telemetry and health monitoring
  • Remote access for debugging and diagnostics
  • 10+ year support lifecycle matching industrial hardware

This isn’t a reference design or example code. It’s production infrastructure that scales from 10 devices to 10,000 and beyond.

Why This Matters: Robotics is Moving Faster Than Expected

The robotics industry is accelerating at an unprecedented pace. The foundational layer—perception—is rapidly maturing, unlocking capabilities that seemed years away just months ago. Vision language models (VLMs) and vision-language-action models (VLAs) are fundamentally changing how robots understand and interact with their environments. Engineers who once relied entirely on deterministic control systems are now integrating fine-tuned AI models that can handle ambiguity and adapt to novel situations. The innovation happening right now suggests 2026 will be a breakout year for practical robotics deployment.

Last week at Circuit Launch’s Robotics Week in the Valley, we saw this firsthand. Teams that aren’t roboticists or computer vision experts were training models with RoboFlow, integrating VLA platforms like SoloTech, and building working demonstrations in hours—not weeks.

The AI tooling has advanced exponentially. Inference frameworks are mature. Hardware platforms like Jetson deliver exceptional performance. But embedded Linux infrastructure has been the persistent bottleneck preventing teams from shipping at the pace they’re prototyping.

This matters because:

When prototyping velocity increases 10x, production infrastructure can’t remain a 6-month investment. Teams building breakthrough applications need to move from working demo to deployed fleet at the same pace they move from idea to working demo.

The companies winning in robotics will be the ones focused on their core innovation—better vision algorithms, more sophisticated manipulation, smarter navigation. Not the ones rebuilding Yocto layers and debugging RTC drivers.

Technical Foundation: Why Provisioning is Hard

The challenge with Jetson provisioning isn’t technical complexity—it’s reproducibility at scale. Most teams start by configuring their development board manually: installing packages, setting up environments, tweaking configurations until everything works. Then they try to capture those steps in scripts to replicate the setup on the next device.

This manual-to-scripted approach falls apart quickly. What runs perfectly on your desk becomes unpredictable in production. By the time you’re managing even a handful of devices, you’re troubleshooting subtle environment differences, dealing with drift from package updates, and questioning whether any two devices are truly running the same stack.

Production provisioning solves this fundamentally differently. Instead of scripting manual steps, you’re building reproducible system images where every device boots into an identical, validated environment. The OS becomes a clean foundation—deterministic, verifiable, and ready to run whatever AI toolchain your application requires. No configuration drift. No “it works on my machine” surprises.

This is where Avocado OS and NVIDIA’s tegraflash tooling come together. We’ve integrated deeply with NVIDIA’s BSP to automate the entire provisioning workflow—partition layouts, bootloader configuration, cryptographic verification, hardware initialization sequences. The complexity is still there, but it’s handled systematically rather than cobbled together through scripts.

We document the Linux host requirement explicitly because it matters. Provisioning workflows require reliable hardware enumeration and direct device access. macOS and Windows introduce VM-in-VM architectures that create timing issues and device passthrough complexity. Native Linux (Ubuntu 22.04+, Fedora 39+) ensures consistent, reliable provisioning.

For production deployments, this integrates with manufacturing partners. Advantech, Seeed Studio, and ecosystem partners can run provisioning at end-of-line, delivering pre-configured devices directly to deployment sites. Zero-touch deployment at scale.

Scale Across the Jetson Family

Teams can scale up and down within the Jetson family with unified toolchains and processes:

  • NVIDIA Jetson Orin Nano: 67 TOPS, efficient edge AI for vision and robotics
  • NVIDIA Jetson Orin NX: Up to 157 TOPS, balancing performance and efficiency for production deployments
  • NVIDIA Jetson AGX Orin: Up to 275 TOPS for demanding AI workloads
  • NVIDIA Jetson Thor (coming soon): Next-generation automotive and robotics platform

One development workflow. Consistent provisioning. Predictable behavior across the product line. This matters when your prototype needs to scale, or when different deployment scenarios require different performance tiers.

Getting Started: Production-Ready in Minutes

For teams ready to move from prototype to production, our provisioning guide walks through the complete workflow—from initializing your project to flashing your first device.

The entire process, from clean hardware to production-ready deployment, takes minutes, not months. The guide covers everything you need: Linux host setup, project initialization, building production images, and first boot configuration.

What’s Next: NVIDIA Momentum

Provisioning is the foundation. What comes next is ecosystem momentum.

We’re working with partners across the robotics and computer vision stack—from inference platforms like RoboFlow and SoloTech to hardware manufacturers like Advantech. The goal is creating a complete solution ecosystem where teams can focus entirely on their application layer while we handle everything below it.

We should talk if you are:

  • Building on Jetson and struggling with the path to production.
  • Evaluating hardware platforms and need production software from day one.
  • Just getting started and want to avoid months of infrastructure work.

Production Software That Matches Production Hardware

Our thesis has always been that embedded engineers should ship applications, not operating systems. The robotics acceleration we’re seeing validates this more than ever. Teams have breakthrough ideas for autonomous systems, vision AI, and robotic manipulation. They shouldn’t spend months on Linux infrastructure.

Jetson provisioning is production-ready today. It’s the result of deep technical work, extensive partner validation, and clear understanding of what teams actually need when taking hardware to production.

Production-ready hardware. Production-grade software. Available now.

 


Ready to deploy production-ready Jetson? Check out our Jetson solution overview, explore the provisioning guide, or request a demo to discuss your use case.

If you’re working with Jetson and want to connect about production deployment challenges, join our Discord or reach out directly—we’d love to learn about your use case and how we can help.

 

Bill Brock
CEO, Peridio

Robotics Builders Forum offers Hardware, Know-How and Networking to Developers
https://www.edge-ai-vision.com/2026/01/robotics-day-offers-hardware-know-how-and-networking-to-developers/
Thu, 29 Jan 2026

On February 25, 2026 from 8:30 am to 5:30 pm ET, Advantech, Qualcomm, and Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network, will present Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, while Edge Impulse is a subsidiary of Qualcomm.

Here’s the description, from the event registration page:

Overview

Exclusive in-person event: get practical guidance, platform roadmap & hands-on experience to accelerate compute & AI choices for your robot

Join us for an exclusive, in-person Robotics Day / Builders Forum built for engineers and product teams developing AMRs, humanoids, and industrial robotics applications. Co-hosted with Arrow, Qualcomm, Edge Impulse and Advantech, and supported by ecosystem partners, the event delivers practical guidance on choosing compute platforms, integrating vision and sensors, and accelerating AI development from prototype to deployment.

What to expect

  • Expert keynotes on robotics platform trends, roadmap considerations, and rugged edge deployment
  • Live demo showcase with real hardware and end-to-end solution workflows you can evaluate firsthand
  • Three technical breakout tracks with deep dives on compute, vision and perception, and AI software optimization
  • High-value networking with peer robotics builders, plus direct access to industry leaders, solution architects, and partner technical teams

You’ll leave with clearer platform direction, implementation best practices, and trusted connections for follow-up technical discussions and next-step evaluations. Attendance is limited to keep conversations focused and interactive.

To close the day, we will host a Connections Mixer at the Sky Lounge featuring a brief wrap-up and a raffle. This casual networking hour is designed to help attendees connect with peers, speakers, and solution teams in a relaxed setting. Sponsored by D3 Embedded.
————————————————————————————————–

This event is free and designed for professionals building or evaluating robotics and AMR solutions, including robotics and AMR product managers, system architects and embedded engineers, industrial automation R&D leaders, perception and vision engineers, and operations and engineering directors. We also welcome professionals tracking the latest robotics trends and platform direction.

Invitation-only access

Click Get ticket and complete the Event Registration form to apply for a free ticket. Event hosts will review submissions and email confirmed invitations (with an event code) to qualified attendees. Please present your ticket at reception to receive your full-day conference badge.

Location

Wyndham Grand Pittsburgh Downtown
600 Commonwealth Place
Pittsburgh, PA 15222

Agenda

08:30 AM – 09:00 AM – Breakfast & Connections Kickoff

09:00 AM – 09:15 AM – Opening Remarks & Day Overview 

09:15 AM – 09:45 AM – Keynote 1: Global Robotics Trends and How You Can Take Advantage (sponsored by Arrow) 

09:45 AM – 10:30 AM – Keynote 2: Utilizing Dragonwing for Industrial Arm-Based Robotics Solutions (sponsored by Qualcomm, Edge Impulse)

10:30 AM – 11:00 AM – Keynote 3: Ruggedizing Robotics Solutions for Mobility and Harsh Environments (sponsored by Advantech) 

11:00 AM – Break 

11:15 AM – 11:45 AM – Keynote 4: Selecting the Proper Cameras and Sensors for AI-Assisted Perception (sponsored by D3 Embedded) 

11:45 AM – 12:45 PM – Lunch 

12:45 PM – 03:30 PM – Three Breakout Rotations (45 min each with breaks) 

Track A: Building Out a Full-Scale Humanoid Robot from a Hardware Perspective
Track B: Leveraging Software Solutions to Get the Most Out of Your Processor
Track C: Designing and Integrating Machine Vision Solutions for AMRs and Humanoids

03:30 PM – 05:30 PM – Connections Mixer at Sky Lounge (sponsored by D3 Embedded)

To register for this free event, please see the event page.

Faster Sensor Simulation for Robotics Training with Machine Learning Surrogates
https://www.edge-ai-vision.com/2026/01/faster-sensor-simulation-for-robotics-training-with-machine-learning-surrogates/
Wed, 28 Jan 2026

This article was originally published at Analog Devices’ website. It is reprinted here with the permission of Analog Devices.

Training robots in the physical world is slow, expensive, and difficult to scale. Roboticists developing AI policies depend on high quality data—especially for complex tasks like picking up flexible objects or navigating cluttered environments. These tasks rely on data from sensors, motors, and other components used by the robot. Yet generating this data in the real world is time-consuming and requires extensive hardware infrastructure.

Simulation offers a scalable alternative. By running multiple robotic motion scenarios in parallel, teams can significantly reduce the time required for data collection. However, most simulation environments face a trade-off between performance and physical precision.

A model with near-perfect, real-world fidelity often requires vast amounts of computation and time. Such precise but slow simulations produce less data, reducing their usefulness. Instead, many developers choose simplifications that improve speed but result in a disconnect between training and deployment—commonly known as the sim-to-real gap. This means that robots trained solely in simulation will struggle in the real world. Their policies will be confused by actual sensor data that includes noise, interference, and flaws.

To address this challenge and accelerate simulation, Analog Devices developed a machine learning-based surrogate model. In our testing, the model simulated the behavior of an indirect time-of-flight (iToF) sensor with near-real-time performance, while preserving critical characteristics of the real sensor’s output. The model offers a true acceleration breakthrough in scalable, realistic training for robotic policies, and a path forward with complex simulation.

Simulating Sensors with Real-World Accuracy

iToF sensors, such as ADI’s ADTF3175, are common in robotic perception. These sensors emit light in a regular pattern to measure depth by calculating its reflection. In the real world, sensors exhibit readout noise, and accounting for this interference is essential for training reliable robotic policies. However, most simulation environments offer idealized sensor data. For example, NVIDIA’s Isaac Sim™ provides clean depth maps based on geometry, not the noisy output of real-world sensors.
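As background for why depth comes out of these measurements at all, here is a small, hedged sketch of the standard indirect ToF phase-to-depth relation; the modulation frequency and phase values are illustrative only and not specific to any particular sensor.

```python
# Background sketch of the indirect ToF principle: depth follows from the phase
# shift between emitted and received modulated light. Values are illustrative.
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(phase_rad: float, mod_freq_hz: float) -> float:
    """depth = c * phase / (4 * pi * f_mod); unambiguous range is c / (2 * f_mod)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

# Example: a phase shift of pi/2 at 100 MHz modulation corresponds to ~0.37 m.
print(round(itof_depth(math.pi / 2, 100e6), 3))
```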

To fill this gap, ADI had previously developed a physics-based simulator that modeled iToF sensor behavior at the pixel level. While accurate, the simulator was too slow for full-frame, real-time use. At just 0.008 frames per second (FPS), it was impractical for training AI policies that require thousands of scenes per second.

Using Machine Learning to Speed Up Simulation

The breakthrough came from using machine learning to emulate the high-fidelity simulator’s output. We trained a multilayer perceptron (MLP) model as a surrogate to approximate the behavior of the precise white-box simulator. Importantly, the team designed this stand-in model to learn not just the average output but also reflect the original’s variability and noise characteristics.

The surrogate model decomposes its task into three sub-tasks:

  • Predict the expected depth measurement.
  • Estimate the standard deviation, accounting for uncertainty.
  • Predict whether a pixel’s depth measurement will be invalid or unresolved.

The surrogate model uses this probabilistic output to capture the essential stochastic behavior of the original simulator while dramatically accelerating inference. The result is a simulation that runs at 17 FPS. That’s fast enough for real-time use while maintaining approximately 1% error from the high-fidelity model.
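For illustration, a surrogate with this three-headed decomposition might look like the following PyTorch sketch; the layer sizes, per-pixel inputs and loss are assumptions for the example rather than ADI's published architecture.

```python
# Hedged sketch of a per-pixel surrogate along the lines described above: an MLP
# with three heads predicting mean depth, depth standard deviation, and the
# probability that a pixel is invalid. Architecture details are illustrative.
import torch
import torch.nn as nn

class IToFSurrogate(nn.Module):
    def __init__(self, in_features: int = 8, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.depth_mean = nn.Linear(hidden, 1)      # expected depth (m)
        self.depth_logstd = nn.Linear(hidden, 1)    # log std dev, for stability
        self.invalid_logit = nn.Linear(hidden, 1)   # pixel invalid / unresolved

    def forward(self, x):
        h = self.trunk(x)
        return self.depth_mean(h), self.depth_logstd(h), self.invalid_logit(h)

def surrogate_loss(mean, logstd, invalid_logit, depth_target, invalid_target):
    """Gaussian negative log-likelihood for depth plus BCE for pixel validity."""
    var = torch.exp(2 * logstd)
    nll = 0.5 * ((depth_target - mean) ** 2 / var + 2 * logstd)
    bce = nn.functional.binary_cross_entropy_with_logits(invalid_logit, invalid_target)
    # Only penalize depth error on pixels the simulator marked as valid.
    return (nll * (1 - invalid_target)).mean() + bce

# At inference time, sampling depth ~ N(mean, std) and dropping pixels flagged
# invalid reproduces the stochastic behavior of the white-box simulator.
```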

Real-World Validation in Isaac Sim

After building the surrogate model, the team integrated it into NVIDIA’s Isaac Sim environment. Testing using a digital twin of a robot arm performing peg-insertion tasks showed that the model closely matched the original simulator’s output. The output even included the noise that was absent from standard simulations.

Real-world iToF sensors are sensitive to optical effects in the near-infrared (NIR) range, a property often ignored in standard simulations. Furthermore, iToF performance varies across different surface materials. To ensure the surrogate accounts for both behaviors, the team used fast surrogate inference and adjusted the NIR reflectivity of simulated objects to better match sensor behavior in physical experiments.

This technique helped reduce differences between simulation and real sensor data, particularly on matte surfaces. While imperfect, these adaptations made major strides to minimize the sim-to-real gap. The team is actively exploring additional improvements, including changes to the underlying physics models.

What’s Next: Improving Fidelity and Generalization

This surrogate model serves as a baseline for enabling fast, realistic simulation of iToF sensors in robotic training workflows. But it’s only the first step. New work involves physics-informed neural operator (PINO) models to improve accuracy, reduce training data needs, and generalize across different scenes and tasks.

In the future, the aim is to eliminate the need for an intermediate white-box simulator. By training models directly on real-world sensor data, simulators could adapt more readily to diverse environments without requiring manual tuning or scene-specific calibration.

These developments could dramatically reduce the time and cost required to deploy robotics systems to real-world environments. Ideally, this work will advance deployments in logistics, manufacturing, product inspection, and beyond.

 

Philip Sharos, Principal Engineer, Edge AI

NAMUGA Successfully Concludes CES Participation, official Launch of Next-Generation 3D LiDAR Sensor ‘Stella-2’
https://www.edge-ai-vision.com/2026/01/namuga-successfully-concludes-ces-participation-official-launch-of-next-generation-3d-lidar-sensor-stella-2/
Thu, 22 Jan 2026

Las Vegas, NV, Jan 15 — NAMUGA announced that it successfully concluded the unveiling of its new product, Stella-2, at CES 2026, the world’s largest IT and consumer electronics exhibition, held in Las Vegas, USA, from January 6 to 9.

The newly unveiled product, Stella-2, is a solid-state LiDAR jointly developed by NAMUGA and Lumotive. In particular, Stella-2 has been evaluated as enabling more precise and proactive responses in outdoor environments by significantly improving sensing distance and frame rate compared to its predecessor. In addition to existing partners such as Infineon, LIPS, and PMD, NAMUGA also received a series of new collaboration proposals.

The key themes of this year’s CES were undoubtedly Physical AI and robotics. As demand for next-generation sensors surged across industries including robotics, smart infrastructure, and autonomous driving, NAMUGA’s 3D sensing technology and large-scale mass production experience drew significant attention as key competitive strengths. Notably, NAMUGA was recently selected as a supplier of 3D sensing modules for a global automotive robot platform.

Tangible outcomes were also achieved. At CES 2026, NAMUGA finalized the initial supply of Stella-2 samples to a North American global e-commerce big tech partner. This achievement demonstrates NAMUGA’s competitiveness, having passed the partner’s stringent technical and quality standards. Building on this supply, NAMUGA plans to explore opportunities to expand the application of 3D sensing-based solutions to the partner’s logistics robots.

Meanwhile, Hyundai Motor Group Executive Chair Euisun Chung’s visit to the Samsung Electronics booth, where he proposed combining MobeD with robot vacuum cleaners, drew considerable attention. The 3D sensing camera, a core component of AI robot vacuum cleaners supplied by NAMUGA, is a high value-added technology essential for distance measurement.

NAMUGA CEO Lee Dong-ho stated, “Through CES 2026, we were able to confirm the high level of interest and potential surrounding 3D sensing technologies among IT companies,” adding, “As NAMUGA’s 3D sensing technology continues to be adopted by global automotive and e-commerce companies, we are keeping pace with global trends in line with the advent of the Physical AI era.”

NAMUGA CEO Lee Dong-ho discussing 3D robot sensor strategies at CES 2026

NAMUGA CEO Lee Dong-ho introducing Stella-2 with Lumotive CEO Sam Heidari at CES 2026

NVIDIA Unveils New Open Models, Data and Tools to Advance AI Across Every Industry
https://www.edge-ai-vision.com/2026/01/nvidia-unveils-new-open-models-data-and-tools-to-advance-ai-across-every-industry/
Wed, 07 Jan 2026

This post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

Expanding the open model universe, NVIDIA today released new open models, data and tools to advance AI across every industry.

These models — spanning the NVIDIA Nemotron family for agentic AI, the NVIDIA Cosmos platform for physical AI, the new NVIDIA Alpamayo family for autonomous vehicle development, NVIDIA Isaac GR00T for robotics and NVIDIA Clara for biomedical — will empower companies with the tools to develop real-world AI systems.

NVIDIA contributes open-source training frameworks and one of the world’s largest collections of open multimodal data, including 10 trillion language training tokens, 500,000 robotics trajectories, 455,000 protein structures and 100 terabytes of vehicle sensor data. This is an unprecedented scale of diverse open resources to accelerate innovation in language, robots, scientific research and autonomous vehicles.

Leading technology companies — including Bosch, CodeRabbit, CrowdStrike, Cohesity, Fortinet, Franka Robotics, Humanoid, Palantir, Salesforce, ServiceNow, Hitachi and Uber — are adopting and building on NVIDIA’s open model technologies.

NVIDIA Nemotron Brings Speech, Multimodal Intelligence and Safety to AI Agents

Building on the recently released NVIDIA Nemotron 3 family of open models and data, NVIDIA is releasing Nemotron models for speech, multimodal retrieval-augmented generation (RAG) and safety.

  • Nemotron Speech comprises leaderboard-topping open models, including a new ASR model that delivers real-time, low-latency speech recognition for live captions and speech AI applications. Daily and Modal benchmarks show that the model delivers 10x faster performance than other models in its class.
  • Nemotron RAG comprises new embed and rerank vision language models (VLMs) that provide highly accurate multilingual and multimodal data insights to enhance document search and information retrieval.
  • Nemotron Safety models, which strengthen the safety and trustworthiness of AI applications, now include the Llama Nemotron Content Safety model, featuring expanded language support, and Nemotron PII, which detects sensitive data with high accuracy.

Bosch is adopting Nemotron Speech to enable drivers to interact with their vehicles. ServiceNow trains its Apriel model family on open datasets, including Nemotron for cost-efficient multimodal performance.

Cadence and IBM are piloting NVIDIA Nemotron RAG models to improve search and reasoning across complex technical documents.

CrowdStrike, Cohesity and Fortinet are adopting NVIDIA Nemotron Safety models to strengthen the trustworthiness of their AI applications.

Palantir is integrating Nemotron models into its Ontology framework to build a first-of-its-kind, integrated technology stack for specialized AI agents. CodeRabbit is using Nemotron models to power and scale its AI code reviews, improving speed and cost efficiency while maintaining high review accuracy.

NVIDIA is also releasing open-source datasets, training resources and blueprints to developers, including the dataset and training code for the Llama Embed Nemotron 8B model, featured on the MMTEB leaderboard. This is in addition to the updated LLM Router that shows developers how to automatically direct AI requests to the best model for the job, and the dataset used to build the new Nemotron Speech ASR model.

New Models for Every Type of Physical AI and Robot

Developing physical AI for robots and autonomous systems requires large, diverse datasets and models that can perceive, reason and act in complex, real-world environments. On Hugging Face, robotics is the fastest-growing segment, with NVIDIA’s open robotics models and datasets leading the platform’s downloads.

NVIDIA is releasing NVIDIA Cosmos open world foundation models that bring humanlike reasoning and world generation to accelerate physical AI development and validation.

NVIDIA has also released open models and blueprints for each physical AI embodiment, built on Cosmos:

  • Isaac GR00T N1.6 is an open reasoning vision language action (VLA) model, purpose-built for humanoid robots, that unlocks full body control and uses NVIDIA Cosmos Reason for better reasoning and contextual understanding.
  • The NVIDIA Blueprint for video search and summarization, part of the NVIDIA Metropolis platform, is a reference workflow for building vision AI agents that can analyze large volumes of recorded and live video to improve operational efficiency and public safety.

Salesforce, Milestone, Hitachi, Uber, VAST Data and Encord are using Cosmos Reason for traffic and workplace productivity AI agents. Franka Robotics, Humanoid and NEURA Robotics are using Isaac GR00T to simulate, train and validate new behaviors for robots before scaling to production.

NVIDIA Alpamayo for Reasoning-Based Autonomous Vehicles

Developing safe, scalable autonomous driving depends on AI that can perceive, reason and act in complex real-world environments and scenarios, with development workflows that support rapid training, testing and improvement at scale.

NVIDIA is releasing NVIDIA Alpamayo, a new family of open models, simulation tools and large datasets to advance reasoning-based autonomous vehicle development. It includes:

  • Alpamayo 1, the first open, large-scale reasoning VLA model for autonomous vehicles (AVs) that enables vehicles to understand their surroundings, as well as explain their actions.​
  • AlpaSim, an open-source simulation framework that enables closed-loop training and evaluation of reasoning-based AV models across diverse environments and edge cases.

NVIDIA is also releasing Physical AI Open Datasets, including over 1,700 hours of driving data collected across the widest range of geographies and conditions, covering rare and complex real-world edge cases essential for advancing reasoning architectures.

NVIDIA Clara for Healthcare and Life Sciences

To lower costs and deliver treatments faster, NVIDIA is launching new Clara AI models that bridge the gap between digital discovery and real-world medicine.

Helping researchers design treatments that are safer, more effective and easier to produce, these models include:

  • La-Proteina enables the design of large, atom-level-precise proteins for research and drug candidate development, giving scientists new tools to study diseases previously considered untreatable.
  • ReaSyn v2 ensures AI-designed drugs are practical to synthesize by incorporating a manufacturing blueprint into the discovery process.
  • KERMT provides high-accuracy, computational safety testing early in development by predicting how a potential drug will interact with the human body.
  • RNAPro unlocks the potential of personalized medicine by predicting the complex 3D shapes of RNA molecules.

In addition, an NVIDIA dataset of 455,000 synthetic protein structures helps AI researchers build more accurate AI models.

Get Started With NVIDIA Open Models and Technologies

NVIDIA open models, data and frameworks are now available on GitHub and Hugging Face and from a range of cloud, inference and AI infrastructure platforms, as well as build.nvidia.com, giving developers flexible access to supporting resources.

Many of these models are also available as NVIDIA NIM microservices for secure, scalable deployment on any NVIDIA-accelerated infrastructure, from the edge to the cloud.

Learn more by watching NVIDIA Live at CES.

Kari Briski

Qualcomm Introduces a Full Suite of Robotics Technologies, Powering Physical AI from Household Robots up to Full-Size Humanoids
https://www.edge-ai-vision.com/2026/01/qualcomm-introduces-a-full-suite-of-robotics-technologies-powering-physical-ai-from-household-robots-up-to-full-size-humanoids/
Tue, 06 Jan 2026

Key Takeaways:
  • Utilizing leadership in Physical AI with comprehensive stack systems built on safety-grade high performance SoC platforms, Qualcomm’s general-purpose robotics architecture delivers industry-leading power efficiency and scalability, enabling capabilities from personal service robots to next generation industrial autonomous mobile robots and full-size humanoids that can reason, adapt, and decide.
  • New end-to‑end architecture accelerates automation by transforming physical embodiments for general‑purpose, continuously learning robots for retail, logistics, and manufacturing.
  • The Qualcomm Dragonwing™ IQ10 Series is the Company’s latest and leading premium-tier robotics processor for humanoids and advanced autonomous mobile robots (AMRs).
  • Figure and Qualcomm Technologies are collaborating to define the next generation of compute architecture as Figure scales their humanoid platforms.
  • Qualcomm is building a comprehensive ecosystem for its robotics platforms working with a variety of companies such as Advantech, APLUX, AutoCore, Booster, Figure, Kuka Robotics, Robotec.ai, and VinMotion to bring deployment-ready robotics at scale.

Las Vegas, NV, January 5, 2026 — At CES, Qualcomm Technologies, Inc. (NASDAQ: QCOM) introduced a next-generation robotics comprehensive-stack architecture that integrates hardware, software, and compound AI. Qualcomm Technologies also unveiled its latest high performance robotics processor for industrial AMRs and advanced full-size humanoids, the Qualcomm Dragonwing™ IQ10 Series. This latest robotics-specific processor expands the Company’s robotics roadmap, delivering high-performance, energy-efficient “Brain of the Robot” capabilities. Utilizing Qualcomm Technologies’ proven expertise in edge AI and high-performance, low-power systems, this innovation transforms prototypes into deployable, intelligent machines.

“As pioneers in energy efficient, high–performance Physical AI systems, we know what it takes to make even the most complex robotics systems perform reliably, safely, and at scale,” said Nakul Duggal, executive vice president and group general manager, automotive, industrial and embedded IoT and robotics, Qualcomm Technologies, Inc. “By building on our strong foundational low-latency safety-grade high performance technologies ranging from sensing, perception to planning and action, we’re redefining what’s possible with physical AI by moving intelligent machines out of the labs and into real-world environments.”

“Figure’s mission is to develop general-purpose humanoid robots powered by advanced AI to eliminate unsafe and undesirable jobs, boost productivity across industries, and create economic abundance that enables happier, more purposeful lives for humanity,” stated Brett Adcock, founder and chief executive officer, Figure. “Qualcomm Technologies’ platform, with its combination of exceptional compute capabilities and energy efficiency, is a valuable building block in enabling Figure to turn our vision into reality.”

Building on a Proven Foundation: From Concept to Deployment

This general-purpose robotics architecture utilizes Qualcomm Technologies’ expertise in power efficiency, scalability, and edge AI performance to unlock a new era of autonomous robotics and connected intelligence. Today, the Dragonwing industrial processor roadmap powers an assortment of general-purpose robotics form factors, including industry-leading humanoid robots from Booster, VinMotion, and other global robotics providers. The architecture supports advanced perception and motion planning with end-to-end AI models such as vision-language-action models (VLAs) and vision-language models (VLMs), enabling generalized manipulation and human-robot interaction. The introduction of the Dragonwing IQ10 helps Qualcomm Technologies take a significant step toward practical, real-world deployment across industrial applications. Qualcomm Technologies is in discussions with Kuka Robotics about its next-generation robotics solution.

Comprehensive Stack Architecture

The general-purpose robotics architecture with the Dragonwing IQ10 redefines what’s possible in robotics by combining powerful heterogeneous edge computing, edge AI, mixed-criticality systems, software, machine learning operations, and an AI data flywheel, supported by a growing partner ecosystem and complemented by a strong suite of developer tools. This end-to-end approach enables robots to reason about and adapt intelligently to their spatial and temporal environment, and is optimized to scale across form factors with industrial-grade reliability. This collaborative network accelerates the development of deployment-ready robotics solutions, solving the last-mile challenge and enabling faster, more scalable innovation across industries.

Experience the Qualcomm-Powered Humanoids at CES

VinMotion’s Motion 2 humanoid, powered by the Qualcomm Dragonwing™ IQ9 Series, will be displayed at Qualcomm Booth #5001 during CES. Also featured at the booth, Booster’s K1 Geek highlights Qualcomm Technologies’ leadership in edge AI, underscoring the Company’s commitment to advancing physical AI for developers and organizations alike. Qualcomm Technologies is also demonstrating Advantech’s commercially available robotics development kit for rapid, multi-application development and deployment. Separately, the booth features an in-depth look at teleoperation tooling and an AI data flywheel for collection, training, and deployment to continuously add new skills across robotic form factors.

To learn more about Qualcomm’s robotics initiatives, please visit the Qualcomm Robotics Page.

About Qualcomm

Qualcomm relentlessly innovates to deliver intelligent computing everywhere, helping the world tackle some of its most important challenges. Building on our 40 years of technology leadership in creating era-defining breakthroughs, we deliver a broad portfolio of solutions built with our leading-edge AI, high-performance, low-power computing, and unrivaled connectivity. Our Snapdragon® platforms power extraordinary consumer experiences, and our Qualcomm Dragonwing™ products empower businesses and industries to scale to new heights. Together with our ecosystem partners, we enable next-generation digital transformation to enrich lives, improve businesses, and advance societies. At Qualcomm, we are engineering human progress.

Qualcomm Incorporated includes our licensing business, QTL, and the vast majority of our patent portfolio. Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of our engineering and research and development functions and substantially all of our products and services businesses, including our QCT semiconductor business. Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patents are licensed by Qualcomm Incorporated.

The post Qualcomm Introduces a Full Suite of Robotics Technologies, Powering Physical AI from Household Robots up to Full-Size Humanoids appeared first on Edge AI and Vision Alliance.

]]>
The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era https://www.edge-ai-vision.com/2025/12/the-coming-robotics-revolution-how-ai-and-macnicas-capture-process-communicate-philosophy-will-define-the-next-industrial-era/ Mon, 29 Dec 2025 09:00:09 +0000 https://www.edge-ai-vision.com/?p=56312 This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. Just as networking and fiber-optic infrastructure quietly laid the groundwork for the internet economy, fueling the rise of Amazon, Facebook, and the digital platforms that redefined commerce and communication, today’s breakthroughs in artificial intelligence are setting the stage […]

The post The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica.

Just as networking and fiber-optic infrastructure quietly laid the groundwork for the internet economy, fueling the rise of Amazon, Facebook, and the digital platforms that redefined commerce and communication, today’s breakthroughs in artificial intelligence are setting the stage for the next great leap: the age of robotics.

As AI advances in reasoning, perception, and adaptability, the vision of giving machines true intelligence is becoming real. The foundation is forming across three interconnected layers that Macnica calls Capture → Process → Communicate, a complete ecosystem where perception meets computation, and computation meets action.

1. Capture: Where Intelligent Behavior Begins

Intelligent robotics starts with sensing. A robot can only act as intelligently as the data it perceives.

Macnica’s capture technologies provide the sensory foundation of autonomy, enabling machines to understand the world around them, their own motion, and even their internal state.

Through an ecosystem that includes Sony industrial CMOS and global-shutter sensors, Infineon and Toppan ToF modules, Ambarella and Renesas imaging processors, and Macnica’s own Streal strain sensors, robots can now “see,” “hear,” “feel,” and “measure” their environment with unprecedented precision.

These are complemented by magnetic encoders, MEMS motion sensors, radar, and acoustic arrays, all integrated through high-speed interfaces such as SLVS-EC, MIPI CSI-2, and SPI.
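As a rough illustration of what it takes to pull one of these streams into software, the hedged sketch below reads a single accelerometer sample from a hypothetical MEMS IMU over SPI using the Python spidev library. The register address, read flag, scale factor, SPI mode, and bus numbers are placeholder values standing in for whatever a specific sensor's datasheet specifies; this is not code for any particular Macnica or partner part.

```python
import struct
import spidev

# Placeholder register map -- real values come from the IMU datasheet.
ACCEL_START_REG = 0x3B      # hypothetical first accelerometer data register
READ_FLAG = 0x80            # many SPI IMUs set the address MSB to signal a read
ACCEL_LSB_PER_G = 16384.0   # hypothetical scale at +/-2 g full range

def read_accel(spi):
    """Burst-read one 3-axis accelerometer sample and convert it to g."""
    # The first byte clocked back during the address phase is discarded.
    raw = spi.xfer2([READ_FLAG | ACCEL_START_REG] + [0x00] * 6)[1:]
    ax, ay, az = struct.unpack(">hhh", bytes(raw))  # three big-endian int16 values
    return ax / ACCEL_LSB_PER_G, ay / ACCEL_LSB_PER_G, az / ACCEL_LSB_PER_G

if __name__ == "__main__":
    spi = spidev.SpiDev()
    spi.open(0, 0)                 # bus 0, chip-select 0 (board-specific)
    spi.max_speed_hz = 1_000_000
    spi.mode = 0b11                # SPI mode 3 is common for IMUs; check the datasheet
    print("accel (g):", read_accel(spi))
    spi.close()
```

Higher-bandwidth links such as MIPI CSI-2 or SLVS-EC are normally handled by a camera driver and image signal processor rather than application code, which is why this sketch sticks to SPI.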

Together, these inputs create the digital nervous system of autonomous intelligence, feeding the data pipelines that drive perception and decision-making.

As AI models evolve, the capture domain becomes even more powerful. Cameras and sensors no longer just record; they interpret, enabling context-aware systems that adapt in real time. As Dr. Fei-Fei Li, Stanford professor and co-director of the Stanford Human-Centered AI Institute, describes it, “Vision is our most powerful sense, the richest source of information about the physical world.” In AI and robotics, the vast majority of meaningful input is still visual: streams of light that machines must capture, interpret, and act on in real time.

2. Process: Turning Perception into Intelligence

Once data is captured, it must be processed quickly, locally, and securely.

This is where AI and edge computing converge, transforming robotics from deterministic machines into adaptive, learning systems.

Macnica partners with leaders such as Altera, Ambarella, DeepX, and iENSO to deliver compute architectures optimized for real-time vision, sensor fusion, and AI inference.

Our platforms use FPGA and SoC acceleration to handle high-bandwidth imaging, edge-AI engines for perception and path planning, and modular frameworks that let customers combine imaging, mechanical, and environmental data streams deterministically.
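To make “combining data streams deterministically” slightly more concrete, here is a minimal, framework-agnostic sketch of one common pattern: pairing each camera frame with the IMU sample nearest to it in time before passing both to downstream perception. The timestamps and readings are made up, and the function illustrates the idea rather than Macnica’s actual framework API.

```python
import bisect

def fuse_nearest(frame_timestamps, imu_samples):
    """Pair each camera frame with the IMU sample closest in time.

    frame_timestamps: sorted list of frame times (seconds)
    imu_samples: sorted list of (timestamp, reading) tuples
    Returns a list of (frame_time, imu_reading) pairs.
    """
    imu_times = [t for t, _ in imu_samples]
    fused = []
    for ft in frame_timestamps:
        i = bisect.bisect_left(imu_times, ft)
        # Compare the neighbours on either side of the insertion point.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_times)]
        best = min(candidates, key=lambda j: abs(imu_times[j] - ft))
        fused.append((ft, imu_samples[best][1]))
    return fused

# Example with made-up data: 30 Hz camera frames, 200 Hz IMU samples.
frames = [0.000, 0.033, 0.066]
imu = [(i / 200.0, {"gyro_z": 0.01 * i}) for i in range(20)]
for frame_t, reading in fuse_nearest(frames, imu):
    print(f"frame @ {frame_t:.3f}s -> imu {reading}")
```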

This approach is powerful, flexible, and IP-secure. Customers can license proven IP blocks, integrate proprietary algorithms, or co-develop solutions with Macnica engineers. In every case, the customer retains full ownership of their proprietary code, algorithms, and any custom modules developed collaboratively. Macnica’s role is to provide the expertise, frameworks, and integration tools that accelerate design while ensuring that intellectual property created by the customer remains entirely theirs.

That openness accelerates innovation while ensuring long-term sustainability, which is critical for robotics lifecycles measured in decades rather than quarters.

With the latest AI architectures, robotic systems can now learn to navigate complex spaces, detect intent, and coordinate motion across multiple actuators in real time at the edge.

3. Communicate: Connecting Machines, People, and Intelligence

In robotics, communication is the connective tissue that unites sensing, processing, and human interaction.

Macnica enables deterministic networking through time-synchronized Ethernet frameworks that coordinate multi-camera, multi-axis robotic systems with sub-millisecond precision. This ensures predictable, safe, and synchronized motion essential for industrial robotics and autonomous systems.
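A small sketch of the underlying idea, assuming every node’s clock is already disciplined to a shared time base (for example via PTP/IEEE 802.1AS): each camera independently computes the same future trigger instant, so all devices fire together. The capture period, guard interval, and timestamp below are illustrative values, not details of Macnica’s networking stack.

```python
def next_common_trigger(now_ns, period_ns, guard_ns):
    """Return the next trigger instant aligned to a shared clock.

    Every node runs the same computation against the same synchronized
    clock, so all cameras arm their capture for the same instant.
    """
    earliest = now_ns + guard_ns          # leave time to arm the trigger
    k = -(-earliest // period_ns)         # integer ceiling division to the next period boundary
    return k * period_ns

# Illustrative numbers: 30 Hz capture, 2 ms guard interval.
PERIOD_NS = 33_333_333
GUARD_NS = 2_000_000

local_now_ns = 1_700_000_000_123_456_789  # would come from the PTP-disciplined clock
trigger_ns = next_common_trigger(local_now_ns, PERIOD_NS, GUARD_NS)
print("arm capture for t =", trigger_ns, "ns since epoch")
```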

On the human interface side, Macnica integrates Ortustech’s industrial LCD and touch solutions for clarity and reliability across mobile, factory, and embedded environments. From bright, wide-temperature HMIs to compact rugged displays, our visual systems ensure that data is not only transmitted but also clearly understood.

Beyond the edge, technologies such as ST 2110 IP video transport and Marvell/Infineon networking solutions allow massive real-time data streams, including visual, mechanical, and environmental information, to be distributed securely across systems or even across multiple sites. This connects local intelligence to the broader enterprise and links AI robotics with industrial cloud infrastructure.

4. A Unified Ecosystem for Scalable Innovation

The three elements – Capture, Process, and Communicate – work together in harmony.

Through a carefully curated partner network, Macnica Americas connects leading suppliers across sensing, compute, display, and embedded design into one interoperable ecosystem.

Layer                 | Key Partners                                     | Macnica’s Contribution
Capture               | Sony*, Infineon, Toppan, Renesas, Macnica Streal | Imaging, strain, environmental, and motion sensor integration
Process               | Altera, Ambarella, DeepX, iENSO, Connect Tech    | FPGA/SoC compute, AI acceleration, and sensor-fusion frameworks
Communicate           | Marvell, Infineon, Silex, Ortustech, Innolux     | Deterministic networking, wireless communication, and HMI displays
Integration & Support | Macnica Americas                                 | Architecture design, validation, and lifecycle enablement

This architecture transforms discrete components into validated, scalable solutions that are ready for deployment. It minimizes integration risk, shortens time-to-market, and allows customers to focus on innovation rather than infrastructure.

5. Robotics as the Physical Frontier of AI

As AI continues to expand its reasoning, perception, and creativity, robotics becomes its natural extension into the physical world.

The economic potential is vast: automation of labor, intelligent logistics, adaptive manufacturing, and human-assist systems that extend capability rather than replace it.

Robotics is where the digital meets the tangible, where intelligence does not just analyze – it acts.

As with the early internet, the leaders will be those who build the enabling infrastructure, the ones who connect perception, computation, and communication.

That is exactly what Macnica does.

By enabling systems to Capture, Process, and Communicate, we turn intelligence into motion, data into decisions, and innovation into impact.

The Bottom Line

If AI is the new electricity, robotics is the grid, and Macnica is helping wire it.

By building the interoperable foundation that allows intelligent machines to sense, think, and act together, Macnica is not just participating in the robotics revolution – it is powering it.

 

Sebastien Dignard, President, Macnica Americas, Inc.

The post The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era appeared first on Edge AI and Vision Alliance.

]]>
The Art of Robotics and The Growing Intellect of Autonomy https://www.edge-ai-vision.com/2025/11/the-art-of-robotics-and-the-growing-intellect-of-autonomy/ Thu, 20 Nov 2025 21:00:47 +0000 https://www.edge-ai-vision.com/?p=56010 This blog post was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx. ‘Robotics’ takes on many different forms today, from cars pre-empting a driver’s needs and making coffee-stop decisions in their best interest, to humanoid robots operating in warehouses and cobots assisting humans in production lines. IDTechEx’s portfolio of […]

The post The Art of Robotics and The Growing Intellect of Autonomy appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx.

‘Robotics’ takes on many different forms today, from cars pre-empting a driver’s needs and making coffee-stop decisions in their best interest, to humanoid robots operating in warehouses and cobots assisting humans in production lines. IDTechEx’s portfolio of Robotics & Autonomy Research Reports is home to a multitude of diverse possibilities arising within the robotics sector, including forecasts and predictions for developments and uptake in the short to medium future.

The developing intuition of in-cabin sensing

The driver monitoring system (DMS) and the occupant monitoring system (OMS) play two vital roles within autonomous vehicle systems, using technologies such as near-infrared cameras and radar to monitor drivers’ states and improve passenger safety. These systems are drawing attention as a result of a number of regulations, particularly as vehicle autonomy ramps up globally. A DMS monitors the driver’s state of awareness, picking up on potential drowsiness or fatigue through gaze tracking and eyelid-movement detection. The DMS also includes hands-on detection, so the car becomes aware when the driver removes their hands from the wheel.
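As an example of how eyelid-movement detection is often implemented in practice (not necessarily in the systems covered by the report), the sketch below computes the widely used eye aspect ratio (EAR) from six eye landmarks and flags drowsiness after a sustained run of low values. It assumes some upstream face-landmark detector supplies the landmark coordinates, and the threshold and frame count are illustrative tuning values.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye (p1..p6)."""
    p1, p2, p3, p4, p5, p6 = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

EAR_THRESHOLD = 0.21       # illustrative: below this the eye is treated as closed
CLOSED_FRAMES_ALERT = 45   # illustrative: roughly 1.5 s of closure at 30 fps

closed_frames = 0

def update_drowsiness(left_eye, right_eye):
    """Call once per frame; returns True when a drowsiness alert should fire."""
    global closed_frames
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_frames = closed_frames + 1 if ear < EAR_THRESHOLD else 0
    return closed_frames >= CLOSED_FRAMES_ALERT
```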

Working alongside the AI implemented within vehicles, in-cabin sensors could relay information to the vehicle’s intelligence system, which could then decide to suggest scheduling a coffee stop or snack break along the route. The increased intelligence and autonomy of a vehicle’s internal systems mean they are trained to remain constantly aware of passenger welfare, allowing for safer and more comfortable driving.

IDTechEx’s report, “Autonomous Driving Software and AI in Automotive 2026-2046: Technologies, Markets, Players“, covers vehicle software and systems that assist the driver on the road, providing extra layers of personalization and safety. “In-Cabin Sensing 2025-2035: Technologies, Opportunities, and Markets” further explores the use of different technology types in the makeup of in-cabin sensing systems, and the regulations surrounding their uptake.

Vehicle autonomy, radar systems, and ADAS

Front and side radars can provide all-round protection for vehicles on the road, serving unique purposes and working together to enhance effectiveness and safety. The front radars on a vehicle require both long range and fine angular resolution so they can detect objects, people, or other cars as early as possible, ensuring the best course of action can be taken and the driver is made aware. Automatic emergency braking (AEB) is one of the main features enabled by a vehicle’s front radars, working as part of the vehicle’s advanced driver assistance system (ADAS) to increase safety.

Junction pedestrian automatic emergency braking will allow vehicles to stop on their own to prevent collisions should the driver be unable to act quickly enough. Both front and side radars are responsible for this particular function, ensuring wide coverage around the vehicle at short distances. Side radars, however, handle lane change assist and blind spot detection exclusively, and have a much wider field of view than front radars in order to keep tabs on the places the driver can’t see.
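At its simplest, the braking decision behind AEB can be framed as a time-to-collision (TTC) check on the radar’s range and closing-speed measurements. The toy logic below sketches that idea with illustrative warning and braking thresholds; real AEB stacks layer object classification, trajectory prediction, and extensive validation on top of anything this simple.

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact if nothing changes; infinite when the gap is opening."""
    if closing_speed_mps <= 0.0:
        return float("inf")
    return range_m / closing_speed_mps

def aeb_decision(range_m, closing_speed_mps,
                 warn_ttc_s=2.5, brake_ttc_s=1.5):
    """Return 'none', 'warn', or 'brake' (thresholds are illustrative)."""
    ttc = time_to_collision(range_m, closing_speed_mps)
    if ttc < brake_ttc_s:
        return "brake"
    if ttc < warn_ttc_s:
        return "warn"
    return "none"

# Example: pedestrian 12 m ahead, closing at 6 m/s -> TTC = 2.0 s -> warn.
print(aeb_decision(range_m=12.0, closing_speed_mps=6.0))
```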

The future of radar could see the technology used to enable real-time maps and to share information with other road users in order to avoid collisions and traffic jams. This may be referred to as a ‘radar mesh’ – a large system of shared information across central compute platforms. As this network expands, traffic lights could one day be controlled using on-the-go data from vehicles in surrounding areas, for safer and more efficient journeys.

IDTechEx’s report “Automotive Radar Market 2025-2045: Robotaxis & Autonomous Cars” covers radar use in autonomous vehicles and robotaxis, and the varying types of technologies that have either been commercialized or are still in development. IDTechEx predicts that the automotive radar market will reach 500 million annual sales in 2041 – a forecast that showcases the market’s scope to become increasingly well established.

ADAS Level 2 is a relatively new phenomenon that is reshaping vehicle safety, with even more advanced capabilities than previous systems. Additional sensors can be used in ADAS alongside radar to provide higher levels of protection on the road. Unlike radars, cameras can classify what they detect, identifying specific objects and road signs.

Hands-free driving can also be enabled as a result of ADAS, so drivers can sip their coffee as the car drives. Though drivers are currently required to keep their eyes on the road, they could one day be able to look away to chat with a passenger or reply to an email. However, liability challenges are currently a large barrier to Level 3 ADAS adoption. IDTechEx’s report, “Passenger Car ADAS Market 2025-2045: Technology, Market Analysis, and Forecasts”, covers the upcoming features of ADAS that will increase both the safety and the autonomous functions of vehicles.

Robotic coworkers – humanoids and cobots

Outside of vehicle capabilities, robotics and autonomy have a large part to play in sectors such as warehousing and manufacturing, where more traditional representations of robots can be seen. Humanoids are designed to have humanlike movement capabilities and are being deployed for their ability to serve as general-purpose machines. Their actuators, tactile sensors, and AI-driven software make them capable of working independently in industrial and non-industrial environments. Industrial settings require humanoids with larger battery packs, while non-industrial settings call for lighter weight and lower force, making this type of robot adaptable to varying environments, from vehicle assembly and manufacturing to moving boxes around warehouses. IDTechEx’s report, “Humanoid Robots 2025-2035: Technologies, Markets and Opportunities”, covers the primary applications for humanoids, and predictions for their uptake across sectors over the next decade.


Collaborative robots (cobots) share the helpfulness of humanoids, though they are designed to work effectively alongside humans to increase efficiency in factories and assembly lines. Compared with traditional industrial robots, cobots are slower moving and more lightweight, and are equipped with soft gripper technology for increased sensitivity while working around delicate components. As a result, they can also be used in quality inspection, packaging, and machine tending. IDTechEx reports that they are lower in cost than alternative machines, have a small footprint, and offer ease of programming and flexibility. The report, “Collaborative Robots 2025-2045: Technologies, Players, and Markets”, explores the diverse capabilities of cobots further.

For more information on the latest developments within the robotics sector, visit IDTechEx’s expansive portfolio of Robotics & Autonomy Research Reports.

Lily-Rose Schuett, Journalist, IDTechEx

The post The Art of Robotics and The Growing Intellect of Autonomy appeared first on Edge AI and Vision Alliance.

]]>