D3 Embedded - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/provider/d3/
Designing machines that perceive and understand.

Robotics Builders Forum offers Hardware, Know-How and Networking to Developers
https://www.edge-ai-vision.com/2026/01/robotics-day-offers-hardware-know-how-and-networking-to-developers/
Thu, 29 Jan 2026 14:00:56 +0000

On February 25, 2026, from 8:30 am to 5:30 pm ET, Advantech, Qualcomm, and Arrow, in partnership with D3 Embedded, Edge Impulse, and the Pittsburgh Robotics Network, will present the Robotics Builders Forum, an in-person conference for engineers and product teams. Qualcomm and D3 Embedded are members of the Edge AI and Vision Alliance, and Edge Impulse is a subsidiary of Qualcomm.

Here’s the description, from the event registration page:

Overview

Exclusive in-person event: get practical guidance, platform roadmap & hands-on experience to accelerate compute & AI choices for your robot

Join us for an exclusive, in-person Robotics Day / Builders Forum built for engineers and product teams developing AMRs, humanoids, and industrial robotics applications. Co-hosted with Arrow, Qualcomm, Edge Impulse and Advantech, and supported by ecosystem partners, the event delivers practical guidance on choosing compute platforms, integrating vision and sensors, and accelerating AI development from prototype to deployment.

What to expect

  • Expert keynotes on robotics platform trends, roadmap considerations, and rugged edge deployment
  • Live demo showcase with real hardware and end-to-end solution workflows you can evaluate firsthand
  • Three technical breakout tracks with deep dives on compute, vision and perception, and AI software optimization
  • High-value networking with peer robotics builders, plus direct access to industry leaders, solution architects, and partner technical teams

You’ll leave with clearer platform direction, implementation best practices, and trusted connections for follow-up technical discussions and next-step evaluations. Attendance is limited to keep conversations focused and interactive.

To close the day, we will host a Connections Mixer at the Sky Lounge featuring a brief wrap-up and a raffle. This casual networking hour is designed to help attendees connect with peers, speakers, and solution teams in a relaxed setting. Sponsored by D3 Embedded.

This event is free and designed for professionals building or evaluating robotics and AMR solutions, including robotics and AMR product managers, system architects and embedded engineers, industrial automation R&D leaders, perception and vision engineers, and operations and engineering directors. We also welcome professionals tracking the latest robotics trends and platform direction.

Invitation-only access

Click Get ticket and complete the Event Registration form to apply for a free ticket. Event hosts will review submissions and email confirmed invitations (with an event code) to qualified attendees. Please present your ticket at reception to receive your full-day conference badge.

Location

Wyndham Grand Pittsburgh Downtown
600 Commonwealth Place
Pittsburgh, PA 15222

Agenda

08:30 AM – 09:00 AM – Breakfast & Connections Kickoff

09:00 AM – 09:15 AM – Opening Remarks & Day Overview 

09:15 AM – 09:45 AM – Keynote 1: Global Robotics Trends and How You Can Take Advantage (sponsored by Arrow) 

09:45 AM – 10:30 AM – Keynote 2: Utilizing Dragonwing for Industrial Arm-Based Robotics Solutions (sponsored by Qualcomm, Edge Impulse)

10:30 AM – 11:00 AM – Keynote 3: Ruggedizing Robotics Solutions for Mobility and Harsh Environments (sponsored by Advantech) 

11:00 AM – 11:15 AM – Break

11:15 AM – 11:45 AM – Keynote 4: Selecting the Proper Cameras and Sensors for AI-Assisted Perception (sponsored by D3 Embedded) 

11:45 AM – 12:45 PM – Lunch 

12:45 PM – 03:30 PM – Three Breakout Rotations (45 min each with breaks) 

Track A: Building Out a Full-Scale Humanoid Robot from a Hardware Perspective
Track B: Leveraging Software Solutions to Get the Most Out of Your Processor
Track C: Designing and Integrating Machine Vision Solutions for AMRs and Humanoids

03:30 PM – 05:30 PM – Connections Mixer at Sky Lounge (sponsored by D3 Embedded)

To register for this free event, please see the event page.

D3 Embedded Showcases Camera/Radar Fusion, ADAS Cameras, Driver Monitoring, and LWIR solutions at CES
https://www.edge-ai-vision.com/2026/01/d3-embedded-showcases-camera-radar-fusion-adas-cameras-driver-monitoring-and-lwir-solutions-at-ces/
Wed, 07 Jan 2026 20:09:29 +0000

Las Vegas, NV, January 7, 2026 — D3 Embedded is showcasing a suite of technology solutions in partnership with fellow Edge AI and Vision Alliance Members HTEC, STMicroelectronics and Texas Instruments at CES 2026. Solutions include driver and in-cabin monitoring, ADAS, surveillance, targeting and human tracking, and will be viewable at several locations within CES 2026.

Single-Camera and Radar Interior Sensor Fusion for In-Cabin Sensing

Location: LVCC North Hall, N116

At CES, we are excited to showcase the first fusion of single-camera and radar technologies on Texas Instruments’ platform for full interior sensing.

  • Performs Fusion-Based DMS and OMS in One Package
  • Enhances Child Presence Detection and Intruder Detection
  • Reduces Sensor, Camera, and Processor Redundancy and Cost
  • Addresses Euro NCAP and Regulatory Requirements in a Single System

ADAS 8MP Front Camera

Location: LVCC North Hall, N116

Also on display is our ADAS 8MP Front Camera Reference Design, powered by the Texas Instruments TDA4A Entry.

  • Full L2 Vision Perception and ASIL Safety
  • Sony IMX728 8MP Camera with High Dynamic Range (HDR), LED Flicker Mitigation (LFM), and an AEC-Q100 Grade 2 Image Sensor
  • Fully Customizable and Production-Ready


Driver Monitoring System / Occupant Monitoring System RGB-IR Imager

Location: Appointment Only – Bandol 1

Introducing the DesignCore® Chroma Series Camera with a 5MP RGB-IR, dual-mode (Global and Rolling Shutter), 100 dB+ HDR imager specifically designed for Driver Monitoring and Occupant Monitoring applications. Paired with D3 Embedded’s TDA4 Rugged Vision Processor, this end-to-end system leverages the RGB-IR sensor to capture infrared light, ensuring drivers are not distracted by visible illumination.


D3 Embedded is also excited to introduce compact, automotive-grade long-wave infrared (LWIR) thermal camera solutions in partnership with Teledyne FLIR.

  • Automotive-grade ASIL-B LWIR
  • VGA with no mechanical shutter
  • GMSL2 connectivity
  • Small & rugged (IP69K)

Perfect for electro-optical/infrared (EO/IR) imaging solutions, which combine visible light and thermal imaging to provide enhanced surveillance, targeting, and tracking capabilities.

About D3 Embedded

D3 Embedded is a 100% U.S.-based company that develops end-to-end solutions integrating sensors, connectivity, embedded processing and AI to deliver advanced perception for performance-critical applications. Using its proven DesignCore® product platforms and stage-gate development process, D3 Embedded helps its customers minimize the cost, schedule, and technical risks of product development for performance-critical applications. D3 Embedded is an Elite member of the NVIDIA Partner Network. The company holds expertise in autonomous machines and robotics, electrification, sensing, imaging and optics, edge computing and detection algorithms. To support its products and services, the company offers ODM customization of hardware and software, validation testing, and in-house manufacturing services. Learn more at www.D3Embedded.com.

D3 Embedded, HTEC, Texas Instruments and Tobii Pioneer the Integration of Single-camera and Radar Interior Sensor Fusion for In-cabin Sensing
https://www.edge-ai-vision.com/2025/10/d3-embedded-htec-texas-instruments-and-tobii-pioneer-the-integration-of-single-camera-and-radar-interior-sensor-fusion-for-in-cabin-sensing/
Mon, 13 Oct 2025 18:11:05 +0000

The companies joined forces to develop sensor-fusion-based interior sensing for enhanced vehicle safety, launching at the InCabin Europe conference on October 7–9.

Rochester, NY – October 6, 2025 – Tobii, with its automotive interior sensing branch Tobii Autosense, together with D3 Embedded and HTEC, today announced the development of an interior sensing solution powered by Texas Instruments’ automotive-grade System-on-Chip (SoC) and its high-resolution radar sensor. The solution, which will make its public debut during the upcoming InCabin Europe conference in Barcelona, is the first to fuse single-camera and radar technologies on Texas Instruments’ platform for full interior sensing.

At the core is Texas Instruments’ TDA4VEN processor, part of the Jacinto™ 7 family. Designed specifically for demanding automotive AI workloads, the processor integrates dedicated AI accelerators capable of up to 4 TOPS, enabling advanced deep learning models for real-time driver and occupant monitoring. Its heterogeneous architecture, combining ARM® CPUs, vision accelerators, and DSPs, ensures ultra-low latency, high efficiency, and reliable performance under the most stringent automotive conditions.

The solution’s visual sensing layer is powered by the STMicroelectronics VG5761 image sensor, a high-performance in-cabin sensor that delivers excellent sensitivity, wide dynamic range, and reliable operation even in low-light environments. Complementing this is the Texas Instruments AWRL6844 mmWave radar sensor, operating in the 60-GHz band, which provides enhanced robustness by detecting occupant presence, subtle body movements, and vital signs, even in complete darkness or when the line of sight is obstructed.

By fusing a single-camera solution with a single radar, Tobii Autosense enhances its interior sensing capabilities by covering safety-critical corner cases such as intruder alerts, children left behind, and detections obstructed by in-cabin elements. The mmWave radar is a natural complement to a single-camera solution, compensating for the limitations of the wide-field-of-view RGB/NIR sensor.
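To illustrate the kind of decision-level logic such camera/radar fusion enables, here is a minimal, purely hypothetical sketch. The function name, thresholds, and rules are invented for illustration; the actual Tobii Autosense / D3 Embedded solution uses learned models, not hand-written rules like these. The point is only that radar can confirm presence when the camera's line of sight is blocked:

```python
# Hypothetical decision-level camera/radar fusion for occupant presence.
# Thresholds and rules are illustrative, not the partners' actual algorithm.

def fuse_presence(camera_conf, camera_occluded, radar_presence, radar_vitals):
    """Return True if an occupant is judged present.

    camera_conf:     0..1 confidence from the vision model
    camera_occluded: True when the seat is blocked from the camera's view
    radar_presence:  True when mmWave radar detects a body or motion
    radar_vitals:    True when radar detects breathing or a heartbeat
    """
    if not camera_occluded and camera_conf >= 0.8:
        return True                  # camera alone is confident
    if radar_presence and radar_vitals:
        return True                  # radar confirms a live occupant
    if camera_occluded and radar_presence:
        return True                  # radar covers the blocked view
    # Weak camera signal counts only when radar agrees.
    return camera_conf >= 0.5 and radar_presence

# A blanket over a sleeping child: camera blocked, radar still sees vitals.
print(fuse_presence(0.1, True, True, True))      # True
# Empty seat: neither sensor fires.
print(fuse_presence(0.05, False, False, False))  # False
```

This is the child-presence-detection scenario the release describes: the camera's confidence collapses under occlusion, and the 60-GHz radar's vital-sign channel carries the decision.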

The collaboration brings together deep expertise from across the automotive ecosystem. Tobii Autosense contributes its proprietary automotive interior sensing technology and decades of AI, machine learning, and signal processing experience. D3 Embedded leverages over 25 years of expertise in embedded systems to deliver an advanced camera and radar fusion sense-and-compute platform. HTEC contributes its end-to-end ADAS expertise and deep knowledge of Texas Instruments SoCs, uniting software, radar, camera systems, optimizations, and the vision stack into one seamless solution. Texas Instruments provides its cutting-edge Jacinto™ 7 TDA4VEN SoC and innovative AWRL6844 3-in-1 mmWave radar sensing technology to power the platform.

Designed to enhance passenger safety and well-being, Tobii Autosense’s single-camera implementation enables driver and occupant monitoring, helping OEMs meet upcoming global safety requirements and regulations.

“This achievement reflects true collaboration across complementary domains of expertise,” said Henrik Mawby, Head of Ecosystem and Partnerships for Tobii Autosense at Tobii. “Together with HTEC and D3 Embedded, we’ve been able to fuse camera and radar in a way the industry hasn’t seen before, delivering safer and more cost-efficient interior sensing to help automakers meet tomorrow’s safety standards.”

Attendees of InCabin Europe, held in Barcelona on October 7–9, can experience the sensor fusion solution in demo car #3, where the partners will showcase how this approach sets a new benchmark in driver and occupant monitoring.

About Tobii

Tobii is the global leader in eye tracking and pioneer of attention computing. We are on a mission to improve the world with technology that understands human attention and intent. Creating tech for a better future, our technologies and solutions apply to areas such as behavioural studies and research, healthcare, education and training, gaming, extended reality, automotive, and many more. Tobii’s eye tracking is used by thousands of enterprises, universities, and research institutes around the globe. Headquartered in Sweden, Tobii is listed on Nasdaq Stockholm (TOBII). For more information: www.tobii.com.

About Texas Instruments

Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, enterprise systems and communications equipment. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.

About HTEC

HTEC Group Inc. is a global AI-first provider of strategic, software and hardware embedded design and engineering services, specializing in Advanced Technologies, Financial Services, MedTech, Automotive, Telco, and Enterprise Software & Platforms. HTEC has a proven track record of helping Fortune 500 and hyper-growth companies solve complex engineering challenges, drive efficiency, reduce risks, and accelerate time to market. HTEC prides itself on attracting top talent and has strategically chosen the locations of its 20+ excellence centers to enable this.

About D3 Embedded

D3 Embedded is a U.S.-based company that develops end-to-end solutions integrating sensors, connectivity, embedded processing and AI to deliver advanced perception for performance-critical applications. Using its proven DesignCore® product platforms and stage-gate development process, D3 Embedded helps its customers minimize the cost, schedule, and technical risks of product development for performance-critical applications. D3 Embedded is an Elite member of the NVIDIA Partner Network, an Intel Gold Partner, and a premium member of Texas Instruments’ third-party network. The company holds expertise in autonomous machines and robotics, electrification, sensing, imaging and optics, edge computing and detection algorithms. To support its products and services, the company offers ODM customization of hardware and software, validation testing, and in-house manufacturing services.

D3 Embedded to Showcase Robotic Perception and Generative AI Solutions at the Embedded Vision Summit
https://www.edge-ai-vision.com/2025/05/d3-embedded-to-showcase-robotic-perception-and-generative-ai-solutions-at-the-embedded-vision-summit/
Fri, 16 May 2025 14:13:21 +0000

D3 Embedded to demonstrate real-time solutions integrating cameras and 3D sensors, robust connectivity, embedded processing, and generative AI at the Embedded Vision Summit.

Rochester, NY – May 15, 2025 – D3 Embedded announced today it will exhibit at the 2025 Embedded Vision Summit, the premier event for practical, deployable computer vision and AI for product creators who want to bring visual intelligence to products. This year’s Summit will take place May 20–22, 2025, in Santa Clara, California.

At booth #420, D3 Embedded will present three demonstrations showcasing its most advanced edge AI solutions in real time:

  1. Autonomous Mobile Robot Platform: Showcasing multiple DesignCore® Discovery ISX031 PRO Series GMSL2 Cameras from D3 Embedded and Intel® RealSense™ GMSL2 Stereo Depth D457 Cameras paired for robotic perception on the Advantech AFE-R360 featuring Intel® Core™ Ultra 7/5 processors.
  2. 6D Object Pose Estimation on NVIDIA Jetson AGX Orin™: This demonstration leverages the generative AI model FoundationPose, a unified foundation model for 6D object pose estimation and tracking, to perform 6D object pose estimation in real-time with RGBD images from Intel® RealSense™ GMSL2 Stereo Depth D457 Cameras.
  3. Visual Language Model (VLM) with D3 Embedded Rugged IP69K MIPI A-PHY Camera, Lattice Holoscan Sensor Bridge, and NVIDIA Jetson AGX Orin™: D3 Embedded’s DesignCore® Discovery ISX031 PRO-X Series MIPI A-PHY Camera, leveraging an A-PHY-compliant VA7000 chipset by Valens Semiconductor, is integrated with Lattice Semiconductor’s Holoscan Sensor Bridge and an NVIDIA Jetson AGX Orin™ platform. A VLM runs on the AGX Orin, demonstrating advanced AI capabilities for real-time image and text understanding.
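For readers unfamiliar with the term used in the pose-estimation demo above: a 6D pose is a 3-DoF rotation plus a 3-DoF translation, commonly packed into a 4×4 homogeneous transform that pose estimators output per object. Here is a minimal pure-Python sketch of that representation (illustrative only, and in no way FoundationPose's internals; a single yaw angle stands in for the full rotation):

```python
import math

def pose_matrix(yaw, tx, ty, tz):
    """4x4 homogeneous transform: rotate about Z by `yaw`, then translate.
    A full 6D pose carries roll and pitch too; one angle keeps this short."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [
        [c,  -s,  0.0, tx],
        [s,   c,  0.0, ty],
        [0.0, 0.0, 1.0, tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def apply(T, p):
    """Transform a 3D point by a 4x4 pose matrix."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[r][k] * v[k] for k in range(4)) for r in range(3))

# Rotate the point (1, 0, 0) by 90 degrees about Z, then shift +2 on X:
# the rotation lands it on (0, 1, 0), the translation moves it to (2, 1, 0).
T = pose_matrix(math.pi / 2, 2.0, 0.0, 0.0)
x, y, z = apply(T, (1.0, 0.0, 0.0))
print(round(x, 6), round(y, 6), round(z, 6))  # 2.0 1.0 0.0
```

A tracker like the one demonstrated re-estimates such a transform for each RGBD frame, so the object's position and orientation stay known as it moves.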

“D3 Embedded leverages our hardware and software expertise in imaging, radar, computer vision, and generative AI to deliver high-performance edge AI solutions for performance-critical markets,” said Scott Reardon, CEO at D3 Embedded. “Participating in the Embedded Vision Summit presents an exciting opportunity for us to demonstrate next-generation integrated rugged camera and sensor platforms with generative AI to innovators.”

To schedule an appointment with a member of D3 Embedded during the Embedded Vision Summit, interested parties are encouraged to email sales@d3embedded.com.

About D3 Embedded

D3 Embedded is a U.S.-based company that develops end-to-end solutions integrating sensors, connectivity, embedded processing and AI to deliver advanced perception for performance-critical applications. Using its proven DesignCore® product platforms and stage-gate development process, D3 Embedded helps its customers minimize the cost, schedule, and technical risks of product development for performance-critical applications. D3 Embedded is an Elite member of the NVIDIA Partner Network. The company holds expertise in autonomous machines and robotics, electrification, sensing, imaging and optics, edge computing and detection algorithms. To support its products and services, the company offers ODM customization of hardware and software, validation testing and in-house manufacturing services. Learn more at www.D3Embedded.com.

Follow D3 Embedded on LinkedIn: https://www.linkedin.com/company/d3-embedded/

D3 Embedded to Showcase Innovative Industrial and Physical AI Solutions at NVIDIA GTC
https://www.edge-ai-vision.com/2025/03/d3-embedded-to-showcase-innovative-industrial-and-physical-ai-solutions-at-nvidia-gtc/
Wed, 19 Mar 2025 19:43:57 +0000

D3 Embedded to demonstrate real-time edge AI solutions built on powerful NVIDIA platforms and frameworks including NVIDIA Isaac™ Perceptor, Holoscan Sensor Bridge, Jetson Orin™, and NVIDIA IGX Orin™, at GTC.

Rochester, NY – March 18, 2025 – D3 Embedded announced it will exhibit at NVIDIA GTC at booth 147 in the Industrial & Physical AI area. The global AI conference will take place at the San Jose Convention Center and online from March 17–21, 2025.

“D3 Embedded leverages its hardware and software expertise in imaging, radar, computer vision, and AI to develop next-generation edge AI solutions that push the boundaries of what’s possible,” said Scott Reardon, CEO at D3 Embedded. “Participating in NVIDIA GTC presents an exciting opportunity for us to showcase how we enable human-like intelligence for autonomous edge solutions to those who play a critical role in shaping the AI revolution.”

The D3 Embedded booth will feature four demonstrations showcasing its most advanced edge AI solutions in real time:

  1. Synchronized Intel® RealSense™ Stereo Depth D457 Cameras Compatible with NVIDIA Isaac™ Perceptor: Showcasing multiple 2D/3D Cameras compatible with Isaac Perceptor and synchronized for 360-degree perception. The demo will utilize Intel RealSense GMSL2 Stereo Depth D457 Cameras and D3 Embedded’s DesignCore® GMSL2 Interface Card powered by an NVIDIA Jetson AGX Orin™ Developer Kit. A synchronized 360-degree point cloud visualization will be shown using ROS tools, allowing attendees to envision how this solution can supercharge robotic perception.
  2. Low-Light Camera Performance with New D3 Embedded Direct MIPI Cameras Compatible with the NVIDIA Jetson Orin Nano™ Super Developer Kit: Showcasing D3 Embedded’s new 22-pin direct MIPI DesignCore® Discovery Sony IMX678 Camera connected to an NVIDIA Jetson Orin Nano Super Developer Kit to demonstrate low-light performance. The DesignCore Discovery Sony IMX678 Camera and a standard camera will capture a target inside a dark box, showing the capability to see and detect in challenging dark conditions. This solution unlocks new possibilities with generative AI to supercharge products for security, robotics, and more.
  3. Low-Latency “Detect and Action” with D3 Embedded Rugged IP69K MIPI A-PHY Camera, Lattice Holoscan Sensor Bridge, and NVIDIA IGX Orin™: D3 Embedded’s new IP69K Camera, leveraging an A-PHY-compliant VA7000 chipset by Valens Semiconductor, will be integrated with Lattice Semiconductor’s Holoscan Sensor Bridge and an NVIDIA IGX Orin platform for image capture. This system will demonstrate low-latency detection and the ability to capture action much faster than typical USB or GigE machine vision cameras. This enterprise-ready solution can be used to provide critical data to surgeons during operations, optimize warehouse operations for efficiency and safety, innovate in autonomous machines, and more.
  4. Visual Language Models (VLMs) with D3 Embedded Rugged IP69K MIPI A-PHY Camera, Lattice Holoscan Sensor Bridge, and NVIDIA IGX Orin™: D3 Embedded will showcase a cutting-edge demo leveraging the IGX Orin and incorporating VLMs, highlighting advanced AI capabilities for real-time image and text understanding. This demo will demonstrate how VLMs can be integrated with D3 Embedded’s Camera Systems to enable interactive, context-aware applications. The integration of VLMs with D3 Embedded solutions will enable robotics and other autonomous machines to better perceive and understand the real world through a camera.

To schedule an appointment with a member of D3 Embedded during GTC, interested parties are encouraged to email sales@d3embedded.com.

About D3 Embedded

D3 Embedded is a 100% U.S.-based company that develops end-to-end solutions integrating sensors, connectivity, embedded processing and AI to deliver advanced perception for performance-critical applications. Using its proven DesignCore® product platforms and stage-gate development process, D3 Embedded helps its customers minimize the cost, schedule, and technical risks of product development for performance-critical applications. D3 Embedded is an Elite member of the NVIDIA Partner Network. The company holds expertise in autonomous machines and robotics, electrification, sensing, imaging and optics, edge computing and detection algorithms. To support its products and services, the company offers ODM customization of hardware and software, validation testing and in-house manufacturing services. Learn more at www.D3Embedded.com.

Follow D3 Embedded on LinkedIn: https://www.linkedin.com/company/d3-embedded/

D3 Embedded Partners with Silicon Highway to Provide Rugged Camera Solutions to Europe
https://www.edge-ai-vision.com/2025/02/d3-embedded-partners-with-silicon-highway-to-provide-rugged-camera-solutions-to-europe/
Mon, 24 Feb 2025 19:05:35 +0000

Rochester, NY – February 12, 2025 – D3 Embedded today announced its partnership with Silicon Highway, a leading European distribution company specializing in embedded AI edge solutions, to accelerate the delivery of high-performance rugged cameras to the European market. This partnership will allow D3 Embedded to leverage Silicon Highway’s local expertise and knowledge of the European embedded systems market and expedite European customers’ time-to-market with ready-to-deploy solutions.

D3 Embedded’s high-performance cameras support edge AI devices powered by NVIDIA Jetson modules with the NVIDIA JetPack software development kit to bring accelerated AI performance to the edge in a power-efficient and compact form factor.

“We are thrilled to partner with Silicon Highway to provide European customers with ruggedized cameras optimized for the NVIDIA Jetson ecosystem,” said Scott Reardon, CEO at D3 Embedded. “By leveraging the advanced capabilities of the NVIDIA Jetson platform, D3 Embedded Cameras empower developers to achieve exceptional efficiency and accelerate their project timelines with off-the-shelf solutions that can be tailored to specific production needs.”

The introduction of D3 Embedded cameras in Europe aims to serve a wide array of performance-critical applications, including robotics, healthcare, and autonomous systems, helping ensure businesses have access to embedded vision technology that meets the highest standards of quality and precision.

“With this launch, Silicon Highway reinforces its commitment to delivering innovative embedded vision solutions powered by NVIDIA Jetson edge AI technology to the European market,” said Graham Brown, Design & Marketing Director at Silicon Highway. “The D3 Embedded Industrial Camera range offers a perfect blend of flexibility and performance, making it ideal for industries such as automation, robotics, healthcare, and more.”

Key Features of D3 Embedded Industrial Cameras

  • NVIDIA Jetson: Built to integrate with the NVIDIA Jetson platform for advanced AI capabilities.
  • Off-the-Shelf Versatility: Start prototyping immediately with pre-configured camera modules.
  • Customizable Designs: Tailor the hardware, software, and image tuning to specific production requirements.
  • Exceptional Quality: Deliver high reliability and precision for safety-critical applications, including IP67-rated dust and water ingress protection.
  • Rapid Time-to-Market: Accelerate development cycles with ready-to-deploy solutions.

With a robust understanding of NVIDIA Jetson technologies, Silicon Highway is uniquely positioned to deliver D3 Embedded Industrial Cameras, optimized for the NVIDIA Jetson ecosystem, helping businesses integrate cutting-edge AI into their operations.

For more information about the D3 Embedded product range and how it can benefit businesses, please visit siliconhighwaydirect.com.

About D3 Embedded

D3 Embedded is a 100% U.S.-based company that develops end-to-end solutions integrating sensors, connectivity, embedded processing and AI to deliver advanced perception for performance-critical applications. Using its proven DesignCore® product platforms and stage-gate development process, D3 Embedded helps its customers minimize the cost, schedule, and technical risks of product development for performance-critical applications. D3 Embedded is an Elite member of the NVIDIA Partner Network. The company holds expertise in autonomous machines and robotics, electrification, sensing, imaging and optics, edge computing and detection algorithms. To support its products and services, the company offers ODM customization of hardware and software, validation testing and in-house manufacturing services. Learn more at www.D3Embedded.com.

Follow D3 Embedded on LinkedIn: https://www.linkedin.com/company/d3-embedded/

Alliance Members at 2025 CES
https://www.edge-ai-vision.com/2024/11/alliance-members-at-2025-ces/
Tue, 12 Nov 2024 17:47:07 +0000

The post Alliance Members at 2025 CES appeared first on Edge AI and Vision Alliance.


The Edge AI and Vision Alliance 2025 CES Directory for game-changing computer vision and AI technologies

Many Alliance Member companies will be showing off the latest building-block technologies that enable new capabilities for machines that see! CES is huge, so we’ve created a handy checklist of these companies and where to find them, including how to request suite/demo access.

AiM Future

Booth #9273

AiM Future provides artificial intelligence accelerator IP and software development kits (SDK) to fabless, design house, and FPGA companies in edge computing markets such as IoT, Smart-X, AR/VR, and automotive. Its NeuroMosAIc technology provides scalability, flexibility, configurability and multimodality. For requests to meet at the AiM Future suite, please contact info@aimfuture.ai

AIRY3D

Appointment Only

AIRY3D is commercializing the world’s first passive, single-sensor solution for 3D imaging, delivering profound savings in hardware, computation, and power usage. AIRY3D can bring 3D to your existing design in a size that is unmatched by other technologies while maintaining your original 2D image. For requests to meet at the AIRY3D suite, please contact info@airy3d.com.

Au-Zone Technologies Inc.

Appointment Only

Au-Zone Technologies leads in embedded Computer Vision, AI, and Precision technologies, enabling clients to deploy cutting-edge intelligent solutions. With 15+ years of expertise, we partner with OEMs and engineering teams in sectors like Video Telematics, Marine, Space, and Robotics, advancing autonomous machines. For requests to meet at the Au-Zone suite, please visit https://www.edgefirst.ai/

Axelera AI

Booth #60633

Axelera AI is delivering the world’s most powerful and advanced solutions for AI at the edge. Its industry-defining Metis AI platform—a complete hardware and software solution for AI inference at the edge—makes computer vision applications more accessible, powerful and user friendly than ever before. For requests to meet at the Axelera AI suite, please visit https://www.axelera.ai/visit-axelera-ai-at-ces-2025.

BrainChip

Suite 29-312

BrainChip is the global leader in event driven computing, with its Akida platform performing inference at the lowest power on the edge today. Supported by development tools, boards, chips and IP, Akida’s event driven architecture is suitable for a wide range of use cases in speech, audio, video and other sensor data. For requests to meet at the BrainChip suite, please visit https://calendly.com/brainchip/ces-2025.

Cadence

Venetian, Level 3, Ballroom 3001A

Cadence provides a wide range of IP and Silicon Solutions for automotive, consumer, and data center markets, built on decades of interface and processor IP expertise. Our comprehensive IP portfolio includes protocol controllers, PHYs, configurable processors, low power high-efficiency DSPs, AI accelerators, and memory chiplets. 

Ceva

Bassano 2709

At Ceva, we are passionate about bringing new levels of innovation to the smart edge. Our wireless communications, sensing and edge AI technologies are at the heart of some of today’s most advanced smart edge products. We have the broadest portfolio of IP to connect, sense and infer data more reliably and efficiently. For requests to meet at the Ceva suite, please contact events@ceva-ip.com.

CLIKA

Appointment Only

CLIKA simplifies deploying AI on diverse hardware by offering automated AI model compression and format compilation. Our proprietary compression algorithm automatically downsizes AI models by 75% and accelerates inference speed by up to 40x, for any target hardware. For requests to meet at the CLIKA suite, please contact business@clika.io.

Commonlands

Appointment Only

Commonlands is a US-based optics supplier providing high-performance cost-effective M12 lenses, C-mount lenses, and related accessories. Our products are commonly used across surveillance, warehouse robotics, IoT, smart retail, factory automation, and medical industries. For requests to meet with Commonlands, please contact max.henkart@commonlands.com.

D3 Embedded

North Hall, N116

Texas Instruments will showcase D3 Embedded’s full-form-factor Automotive Front Camera Reference Design. D3 Embedded will also be demonstrating automotive in-cabin applications such as driver monitoring and occupant monitoring with STMicroelectronics and Smart Eye in STMicroelectronics’ demo vehicle. For requests to meet at the D3 suite, please contact sales@d3embedded.com.

DEEPX

Booth #9045

Founded in anticipation of an era when artificial intelligence will be as ubiquitous as electricity and Wi-Fi, DEEPX develops the underlying technology for high-performance AI semiconductors and computing solutions that can make all electronic devices intelligent. For requests to meet at the DEEPX suite, please contact sales@deepx.ai.

Digica

Appointment Only

Digica is an independent AI and data science company. We provide customized solutions by applying the latest AI tools. With 8 years of experience, 295 projects, and 90 experts, we specialize in computer vision, deep learning, synthetic imaging, large language models, and predictive maintenance. For requests to meet with Digica, please contact ben@digica.com.

e-con Systems

Booth #9517

e-con Systems has been a pioneer in the embedded vision space, designing, developing and manufacturing custom and off-the-shelf camera solutions since 2003. With a team of 300+ extremely skilled core engineers, our products are currently embedded in over 350 customer products. For requests to meet at the e-con Systems suite, please contact dharmalingam.k@e-consystems.com.

Embedl

Appointment Only

Embedl develops advanced solutions for efficient AI deployment in embedded systems. With nearly a decade of research, we help AI leaders in automotive, defense, and emerging industries build cutting-edge, resource-efficient AI products. Our technology streamlines development, driving innovation and competitive advantage. For requests to meet at the Embedl suite, please contact ola@embedl.com.

ENERZAi

Booth #61507

ENERZAi is an AI software company with cutting-edge AI optimization technology. We will demonstrate our next-generation AI inference engine, Optimium, which accelerates AI model inference on target hardware while maintaining accuracy and facilitates convenient AI model deployment across various hardware using a single tool. For requests to meet at the ENERZAi suite, please contact hanhim.chang@enerzai.com.

eYs3D

Booth #15741

eYs3D brings computer vision-based image processing capabilities to autonomous or unmanned robots, vehicles, and AIoT devices. With our own edge processor with built-in NPU, we provide a cost-effective AI platform that can be easily integrated into various applications to improve performance and enable smart decisions. 

FotoNation

Appointment Only

FotoNation is an innovator of ultra-low-power embedded hardware-accelerated edge AI solutions to enhance signal quality and enable AI analytics. We enable SoC makers to add high quality neural signal processing and analytics to their chips through IP cores and, in the future, by highly efficient FotoNation chiplets. For requests to meet with FotoNation, please contact info@fotonation.com.

Geisel Software

Booth #9649

Meet Symage by Geisel Software, the synthetic data platform built for real results. Symage generates high-fidelity, perfectly labeled image training datasets customized for your specific model. Ditch data labeling and cut your prep time by up to 90%! Stop by our booth for a demo and enter to win a Quest 3 VR headset! For requests to meet at the Geisel Software suite, please contact jr@geisel.software

Gimlet

Appointment Only

Gimlet is a developer tool for building, deploying, and monitoring AI at the edge. With Gimlet, you can deploy and monitor AI applications across your edge fleet in minutes. Gimlet helps build and refine custom models using generative AI and optimizes AI applications for your hardware without any manual configuration. 

Hailo

Venetian Tower, Suite# 29-106 and Booth #61701

Hailo develops top-performing AI processors specifically designed to enable AI tasks on the edge. For requests to meet at the Hailo suite, please contact marketing@hailo.ai

Inuitive

Appointment Only

Inuitive’s Vision-on-Chip processors and sensor modules offer an all-in-one solution with integrated capabilities, outstanding performance, optimal size, and cost efficiency. They support simultaneous depth sensing, SLAM, and AI-based object detection and recognition, enhancing robotics, AR/VR, AIoT, and 3D sensing applications. For requests to meet at the Inuitive suite, please contact jimt@inuitive-tech.com.

Lattice Semiconductor

Appointment Only

Lattice Semiconductor is the low power programmable leader. We solve customer problems across the network, from the edge to the cloud, in the growing communications, computing, industrial, automotive, and consumer markets. For requests to meet at the Lattice suite, please contact hussein.osman@latticesemi.com.

Microchip

Booth #3404/3405

Join Microchip at CES for an empowering experience of innovation with hands-on and interactive demos showcasing cutting-edge solutions in AI, automotive, and industrial technologies. Microchip is a leading provider of smart, connected and secure embedded control and processing solutions.

Nextchip

Westgate Hospitality Suites

Nextchip will demonstrate a full embedded vision system for ADAS and AD. In addition, we will show conventional color camera and thermal imaging/iToF solutions. With sensor fusion, our ADAS SoC will show its performance with reference applications such as SVM and blind spot information systems (BSIS). For requests to meet at the Nextchip suite, please contact kjo@nextchip.com.

NOTA AI

Appointment Only

NOTA AI is a leading edge AI software startup with 100+ global customers. Specializing in AI model optimization, NOTA enables efficient deployment of AI on edge devices. With $50M raised in Series C funding, the company is focused on growth and innovation in the Edge AI space. 

NXP Semiconductors

Booth #CP-107

NXP (NASDAQ: NXPI) is the trusted partner for solutions in auto, industrial & IoT, mobile and comms infrastructure. Our “Brighter Together” way links leading-edge tech with pioneering people forming system solutions making the world better, safer, & secure. We have operations globally and posted revenue of $13+ billion in 2023.

OpenMV

Appointment Only

OpenMV develops OpenMV Cam, an easy-to-use, low-cost, extensible, Python-powered machine vision module. We aim to make OpenMV Cam the “Arduino of machine vision.” For requests to meet with OpenMV, please contact kwagyeman@openmv.io.

Piera Systems

Appointment Only

Piera Systems is expanding our Canāree line of air quality monitoring products to include models designed for hospitality, education and ProTech markets. We are also seeking to expand our network of distributors and value-added resellers, plus entertain parties interested in investing in Piera Systems. For requests to meet with Piera Systems, please contact howard@pierasystems.com.

Quadric

Appointment Only

Quadric—the leading licensor of general-purpose neural processor IP for SoC design—will have representatives at CES 2025 ready to meet with you to discuss your AI inference requirements for your next SoC design.  For requests to meet with Quadric, please contact info@quadric.io

Texas Instruments

Booth #N116

Texas Instruments will have a private meeting room showcasing demos for automotive, consumer and audio applications. Through perception and edge AI technologies, we’re helping you reimagine everything from vehicles to robots to personal electronics. For requests to meet at the Texas Instruments suite, please contact EdgeAI_Vision_Alliance_CES2024@list.ti.com.

VeriSilicon

Bassano Booth #2701 and #2702

At CES 2025, VeriSilicon will showcase live demonstrations of its latest technology developments. Attendees can explore cutting-edge solutions firsthand. Private viewings are available by appointment. For requests to meet with VeriSilicon, please contact yao.lu@verisilicon.com.

videantis

Booth #430

videantis is a leading supplier of AI and other compute-intensive DSP processing solutions for automotive, mobile, consumer, robotics, IoT, drone, industrial and other applications, offering outstanding efficiency and scalability, proven in more than 20 million cars on the road today. For requests to meet at the videantis suite, please contact tony.picard@videantis.com.

Visionary.ai

Booth #61701

Named a 2023 Top 100 AI Startup, Visionary.ai enables cameras to capture clear video in extreme low light and HDR. Our edge AI software enhances video quality in real-time for medical imaging, robotics, ADAS, cinematic, surveillance, consumer, and industrial IoT applications. For requests to meet at the Visionary.ai suite, please contact david.jarmon@visionary.ai

See you at CES!

Specific venue information can be found on the CES directory here.



D3 Embedded Introduces Camera Modules Based on Valens Semiconductor’s VA7000 MIPI A-PHY Chipsets https://www.edge-ai-vision.com/2024/10/d3-embedded-introduces-camera-modules-based-on-valens-semiconductors-va7000-mipi-a-phy-chipsets/ Wed, 09 Oct 2024 22:02:25 +0000 https://www.edge-ai-vision.com/?p=50562 The integration of MIPI A-PHY into DesignCore® Series Cameras will accelerate time-to-market for customers developing performance-critical products for robotics, industrial vehicles, and other embedded vision applications. Rochester, NY – October 8th, 2024 – D3 Embedded, a global leader in embedded vision systems design and manufacturing, today announced that it has partnered with Valens Semiconductor, a […]

The post D3 Embedded Introduces Camera Modules Based on Valens Semiconductor’s VA7000 MIPI A-PHY Chipsets appeared first on Edge AI and Vision Alliance.

The integration of MIPI A-PHY into DesignCore® Series Cameras will accelerate time-to-market for customers developing performance-critical products for robotics, industrial vehicles, and other embedded vision applications.

Rochester, NY – October 8th, 2024 – D3 Embedded, a global leader in embedded vision systems design and manufacturing, today announced that it has partnered with Valens Semiconductor, a leader in high-performance connectivity, to offer off-the-shelf camera modules enabling automotive OEMs and Tier 1s to significantly reduce time-to-market for their A-PHY-based systems.

Since its release by the MIPI Alliance, the A-PHY standard, also adopted by the IEEE in 2021, has garnered significant momentum within the automotive industry. Today, there is a sizable number of automotive OEMs, Tier 1s and Tier 2s evaluating Valens Semiconductor’s A-PHY-compliant VA7000 chipsets for potential integration into their Advanced Driver-Assistance Systems (ADAS) platforms. This proven A-PHY technology can be leveraged to accelerate development for other performance-critical markets such as robotics/AMR, industrial vehicles, industrial automation, and medical imaging.

D3 Embedded, a company specializing in embedded AI imaging solutions, has integrated the VA7000 chipsets into its camera modules, allowing for longer link distances and the use of simple, flexible cabling. The camera modules will include a Sony ISX031 image sensor with onboard ISP paired with the NVIDIA® Jetson AGX Orin™ Development Kit. D3 Embedded plans on adding additional sensors and drivers to support other processors in response to market requirements, all connected with Valens Semiconductor VA7000 chipsets.

“D3 Embedded is an important partner for us across embedded vision applications,” said Gabi Shriki, SVP, Head of Audio-Video at Valens Semiconductor. “Our VA7000 MIPI A-PHY chipsets have already proved their value in the automotive industry, where three carmakers will integrate them into their next-generation ADAS systems. Through our partnership with D3 Embedded, we can optimize the evaluation and design cycle of A-PHY for the many other automotive OEMs evaluating these chipsets. Leveraging their strong presence in machine vision, we’re also excited about working with D3 Embedded to introduce these chipsets to additional markets that require industrial-grade CSI-2 extension.”

“We were impressed by Valens’ ability to do multi-gig link speeds over unshielded cables and connectors,” said Scott Reardon, CEO at D3 Embedded. “This is something our customers in the industrial vehicles and robotics market have been looking for and it will open the door to more optimized embedded vision solutions. We’re happy to be working with Valens to put this innovative product in front of our customers and partners.”

Learn more about D3 Embedded’s A-PHY-based Cameras here: https://www.d3embedded.com/product-category/camera-modules/?fwp_interface=mipi-a-phy

Valens’ VA7000-based cameras from D3 Embedded will be showcased at the Valens booth (10F80) at VISION in Stuttgart, October 8-10, 2024.



Using D3 mmWave Radar Sensors with Robot Operating System (ROS) https://www.edge-ai-vision.com/2024/06/using-d3-mmwave-radar-sensors-with-robot-operating-system-ros/ Mon, 10 Jun 2024 14:32:24 +0000 https://www.edge-ai-vision.com/?p=48228 This blog post was originally published by D3. It is reprinted here with the permission of D3. (Note: This article was written by a human being without the aid of AI, however, at D3 we use AI heavily for robotics and other applications!) Introduction Robot Operating System (ROS)is a common open source starting point for […]

The post Using D3 mmWave Radar Sensors with Robot Operating System (ROS) appeared first on Edge AI and Vision Alliance.

This blog post was originally published by D3. It is reprinted here with the permission of D3.

(Note: This article was written by a human being without the aid of AI, however, at D3 we use AI heavily for robotics and other applications!)

Introduction

Robot Operating System (ROS) is a common open-source starting point for many robot applications. It is a publish/subscribe system in which a multitude of sensors, actuators, and nodes for planning, processing, and control work together to accomplish the robot's tasks.

ROS runs on Linux, specifically Ubuntu, so it provides a stable and comfortable environment for developers.  It provides a library of tools, drivers, and algorithms that make putting together software for a robot easier.
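The publish/subscribe model at the heart of ROS can be illustrated with a toy message bus in plain Python (this sketches the pattern only; it is not the rclpy API):

```python
from collections import defaultdict

class Bus:
    """Toy in-memory message bus illustrating ROS-style topics."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        # A node registers interest in a topic.
        self._subs[topic].append(callback)

    def publish(self, topic, msg):
        # Every subscriber's callback fires with the message.
        for cb in self._subs[topic]:
            cb(msg)

bus = Bus()
received = []
bus.subscribe("/radar/points", received.append)   # e.g., a planner node
bus.publish("/radar/points", {"x": 1.2, "y": 0.0, "z": 0.3})
print(received)
```

In real ROS, nodes, publishers, and subscriptions play these roles, with typed messages such as sensor_msgs/PointCloud2 carried between processes.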

In this short overview, I’m going to describe how you can leverage mmWave radar sensors from D3 to enhance your robot projects.  We offer off-the-shelf sensors, but we are also ready and willing to customize hardware, software, and algorithms.

Sensors

D3 offers mmWave radar sensors and design services based on Texas Instruments devices and technologies. We offer sensors in the 60 GHz and 77 GHz bands; robotics applications typically rely on 60 GHz due to regulatory agency compliance rules. We offer miniature mmWave sensors smaller than one cubic inch, making them suitable for even the smallest robot platforms. These sensors can be low power (RS-L6432) or higher power and fidelity (RS-6843AOP). We also offer larger sensors with higher-gain antennas (RS-6843 and other custom boards) that can support greater ranges. Our sensors are designed as production-intent, so while they are useful for evaluation, they are also ready for deployment, including in some IP67 applications.

Connectivity

D3 offers mmWave radar sensors with USB-C and CAN-FD interfaces. USB-C is useful for fast integration into a PC-like ROS solution, while CAN-FD suits larger industrial vehicles. Our USB-based sensors use a MaxLinear USB/UART bridge, which requires a loadable kernel module driver. The driver is available from MaxLinear's website, and D3 provides patches for more recent Linux kernels on our website after your sensor purchase. A key advantage of this bridge is that it supports 3.125 Mbps, the highest UART data rate possible with the TI radar devices and higher than many other USB serial bridges can handle.

Data Available

In a recent article, I described the data you can obtain from this sort of mmWave radar sensor. The most straightforward data that ROS can use is point cloud data. You can review my post to see what this looks like, and another example is shown below in a warehouse robotics application: near points are hot in color, and distant points are cooler. ROS consumes point cloud data via a ROS driver, available from TI and very well documented. Both ROS 1 and ROS 2 are supported.

Robot’s Eye View of a Warehouse with Radar Point Cloud Data Overlay
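In ROS, these point clouds arrive as sensor_msgs/PointCloud2 messages, which carry the points as a packed byte buffer described by a list of fields. Below is a minimal sketch of unpacking such a buffer; the four-float x/y/z/doppler layout is an assumption for illustration, and a real consumer should read the layout from the message's `fields` and `point_step` headers:

```python
import struct

# Hypothetical point layout: four little-endian float32 fields per point
# (x, y, z, doppler velocity), i.e. a PointCloud2-style buffer with a
# point_step of 16 bytes.
POINT_STEP = 16

def unpack_points(data: bytes):
    # Walk the buffer one point at a time and decode each record.
    return [struct.unpack_from("<4f", data, offset)
            for offset in range(0, len(data), POINT_STEP)]

# Two synthetic points packed the same way a driver would pack them
buf = (struct.pack("<4f", 2.0, 0.5, 0.1, -0.3) +
       struct.pack("<4f", 6.5, -1.0, 0.2, 1.1))
for x, y, z, v in unpack_points(buf):
    print(f"x={x:.1f} y={y:.1f} z={z:.1f} v={v:.1f}")
```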

We have also fused radar tracking data with vision-based AI object recognition to provide not only identification of objects in the scene, but also where they are located and where they are heading. This is shown below. You can see the blue box identifying a person, the telemetry text in white, along with the green box indicating the tracked person’s location and a red vector showing where they are heading.

Another Robot’s Eye View, this time with Object Identification and Radar Tracking Overlay
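One simple way to sketch the radar/vision fusion step is to associate each radar track with the detection box whose bearing best matches the track's azimuth. The linear camera model and all numbers below are illustrative assumptions, not D3's production algorithm:

```python
def bbox_center_azimuth(cx_px, img_width_px, hfov_deg):
    """Azimuth of a bounding-box center under a simple linear camera
    model (an assumption for illustration; a real system would use the
    camera's calibrated intrinsics)."""
    frac = (cx_px - img_width_px / 2) / (img_width_px / 2)
    return frac * (hfov_deg / 2)

def associate(track_az_deg, boxes, img_width_px, hfov_deg, gate_deg=5.0):
    """Pick the detection box whose bearing best matches a radar track,
    rejecting matches outside an angular gate."""
    best, best_err = None, gate_deg
    for box in boxes:
        err = abs(bbox_center_azimuth(box["cx"], img_width_px, hfov_deg)
                  - track_az_deg)
        if err < best_err:
            best, best_err = box, err
    return best

boxes = [{"label": "person", "cx": 480}, {"label": "cart", "cx": 120}]
match = associate(track_az_deg=20.0, boxes=boxes,
                  img_width_px=640, hfov_deg=90.0)
print(match["label"])  # the radar track at 20 deg pairs with the person box
```

Once associated, the radar track contributes the location and heading while the vision model contributes the identity, as in the overlay above.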

Radar Chirp Tuning

An important factor for developing a responsive robot is the frame rate of the data stream from the sensor. In the same vein, sensing parameters can be configured to control how many points are delivered in the point cloud and the fidelity of the range, angle, and doppler speed data.

TI’s mmWave radar devices accept chirp configurations via the control port, but they can also be hard-coded to boot up delivering data with a fixed chirp configuration, or even with multiple chirp types using subframes. In either case, the frame rate and data fidelity are a function of the loaded chirp configuration.
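As a sketch of how the chirp configuration drives these figures of merit, the standard FMCW relationships can be computed directly. The configuration values below are hypothetical, not a D3 or TI default:

```python
C = 3.0e8  # speed of light, m/s

def chirp_metrics(bandwidth_hz, chirp_time_s, n_chirps, frame_period_s,
                  f_carrier_hz):
    """Standard FMCW figures of merit implied by a chirp configuration."""
    wavelength = C / f_carrier_hz
    return {
        # Finer range bins come from wider swept bandwidth
        "range_resolution_m": C / (2 * bandwidth_hz),
        # Finer doppler bins come from observing more chirps per frame
        "velocity_resolution_mps": wavelength / (2 * n_chirps * chirp_time_s),
        "frame_rate_hz": 1.0 / frame_period_s,
    }

# Illustrative 60 GHz configuration
m = chirp_metrics(bandwidth_hz=4e9, chirp_time_s=50e-6, n_chirps=64,
                  frame_period_s=0.1, f_carrier_hz=60e9)
print(m)  # 0.0375 m range bins, ~0.78 m/s doppler bins, 10 Hz frames
```

Trading these parameters against each other is exactly the tuning exercise described above: more chirps per frame sharpens doppler resolution but lengthens the frame period, lowering the data rate available to the robot.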

Getting Started

When you purchase a D3 mmWave radar sensor for use with ROS, you install the USB/UART bridge driver, install ROS (or ROS 2), install the TI ROS driver for the sensor, and then begin developing your robot application. We provide full documentation covering these steps when you purchase sensors.

D3 can also support you in optimizing detection and tracking parameters, providing custom software for radar sensors, and developing ROS (and non-ROS) algorithms.

For example, we collaborated with TI on an autonomous mobile robot (AMR) demonstration that highlighted the following capabilities:

  1. TI’s powerful but efficient TDA4VM processor with AI for object detection and ROS
  2. TI’s miniature DLP projector to signal the robot’s intentions on the floor in front of the robot
  3. TI’s C2000 motor controllers for real-time control of the robot’s two drive motors
  4. D3’s mmWave sensors for obstacle detection, tracking, and avoidance
  5. D3’s camera modules for 360-degree robot vision
  6. Scuttle Robot versatile robot platforms

D3 developed the hardware and software for the demo, which highlighted inventory scanning and autonomous operation using odometry and AprilTag vision-based localization.

The TI / D3 Autonomous Mobile Robot roving around a warehouse

Conclusion

I hope you found some interesting ideas in this article. To get going with your industrial robotics projects, contact us at sales@d3engineering.com, or comment or connect with me to discuss your ideas.

Tom Mayo
Lead Product Line Manager, D3


Improving Perception with Multiple Radar Sensors https://www.edge-ai-vision.com/2024/02/improving-perception-with-multiple-radar-sensors/ Mon, 05 Feb 2024 15:50:02 +0000 https://www.edge-ai-vision.com/?p=46389 This blog post was originally published by D3. It is reprinted here with the permission of D3. There are variety of applications where spatial sensing is beneficial.  Applications such as automotive parking, robotics collision avoidance, and even people counting and tracking can benefit from detecting objects in the environment either around a vehicle or within […]

The post Improving Perception with Multiple Radar Sensors appeared first on Edge AI and Vision Alliance.

This blog post was originally published by D3. It is reprinted here with the permission of D3.

There are a variety of applications where spatial sensing is beneficial. Applications such as automotive parking, robotics collision avoidance, and even people counting and tracking can benefit from detecting objects in the environment, whether around a vehicle or within a facility. D3 works with Texas Instruments millimeter-wave radar devices for a multitude of applications like these.

These sensors detect objects using frequency-modulated continuous wave (FMCW) transmissions: the transmitted chirp is mixed with the received signals to yield an intermediate frequency (IF). The IF signal is sampled with an integrated analog-to-digital converter and then processed with Fourier transforms to detect the range and doppler speed of objects. With multiple-input/multiple-output (MIMO) antenna arrays, operations comparing the phases of the signals received at the different antennas can determine the angle to the detected objects. With these abilities, the radar sensor returns a 3-dimensional point cloud that maps out detections in the area along with their doppler speed.
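The range computation from the mixer output follows directly from the chirp geometry; a minimal sketch, with an illustrative chirp configuration:

```python
C = 3.0e8  # speed of light, m/s

def if_to_range(f_if_hz, slope_hz_per_s):
    """Target range from its IF beat frequency. The echo of a chirp with
    slope S returns delayed by 2R/c, so the mixer output sits at
    f_IF = S * 2R / c, giving R = c * f_IF / (2 * S)."""
    return C * f_if_hz / (2 * slope_hz_per_s)

# Illustrative chirp: 4 GHz swept in 50 us gives a slope of 8e13 Hz/s,
# so a 2 MHz beat corresponds to a target 3.75 m away
slope = 4e9 / 50e-6
print(if_to_range(2e6, slope))
```

The Fourier transform over the sampled IF signal is what separates multiple beat frequencies, and hence multiple targets at different ranges, in a single chirp.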

Clustering and tracking of the point cloud returns is another important operation for making better sense of the objects in the scene. TI has implemented a processing library that groups returns belonging to the same object and then tracks each cluster from frame to frame. This library has been optimized to run on the microprocessor core within the SoC, and it can perform clustering and tracking in real time on the detected returns.

While clustering and tracking are useful within a single radar sensor, multiple millimeter-wave radar devices can also work together to provide a more detailed view of the environment by combining the computed point clouds from several independent devices into one representation of the scene. Think of multiple flashlights in a dark room illuminating different parts at once so you can see the whole scene.

With multiple flashlights, you combine the view of the room in your head without thinking; a radar processing chain has to perform that combination explicitly. There is hope, however, because aggregating point clouds is easy for a processor.

A couple of operations need to be performed for everything to line up. First, the coordinate system of each sensor must be rotated and translated into the platform's reference coordinate system. For example, if there are radar sensors on the front, rear, left, and right sides of a vehicle, the point cloud coordinates from each sensor can be rotated and translated to appear in the front sensor's frame of reference.
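This rotate-and-translate step can be sketched in a few lines; the example uses a 2D yaw-only rotation for brevity, whereas a real system would apply a full 3D transform derived from each sensor's calibrated mounting pose:

```python
import math

def to_reference_frame(points, yaw_deg, tx, ty):
    """Express one sensor's returns in the platform frame: rotate by the
    sensor's mounting yaw, then translate by its mounting offset.
    (2D yaw-only for brevity; z passes through unchanged.)"""
    c, s = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    return [(c * x - s * y + tx, s * x + c * y + ty, z)
            for x, y, z in points]

# A rear-facing sensor (yaw 180 deg) mounted 2 m behind the reference
# sensor: a target 1 m ahead of it lies 3 m behind the reference origin
rear_points = [(1.0, 0.5, 0.2)]
print(to_reference_frame(rear_points, yaw_deg=180.0, tx=-2.0, ty=0.0))
```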

Recall that TI’s libraries can perform clustering and tracking of radar returns using the microprocessor within the radar device.  While this is good for one device at a time, when we want to take the returns from multiple sensors together, it’s better to perform these algorithms on a common processor that considers all of the data together.  At D3, we have implemented just such an aggregating clustering and tracking mechanism by porting the library to other platforms such as a PC and embedded automotive and robotics processors.  We demonstrated this capability at Sensors Converge in Santa Clara in June 2023, and at CES in Las Vegas in January 2024.
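The aggregation itself is simply concatenation of the transformed point clouds followed by clustering over the combined set. A toy distance-threshold clusterer stands in here for TI's optimized library, for illustration only:

```python
def cluster(points, eps=0.5):
    """Greedy clustering over aggregated 2D returns: a point joins the
    first cluster containing a neighbor within eps meters.
    (A stand-in for TI's clustering library, for illustration only.)"""
    clusters = []
    for p in points:
        home = None
        for cl in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2
                   for q in cl):
                cl.append(p)
                home = cl
                break
        if home is None:
            clusters.append([p])
    return clusters

# Returns from two sensors, already transformed into the common frame
front = [(2.0, 0.1), (2.2, 0.0)]
side = [(2.1, 0.3), (8.0, 1.0)]
print(len(cluster(front + side)))  # 2: one nearby object, one distant
```

Note how the nearby returns from the two different sensors merge into a single object, which is precisely what per-sensor clustering cannot do.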

D3 has worked with many product development customers to customize, optimize, and validate radar processing chains and algorithms.  We’d be excited to work with you to help meet your radar and processing objectives. Contact us at sales@d3engineering.com for more information.

Tom Mayo
Lead Product Line Manager, D3

