Blog Posts - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/blog/
Designing machines that perceive and understand.

The Forest Listener: Where edge AI meets the wild
https://www.edge-ai-vision.com/2026/02/the-forest-listener-where-edge-ai-meets-the-wild/
Mon, 23 Feb 2026

This blog post was originally published at Micron’s website. It is reprinted here with the permission of Micron.

Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability and resilience across industries. By supporting diverse hardware, software and connectivity standards, organizations can accelerate product development, reduce costs and enhance user experiences. A broad ecosystem encourages collaboration among manufacturers, developers and service providers, helping to drive interoperability. Enabling an ecosystem for your customers adds significant value to your product in any market, but in a market that spans many applications, it is paramount for helping your customers get to market quickly. Micron works with a diverse set of ecosystem partners across broad application areas such as microprocessors, including STMicroelectronics (STM). We have collaborated with STM for years, matching our memory solutions to their products. Ultimately, these partnerships empower our mutual businesses to deliver smarter, more connected solutions that meet the evolving needs of consumers and enterprises alike.

The platform and the kit

There’s something uniquely satisfying about peeling back the anti-static bag and revealing the STM32MP257F-DK dev board brimming with potential. As an embedded developer, I am excited when new silicon lands on my desk, especially when it promises to redefine what’s possible at the edge. The STM32MP257F-DK from STMicroelectronics is one of those launches that truly innovates. The STM32MP257F-DK Discovery Kit is a compact, developer-friendly platform designed to bring edge AI to life. And in my case, to the forest. It became the heart of one of my most exciting projects yet: the Forest Listener, a solar-powered, AI-enabled bird-watching companion that blends embedded engineering with natural exploration.

A new kind of birdwatcher

After a few weeks of development and testing, my daughter and I headed into the woods just after sunrise — as usual, binoculars around our necks, a thermos of tea in the backpack and a quiet excitement in the air. But this time, we brought along a new companion. The Forest Listener is a smart birdwatcher, an AI-powered system that sees and hears the forest just like we do. Using a lightweight model trained with STM32’s model zoo, it identifies bird species on the spot. No cloud, no latency, just real-time inference at the edge. My daughter mounts the device on a tripod, connects the camera and powers it on. The screen lights up. It’s ready! Suddenly, a bird flutters into view. The camera captures the moment. Within milliseconds, the 1.35 TOPS neural processing unit (NPU), optimized for object detection, kicks in. The Cortex-A35 logs the sighting (image, species, timestamp), while the Cortex-M33 manages sensors and power. My daughter, watching on a connected tablet, lights up: “Look, Dad! It found another one!” A Eurasian jay, this time.
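To make that flow concrete, here is a minimal sketch of the detect-and-log loop in Python. It is purely illustrative: capture_frame() and detect_bird() are hypothetical stand-ins for the camera pipeline and the STM32 model zoo detector running on the NPU, not actual APIs from the project.

```python
# Minimal sketch of the Forest Listener's detect-and-log loop (illustrative only).
# capture_frame() and detect_bird() are hypothetical placeholders, not real APIs.
import csv
import time
from datetime import datetime

def capture_frame():
    """Placeholder for grabbing a frame from the camera pipeline."""
    return b"raw-frame-bytes"

def detect_bird(frame):
    """Placeholder for NPU inference; returns (species, confidence) or None."""
    return ("Eurasian jay", 0.91)

def log_sighting(path, species, confidence):
    """Append a sighting record (timestamp, species, confidence), as the Cortex-A35 might."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), species, f"{confidence:.2f}"])

if __name__ == "__main__":
    for _ in range(3):                  # in the field this would loop indefinitely
        frame = capture_frame()
        result = detect_bird(frame)
        if result and result[1] > 0.8:  # only log confident detections
            log_sighting("sightings.csv", *result)
        time.sleep(1.0)                 # pace the loop to save power
```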

Built for the edge … and the outdoors

Later, at home, we scroll through the logs saved on the memory card. The system can also upload sightings via Ethernet. She’s now learning names, songs and patterns. It’s a beautiful bridge between nature and curiosity. At the core of this seamless experience is Micron LPDDR4 memory. It delivers the high bandwidth needed for AI inference and multimedia processing, while maintaining ultra-low power consumption, critical for our solar-powered setup. Performance is only part of the story: What truly sets Micron LPDDR4 apart is its long-term reliability and support. Validated by STM for use with the STM32MP257F-DK, this memory is manufactured at Micron’s dedicated longevity fab, ensuring a more stable, multiyear supply chain. That’s a game-changer for developers building solutions that need to last — not just in home appliances, but in harsh field environments. Whether you’re deploying an AI app in remote forests, industrial plants or smart homes, you need components that are not only fast and efficient but also built to endure. Micron LPDDR4 is engineered to meet the stringent requirements of embedded and industrial markets, with a commitment to support and availability that gives manufacturers peace of mind.

Beyond bird-watching

The Forest Listener is just one example of what the STM32MP257F-DK and Micron LPDDR4 can enable. In factories, the same edge-AI capabilities can monitor machines, detect anomalies, and reduce downtime. In smart homes, they can power face recognition, voice control and energy monitoring — making homes more intelligent, responsive and private, all without relying on the cloud.

For more information about Micron solutions that are enabling AI at the edge, visit micron.com and check out our industrial solutions and LPDDR4/4X product insights.

Donato Bianco, Senior Ecosystem Enablement Manager, Micron Technology

 

The post The Forest Listener: Where edge AI meets the wild appeared first on Edge AI and Vision Alliance.

How Lenovo is scaling Level 4 autonomous robotaxis on Arm
https://www.edge-ai-vision.com/2026/02/how-lenovo-is-scaling-level-4-autonomous-robotaxis-on-arm/
Fri, 20 Feb 2026

This blog post was originally published at Arm’s website. It is reprinted here with the permission of Arm.

As L4 robotaxis shift from pilot to production, Arm offers the compute foundation needed to deliver end-to-end physical AI that scales across vehicle fleets.

After years of autonomous driving pilots and controlled trials, the automotive industry is moving toward the production-scale deployment of Level 4 (L4) robotaxis. This marks a significant moment for artificial intelligence (AI), as it moves from advising humans on recommended actions to enabling vehicles that perceive their environment and act on it without human supervision, although it comes with a steep increase in technical demands.

Compared with today’s advanced L2++ vehicles, L4 systems typically require a broader sensor stack, including LiDAR, cameras and radar, which drives data processing requirements from roughly 25GB per hour to as much as 19TB per hour. This has forced a fundamental rethink of compute for physical AI.
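For a rough sense of what those per-hour figures imply, here is a back-of-the-envelope estimate in Python. The sensor counts and per-sensor data rates are our own illustrative assumptions, not figures from Lenovo, WeRide or Arm.

```python
# Back-of-the-envelope sensor data-rate estimate for an L4 stack (illustrative only).
# All sensor counts and per-sensor rates below are assumptions, not vendor figures.
cameras = 10          # surround + front long-range cameras
camera_mbps = 3_000   # ~3 Gbit/s each for high-res, high-frame-rate raw video
lidars = 4
lidar_mbps = 500      # dense point clouds
radars = 6
radar_mbps = 50

total_mbps = cameras * camera_mbps + lidars * lidar_mbps + radars * radar_mbps
bytes_per_hour = total_mbps / 8 * 1e6 * 3600          # Mbit/s -> bytes per hour
print(f"~{bytes_per_hour / 1e12:.1f} TB per hour")    # ~14.5 TB/hour with these assumptions
```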

To that end, Lenovo has developed the L4 Autonomous Driving Domain Controller AD1, a production-ready autonomous driving computing platform powered by dual Arm-based NVIDIA DRIVE AGX Thor chips. WeRide is deploying the platform in its GXR Robotaxi, the world’s first mass-produced L4 autonomous vehicle.

Inside Lenovo AD1

The Lenovo AD1 serves as the central brain inside the GXR Robotaxi, managing multiple functions from perception, prediction and trajectory planning to real-time motion control and safety monitoring. The platform is designed for production-grade L4 autonomy for robotaxis and other autonomous vehicles. Supporting over 2,000 TOPS of AI capacity, it enables dense perception, prediction, and planning models to run simultaneously for faster, better decision-making on the road.

For robotaxis, architectures built from many loosely coupled electronic control units (ECUs) cannot deliver the latency, safety, or scalability L4 requires; centralized, high-performance compute platforms are needed instead. Therefore, AD1 is powered by NVIDIA DRIVE AGX Thor, a centralized car computer built on the Arm Neoverse V3AE CPU, which brings previously separate driving, parking, cockpit, and monitoring functions into one compute domain.

Efficiency, safety, and foundation for physical AI

Arm serves as the foundational compute architecture of the NVIDIA DRIVE AGX Thor platform, enabling advanced computing capabilities that power Lenovo’s AD1 platform.

  1. Performance per watt for fleet economics: As robotaxis operate for extended hours in demanding, dense urban environments, the Arm compute platform delivers server-class performance within a highly efficient power envelope, enabling large AI workloads without compromising vehicle battery or thermal design.
  2. A safety-ready architecture: The Arm ecosystem – including functional-safety-capable technologies, toolchains, software solutions, and long-established automotive partners – supports platforms designed to meet ASIL-D and other global safety requirements, a critical factor for long-lived commercial deployments.
  3. A mature, scalable software ecosystem: Since Arm provides a unified architecture across cloud, edge and physical environments, it allows developers to build, optimize, and scale AI models using widely available software tools and frameworks.
  4. A roadmap aligned with future AI workloads: As physical AI models continue to grow in size and complexity, compute efficiency and architectural stability become increasingly important. By building on Arm, automakers gain a consistent architectural foundation with a long-term roadmap, helping them avoid future redesigns and keep their compute strategy stable even as AI evolves.

The road to autonomy is being built on Arm

The deployment of Lenovo AD1 in WeRide’s GXR Robotaxis shows how physical AI in autonomous driving systems is moving beyond controlled pilots and into real, complex urban environments. As autonomous capabilities advance through L4 robotaxis and other autonomous vehicles, the industry is converging on platforms that deliver high performance, safety, and power-efficiency through a centralized architecture.

Arm sits at the core of this shift, providing the foundation that enables companies like Lenovo and WeRide to run dense AI workloads continuously, adapt to rapidly evolving models, and support fleets that must operate reliably for years. As robotaxis expand into new cities and global markets, the Arm compute platform – built for safety and engineered to meet the real-world demands of physical AI at scale – is a critical part of the road ahead.

The post How Lenovo is scaling Level 4 autonomous robotaxis on Arm appeared first on Edge AI and Vision Alliance.

What Does a GPU Have to Do With Automotive Security?
https://www.edge-ai-vision.com/2026/02/what-does-a-gpu-have-to-do-with-automotive-security/
Thu, 19 Feb 2026

This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies.

The automotive industry is undergoing the most significant transformation since the advent of electronics in cars. Vehicles are becoming software-defined, connected, AI-driven, and continuously updated. This evolution brings extraordinary new capability – but it also brings greater cybersecurity and functional-safety risk.

The GPU, once only a graphics accelerator for infotainment screens, is now also a primary compute engine for safety-critical tasks like vehicle perception, driver monitoring and camera stitching. The modern GPU is no longer a passive block in the SoC, or simply something you provision for; it is cyber-relevant, safety-relevant – and increasingly a point of focus for OEMs, Tier-1s and safety assessors.

At Imagination Technologies, we believe customer-trusted platforms start with evidence-based, secure IP, ‘certified’ against the relevant standards, that enables apples-to-apples comparisons with other products in the market. In this article we explore why GPUs have become relevant to automotive cybersecurity and the dual role that they play.

Cybersecurity and GPUs – who cares?

As vehicles converge with cloud services, AI, and IoT ecosystems, the attack surface inevitably grows significantly. Automotive platforms have now evolved from isolated ECUs to domain and zonal controllers interconnected over high-bandwidth networks, running mixed-criticality workloads, and increasingly reliant on GPU-accelerated compute.

Today you’ll find automotive GPUs involved in AI perception and sensor-fusion workloads, neural-network inference, complex 3D interfaces and real-time visualisation tools like surround-view cameras. A common theme across all of these is ‘data’, with different levels of value and sensitivity. And where valuable data goes, attackers normally follow.

The GPU as Both an Attack Surface—and a Defensive Asset

The duality of the GPU is one of the most important shifts in automotive compute.

The GPU as an Attack Surface

Increasingly, GPUs deal with challenges such as:

  • Side-channel leakage from massively parallel compute
  • Privilege escalation through GPU memory or scheduling
  • Manipulation of GPU-processed AI inputs
  • Fault injection or data corruption
  • Malicious workloads exploiting shared GPU pipelines

This is why any automotive GPU requires secure memory boundaries, robust virtualisation, privilege levels, and fault detection engineered directly into the architecture.

The GPU as a Security Accelerator

At the same time, GPUs are extremely efficient for handling a variety of algorithms for encryption and decryption, hashing, digital signing, key generation, and post-quantum cryptography.  By offloading these tasks, GPUs can reduce CPU load and preserve the tight real-time constraints that are an essential requirement in modern automotive systems.

Functional Safety and Cybersecurity: Interlinked, Not Identical

Because it handles perception data, model execution, and visual outputs, a compromised GPU can indirectly influence safety-critical behaviour. For example, tampering with perception inputs can mislead ADAS decision-making.

Cybersecurity and functional safety reinforce each other, but they serve different purposes. All safety-critical functions rely on cybersecurity, because a cyber attack can force a system into a hazardous state. But not all cybersecurity events create immediate safety hazards; personal-data leakage, for example, is a security problem rather than a direct safety hazard.

However, a compromised GPU can indirectly influence safety logic—especially in AI-based perception and decision-making systems. This makes it essential that ISO 26262 (functional safety) and ISO 21434 (cybersecurity) objectives are addressed together from concept through deployment.

Security as a Lifecycle Discipline: Imagination’s CSMS

Cybersecurity is not a bolt-on feature. It is a continuous discipline governed by a Cybersecurity Management System (CSMS) that spans threat analysis and risk assessment, secure design and architecture, secure coding and verification, vulnerability monitoring, incident response and supply-chain assurance. Imagination operates an externally certified CSMS, enabling our partners to build compliance arguments on top of a robust, audited foundation.

PowerVR GPU Security & Safety Features

Across our BXS and DXS GPU families, Imagination integrates a comprehensive set of hardware and architectural protections, including:

  • Memory protection and integrity checking
  • Hardware-based virtualisation for domain isolation
  • Privilege boundaries and secure task separation
  • Deterministic compute paths for safety-critical workloads
  • Fault detection and diagnostics, such as Tile Region Protection or Idle Cycle Stealing
  • Secure-boot integration and alignment with system-wide trust anchors

These features are backed by ISO 26262-certified safety documentation and – for future functionally safe products – by security documentation that accelerates customer development activities and assessments.

Importantly, some of our safety mechanisms also reinforce cybersecurity.  For example, Tile Region Protection, originally designed to detect accidental data corruption in safety contexts, can also reveal abnormal access patterns characteristic of fault-injection or data-manipulation attacks. By monitoring unexpected behaviour at the hardware level, the GPU raises the difficulty of successfully executing low-level tampering attacks.

This dual benefit follows the duality explained earlier. Safety mechanisms that strengthen the cybersecurity pedigree are a key advantage of integrating protection directly into the architecture rather than relying on external layers.

Conclusion

GPUs now sit at the heart of automotive compute—and therefore at the heart of automotive safety and cybersecurity. As perception, AI, and real-time visualisation become central to vehicle behaviour and driver interfaces, the GPU must evolve from a performance component into a certifiable, cyber-resilient compute engine.

At Imagination Technologies, we embed safety, security, lifecycle engineering, and certified processes directly into our GPU IP—providing OEMs and Tier-1s with the foundation to build secure, high-performance, real-time systems. To find out more about our solutions, reach out to the team and book a meeting.

Antonio Priore, Senior Director, Engineering – Product Safety and Security, Imagination Technologies

The post What Does a GPU Have to Do With Automotive Security? appeared first on Edge AI and Vision Alliance.

Pushing the Limits of HDR with Ubicept
https://www.edge-ai-vision.com/2026/02/pushing-the-limits-of-hdr-with-ubicept/
Wed, 18 Feb 2026

This blog post was originally published at Ubicept’s website. It is reprinted here with the permission of Ubicept.

Executive summary

  • Ubicept’s SPAD-based system offers consistent HDR performance in nighttime driving conditions, preserving shadow and highlight detail where conventional cameras fall short.
  • Unlike traditional HDR techniques which often struggle with motion artifacts, Ubicept Photon Fusion maintains clarity even when both the camera and scene are in motion.
  • Watch https://www.youtube.com/watch?v=KxucJYv63pI on an HDR-capable display to compare a conventional CMOS camera with in-sensor HDR and a SPAD camera with Ubicept processing

Introduction

At Ubicept, we often talk about the “impossible triangle”—low light, fast motion, and high dynamic range—and how our technology enables perception even when all three are present. That said, it’s been a while since we’ve highlighted our HDR capabilities, so we decided to take a spin around town with our new color setup to show them off.

Before we dive in, let’s take a moment to talk about why high dynamic range matters for perception. Our world is full of extreme lighting contrasts. On sunny days, reflections from shiny surfaces can blind both humans and machines. At night, brilliant headlights and streetlamps create intense pools of light that leave surrounding areas in deep shadow. If a perception system can’t resolve detail across both the bright and the dark, it risks missing critical information. That’s why image sensors designed for applications like advanced driver assistance systems (ADAS) often emphasize their ability to handle these challenging scenarios.

Experimental setup

For this demo, we rigged up two systems side by side:

  • Our prototype development kit, featuring a 1-megapixel SPAD sensor and Ubicept processing
  • A 5-megapixel dash camera, featuring a low-light CMOS sensor with built-in HDR capabilities

The development kit camera was mounted outside the vehicle to capture an unobstructed view. Unfortunately, the dash camera had to remain inside due to its physical design, making it more susceptible to glare from the windshield. So, while this isn’t a perfectly fair or scientific comparison, the dramatic differences you’re about to see should still offer meaningful insight into the relative performance of the two systems in real-world scenarios.

Before you press play:

  • For best results, please view this on an HDR-capable display. You can still appreciate the video on a typical SDR desktop or laptop monitor, but the results are truly stunning on an OLED smartphone or television.
  • We exported the video at half speed to highlight motion detail. The dash camera only outputs at 30 fps in HDR mode, so it will look choppy when slowed down by 50%.

Key observations

We hope the comparison video speaks for itself, but we wanted to highlight a few key moments to observe if you choose to review the footage again.

First, even though the dash camera runs in HDR mode, there are plenty of situations where its dynamic range just isn’t enough. Take this frame at 3:39:

 To see this frame in full quality, see 3:39 in the video on an HDR-capable display

The outlined area is actually well-lit by the surrounding environment, but the dash camera sacrifices shadow detail to avoid overexposing the bright building. As a consequence, the trees disappear into the noise floor. In contrast, our system preserves both highlights and shadows, revealing the entire scene clearly.

We also noticed some HDR-specific artifacts in the dash camera footage. In the frame at 0:27 below, the outlined region shows a sharp window, while the bright green container (moving at the same speed relative to the car) is blurred beyond recognition:

To see this frame in full quality, see 0:27 in the video on an HDR-capable display

This is notable because, under normal conditions, motion blur reflects how much something is moving. With conventional HDR, however, that relationship becomes more complex due to how these systems operate. They blend short exposures for bright regions with longer ones for darker areas, causing motion blur to also vary by brightness. The result is frames that are harder to interpret.
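To see why, consider a toy numpy sketch of our own (this is not Ubicept’s pipeline or the dash camera’s actual algorithm): the same moving object is rendered once from a single short exposure and once from a long exposure built by averaging several instants, and its apparent width differs. Exposure-bracketed HDR stitches such exposures together by brightness, so the amount of blur ends up depending on how bright a region is.

```python
# Toy illustration (our own, not Ubicept's method): a region rendered from a short
# exposure stays sharp, while the same motion rendered from a long exposure smears.
# Exposure-bracketed HDR mixes the two by brightness, so blur varies with brightness.
import numpy as np

def frame(t, size=64):
    """One instant of a small bright square moving left to right."""
    img = np.zeros((size, size), dtype=np.float32)
    img[28:36, 5 + t:13 + t] = 1.0
    return img

short_exposure = frame(0)                                       # integrates ~1 instant of motion
long_exposure = np.mean([frame(t) for t in range(8)], axis=0)   # integrates 8 instants of motion

def apparent_width(img, row=30, thresh=0.05):
    cols = np.where(img[row] > thresh)[0]
    return int(cols.max() - cols.min() + 1)

print("object width in short exposure:", apparent_width(short_exposure))  # 8 px: sharp
print("object width in long exposure: ", apparent_width(long_exposure))   # 15 px: smeared
```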

These techniques can also introduce artifacts, as shown in this frame at 3:03:

To see this frame in full quality, see 3:03 in the video on an HDR-capable display

We can’t say for sure what’s happening here, since we don’t have details about the dash camera’s HDR implementation, but suffice it to say that falsely repeated objects can be confusing for downstream perception systems. The more important point, at least for this demo, is that the SPAD camera with Ubicept processing is able to deliver consistent performance across all the situations we encountered.

Please note that the still images above were mapped down to SDR for web display, so some of the shadows and highlights may appear clipped. The video itself should show the full range, so we encourage you to view it on an HDR-capable display.

Technical notes

You might be thinking, “Wow, SPADs are amazing!” And they are, but they’re not enough on their own to produce results like this. We addressed this directly in a previous blog post, as well as on our Technology and Passive Vision pages. What we’re showing here isn’t the result of a special “HDR SPAD” or a dedicated HDR algorithm. It’s all part of the same core pipeline. Put simply, HDR is just one of many challenges our system is built to handle.

With that said, achieving the best results isn’t just about the sensor and processing. As we built this demo, we came to appreciate how important it is for all parts of the system to work together. In early tests using standard machine vision lenses, we found that glare significantly reduced contrast. That led us to the Sunex DSL428—we were admittedly skeptical at first of its “HDR-optimized” marketing, but it turns out the designation was well-earned!

We also ran into some practical challenges, like condensation forming on the optical components as the night cooled (note to self: bring some microfiber cloths next time). That’s something we’ll address in future demos, but the key takeaway is that the sensor and processing weren’t the limiting factors. Either way, we’re looking forward to showing even better results here with continued refinements to the optics and housing. Of course, if you want to see how our technology performs on your most demanding perception tasks, we’d love to hear from you!

The post Pushing the Limits of HDR with Ubicept appeared first on Edge AI and Vision Alliance.

A Practical Guide to Recall, Precision, and NDCG
https://www.edge-ai-vision.com/2026/02/a-practical-guide-to-recall-precision-and-ndcg/
Tue, 17 Feb 2026

This blog post was originally published at Rapidflare’s website. It is reprinted here with the permission of Rapidflare.

Introduction

Retrieval-Augmented Generation (RAG) is revolutionizing how Large Language Models (LLMs) access and use information. By grounding models in domain specific data from authoritative sources, RAG systems deliver more accurate and context-aware answers.

But a RAG system is only as strong as its retrieval layer. Suboptimal retrieval performance results in low recall, poor precision, and incoherent ranking signals that degrade overall relevance and user trust.

This guide outlines a step-by-step approach to optimizing RAG retrieval performance through targeted improvements in recall, precision, and NDCG (Normalized Discounted Cumulative Gain). It’s designed to help AI researchers, engineers, and developers build more accurate and efficient retrieval pipelines.

The Basics of RAG Retrieval

Retrieval is the foundation of any Retrieval-Augmented Generation (RAG) system. There are two main retrieval methods, each offering unique strengths.

  1. Vector Search (Semantic Search)

Transforms text into numerical embeddings that capture semantic meaning and relationships. It retrieves conceptually related results, even without keyword overlap.

Example: A query for “machine learning frameworks” retrieves documents about PyTorch and TensorFlow.

  2. Full-Text Search (Keyword Search)

Matches exact phrases and keywords. It’s fast and efficient for literal queries but lacks contextual understanding.

Example: It finds “machine learning frameworks” only if the phrase appears verbatim.

Pro Tip: Use hybrid search (vector + keyword) to combine the contextual power of vector retrieval with the speed and precision of keyword matching—ideal for most RAG pipelines.
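As a concrete sketch of hybrid search, the snippet below merges a vector-search ranking and a keyword-search ranking with reciprocal rank fusion (RRF). The two input lists are hypothetical stand-ins for whatever retrievers your pipeline actually uses.

```python
# Minimal reciprocal rank fusion (RRF) sketch for hybrid retrieval (illustrative).
# `vector_hits` and `keyword_hits` are hypothetical ranked lists of document IDs.
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Combine several ranked lists; k dampens the influence of any single list."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_pytorch", "doc_tensorflow", "doc_sklearn"]   # semantic matches
keyword_hits = ["doc_ml_frameworks", "doc_pytorch", "doc_blog"]  # literal matches

print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# doc_pytorch rises to the top because both retrievers rank it highly
```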


Key Metrics for RAG Retrieval Performance

Before optimizing, measure your retrieval performance using three key metrics:

  1. Recall

Did we retrieve all relevant content?
If 85 of 100 relevant documents are found, recall = 85%. Low recall means missing key data.

  2. Precision

How much irrelevant data did we avoid?
If 70 of 100 retrieved results are relevant, precision = 70%. Low precision introduces noise that reduces LLM quality.

  3. NDCG (Normalized Discounted Cumulative Gain)

Are the most relevant results ranked highest?
High NDCG ensures your system ranks top-quality documents first—essential for LLMs with limited context windows.
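To make the recall and precision arithmetic above concrete, here is a minimal sketch with toy document IDs (the numbers roughly mirror the 85% and 70% examples; NDCG is illustrated separately under Step 3).

```python
# Toy recall/precision calculation over retrieved vs. relevant document IDs (illustrative).
def recall(retrieved, relevant):
    return len(set(retrieved) & set(relevant)) / len(relevant)

def precision(retrieved, relevant):
    return len(set(retrieved) & set(relevant)) / len(retrieved)

relevant = {f"doc{i}" for i in range(100)}                                    # 100 truly relevant documents
retrieved = [f"doc{i}" for i in range(85)] + [f"junk{i}" for i in range(36)]  # 121 results, 85 of them relevant

print(f"recall    = {recall(retrieved, relevant):.1%}")     # 85.0%
print(f"precision = {precision(retrieved, relevant):.1%}")  # 70.2%
```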

Optimization Priorities:

  1. Maximize Recall – capture all relevant data.
  2. Improve Precision – reduce retrieval noise.
  3. Optimize NDCG – enhance ranking quality.

Step 1: Maximize Recall

Strong recall ensures complete information coverage for your RAG retrieval pipeline.

Techniques:
  • Query Expansion: Add synonyms and related terms (e.g., “Transformer models” → “BERT,” “attention mechanisms”).
  • Hybrid Search: Combine vector and keyword results (e.g., reciprocal rank fusion).
  • Fine-Tuned Embeddings: Train on domain-specific data (finance, legal, healthcare) for improved recall.
  • Smart Chunking: Segment text into overlapping chunks (250–500 tokens) for granular coverage. Benchmark chunk size and overlap for best results (see the sketch after this list).
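Here is a minimal sketch of that overlapping-chunk idea. Whitespace-split words stand in for real tokenizer tokens, and the 250/50 budget is just an example drawn from the range above.

```python
# Minimal sketch of overlapping, token-budgeted chunking (illustrative; whitespace
# "tokens" stand in for a real tokenizer, and the 250/50 budget is only an example).
def chunk_text(text, chunk_tokens=250, overlap_tokens=50):
    tokens = text.split()
    step = chunk_tokens - overlap_tokens
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_tokens]
        if not window:
            break
        chunks.append(" ".join(window))
        if start + chunk_tokens >= len(tokens):
            break
    return chunks

doc = "retrieval " * 600   # stand-in for a long document (~600 tokens)
chunks = chunk_text(doc)
print(len(chunks), "chunks;", len(chunks[0].split()), "tokens in the first chunk")
```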

Step 2: Increase Precision

After retrieving broadly, refine for relevance and context alignment.

Techniques:
  • Re-Rankers: Use transformer-based reranking models (e.g., BERT, Cohere Rerank API) to reorder top results.
  • Metadata Filtering: Exclude irrelevant or outdated documents using attributes such as date or source.
  • Thresholding: Apply similarity cutoffs (e.g., cosine > 0.5) to remove weak matches.

Higher precision means cleaner context and more accurate RAG generation.
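A minimal sketch of the thresholding and re-ranking pass described above might look like the following; rerank_score() is a made-up stand-in for a real cross-encoder or hosted rerank API.

```python
# Illustrative precision pass: drop weak matches by cosine cutoff, then re-rank the rest.
# `rerank_score` is a hypothetical stand-in for a trained re-ranker or rerank API.
def rerank_score(query, doc):
    """Crude word-overlap scorer; replace with a real re-ranking model."""
    return sum(word in doc.lower() for word in query.lower().split())

def refine(query, candidates, min_cosine=0.5, top_k=5):
    kept = [c for c in candidates if c["cosine"] > min_cosine]                 # thresholding
    kept.sort(key=lambda c: rerank_score(query, c["text"]), reverse=True)      # re-ranking
    return kept[:top_k]

candidates = [
    {"text": "PyTorch is a machine learning framework", "cosine": 0.82},
    {"text": "Unrelated marketing copy",                "cosine": 0.31},
    {"text": "TensorFlow powers ML workloads",          "cosine": 0.64},
]
print(refine("machine learning frameworks", candidates))
```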

Step 3: Optimize NDCG (Ranking Quality)

Good recall and precision mean little without effective ranking.

Techniques:
  • Advanced Reranking: Reorder top candidates by contextual relevance.
  • User Feedback Loops: Use click and dwell-time data to promote high-value results.
  • Context-Aware Retrieval: Include key entities or prior concepts from conversation history—without appending full chat logs.
  • Measure Improvement: Label a small dataset with relevance scores and track NDCG@5 or NDCG@10. Aim for a 5–10% boost per iteration (see the sketch below).
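For reference, here is a small, self-contained sketch of computing NDCG@k from graded relevance labels, using toy numbers rather than anything from this post.

```python
# Minimal NDCG@k over graded relevance labels (illustrative toy data).
import math

def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k):
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True)[:k])
    return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg else 0.0

# Relevance of each retrieved result, in the order the system returned them (0 = irrelevant).
system_ranking = [3, 2, 0, 1, 2]
print(f"NDCG@5 = {ndcg_at_k(system_ranking, 5):.3f}")  # 1.0 only if the best items come first
```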

Building the Retrieval Flywheel

Effective RAG retrieval optimization is iterative:

  1. Maximize Recall – broaden coverage.
  2. Boost Precision – refine relevance.
  3. Enhance NDCG – improve ranking stability.

Continuously experiment with chunk sizes, thresholds, and rerankers. Measure, iterate, and evolve your retrieval pipeline for higher accuracy and efficiency.

RAG Retrieval Optimization Cheat Sheet

Conclusion

Optimizing retrieval in RAG systems ensures your LLM has the most relevant, high-quality grounding data.
By continuously improving recall, precision, and NDCG, you build a smarter, faster, and more reliable RAG pipeline that evolves with your data and domain.

 

Dipkumar Patel, Founding Engineer, Rapidflare

The post A Practical Guide to Recall, Precision, and NDCG appeared first on Edge AI and Vision Alliance.

Sony Pregius IMX264 vs. IMX568: A Detailed Sensor Comparison Guide
https://www.edge-ai-vision.com/2026/02/sony-pregius-imx264-vs-imx568-a-detailed-sensor-comparison-guide/
Fri, 13 Feb 2026

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

The image sensor is an important component in defining a camera’s image quality. Many real-world applications have pushed for smaller pixel sizes to increase resolution in compact form factors. To address this demand, Sony has been improving its image sensor technology across generations. Over the years, this evolution has focused on key aspects such as pixel size optimization, saturation capacity, pixel-level noise reduction, and light arrangement.

The advancements in Sony’s sensors have spanned four generations. Of these, Pregius S is the latest technology. It provides a stacked, back-illuminated sensor architecture, increased speed and sensitivity, and improved exposure control functionality relative to earlier generations.

Key Takeaways:

  • What are the IMX264 and IMX568 sensors?
  • The architectural differences between the second-generation Pregius and the fourth-generation Pregius S sensors
  • Key technologies of IMX568 over IMX264 in embedded cameras

What Are the IMX264 and IMX568 Sensors?

The IMX264 sensor was one of the industry’s first global shutter sensors with a pixel size as small as 3.45 µm x 3.45 µm when it was introduced. Based on the second generation of Sony’s “Pregius” technology, this sensor takes advantage of Sony’s Exmor technology.

The IMX568 sensor is a Sony Pregius S Generation Four sensor. The ‘S’ in Pregius S refers to stacked, indicating that the sensor has a stacked design, with the photodiode on top and the circuits on the bottom. This sensor is designed with an even smaller pixel size of 2.74 µm x 2.74 µm.

Comparison of key specifications:

Parameter | IMX264 | IMX568
Effective Resolution | ~5.07 MP | ~5.10 MP
Image Size | Diagonal 11.1 mm (Type 2/3) | Diagonal 8.8 mm (Type 1/1.8)
Architecture | Front-Illuminated | Back-Illuminated (Stacked)
Pixel Size | 3.45 µm × 3.45 µm | 2.74 µm × 2.74 µm
Sensitivity | 915 mV (monochrome), 1146 mV (color) | 8620 Digit/lx/s
Shutter Type | Global | Global
Max Frame Rate (12-bit) | ~35.7 fps | ~67 fps
Max Frame Rate (8-bit) | ~60 fps | ~96 fps
Exposure Control | Standard trigger | Short interval + multi-exposure
Output Interface | Industrial camera interfaces | MIPI CSI-2

Architectural Description: Second vs. Fourth Generation Sensors

Second-generation front-illuminated design (IMX264)
The second-generation Sony sensor uses front-illuminated technology. In a front-illuminated design, the conductive elements sit above the photodiode and intercept some of the incoming light before it reaches the light-sensitive element, so not all of the light is collected. This affects camera performance, particularly as pixels get smaller.

Fourth-generation back-illuminated design (IMX568)
The Pregius S architecture revolutionizes this design by flipping the structure. The photodiode layer is positioned on top with the conductive elements beneath it. This inverted configuration allows light to reach the photodiode directly, without obstruction. It dramatically improves light-collection efficiency and enables smaller pixel sizes without sacrificing sensitivity.

The image below provides a clearer view of the difference between front- and back-illuminated technologies.

IMX264 vs. IMX568: A Detailed Comparison

Global shutter performance
IMX264 already delivers true global shutter operation, eliminating motion distortion. However, IMX568 introduces a redesigned charge storage structure that dramatically reduces parasitic light sensitivity (PLS). This ensures that stored pixel charges are not contaminated by incoming light during readout.

This results in cleaner images, especially under high-contrast or high-illumination conditions in high-speed inspection systems.

Frame rate and throughput
The IMX568 has a frame rate that is nearly double that of the IMX264 at full resolution. This is due to faster readout circuitry and the SLVS‑EC high‑speed interface. For applications such as robotic guidance, motion tracking, and high‑speed inspection, this increased throughput directly translates into higher system accuracy and productivity.

Noise performance and image quality
Pregius S sensors offer lower read noise, reduced fixed pattern noise, and better dynamic range. IMX568 produces clear images in low‑light environments and maintains higher signal fidelity across varying exposure conditions.

Such an improvement reduces reliance on aggressive ISP noise reduction, preserving fine image details critical for machine vision algorithms.

Power consumption and thermal behavior
Despite higher operating speeds, IMX568 is more power‑efficient on a per‑frame basis. Improved charge transfer efficiency and readout design result in lower heat generation, making it ideal for compact, fanless, and always‑on camera systems.

System integration considerations
IMX264 uses traditional SLVS/LVDS interfaces and integrates well with legacy ISPs and FPGA platforms. IMX568 requires support for SLVS‑EC and higher data bandwidth. While this demands a modern processing platform, it also future‑proofs the system for higher-performance vision pipelines.

What Are the Advanced Imaging Features of the IMX568 Sensor?

Short interval shutter
IMX568 can perform short-interval shutter operation starting at 2 μs, which reduces the time between frames through register control. This allows cameras to capture images of fast-moving objects for industrial automation.

Multi-exposure trigger mode
The IMX568 allows multiple exposures within a single trigger sequence. This makes it possible to obtain several images of the same scene at differing exposure times, capturing detail in both the illuminated and dark areas of the object. This reduces dependency on complex lighting and strobe tuning.

It enables IMX568-based cameras to handle challenging lighting conditions more effectively than single-exposure sensors in vision applications such as sports analytics.

Multi-frame ROI mode
This multi-ROI sensor enables simultaneous readout of up to 64 user-defined regions from arbitrary positions on the sensor.

In the original post, an image shows how data from two ROIs is read out within a single frame, with panels for the full frame, the two selected ROIs, and the cropped ROIs; the marked areas represent the ROIs.

e-con Systems’ recently-launched e-CAM56_CUOAGX is an IMX568-based global shutter camera capable of multi-frame Region of Interest (ROI) functionality. It supports a rate of up to 1164 fps with the multi-ROI feature.

This can be very useful in real-time embedded vision use cases where only a specific region of the image matters. For example, e-CAM56_CUOAGX can be deployed in traffic surveillance applications where only vehicle motion is of interest, or in facial recognition applications where only the facial region of the subject needs to be captured in detail for superior security surveillance.
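As a generic, host-side illustration of why reading only a few regions saves so much bandwidth (this is plain numpy, not the IMX568’s actual register-level multi-ROI mechanism, and the frame size and ROI coordinates are arbitrary):

```python
# Generic host-side illustration of multi-ROI readout with numpy (not the IMX568's
# actual register interface): keeping only selected regions cuts the data volume.
import numpy as np

frame = np.zeros((2160, 2432), dtype=np.uint16)  # stand-in for a ~5 MP, 16-bit frame

rois = [                                         # (top, left, height, width), chosen arbitrarily
    (100, 200, 256, 256),                        # e.g., region around a moving car
    (900, 1500, 128, 128),                       # e.g., region around a face
]

crops = [frame[t:t + h, l:l + w] for (t, l, h, w) in rois]

full_bytes = frame.nbytes
roi_bytes = sum(c.nbytes for c in crops)
print(f"full frame: {full_bytes / 1e6:.1f} MB, ROIs only: {roi_bytes / 1e6:.3f} MB "
      f"({100 * roi_bytes / full_bytes:.1f}% of the data)")
```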

Short exposure mode
The IMX568 supports exposure times that can be very short while ensuring image stability and sensitivity at the same time. Exposure times for this mode may vary by up to ±500 ns depending on the sample and environmental conditions, as well as other factors such as temperature and voltage levels.

Dual trigger
The IMX568 enables dual trigger operation, allowing independent control of image capture timing and readout by dividing the screen into upper and lower areas.  This enables precise synchronization with external events, lighting, and strobes, and allows flexible capture workflows in complex inspection setups.
Read the article Trigger Modes Available in See3CAMs (USB 3.0 Cameras) from e-con Systems to learn more about the trigger function in USB cameras.

Gradation compression
IMX568 features gradation compression to optimize the representation of brightness levels within the output image. This preserves important image details in both bright and dark regions. With this feature, the camera can deliver more usable image data without increasing bit depth or lighting complexity.

Dual ADC
The dual-ADC architecture provides faster, more flexible signal conversion. This supports high frame rates without compromising image quality and optimizes performance across the different bit depths: 8-bit / 10-bit / 12-bit. The dual ADC operation also helps IMX568-based cameras maintain high throughput and low latency in demanding vision systems.

IMX568 Sensor-Based Cameras by e-con Systems

Since 2003, e-con Systems has been designing, developing, and manufacturing cameras. e-con Systems’ embedded cameras continue to evolve with advances in sensors to meet the growing demand for embedded vision applications.

Explore our Sony Pregius Sensor-Based Cameras.

Use our Camera Selector to check out our full portfolio.

Need help selecting the right embedded camera for your application? Talk to our experts at camerasolutions@e-consystems.com.

FAQs

  1. What is Multi-ROI in image sensors?
    Multi-ROI (Multiple Regions of Interest) allows an image sensor to crop and read out multiple, user-defined areas from different locations on the sensor within a single frame, instead of reading the full frame.
  2. Can multiple ROIs be read simultaneously in the same frame?
    Yes. Multiple ROIs can be read out simultaneously within the same frame, allowing spatially separated regions to be captured without increasing frame latency.
  3. How many ROI regions can be configured on this sensor?
    The multi-ROI image sensor supports up to 64 independent ROI areas, enabling flexible selection of multiple spatial regions based on application requirements.
  4. What are the benefits of using Multi-ROI instead of full-frame readout?
    Multi-ROI reduces data bandwidth and processing load, increases effective frame rates, and enables efficient monitoring of multiple areas of interest.
  5. Are all ROIs captured at the same time?
    Yes. All selected ROIs are captured within the same frame, ensuring consistent timing.


Chief Technology Officer and Head of Camera Products, e-con Systems

The post Sony Pregius IMX264 vs. IMX568: A Detailed Sensor Comparison Guide appeared first on Edge AI and Vision Alliance.

What Happens When the Inspection AI Fails: Learning from Production Line Mistakes
https://www.edge-ai-vision.com/2026/02/what-happens-when-the-inspection-ai-fails-learning-from-production-line-mistakes/
Thu, 12 Feb 2026

This blog post was originally published at Lincode’s website. It is reprinted here with the permission of Lincode.

Studies show that about 34% of manufacturing defects are missed because inspection systems make mistakes.[1] These numbers show a big problem—when the inspection AI misses something, even a tiny defect can spread across hundreds or thousands of products.

One small scratch, crack, or colour mismatch can lead to rework, slowdowns, customer complaints, or even product returns. And because the production line moves quickly, these mistakes can multiply before anyone notices. That’s why an inspection AI failure affects not just one product, but the entire production line.

But here’s the good part: the problem usually comes from fixable issues like poor training data, bad lighting, or camera setup problems. When manufacturers study these mistakes closely, they can upgrade the AI, improve the dataset, and build a stronger, more reliable inspection system.

This blog explains what happens when inspection AI fails, and how these failures can actually help companies build a smarter, more accurate quality control process.

What is Inspection AI Failure?

Inspection AI failure happens when an AI system designed to spot defects in products misses, mislabels, or incorrectly flags issues. This can occur due to poor training data, changes in product appearance, lighting problems, or limitations in the AI model itself.

Such failures lead to missed defects, false alarms, and reduced confidence in automated quality checks, affecting production efficiency and product quality. DeepVision (a company working on AI vision) claims that with AI visual inspection, defect “escape rates” in some manufacturing lines dropped by as much as 83%.[2]

Why Do Visual Inspection Systems Miss Defects?

Visual inspection systems miss defects for several reasons. Sometimes, the AI isn’t trained on enough examples of real-world defects, so it doesn’t recognize unusual scratches, cracks, or color changes.

Other times, the lighting, camera angles, or image quality make it hard for the system to see small imperfections clearly. Even minor changes in product shape or texture can confuse the AI, leading to missed defects.

Another common reason is a lack of proper visual inspection error analysis. Without reviewing mistakes and understanding why the AI failed, the same errors can keep happening.

By analyzing these errors carefully, manufacturers can improve training data, adjust cameras and lighting, and fine-tune the AI model to catch more defects and reduce costly mistakes on the production line.

Real-World Impact of AI Defect Detection Failures

AI defect detection failures don’t just affect machines; they impact the entire production chain, from efficiency to customer trust.

1. Production Delays and Increased Costs

When AI defect detection misses problems, products often need rework or replacement, slowing down the production line. For example, Foxconn, a major electronics manufacturer, faced delays when their AI inspection system missed minor defects in smartphone assembly, causing additional labor and wasted components.

Similarly, Toyota reported production slowdowns in certain plants when AI visual inspection failed to catch paint imperfections, leading to costly rework and delayed deliveries.

2. Customer Dissatisfaction and Brand Damage

Defective products reaching customers can hurt a company’s reputation. Samsung once had to recall devices due to overlooked micro-defects in components, showing how AI inspection failure can impact customer trust.

Nike also faced quality complaints when automated inspection missed stitching errors in footwear. These cases highlight why reliable AI defect detection and thorough visual inspection error analysis are critical to prevent defects from reaching customers and protect brand reputation.

Ultimately, addressing AI defect detection failures through careful error analysis and improved models helps manufacturers save costs, maintain efficiency, and keep customers satisfied.

Common Causes Behind Production Line Mistakes

Understanding inspection AI failure starts with knowing why mistakes happen on the production line.

  1. Poor Training Data – AI models may miss defects if they haven’t seen enough examples during training.
  2. Changes in Product Appearance – Variations in color, shape, or texture can confuse the AI.
  3. Lighting or Camera Issues – Poor lighting, glare, or misaligned cameras can hide defects from the system.
  4. Outdated AI Models – Models not retrained for new products or updated production conditions can fail.
  5. Lack of Error Analysis – Without reviewing AI mistakes through visual inspection error analysis, recurring defects go unnoticed.

By solving these causes, manufacturers can reduce errors and improve overall production quality.

5 Easy Steps to Conduct Effective Visual Inspection Error Analysis

Performing visual inspection error analysis helps identify why AI missed defects and improves overall accuracy. Here are five simple steps:

Step 1: Collect Failed Samples – Gather images or products where the AI missed defects or gave false positives. This creates a clear starting point for analysis.

Step 2: Compare with Training Data – Check if the AI has seen similar defects before. Missing examples in the training set often cause errors.

Step 3: Check Image Quality – Review lighting, camera angles, resolution, and focus. Poor image conditions can hide defects from the system.

Step 4: Analyze Model Confidence – Look at confidence scores or outputs from the AI. Low confidence often points to areas where the model struggles.

Step 5: Document and Retrain – Record all errors and their causes, then retrain the AI with new examples to reduce future inspection AI failures.

This step-by-step process ensures errors are understood, fixed, and less likely to repeat, making your AI defect detection more reliable.
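As one possible way to operationalize Steps 1, 4 and 5, here is an illustrative sketch (with made-up inspection records, not Lincode’s tooling) that groups failed inspections by defect type and flags low-confidence categories as candidates for new training data.

```python
# Illustrative error-analysis pass over failed inspections (hypothetical records).
# Groups misses by defect type and flags low-confidence categories for retraining.
from collections import defaultdict

failed_inspections = [  # stand-ins for images the AI missed or misjudged
    {"defect": "scratch", "confidence": 0.42, "outcome": "missed"},
    {"defect": "scratch", "confidence": 0.38, "outcome": "missed"},
    {"defect": "crack",   "confidence": 0.71, "outcome": "false_positive"},
    {"defect": "color",   "confidence": 0.29, "outcome": "missed"},
]

by_defect = defaultdict(list)
for record in failed_inspections:
    by_defect[record["defect"]].append(record["confidence"])

for defect, confidences in by_defect.items():
    avg = sum(confidences) / len(confidences)
    action = "collect more training examples" if avg < 0.5 else "review threshold/labels"
    print(f"{defect}: {len(confidences)} failures, mean confidence {avg:.2f} -> {action}")
```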

Learning From Failures: Fixing the Root Cause of AI Mistakes

Learning from inspection AI failure is not about blaming the system; it’s about understanding why mistakes happen and preventing them in the future. Here’s how manufacturers can approach it effectively:

1. Identify the Exact Error

Start by pinpointing what went wrong. Was it a missed defect, a false positive, or a misclassification? Breaking down errors into clear categories makes it easier to address the root cause.

2. Investigate the Cause

Look into the source of the error:

  • Was the AI model trained on enough defect examples?
  • Did changes in product design or material confuse the system?
  • Were environmental factors like lighting, vibration, or camera setup involved?

3. Improve Data Quality

Many failures occur because the AI hasn’t seen enough diverse defect examples. Collect new images or product samples representing edge cases, rare defects, or variations, and add them to the training dataset.

4. Update and Retrain the AI Model

After enhancing the data, retrain the AI. Fine-tune parameters and test against real production scenarios. Continuous retraining ensures the AI adapts to evolving products and production conditions.

5. Monitor and Review Continuously

Even after fixes, monitor the AI’s performance regularly. Conduct periodic visual inspection error analysis to catch new failure patterns early and maintain high-quality standards.

By following these steps, companies turn AI mistakes into actionable insights, reducing inspection AI failure and improving overall production efficiency.

Preventing Future Failures: Building a More Accurate, Reliable Inspection AI

Preventing inspection AI failure starts with creating a system that learns and adapts continuously. By using diverse and high-quality training data, improving camera setups and lighting, and retraining models regularly, manufacturers can catch even rare or subtle defects.

Adding human checks for unusual cases and monitoring AI performance in real-time further reduces errors. The goal is to build an AI-based quality inspection system that is not only fast but also consistent and dependable, keeping production smooth and products defect-free.

Why Choosing the Right AI-Based Quality Control Partner Matters

Selecting the right partner can make a huge difference in reducing inspection AI failure. Here are three key reasons:

1. Expertise in AI and Machine Vision

A skilled partner knows how to train, fine-tune, and deploy AI defect detection systems that work reliably in real production conditions.

AI-powered defect detection systems typically achieve 95‑99% accuracy, compared to just 60–90% in manual inspections.[3]

2. Customized Solutions for Your Production

Every production line is different. The right partner designs AI inspection workflows tailored to your products, lighting, cameras, and quality standards.

AI-driven QC can reduce defect rates by 20–50%, depending on the implementation.[4]

3. Continuous Support and Improvement

Reliable partners offer ongoing monitoring, retraining, and error analysis, ensuring the AI keeps improving and defects are caught before they reach customers.

In real-world deployments, AI inspection systems have reduced production‑line defects by up to 30% through continuous learning and anomaly detection.[5]

Choosing the right partner not only improves accuracy but also helps prevent costly inspection AI failure, keeping your production line efficient and your products defect-free.

Why Lincode Stands Out as Visual Inspection AI

When it comes to reliable AI defect detection, Lincode sets itself apart with a combination of advanced technology and practical design. Here’s why it’s trusted by manufacturers worldwide:

Key Reasons Lincode Excels

  • High Accuracy Detection – Lincode’s AI models detect defects with over 98% accuracy, catching even the smallest scratches, cracks, or misalignments.
  • Easy Integration – It can be integrated into existing production lines in less than 48 hours, reducing downtime and implementation costs.
  • Real-Time Monitoring – The system provides instant alerts and detailed reports, enabling teams to resolve issues up to 3x faster than traditional inspection methods.
  • Continuous Learning – Lincode adapts to new products and defect types through ongoing retraining, improving defect detection rates by 15–20% within the first few months.

In short, Lincode doesn’t just detect defects; it helps companies prevent costly mistakes, improve production efficiency, and reduce inspection AI failure, keeping product quality consistently high.

FAQ

1. What is the main reason for inspection AI failure?
The main reason is usually a lack of diverse training data or changes in product design that the AI wasn’t trained to recognize. Environmental factors like poor lighting or misaligned cameras can also cause failures.

2. How often should visual inspection error analysis be conducted?
It’s best to review errors regularly, ideally once a month or after introducing a new product, to catch recurring mistakes and improve AI accuracy.

3. Can AI defect detection replace human inspection completely?
While AI can catch most defects, combining it with human checks ensures rare or unusual defects are not missed. A human-in-the-loop approach reduces inspection AI failure significantly.

4. How does retraining the AI improve defect detection?
Retraining with new defect examples and updated production data helps the AI learn from past mistakes, improving detection accuracy and reducing future failures.

5. What industries benefit most from inspection AI?
Industries like electronics, automotive, pharmaceuticals, food packaging, and consumer goods see the biggest gains because even small defects can cause costly rework or quality issues.

Bibliography:

[1] Micromachines, journal article, 27 February 2023.
[2] AI.Business, case-study article, 1 May 2024.
[3] Dhīmahi Technolabs, blog post/insight, 2025.
[4] International Journal of Intelligent Systems and Applications in Engineering, journal article, 2024.
[5] International Journal of Scientific Research and Management, journal article, October 2024.

The post What Happens When the Inspection AI Fails: Learning from Production Line Mistakes appeared first on Edge AI and Vision Alliance.

What’s New in MIPI Security: MIPI CCISE and Security for Debug
https://www.edge-ai-vision.com/2026/02/whats-new-in-mipi-security-mipi-ccise-and-security-for-debug/
Wed, 11 Feb 2026

The post What’s New in MIPI Security: MIPI CCISE and Security for Debug appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at MIPI Alliance’s website. It is reprinted here with the permission of MIPI Alliance.

As the need for security becomes increasingly critical, MIPI Alliance has continued to broaden its portfolio of standardized solutions, adding two more specifications in late 2025, and continuing work on significant updates to the MIPI Camera Security Framework specifications slated for completion in mid-2026.

Read on to learn more about the newly released specifications and what lies ahead for the MIPI Camera Security Framework.

MIPI CCISE: Protecting Camera Command and Control Interfaces

The new MIPI Command and Control Interface Service Extensions (MIPI CCISE™) v1.0, released in December 2025, defines a set of security service extensions that can apply data integrity protection and optional encryption to the MIPI CSI-2® camera control interface based on the I2C transport interface. The protection is provided end-to-end between the image sensor and its associated SoC or electronic control unit (ECU).
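
As a rough illustration of what “data integrity protection and optional encryption” on a control channel means in practice, the sketch below uses AES-GCM from the Python cryptography package to wrap a hypothetical sensor register write in an authenticated, optionally encrypted message. This is not the MIPI CCISE wire format or key hierarchy, only the underlying authenticated-encryption idea; the register layout, session-key handling and nonce scheme shown are assumptions for the example.

```python
# Illustrative only: authenticated encryption of a camera control message.
# Not the MIPI CCISE message format; register layout, nonce handling and key
# management are simplified assumptions for the example.
import os
import struct
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

session_key = AESGCM.generate_key(bit_length=128)  # in practice derived during session setup
aead = AESGCM(session_key)

# A hypothetical I2C register write: 16-bit register address, 8-bit value.
command = struct.pack(">HB", 0x0100, 0x01)   # e.g. "start streaming"
header = b"\x01\x00\x00\x05"                 # plaintext header bound in as associated data

nonce = os.urandom(12)  # must never repeat for the same key
protected = aead.encrypt(nonce, command, header)  # ciphertext plus 16-byte integrity tag

# Receiver side: any tampering with the header, nonce or payload raises InvalidTag.
recovered = aead.decrypt(nonce, protected, header)
assert recovered == command
```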

MIPI CCISE rounds out the existing MIPI Camera Security Framework, which includes MIPI Camera Security v1.0, MIPI Camera Security Profiles v1.0 and MIPI Camera Service Extensions (MIPI CSE™) v2.0. Together, the specifications define a flexible approach to add end-to-end security to image sensor applications that leverage MIPI CSI-2, enabling authentication of image system components, data integrity protection, optional data encryption, and protection of image sensor command and control channels. The specifications provide implementers with a choice of protocols, cryptographic algorithms, integrity tag modes and security protection levels to offer a solution that is uniquely effective in both its security extent and implementation flexibility.

Use of MIPI camera security specifications enables an automotive system to fulfill advanced driver-assistance systems (ADAS) safety goals up to ASIL D level (per ISO 26262:2018) and supports functional safety and security mechanisms, including end-to-end protection as recommended for high diagnostic coverage of the data communication bus.

While the initial focus of the camera security framework was on securing long-reach, wired in-vehicle network connections between CSI-2 based image sensors and their related processing ECUs, the specifications are also highly relevant to non-automotive machine vision applications that leverage CSI-2-based image sensors.

A downloadable white paper, A Guide to the MIPI Camera Security Framework for Automotive Applications, provides a detailed explanation of how these specifications work together to provide application layer end-to-end data protection.

MIPI Security Specification for Debug: Enabling Remote Debug of Systems in the Field

The recently adopted MIPI Security Specification for Debug defines a standardized method for establishing secure, authenticated debug sessions between a debug and test system and a target system.

Designed to enable remote debugging in potentially hostile real-world locations outside of a test lab, the specification allows secure remote debugging of production devices without relying solely on traditional physical protections such as buried traces or restricted access to debug ports. Instead, it introduces a trusted, cryptographically protected communication path that spans end-to-end, from the physical debug tool to the target device’s package pins, through all connectors, cabling, routing and bridges.

The new specification adds a secure messaging layer to the existing MIPI debug architecture, wrapping debug traffic in encrypted, authenticated messages while remaining interface-agnostic. Core components include a secure communications manager that is responsible for security protocol, data model processing and key generation; cryptographic message-protection functions; and secure communication management paths. To accomplish this, the specification leverages the DMTF Security Protocol and Data Model (SPDM) industry standard for platform security.

This approach ensures authenticity, confidentiality and integrity for all debug communications, regardless of the underlying transport interface, whether MIPI I3C®, USB, PCIe or others. Debugger behavior remains consistent across interfaces, simplifying implementation and validation.
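
The “interface-agnostic” point can be pictured as a secure messaging layer sitting on top of whatever byte transport is available. The sketch below is a simplified illustration of that layering, reusing AES-GCM and a sequence counter over an arbitrary send/receive callback pair; it is not the SPDM protocol or the specification’s actual message structure, and the framing shown is an assumption.

```python
# Illustrative only: a secure messaging layer wrapped around an arbitrary transport.
# Framing, counters and key handling are simplified assumptions, not the SPDM or
# MIPI Security Specification for Debug message formats.
import struct
from typing import Callable
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class SecureDebugChannel:
    def __init__(self, key: bytes, send: Callable[[bytes], None], recv: Callable[[], bytes]):
        self._aead = AESGCM(key)  # session key agreed during an authenticated handshake
        self._send, self._recv = send, recv
        self._tx_seq = 0          # monotonic counter doubles as the per-message nonce

    def send_debug(self, payload: bytes) -> None:
        nonce = self._tx_seq.to_bytes(12, "big")
        frame = struct.pack(">Q", self._tx_seq) + self._aead.encrypt(nonce, payload, b"dbg")
        self._tx_seq += 1
        self._send(frame)         # same code path whether the transport is I3C, USB or PCIe

    def recv_debug(self) -> bytes:
        frame = self._recv()
        seq, body = struct.unpack(">Q", frame[:8])[0], frame[8:]
        return self._aead.decrypt(seq.to_bytes(12, "big"), body, b"dbg")

# Usage over a trivial in-memory "transport":
buf = []
key = AESGCM.generate_key(bit_length=256)
chan = SecureDebugChannel(key, send=buf.append, recv=buf.pop)
chan.send_debug(b"READ 0x4000_0000")
print(chan.recv_debug())  # b"READ 0x4000_0000"
```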

The specification complements the broader MIPI debug ecosystem.

 

Coming in 2026: New “Fast Boot” Options for MIPI Camera Security

Enhancements to the suite of MIPI camera security specifications are being developed to enable faster boot times for imaging systems, minimizing the time taken from power-on to streaming of secure video data.

These enhancements will continue to leverage the DMTF SPDM framework and message formats, but will introduce an optional new security mode that will halve the number of security handshake operations required to establish a secure video streaming channel, compared with currently defined security modes. Image sensors will be able to implement both current and new modes of operation to provide backward compatibility, and SoCs may only require software updates to implement the new mode of operation.

Both the MIPI Camera Security and the MIPI Camera Security Profiles specifications are scheduled to be updated to v1.1 in mid-2026. However, the companion specifications that will fully enable the enhancements, MIPI CSE v2.1 and the new CSE Exchange Format (EF) v1.0, will follow later this year.

All security specifications are currently available only to MIPI Alliance members.

 

Ian Smith
MIPI Alliance Technical Content Consultant

The post What’s New in MIPI Security: MIPI CCISE and Security for Debug appeared first on Edge AI and Vision Alliance.

]]>
Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit https://www.edge-ai-vision.com/2026/02/accelerating-next-generation-automotive-designs-with-the-tda5-virtualizer-development-kit/ Tue, 10 Feb 2026 09:00:45 +0000 https://www.edge-ai-vision.com/?p=56795 This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments. Introduction Continuous innovation in high-performance, power-efficient systems-on-a-chip (SoCs) is enabling safer, smarter and more autonomous driving experiences in even more vehicles. As another big step forward, Texas Instruments and Synopsys developed a Virtualizer Development Kit™ (VDK) for the […]

The post Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Texas Instruments’ website. It is reprinted here with the permission of Texas Instruments.

Introduction

Continuous innovation in high-performance, power-efficient systems-on-a-chip (SoCs) is enabling safer, smarter and more autonomous driving experiences in even more vehicles.

As another big step forward, Texas Instruments and Synopsys developed a Virtualizer Development Kit™ (VDK) for the TDA5 high-performance compute SoC family, which includes the TDA54-Q1. The TDA5 VDK enables developers to evaluate, develop and test devices in the TDA5 family ahead of initial silicon samples, providing a seamless development cycle with one software development kit (SDK) for both physical and virtual SoCs. Each device in the TDA5 family has a corresponding VDK to enable a common virtualization design and consistent user experience.

Along with the VDK, TI and Synopsys are providing additional components to create the full virtual development environment. Figure 1 provides an overview of available resources, which include:

  • The virtual prototype, which is the simulated model of a TDA5 SoC.
  • Deployment services from Synopsys, which are add-ons and interfaces that enable developers to integrate the VDK with other virtual components or tools.
  • Documentation for the TDA5 and the TDA54-Q1 software development kit.
  • Reference software examples for each TDA5 VDK and SDK to help developers get started.

Figure 1 Block diagram showing components provided by TI and Synopsys to get started with development on the VDK.

Why virtualization matters

Virtualization designs greatly reduce automotive development cycles by enabling software development without physical hardware. This allows developers to accelerate or “shift-left” development by starting software development earlier and then migrating to physical hardware once available (as shown in Figure 2). Additionally, earlier software development extends to ecosystem partners, enabling key third-party software components to be available earlier.

Figure 2 Visualization of how software can be migrated from VDK to SoC.

Accelerating development with virtualization

The TDA5 VDK helps software developers work more effectively and efficiently, allowing them to use software-in-the-loop testing, so they can test and validate virtually without needing costly on-the-road testing.

Developers can use the TDA5 VDK to enhance debugging capabilities with deeper insights into internal device operations than what is typically exposed through the physical SoC pins. The TDA5 VDK also provides fault injection capabilities, enabling developers to simulate failures inside the device to get better information on how the software behaves when something goes wrong.
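
As an illustration of how fault injection fits into a software-in-the-loop workflow, here is a hedged pytest-style sketch. The virtual_soc fixture, inject_fault() call and Watchdog class are hypothetical placeholders standing in for whatever interface a virtual prototype such as the TDA5 VDK exposes; they are not actual TI or Synopsys APIs.

```python
# Illustrative only: a software-in-the-loop fault-injection test.
# virtual_soc, inject_fault() and Watchdog are hypothetical placeholders for a
# virtual prototype's interface, not actual TI or Synopsys VDK APIs.
import pytest

class Watchdog:
    """Tiny stand-in for the safety logic under test."""
    def __init__(self):
        self.safe_state = False
    def on_fault(self, fault: str):
        if fault == "ecc_double_bit_error":
            self.safe_state = True  # expected reaction: fall back to a safe state

class FakeVirtualSoc:
    """Stand-in for a simulated SoC that can report injected faults to software."""
    def __init__(self):
        self.watchdog = Watchdog()
    def inject_fault(self, fault: str):
        self.watchdog.on_fault(fault)

@pytest.fixture
def virtual_soc():
    return FakeVirtualSoc()

def test_memory_fault_triggers_safe_state(virtual_soc):
    # Simulate a failure inside the device that physical hardware cannot easily produce.
    virtual_soc.inject_fault("ecc_double_bit_error")
    assert virtual_soc.watchdog.safe_state
```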

Scalability of virtualization

Scalability is another key benefit of the TDA5 VDK because virtualization platforms don’t require shipping, allowing development teams to ramp faster and be more responsive with resource allocation for ongoing projects. The TDA5 VDK also enables automated test environments, since development teams can replace traditional “board farms” with virtual environments running on remote computers. This helps automakers streamline continuous integration/continuous deployment (CI/CD) workflows to accomplish testing more efficiently and effectively.

Since the TDA5 VDK is also available for future TDA5 SoCs, developers can scale work across multiple projects. If a developer is using the VDK for a specific TDA5 device (for example, TDA54), they can explore other products in the TDA5 family in a virtual environment without needing to change hardware configurations.

System integration

Virtualization designs such as the TDA5 VDK serve as the foundation for developers to build complete digital twins for their designs. By virtualizing the SoC, it can be integrated with other virtual components and tools to create larger simulated systems such as full ECU networks. Figure 3 shows how developers can leverage the capabilities of the Synopsys platform to integrate the VDK with other virtual components and simulate complete designs.


Figure 3 Diagram showing how the VDK can integrate with other virtual components and simulate complete designs.

 

Digital environment simulation tools can also be integrated with the TDA5 VDK to enable virtual testing in simulated driving scenarios, allowing developers to quickly perform reproducible testing. The TDA5 VDK also allows developers to leverage the broad ecosystem of tools and partners from Synopsys to get the most out of their virtual development experience.

Getting started with the TDA54 VDK

The TDA54 SDK is now available on TI.com to help engineers get started with the TDA54 virtual development kit. The TDA54-Q1 SoC, the first device in the TDA5 family, will begin sampling to select automotive customers by the end of 2026. Contact TI for more information about the TDA5 VDK and how to get started.

The post Accelerating next-generation automotive designs with the TDA5 Virtualizer™ Development Kit appeared first on Edge AI and Vision Alliance.

]]>
Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems https://www.edge-ai-vision.com/2026/02/into-the-omniverse-openusd-and-nvidia-halos-accelerate-safety-for-robotaxis-physical-ai-systems/ Mon, 09 Feb 2026 09:00:59 +0000 https://www.edge-ai-vision.com/?p=56608 This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse. New NVIDIA safety […]

The post Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

NVIDIA Editor’s note: This post is part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advancements in OpenUSD and NVIDIA Omniverse.

New NVIDIA safety frameworks and technologies are advancing how developers build safe physical AI.

Physical AI is moving from research labs into the real world, powering intelligent robots and autonomous vehicles (AVs) — such as robotaxis — that must reliably sense, reason and act amid unpredictable conditions.

To safely scale these systems, developers need workflows that connect real-world data, high-fidelity simulation and robust AI models atop the common foundation provided by the OpenUSD framework.

With the recently published OpenUSD Core Specification 1.0, OpenUSD (aka Universal Scene Description) now defines standard data types, file formats and composition behaviors, giving developers predictable, interoperable USD pipelines as they scale autonomous systems.

Powered by OpenUSD, NVIDIA Omniverse libraries combine NVIDIA RTX rendering, physics simulation and efficient runtimes to create digital twins and simulation-ready (SimReady) assets that accurately reflect real-world environments for synthetic data generation and testing.

NVIDIA Cosmos world foundation models can run on top of these simulations to amplify data variation, generating new weather, lighting and terrain conditions from the same scenes so teams can safely cover rare and challenging edge cases.

 

In addition, advancements in synthetic data generation, multimodal datasets and SimReady workflows are now converging with the NVIDIA Halos framework for AV safety, creating a standards-based path to safer, faster, more cost-effective deployment of next-generation autonomous machines.

Building the Foundation for Safe Physical AI

Open Standards and SimReady Assets

The OpenUSD Core Specification 1.0 establishes the standard data models and behaviors that underpin SimReady assets, enabling developers to build interoperable simulation pipelines for AI factories and robotics on OpenUSD.

Built on this foundation, SimReady 3D assets can be reused across tools and teams and loaded directly into NVIDIA Isaac Sim, where USDPhysics colliders, rigid body dynamics and composition-arc–based variants let teams test robots in virtual facilities that closely mirror real operations.
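
For readers new to these schemas, the following minimal sketch shows what “USDPhysics colliders, rigid body dynamics and composition-arc–based variants” look like in the OpenUSD Python API (pxr). It assumes the OpenUSD Python bindings are installed (for example via usd-core); the asset path, mass value and variant names are illustrative, not part of any specific SimReady library.

```python
# Minimal sketch of a SimReady-style asset: rigid body + collider + a variant set.
# Requires the OpenUSD Python bindings (e.g. `pip install usd-core`); paths, mass
# and variant names are illustrative only.
from pxr import Usd, UsdGeom, UsdPhysics

stage = Usd.Stage.CreateNew("forklift_asset.usda")
root = UsdGeom.Xform.Define(stage, "/Forklift")
body = UsdGeom.Cube.Define(stage, "/Forklift/Chassis")

# Physics schemas: make the prim a dynamic rigid body with a collision shape and mass.
UsdPhysics.RigidBodyAPI.Apply(body.GetPrim())
UsdPhysics.CollisionAPI.Apply(body.GetPrim())
UsdPhysics.MassAPI.Apply(body.GetPrim()).CreateMassAttr(850.0)  # kilograms

# A composition-arc-based variant set, e.g. to switch level of detail per consumer.
vset = root.GetPrim().GetVariantSets().AddVariantSet("lod")
for name in ("high", "proxy"):
    vset.AddVariant(name)
vset.SetVariantSelection("proxy")

stage.GetRootLayer().Save()
```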

Open-Source Learning 

The Learn OpenUSD curriculum is now open source and available on GitHub, enabling contributors to localize and adapt templates, exercises and content for different audiences, languages and use cases. This gives educators a ready-made foundation to onboard new teams into OpenUSD-centric simulation workflows.​

Generative Worlds as Safety Multiplier

Gaussian splatting — a technique that uses editable 3D elements to render environments quickly and with high fidelity — and world models are accelerating simulation pipelines for safe robotics testing and validation.

At SIGGRAPH Asia, the NVIDIA Research team introduced Play4D, a streaming pipeline that enables 4D Gaussian splatting to accurately render dynamic scenes and improve realism.

Spatial intelligence company World Labs is using its Marble generative world model with NVIDIA Isaac Sim and Omniverse NuRec so researchers can turn text prompts and sample images into photorealistic, Gaussian-based physics-ready 3D environments in hours instead of weeks.

Those worlds can then be used for physical AI training, testing and sim-to-real transfer. This high-fidelity simulation workflow expands the range of scenarios robots can practice in while keeping experimentation safely in simulation.

Lightwheel Helps Teams Scale Robot Training With SimReady Assets

Powered by OpenUSD, Lightwheel’s SimReady asset library includes a common scene description layer, making it easy to assemble high-fidelity digital twins for robots. The SimReady assets are embedded with precise geometry, materials and validated physical properties, which can be loaded directly into NVIDIA Isaac Sim and Isaac Lab for robot training. This allows robots to experience realistic contacts, dynamics and sensor feedback as they learn.

End-to-End Autonomous Vehicle Safety

End-to-end autonomous vehicle safety advancements are accelerating with new research, open frameworks and inspection services that make validation more rigorous and scalable.

NVIDIA researchers, with collaborators at Harvard University and Stanford University, recently introduced the Sim2Val framework to statistically combine real-world and simulated test results, reducing AV developers’ need for costly physical mileage while demonstrating how robotaxis and AVs can behave safely across rare and safety-critical scenarios.
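
The general idea of statistically combining real-world and simulated results can be pictured with a simple precision-weighted (inverse-variance) pooling of two failure-rate estimates. This is only a textbook-style illustration of the concept, not the Sim2Val framework’s actual methodology or assumptions; the event counts and mileages below are made up.

```python
# Illustrative only: inverse-variance pooling of a failure-rate estimate from
# real-world driving and one from simulation. A generic statistics example,
# not the Sim2Val framework's actual method. Assumes the simulated estimate is
# an unbiased proxy for the real-world rate.

def rate_and_variance(events: int, miles: float):
    """Per-mile event rate and the variance of that estimate (Poisson assumption)."""
    rate = events / miles
    return rate, rate / miles if events else 1.0 / miles**2

real_rate, real_var = rate_and_variance(events=3, miles=120_000)    # costly physical miles
sim_rate, sim_var = rate_and_variance(events=40, miles=2_000_000)   # cheap simulated miles

# Weight each estimate by its precision (1/variance), then normalize.
w_real, w_sim = 1.0 / real_var, 1.0 / sim_var
combined = (w_real * real_rate + w_sim * sim_rate) / (w_real + w_sim)
combined_var = 1.0 / (w_real + w_sim)

print(f"real: {real_rate:.2e}/mile, sim: {sim_rate:.2e}/mile, pooled: {combined:.2e}/mile")
print(f"pooled std error: {combined_var ** 0.5:.2e}")
```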

Learn more by watching NVIDIA’s “Safety in the Loop” livestream:

 

These innovations are complemented by a new, open-source NVIDIA Omniverse NuRec Fixer, a Cosmos-based model trained on AV data that removes artifacts in neural reconstructions to produce higher-quality SimReady assets.

To align these advances with rigorous global standards, the NVIDIA Halos AI Systems Inspection Lab — accredited by ANAB — provides impartial inspection and certification of Halos elements across robotaxi fleets, AV stacks, sensors and manufacturer platforms through the Halos Certification Program.

AV Ecosystem Leaders Putting Physical AI Safety to Work

Bosch, Nuro and Wayve are among the first participants in the NVIDIA Halos AI Systems Inspection Lab, which aims to accelerate the safe, large-scale deployment of robotaxi fleets. Onsemi, which makes sensor systems for AVs, industrial automation and medical applications, has recently become the first company to pass inspection for the NVIDIA Halos AI Systems Inspection Lab.

 

The open-source CARLA simulator integrates NVIDIA NuRec and Cosmos Transfer to generate reconstructed drives and diverse scenario variations, while Voxel51’s FiftyOne engine, linked to Cosmos Dataset Search, NuRec and Cosmos Transfer, helps teams curate, annotate and evaluate multimodal datasets across the AV pipeline.​

 

Mcity at the University of Michigan is enhancing the digital twin of its 32-acre AV test facility using Omniverse libraries and technologies. The team is integrating the NVIDIA Blueprint for AV simulation and Omniverse Sensor RTX application programming interfaces to create physics-based models of camera, lidar, radar and ultrasonic sensors.

By aligning real sensor recordings with high-fidelity simulated data and sharing assets openly, Mcity enables safe, repeatable testing of rare and hazardous driving scenarios before vehicles operate on public roads.

Get Plugged Into the World of OpenUSD and Physical AI Safety

Learn more about OpenUSD, NVIDIA Halos and physical AI safety by exploring these resources:

 

Katie Washabaugh, Product Marketing Manager for Autonomous Vehicle Simulation, NVIDIA

The post Into the Omniverse: OpenUSD and NVIDIA Halos Accelerate Safety for Robotaxis, Physical AI Systems appeared first on Edge AI and Vision Alliance.

]]>