Industrial Vision (Computer Vision) - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/applications/industrial-vision-computer-vision/
Designing machines that perceive and understand.

The Forest Listener: Where edge AI meets the wild
https://www.edge-ai-vision.com/2026/02/the-forest-listener-where-edge-ai-meets-the-wild/
Mon, 23 Feb 2026

This blog post was originally published at Micron’s website. It is reprinted here with the permission of Micron.

Let’s first discuss the power of enabling. Enabling a wide electronic ecosystem is essential for fostering innovation, scalability and resilience across industries. By supporting diverse hardware, software and connectivity standards, organizations can accelerate product development, reduce costs and enhance user experiences. A broad ecosystem encourages collaboration among manufacturers, developers and service providers, helping to drive interoperability. Enabling an ecosystem for your customers adds value to your product in any market, but in a market that spans many applications it’s paramount: it lets your customers get to market quickly. Micron maintains a diverse set of ecosystem partners across broad application areas such as microprocessors, including STMicroelectronics (STM). We have collaborated with STM for years, matching our memory solutions to their products. Ultimately, these partnerships empower our mutual businesses to deliver smarter, more connected solutions that meet the evolving needs of consumers and enterprises alike.

The platform and the kit

There’s something uniquely satisfying about peeling back the anti-static bag and revealing the STM32MP257F-DK dev board brimming with potential. As an embedded developer, I am excited when new silicon lands on my desk, especially when it promises to redefine what’s possible at the edge. The STM32MP257F-DK from STMicroelectronics is one of those launches that truly innovates. The STM32MP257F-DK Discovery Kit is a compact, developer-friendly platform designed to bring edge AI to life. And in my case, to the forest. It became the heart of one of my most exciting projects yet: the Forest Listener, a solar-powered, AI-enabled bird-watching companion that blends embedded engineering with natural exploration.

A new kind of birdwatcher

After a few weeks of development and testing, my daughter and I headed into the woods just after sunrise — as usual, binoculars around our necks, a thermos of tea in the backpack and a quiet excitement in the air. But this time, we brought along a new companion. The Forest Listener is a smart birdwatcher, an AI-powered system that sees and hears the forest just like we do. Using a lightweight model trained with STM32’s model zoo, it identifies bird species on the spot. No cloud, no latency, just real-time inference at the edge. My daughter mounts the device on a tripod, connects the camera and powers it on. The screen lights up. It’s ready! Suddenly, a bird flutters into view. The camera captures the moment. Within milliseconds, the 1.35 TOPS neural processing unit (NPU) kicks in, optimized for object detection. The Cortex-A35 logs the sighting (image, species, timestamp), while the Cortex-M33 manages sensors and power. My daughter, watching on a connected tablet, lights up: “Look, Dad! It found another one!” A Eurasian jay, this time.
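
For readers curious what that capture, infer, and log loop looks like in code, here is a minimal sketch. It is not Micron’s or ST’s actual firmware: the files bird_classifier.tflite and labels.txt and the 0.7 confidence threshold are illustrative assumptions, and a real STM32 model-zoo deployment would run a quantized model on the NPU rather than a float TFLite interpreter on the CPU.

```python
# Minimal sketch of a capture -> infer -> log loop (illustrative only).
# "bird_classifier.tflite" and "labels.txt" are hypothetical artifacts.
import csv
import time

import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="bird_classifier.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]

cam = cv2.VideoCapture(0)  # camera exposed as /dev/video0
with open("sightings.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        ok, frame = cam.read()
        if not ok:
            continue
        h, w = inp["shape"][1:3]  # model's expected input size
        rgb = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
        x = rgb.astype(np.float32)[None] / 255.0  # assumes a float32 model
        interpreter.set_tensor(inp["index"], x)
        interpreter.invoke()
        scores = interpreter.get_tensor(out["index"])[0]
        best = int(np.argmax(scores))
        if scores[best] > 0.7:  # log only confident detections
            ts = time.strftime("%Y-%m-%dT%H:%M:%S")
            cv2.imwrite(f"{ts}_{labels[best]}.jpg", frame)
            writer.writerow([ts, labels[best], float(scores[best])])
```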

Built for the edge … and the outdoors

Later, at home, we scroll through the logs saved on the memory card. The system can also upload sightings via Ethernet. She’s now learning names, songs and patterns. It’s a beautiful bridge between nature and curiosity. At the core of this seamless experience is Micron LPDDR4 memory. It delivers the high bandwidth needed for AI inference and multimedia processing, while maintaining ultra-low power consumption, critical for our solar-powered setup. Performance is only part of the story: what truly sets Micron LPDDR4 apart is its long-term reliability and support. Validated by STM for use with the STM32MP257F-DK, this memory is manufactured at Micron’s dedicated longevity fab, ensuring a more stable, multiyear supply chain. That’s a game-changer for developers building solutions that need to last — not just in home appliances, but in harsh field environments. Whether you’re deploying an AI app in remote forests, industrial plants or smart homes, you need components that are not only fast and efficient but also built to endure. Micron LPDDR4 is engineered to meet the stringent requirements of embedded and industrial markets, with a commitment to support and availability that gives manufacturers peace of mind.

Beyond bird-watching

The Forest Listener is just one example of what the STM32MP257F-DK and Micron LPDDR4 can enable. In factories, the same edge-AI capabilities can monitor machines, detect anomalies, and reduce downtime. In smart homes, they can power face recognition, voice control and energy monitoring — making homes more intelligent, responsive and private, all without relying on the cloud.

For more information about Micron solutions that are enabling AI at the edge, visit micron.com and check out our industrial solutions and LPDDR4/4X product insights.

Donato Bianco, Senior Ecosystem Enablement Manager, Micron Technology

What Happens When the Inspection AI Fails: Learning from Production Line Mistakes
https://www.edge-ai-vision.com/2026/02/what-happens-when-the-inspection-ai-fails-learning-from-production-line-mistakes/
Thu, 12 Feb 2026

This blog post was originally published at Lincode’s website. It is reprinted here with the permission of Lincode.

Studies show that about 34% of manufacturing defects are missed because inspection systems make mistakes.[1] These numbers show a big problem—when the inspection AI misses something, even a tiny defect can spread across hundreds or thousands of products.

One small scratch, crack, or colour mismatch can lead to rework, slowdowns, customer complaints, or even product returns. And because the production line moves quickly, these mistakes can multiply before anyone notices. That’s why an inspection AI failure affects not just one product, but the entire production line.

But here’s the good part: the problem usually comes from fixable issues like poor training data, bad lighting, or camera setup problems. When manufacturers study these mistakes closely, they can upgrade the AI, improve the dataset, and build a stronger, more reliable inspection system.

This blog explains what happens when inspection AI fails, and how these failures can actually help companies build a smarter, more accurate quality control process.

What is Inspection AI Failure?

Inspection AI failure happens when an AI system designed to spot defects in products misses, mislabels, or incorrectly flags issues. This can occur due to poor training data, changes in product appearance, lighting problems, or limitations in the AI model itself.

Such failures lead to missed defects, false alarms, and reduced confidence in automated quality checks, affecting production efficiency and product quality. DeepVision (a company working on AI vision) claims that with AI visual inspection, defect “escape rates” in some manufacturing lines dropped by as much as 83%.[2]
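
Those two failure modes map onto standard confusion-matrix quantities: a missed defect is a false negative (an “escape”), while an incorrectly flagged good part is a false positive. The sketch below shows how both rates fall out of audited inspection records; the field names and toy data are illustrative.

```python
# Escape rate and false-alarm rate from audited inspection records.
# Each record pairs the AI verdict with a later ground-truth audit; toy data.
records = [
    {"ai_flagged": False, "truly_defective": True},   # escape (false negative)
    {"ai_flagged": True,  "truly_defective": False},  # false alarm (false positive)
    {"ai_flagged": True,  "truly_defective": True},   # correct catch
    {"ai_flagged": False, "truly_defective": False},  # correct pass
]

defective = [r for r in records if r["truly_defective"]]
good = [r for r in records if not r["truly_defective"]]

escape_rate = sum(not r["ai_flagged"] for r in defective) / len(defective)
false_alarm_rate = sum(r["ai_flagged"] for r in good) / len(good)
print(f"escape rate: {escape_rate:.0%}, false-alarm rate: {false_alarm_rate:.0%}")
```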

Why Do Visual Inspection Systems Miss Defects?

Visual inspection systems miss defects for several reasons. Sometimes, the AI isn’t trained on enough examples of real-world defects, so it doesn’t recognize unusual scratches, cracks, or color changes.

Other times, the lighting, camera angles, or image quality make it hard for the system to see small imperfections clearly. Even minor changes in product shape or texture can confuse the AI, leading to missed defects.

Another common reason is a lack of proper visual inspection error analysis. Without reviewing mistakes and understanding why the AI failed, the same errors can keep happening.

By analyzing these errors carefully, manufacturers can improve training data, adjust cameras and lighting, and fine-tune the AI model to catch more defects and reduce costly mistakes on the production line.

Real-World Impact of AI Defect Detection Failures

AI defect detection failures don’t just affect machines; they impact the entire production chain, from efficiency to customer trust.

1. Production Delays and Increased Costs

When AI defect detection misses problems, products often need rework or replacement, slowing down the production line. For example, Foxconn, a major electronics manufacturer, faced delays when their AI inspection system missed minor defects in smartphone assembly, causing additional labor and wasted components.

Similarly, Toyota reported production slowdowns in certain plants when AI visual inspection failed to catch paint imperfections, leading to costly rework and delayed deliveries.

2. Customer Dissatisfaction and Brand Damage

Defective products reaching customers can hurt a company’s reputation. Samsung once had to recall devices due to overlooked micro-defects in components, showing how AI inspection failure can impact customer trust.

Nike also faced quality complaints when automated inspection missed stitching errors in footwear. These cases highlight why reliable AI defect detection and thorough visual inspection error analysis are critical to prevent defects from reaching customers and protect brand reputation.

Ultimately, addressing AI defect detection failures through careful error analysis and improved models helps manufacturers save costs, maintain efficiency, and keep customers satisfied.

Common Causes Behind Production Line Mistakes

Understanding inspection AI failure starts with knowing why mistakes happen on the production line.

  1. Poor Training Data – AI models may miss defects if they haven’t seen enough examples during training.
  2. Changes in Product Appearance – Variations in color, shape, or texture can confuse the AI.
  3. Lighting or Camera Issues – Poor lighting, glare, or misaligned cameras can hide defects from the system.
  4. Outdated AI Models – Models not retrained for new products or updated production conditions can fail.
  5. Lack of Error Analysis – Without reviewing AI mistakes through visual inspection error analysis, recurring defects go unnoticed.

By solving these causes, manufacturers can reduce errors and improve overall production quality.

5 Easy Steps to Conduct Effective Visual Inspection Error Analysis

Performing visual inspection error analysis helps identify why AI missed defects and improves overall accuracy. Here are five simple steps:

Step 1: Collect Failed Samples – Gather images or products where the AI missed defects or gave false positives. This creates a clear starting point for analysis.

Step 2: Compare with Training Data – Check if the AI has seen similar defects before. Missing examples in the training set often cause errors.

Step 3: Check Image Quality – Review lighting, camera angles, resolution, and focus. Poor image conditions can hide defects from the system.
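
One piece of this step is easy to automate: focus. The variance of the Laplacian is a widely used sharpness heuristic, sketched below with OpenCV. The threshold and file names are placeholders that would need tuning per camera and optic.

```python
# Flag out-of-focus inspection images using variance of the Laplacian,
# a common sharpness heuristic: low variance means little high-frequency detail.
import cv2

def is_blurry(image_path: str, threshold: float = 100.0) -> bool:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

# Screen the failed samples gathered in Step 1 (file names are placeholders).
for path in ["failed_001.png", "failed_002.png"]:
    print(path, "-> blurry" if is_blurry(path) else "-> sharp")
```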

Step 4: Analyze Model Confidence – Look at confidence scores or outputs from the AI. Low confidence often points to areas where the model struggles.

Step 5: Document and Retrain – Record all errors and their causes, then retrain the AI with new examples to reduce future inspection AI failures.

This step-by-step process ensures errors are understood, fixed, and less likely to repeat, making your AI defect detection more reliable.
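
As a rough illustration of how Steps 1, 4, and 5 fit together, the sketch below buckets failures by error type and queues low-confidence cases for labeling and retraining. The record schema and the 0.6 cutoff are assumptions for the example, not any vendor’s actual format.

```python
from collections import Counter

# Toy log of failed inspections; field names are illustrative.
failures = [
    {"id": "a1", "error": "missed_defect",  "confidence": 0.31},
    {"id": "a2", "error": "false_positive", "confidence": 0.55},
    {"id": "a3", "error": "missed_defect",  "confidence": 0.48},
    {"id": "a4", "error": "misclassified",  "confidence": 0.72},
]

# Step 1: which failure mode dominates?
print(Counter(f["error"] for f in failures))

# Steps 4-5: low-confidence cases are prime candidates for labeling + retraining.
retrain_queue = [f["id"] for f in failures if f["confidence"] < 0.6]
print("label and add to training set:", retrain_queue)
```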

Learning From Failures: Fixing the Root Cause of AI Mistakes

Learning from inspection AI failure is not about blaming the system; it’s about understanding why mistakes happen and preventing them in the future. Here’s how manufacturers can approach it effectively:

1. Identify the Exact Error

Start by pinpointing what went wrong. Was it a missed defect, a false positive, or a misclassification? Breaking down errors into clear categories makes it easier to address the root cause.

2. Investigate the Cause

Look into the source of the error:

  • Was the AI model trained on enough defect examples?
  • Did changes in product design or material confuse the system?
  • Were environmental factors like lighting, vibration, or camera setup involved?

3. Improve Data Quality

Many failures occur because the AI hasn’t seen enough diverse defect examples. Collect new images or product samples representing edge cases, rare defects, or variations, and add them to the training dataset.

4. Update and Retrain the AI Model

After enhancing the data, retrain the AI. Fine-tune parameters and test against real production scenarios. Continuous retraining ensures the AI adapts to evolving products and production conditions.

5. Monitor and Review Continuously

Even after fixes, monitor the AI’s performance regularly. Conduct periodic visual inspection error analysis to catch new failure patterns early and maintain high-quality standards.

By following these steps, companies turn AI mistakes into actionable insights, reducing inspection AI failure and improving overall production efficiency.

Preventing Future Failures: Building a More Accurate, Reliable Inspection AI

Preventing inspection AI failure starts with creating a system that learns and adapts continuously. By using diverse and high-quality training data, improving camera setups and lighting, and retraining models regularly, manufacturers can catch even rare or subtle defects.

Adding human checks for unusual cases and monitoring AI performance in real-time further reduces errors. The goal is to build an AI-based quality inspection system that is not only fast but also consistent and dependable, keeping production smooth and products defect-free.
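
Real-time monitoring can start very simply, for example by comparing a rolling flag rate against the line’s historical baseline and alerting on large drift. The sketch below illustrates the idea; the baseline rate, window size, and alert bands are arbitrary assumptions.

```python
from collections import deque

baseline_rate = 0.02        # long-run fraction of parts flagged (illustrative)
window = deque(maxlen=500)  # most recent 500 inspection outcomes

# Simulate a shift where the flag rate jumps partway through the run.
for i in range(1000):
    window.append(i > 600 and i % 8 == 0)

rolling_rate = sum(window) / len(window)
if rolling_rate > 3 * baseline_rate or rolling_rate < baseline_rate / 3:
    print(f"ALERT: rolling flag rate {rolling_rate:.1%} vs baseline {baseline_rate:.1%}")
```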

Why Choosing the Right AI-Based Quality Control Partner Matters

Selecting the right partner can make a huge difference in reducing inspection AI failure. Here are three key reasons:

1. Expertise in AI and Machine Vision

A skilled partner knows how to train, fine-tune, and deploy AI defect detection systems that work reliably in real production conditions.

AI-powered defect detection systems typically achieve 95–99% accuracy, compared to just 60–90% in manual inspections.[3]

2. Customized Solutions for Your Production

Every production line is different. The right partner designs AI inspection workflows tailored to your products, lighting, cameras, and quality standards.

AI-driven QC can reduce defect rates by 20–50%, depending on the implementation.[4]

3. Continuous Support and Improvement

Reliable partners offer ongoing monitoring, retraining, and error analysis, ensuring the AI keeps improving and defects are caught before they reach customers.

In real-world deployments, AI inspection systems have reduced production‑line defects by up to 30% through continuous learning and anomaly detection.[5]

Choosing the right partner not only improves accuracy but also helps prevent costly inspection AI failure, keeping your production line efficient and your products defect-free.

Why Lincode Stands Out as Visual Inspection AI

When it comes to reliable AI defect detection, Lincode sets itself apart with a combination of advanced technology and practical design. Here’s why it’s trusted by manufacturers worldwide:

Key Reasons Lincode Excels

  • High Accuracy Detection – Lincode’s AI models detect defects with over 98% accuracy, catching even the smallest scratches, cracks, or misalignments.
  • Easy Integration – It can be integrated into existing production lines in less than 48 hours, reducing downtime and implementation costs.
  • Real-Time Monitoring – The system provides instant alerts and detailed reports, enabling teams to resolve issues up to 3x faster than traditional inspection methods.
  • Continuous Learning – Lincode adapts to new products and defect types through ongoing retraining, improving defect detection rates by 15–20% within the first few months.

In short, Lincode doesn’t just detect defects; it helps companies prevent costly mistakes, improve production efficiency, and reduce inspection AI failure, keeping product quality consistently high.

FAQ

1. What is the main reason for inspection AI failure?
The main reason is usually a lack of diverse training data or changes in product design that the AI wasn’t trained to recognize. Environmental factors like poor lighting or misaligned cameras can also cause failures.

2. How often should visual inspection error analysis be conducted?
It’s best to review errors regularly, ideally once a month or after introducing a new product, to catch recurring mistakes and improve AI accuracy.

3. Can AI defect detection replace human inspection completely?
While AI can catch most defects, combining it with human checks ensures rare or unusual defects are not missed. A human-in-the-loop approach reduces inspection AI failure significantly.
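
In practice, a human-in-the-loop flow often reduces to confidence-based routing: verdicts above a threshold are acted on automatically, and everything else is queued for an inspector. A minimal sketch, with purely illustrative thresholds:

```python
def route(confidence: float, flagged: bool) -> str:
    # Act automatically only on high-confidence verdicts; threshold illustrative.
    if confidence >= 0.90:
        return "reject" if flagged else "accept"
    return "human_review"  # uncertain cases go to an inspector

for conf, flagged in [(0.97, True), (0.95, False), (0.62, True)]:
    print(f"conf={conf}, flagged={flagged} -> {route(conf, flagged)}")
```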

4. How does retraining the AI improve defect detection?
Retraining with new defect examples and updated production data helps the AI learn from past mistakes, improving detection accuracy and reducing future failures.

5. What industries benefit most from inspection AI?
Industries like electronics, automotive, pharmaceuticals, food packaging, and consumer goods see the biggest gains because even small defects can cause costly rework or quality issues.

Bibliography:

[1] Micromachines, journal article, 27 February 2023.
[2] AI.Business, case-study article, 1 May 2024.
[3] Dhīmahi Technolabs, blog post, 2025.
[4] International Journal of Intelligent Systems and Applications in Engineering, journal article, 2024.
[5] International Journal of Scientific Research and Management, journal article, October 2024.

Upcoming Webinar on Industrial 3D Vision with iToF Technology
https://www.edge-ai-vision.com/2026/02/upcoming-webinar-on-industrial-3d-vision-with-itof-technology/
Tue, 03 Feb 2026

On February 18, 2026, at 9:00 am PST (12:00 pm EST), and on February 19, 2026, at 11:00 am CET, Alliance Member company e-con Systems, in partnership with onsemi, will deliver the webinar “Enabling Reliable Industrial 3D Vision with iToF Technology.” From the event page:

Join e-con Systems and onsemi for an exclusive joint webinar on how indirect Time-of-Flight (iToF) based 3D vision is enabling reliable perception for modern robotic applications and industrial and warehouse automation workflows.

Vision experts will discuss how industrial teams can translate iToF sensor capabilities into deployable 3D vision solutions while addressing the perception challenges commonly faced in complex industrial environments.

Attendees will gain insights from proven customer success stories in field deployments, including parcel box dimensioning, autonomous pallet handling, obstacle detection, and collision avoidance in warehouse environments.

Featured Speakers:

Radhika S, Senior Project Lead, e-con Systems

Aidan Browne, Product Marketing Manager – Depth Sensing, onsemi

Key insights you’ll gain:

  • Key industrial applications driving the adoption of iToF-based 3D vision
  • Common perception challenges in industrial environments
  • Translating sensor capability into deployable robotics vision solutions
  • Proven customer success stories from field deployments

For more information and to register, visit the event page.

Upcoming Webinar on Challenges of Depth of Field (DoF) in Macro Imaging
https://www.edge-ai-vision.com/2026/01/upcoming-webinar-on-challenges-of-depth-of-field-dof-in-macro-imaging/
Tue, 27 Jan 2026

On January 29, 2026, at 9:00 am PST (12:00 pm EST), Alliance Member company e-con Systems will deliver the webinar “Challenges of Depth of Field (DoF) in Macro Imaging.” From the event page:

We’re excited to invite you to an exclusive webinar hosted by e-con Systems: Challenges of DoF in Macro Imaging. In this session, our vision experts will discuss the common challenges associated with DoF in medical imaging and explain how camera design choices directly impact it.

Explore how AI-driven cameras are redefining workplace and on-site safety through real-time detection and alerts for slip, trip, and fall events, PPE non-compliance, and unsafe worker behavior — ensuring smarter, safer industrial environments.

Featured Speakers:

Bharathkumar R, Market Manager – Medical Cameras, e-con Systems

Vigneshkumar R, Senior Camera Expert, e-con Systems

Key insights you’ll gain:

  • How limited DoF impacts certain medical applications
  • Key design considerations that influence DoF
  • Lessons from a real-world intraoral imaging case study

For more information and to register, visit the event page.

STM32MP21x: It’s Never Been More Cost-effective or More Straightforward to Create Industrial Applications with Cameras
https://www.edge-ai-vision.com/2026/01/stm32mp21x-its-never-been-more-cost-effective-or-more-straightforward-to-create-industrial-applications-with-cameras/
Fri, 23 Jan 2026

This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics.

ST is launching today the STM32MP21x product line, the most affordable STM32MP2, comprising a single-core Cortex-A35 running at 1.5 GHz and a Cortex-M33 at 300 MHz. It thus completes the STM32MP2 series announced in 2023, which became our first 64-bit MPUs. After the STM32MP25x and its 1.35 TOPS NPU, and the STM32MP23x, which targeted industrial AI applications, the new STM32MP21x lowers the barrier to entry by still offering DDR4/LPDDR4 alongside DDR3L and the same Ethernet controllers with time-sensitive networking as the other members of the series. Consequently, teams looking to use an MPU in an industrial setting can now do it while keeping their costs even lower, whether with Linux or bare-metal software.

The contradictions pulling MPU designs apart

Power vs. efficiency

The world of embedded Linux is complex because it operates under very tight constraints. On the one hand, teams choose Linux because they need something far more powerful and extensive than a traditional real-time operating system can provide. However, the same application can significantly benefit from running some of its operations on a bare-metal system, which is why the ability to run an RTOS on ST MPUs, available since the STM32MP13, has been so successful. Similarly, while teams need the computational power of an MPU, they face power-consumption and cost constraints that can make designing systems challenging.

Computational throughput vs. ease of transition

Engineers face a significant gap when transitioning to the MPU world. Usually, that happens when they have reached the limits of what’s reasonable to run on a microcontroller and must adopt a significantly more powerful device and embedded Linux. Unfortunately, the industry doesn’t always provide an MPU that makes this move easy, as it forces designers to deal with a massive bill of materials and development costs. That’s why the STM32MP21x sets a new standard for affordability, as its bare-metal capabilities mean that teams can port some of their existing applications for an even smoother transition. Moreover, they even get a modern DDR4/LPDDR4 controller with DDR3L backward compatibility to future-proof their system.

The modern solutions to make MPU designs more accessible

A flexible memory controller

The new STM32MP21x comes with a memory controller supporting 16-bit DDR4/LPDDR4 and DDR3L. Teams wishing to replace their STM32MP13x while keeping their legacy DDR3L can swap the MPU with minimal adjustments. Conversely, teams looking to adopt a more modern architecture without substantially increasing their costs now have an alternative that will serve them for years to come. It also gives teams much more flexibility to weather the volatility of the memory market, since engineers can work with a broader range of memory types. And since the STM32MP21x operates with all memory generations at the same frequency, and industrial applications are rarely limited by RAM bandwidth, the performance difference remains minimal or even imperceptible.


A resourceful architecture

To make the STM32MP21x even more practical, we made it pin-to-pin compatible with the STM32MP23x and the STM32MP25x using a 10 mm x 10 mm package. It also uses the same Cortex-M33 as the other STM32MP2 devices, making it nearly effortless to use our M33-TD implementation in our OpenSTLinux distribution across all STM32MP2s. The new STM32MP21x also handles the same wide junction temperature range (-40 °C to 125 °C) and targets the same SESIP Level 3 certification. It also comes with dual Gigabit Ethernet ports with time-sensitive networking, and multiple interfaces, including a CSI-2 for camera pipelines. Put simply, offering a cost-effective solution didn’t mean sacrificing important features for industrial markets.

The next steps to jump on the bandwagon

More cost-effective image processing

Thanks to its architecture, engineers can use the STM32MP21x in an application that captures data from an image sensor and cleans it up before sending it to another MPU with a neural processing unit. It helps spread the computational load while reusing a lot of the work that goes into these microprocessors. Similarly, thanks to its peripherals and security features, teams can use the STM32MP21x for processing sensor data at the edge while meeting the ever-increasing requirements imposed by governments and other regulatory bodies. Put simply, it allows many engineers to create applications that were previously too costly to conceive or lacked the proper hardware support on an MCU or competing MPU.

A Discovery Kit to get started

The best way to get started is to grab the STM32MP215F-DK Discovery Kit. It comes with a MIPI CSI-2 two-lane camera interface, one Gigabit Ethernet port with TSN support, 2 GB of LPDDR4, an M.2 connector for accessories or storage (like a Wi-Fi / BT module), and an LCD-TFT display controller for projects that require a UI. The board receives power via a USB-C 2.0 port that also carries data for debugging and programming with ST-LINK, and a microSD card slot provides additional storage.

In a nutshell, the STM32MP215F-DK Discovery Kit is the quickest way to experiment with capturing image or inertial sensor data and see how the STM32MP21x can impact a design. Once they move to a custom design, engineers will have the widest selection of packages, from 14 mm x 14 mm to 11 mm x 11 mm, 10 mm x 10 mm, and 8 mm x 8 mm. Once teams choose their device and configuration, they will get access to a wide range of layout examples available on ST.com to help them start with their preferred package, the PMIC (more news to come soon), and selected DRAM.

Qualcomm’s IE‑IoT Expansion Is Complete: Edge AI Unleashed for Developers, Enterprises & OEMs
https://www.edge-ai-vision.com/2026/01/qualcomms-ie%e2%80%91iot-expansion-is-complete-edge-ai-unleashed-for-developers-enterprises-oems/
Wed, 07 Jan 2026

Key Takeaways:
  • Expanded set of processors, software, services, and developer tools including offerings and technologies from the five acquisitions of Augentix, Arduino, Edge Impulse, Focus.AI, and Foundries.io, positions the Company to help meet edge computing and AI needs for customers across virtually all verticals.
  • Completed acquisition of Augentix, a leader in mass-market image processors, extends Qualcomm Technologies’ ability to provide system-on-chips tailored for intelligent IP cameras and vision systems.
  • New Qualcomm Dragonwing™ Q‑7790 and Q‑8750 processors power security-focused on‑device AI across drones, smart cameras & industrial vision, AI TVs/media hubs, and video collaboration systems.

Las Vegas, NV, January 5, 2026 — At CES, Qualcomm Technologies, Inc. today announced its expanded IoT product portfolio, including new Qualcomm Dragonwing™ Q-series processors. Complemented by new services and developer offerings, and fueled by the acquisitions of Augentix, Arduino, Edge Impulse, Focus.AI, and Foundries.io in the last 18 months, Qualcomm Technologies is now positioned to address the needs of a much wider spectrum of IoT customers, ranging from global enterprises to independent local developers, with the vision to become the provider of choice for core edge compute and AI technology across all industrial and embedded verticals.

“At Qualcomm Technologies, we’re not just introducing new products—we’re launching a comprehensive new approach to help organizations of virtually all sizes, across virtually all verticals, reap the benefits of AI and edge compute in their pursuit for efficiency and new opportunities,” said Nakul Duggal, executive vice president and group general manager, automotive, industrial and embedded IoT, and robotics, Qualcomm Technologies, Inc. “Our expanded Industrial and Embedded IoT portfolio, combined with a robust developer ecosystem, positions us as the ultimate platform for building intelligent, connected business solutions that scale.”

Empowering Developers Across the Revamped Qualcomm® Industrial and Embedded IoT Portfolio

Qualcomm Technologies is redefining its Industrial and Embedded IoT (IE-IoT) business to become a leading provider of edge compute and AI solutions across industrial and embedded sectors. Through an expanded portfolio of advanced processors, software, services, and developer tools, supported by five strategic acquisitions, the Company now offers comprehensive solutions for rapid prototyping, scalable deployment, and superior AI integration at the edge. This transformation introduces distinct product lines with competitive roadmaps and a unified software architecture supporting Linux, Windows, and Android, enabling deployment-ready solutions for multiple verticals. Combined with a superior partner ecosystem and accessible developer platforms like Arduino, Edge Impulse, and Foundries.io, Qualcomm Technologies is lowering barriers to entry and accelerating innovation from prototype to commercialization.

By integrating Arduino and enhancing developer accessibility through Edge Impulse and Foundries.io, Qualcomm Technologies is empowering one of the world’s largest developer communities to innovate faster and more securely. This unified ecosystem merges Arduino’s open-source simplicity with Qualcomm Technologies’ advanced AI, connectivity, and security technologies, while Edge Impulse and Foundries.io provide powerful machine learning and security-focused deployment tools. Together, these resources simplify development, accelerate prototyping, and enable security-rich, scalable solutions, making Qualcomm Technologies’ developer tools more accessible than ever and setting the stage for significant industry expansion and revenue growth.

Dragonwing Q-8750: Advanced On‑Device AI for Drones, Media Hubs, and Multi-Angle Vision Systems

The Dragonwing Q-8750 is Qualcomm Technologies’ most advanced IoT processor to date, engineered for high-performance edge computing and immersive experiences. Its AI engine achieves 77 TOPS with support for INT4/8/16 and FP16 precision, enabling real-time inference and even on-device large language models up to 11 billion parameters, eliminating cloud dependency for critical applications. The processor’s advanced camera architecture supports up to 12 physical cameras and triple 48 MP ISPs, making it ideal for drones, media hubs, and multi-angle vision systems.

Dragonwing Q-7790: Elevating Everyday Devices with AI and Immersive Experiences in Smart Cameras and AI TVs

The Dragonwing Q-7790 brings a new level of intelligence and responsiveness to consumer and industrial IoT devices. With 24 TOPS of on-device AI performance, the Dragonwing Q-7790 enables advanced inference for applications like smart cameras, AI TVs, and collaboration systems, without relying on the cloud. Its multimedia capabilities include dual 4K60 display support, 4K60 encoding, and 4K120 video decoding, with AV1 hardware decode for premium picture quality. Superior security features such as Total Management Engine, Secure Boot, and Qualcomm® Trusted Execution Environment make it ideal for environments where data integrity is paramount.

Expanded Camera Processors Portfolio

Qualcomm Technologies has completed its acquisition of Augentix Inc., a leading Taiwanese semiconductor company specializing in smart imaging and low-power vision processing chips for IP security cameras, smart home devices, and other connected video solutions. This accelerates Qualcomm Technologies’ vision for security-focused, power-efficient edge AI across smart cameras and industrial IoT, integrating Augentix’s advanced multimedia signal processing and high-resolution imaging into Qualcomm Technologies’ product roadmap. The result will be smarter, more secure IoT devices with sharper images, faster performance, and lower energy use, strengthening Qualcomm Technologies’ position in the edge video industry.

Qualcomm Insight Platform: Unlocking Actionable Video Security Intelligence at the Edge

The Qualcomm® Insight Platform is a unified, native AI-powered video intelligence solution delivered as a service for modern security and operations teams. The Insight Platform uses Edge AI with an LLM-based conversational engine to turn video into a real-time, profile-aware data plane. Customers can modernize brownfield deployments using Qualcomm® Edge AI boxes or AI-enabled cameras, enabling use cases from enterprise security to protecting critical infrastructure. With flexible hardware options, profile-based querying, and real-time video analytics, the Insight Platform is designed to scale to virtually any industry and use case.

Furthermore, with the acquisition of Augentix, the Qualcomm Insight Platform is poised to offer a broader and more flexible portfolio of smart cameras, empowering system designers to optimize camera selection for every zone while maintaining unified control and cost efficiency. For more information, please visit the Qualcomm Insight Platform Solutions page.

Qualcomm Terrestrial Positioning Service Delivers Accurate and More Precise Positioning Across IoT

Devices across IoT verticals often rely on reliable, accurate, and precise positioning capabilities to deliver certain services. Whether locating devices in open-air settings, underground, offline, or in emergency situations, Qualcomm® Terrestrial Positioning Services uses a broad terrestrial signal network comprising over 9 billion Wi-Fi access points and more than 100 million cellular towers, along with beacon-based positioning methods using Bluetooth® Low Energy (BLE), to deliver over 6 trillion annual location results without needing GNSS. It can also complement satellite-based positioning systems for enhanced location accuracy and a faster time-to-fix.
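
As an illustration of the beacon-based principle only (this is not Qualcomm’s implementation), a BLE positioning pipeline typically converts RSSI to distance with a log-distance path-loss model and then solves for position by least squares. All constants below are assumed values:

```python
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    # Log-distance path-loss model: tx_power_dbm is the expected RSSI at 1 m,
    # path_loss_n the environment-dependent attenuation exponent (assumed).
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_n))

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # known positions (m)
rssi = np.array([-69.0, -75.0, -75.0])                      # measured RSSI (dBm)
distances = rssi_to_distance(rssi)

def residuals(p):
    # Mismatch between geometric ranges to candidate point p and RSSI ranges.
    return np.linalg.norm(beacons - p, axis=1) - distances

estimate = least_squares(residuals, x0=np.array([5.0, 5.0])).x
print("estimated position (m):", estimate.round(2))
```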

Edge Impulse Integration on Dragonwing AI On-Prem Appliance Solution Enables Security-Focused, Scalable Edge AI Deployment

Edge Impulse is now fully integrated into the Qualcomm Dragonwing™ AI On-Prem Appliance. This all-in-one solution enables customers to run world-class inference and training in a sovereign, highly security-focused package, supporting both private networks and fully offline operations. Backed by the Edge Impulse platform, this new offering supports efficient inference for models up to 120B parameters. Users can manage their entire data pipeline, including AI-based synthetic data generation and labeling, directly on the appliance. With innovative resource allocation, the system acts as a Physical AI Agent, capable of handling MLOps training, optimization, and localized model cascades for physical AI use cases, making it the ideal deployment for high-security environments.

For more information and to experience a broad selection of IoT demonstrations powered by Dragonwing, visit the Qualcomm Booth #5001 at CES 2026 from January 6 to 10, or go to qualcomm.com/iot.

About Qualcomm

Qualcomm relentlessly innovates to deliver intelligent computing everywhere, helping the world tackle some of its most important challenges. Building on our 40 years of technology leadership in creating era-defining breakthroughs, we deliver a broad portfolio of solutions built with our leading-edge AI, high-performance, low-power computing, and unrivaled connectivity. Our Snapdragon® platforms power extraordinary consumer experiences, and our Qualcomm Dragonwing™ products empower businesses and industries to scale to new heights. Together with our ecosystem partners, we enable next-generation digital transformation to enrich lives, improve businesses, and advance societies. At Qualcomm, we are engineering human progress.

Qualcomm Incorporated includes our licensing business, QTL, and the vast majority of our patent portfolio. Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of our engineering and research and development functions and substantially all of our products and services businesses, including our QCT semiconductor business. Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patents are licensed by Qualcomm Incorporated.

The Coming Robotics Revolution: How AI and Macnica’s Capture, Process, Communicate Philosophy Will Define the Next Industrial Era
https://www.edge-ai-vision.com/2025/12/the-coming-robotics-revolution-how-ai-and-macnicas-capture-process-communicate-philosophy-will-define-the-next-industrial-era/
Mon, 29 Dec 2025

This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica.

Just as networking and fiber-optic infrastructure quietly laid the groundwork for the internet economy, fueling the rise of Amazon, Facebook, and the digital platforms that redefined commerce and communication, today’s breakthroughs in artificial intelligence are setting the stage for the next great leap: the age of robotics.

As AI advances in reasoning, perception, and adaptability, the vision of giving machines true intelligence is becoming real. The foundation is forming across three interconnected layers that Macnica calls Capture → Process → Communicate, a complete ecosystem where perception meets computation, and computation meets action.

1. Capture: Where Intelligent Behavior Begins

Intelligent robotics starts with sensing. A robot can only act as intelligently as the data it perceives.

Macnica’s capture technologies provide the sensory foundation of autonomy, enabling machines to understand the world around them, their own motion, and even their internal state.

Through an ecosystem that includes Sony industrial CMOS and global-shutter sensors, Infineon and Toppan ToF modules, Ambarella and Renesas imaging processors, and Macnica’s own Streal strain sensors, robots can now “see,” “hear,” “feel,” and “measure” their environment with unprecedented precision.

These are complemented by magnetic encoders, MEMS motion sensors, radar, and acoustic arrays, all integrated through high-speed interfaces such as SLVS-EC, MIPI CSI-2, and SPI.

Together, these inputs create the digital nervous system of autonomous intelligence, feeding the data pipelines that drive perception and decision-making.

As AI models evolve, the capture domain becomes even more powerful. Cameras and sensors no longer just record; they interpret, enabling context-aware systems that adapt in real time. As Dr. Fei-Fei Li, Stanford professor and co-director of the Stanford Human-Centered AI Institute, describes it, “Vision is our most powerful sense, the richest source of information about the physical world.” In AI and robotics, the vast majority of meaningful input is still visual: streams of light that machines must capture, interpret, and act on in real time.

2. Process: Turning Perception into Intelligence

Once data is captured, it must be processed quickly, locally, and securely.

This is where AI and edge computing converge, transforming robotics from deterministic machines into adaptive, learning systems.

Macnica partners with leaders such as Altera, Ambarella, DeepX, and iENSO to deliver compute architectures optimized for real-time vision, sensor fusion, and AI inference.

Our platforms use FPGA and SoC acceleration to handle high-bandwidth imaging, edge-AI engines for perception and path planning, and modular frameworks that let customers combine imaging, mechanical, and environmental data streams deterministically.

This approach is powerful, flexible, and IP-secure. Customers can license proven IP blocks, integrate proprietary algorithms, or co-develop solutions with Macnica engineers. In every case, the customer retains full ownership of their proprietary code, algorithms, and any custom modules developed collaboratively. Macnica’s role is to provide the expertise, frameworks, and integration tools that accelerate design while ensuring that intellectual property created by the customer remains entirely theirs.

That openness accelerates innovation while ensuring long-term sustainability, which is critical for robotics lifecycles measured in decades rather than quarters.

With the latest AI architectures, robotic systems can now learn to navigate complex spaces, detect intent, and coordinate motion across multiple actuators in real time at the edge.

3. Communicate: Connecting Machines, People, and Intelligence

In robotics, communication is the connective tissue that unites sensing, processing, and human interaction.

Macnica enables deterministic networking through time-synchronized Ethernet frameworks that coordinate multi-camera, multi-axis robotic systems with sub-millisecond precision. This ensures predictable, safe, and synchronized motion essential for industrial robotics and autonomous systems.

On the human interface side, Macnica integrates Ortustech’s industrial LCD and touch solutions for clarity and reliability across mobile, factory, and embedded environments. From bright, wide-temperature HMIs to compact rugged displays, our visual systems ensure that data is not only transmitted but also clearly understood.

Beyond the edge, technologies such as ST 2110 IP video transport and Marvell/Infineon networking solutions allow massive real-time data streams, including visual, mechanical, and environmental information, to be distributed securely across systems or even across multiple sites. This connects local intelligence to the broader enterprise and links AI robotics with industrial cloud infrastructure.

4. A Unified Ecosystem for Scalable Innovation

The three elements – Capture, Process, and Communicate – work together in harmony.

Through a carefully curated partner network, Macnica Americas connects leading suppliers across sensing, compute, display, and embedded design into one interoperable ecosystem.

Layer – Key Partners – Macnica’s Contribution

  • Capture – Sony*, Infineon, Toppan, Renesas, Macnica Streal – imaging, strain, environmental, and motion sensor integration
  • Process – Altera, Ambarella, DeepX, iENSO, Connect Tech – FPGA/SoC compute, AI acceleration, and sensor-fusion frameworks
  • Communicate – Marvell, Infineon, Silex, Ortustech, Innolux – deterministic networking, wireless communication, and HMI displays
  • Integration & Support – Macnica Americas – architecture design, validation, and lifecycle enablement

This architecture transforms discrete components into validated, scalable solutions that are ready for deployment. It minimizes integration risk, shortens time-to-market, and allows customers to focus on innovation rather than infrastructure.

5. Robotics as the Physical Frontier of AI

As AI continues to expand its reasoning, perception, and creativity, robotics becomes its natural extension into the physical world.

The economic potential is vast: automation of labor, intelligent logistics, adaptive manufacturing, and human-assist systems that extend capability rather than replace it.

Robotics is where the digital meets the tangible, where intelligence does not just analyze – it acts.

As with the early internet, the leaders will be those who build the enabling infrastructure, the ones who connect perception, computation, and communication.

That is exactly what Macnica does.

By enabling systems to Capture, Process, and Communicate, we turn intelligence into motion, data into decisions, and innovation into impact.

The Bottom Line

If AI is the new electricity, robotics is the grid, and Macnica is helping wire it.

By building the interoperable foundation that allows intelligent machines to sense, think, and act together, Macnica is not just participating in the robotics revolution – it is powering it.

Sebastien Dignard, President, Macnica Americas, Inc.

Drones Market 2026-2036: Technologies, Markets, and Opportunities
https://www.edge-ai-vision.com/2025/12/drones-market-2026-2036-technologies-markets-and-opportunities/
Mon, 22 Dec 2025

This article was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx.

Global Drone Market Set to Reach US$147.8 Billion by 2036, Driven by Commercial Expansion, Regulatory Maturity, and Sensor Proliferation

Over the past decade, drones have moved from experimental tools into critical infrastructure across agriculture, logistics, energy, security, and public-sector operations. By 2036, the global drone market, spanning both commercial and consumer platforms, is forecast by IDTechEx to reach US$147.8 billion, growing from US$69 billion in 2026, with a CAGR of 7.9%. Commercial deployments are accelerating rapidly, with unit shipments expected to surpass 9 million in 2036. This growth reflects increasing regulatory clarity, maturing technology stacks, falling hardware costs, and the transition toward autonomous, data-driven operations.

Global Drone Market Revenue Forecast (2026-2036). Source: IDTechEx
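
The headline figures are internally consistent; a quick check of the compound annual growth rate implied by the two endpoints:

```python
# Sanity check: CAGR implied by the forecast endpoints.
start, end, years = 69.0, 147.8, 10   # US$ billions, 2026 -> 2036
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")    # ~7.9%, matching the quoted figure
```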

Agriculture enters the era of large-scale digital farming

Agricultural drones have evolved from early trials to full commercial maturity, especially in China, the US, and Southeast Asia. Core applications such as spraying, seeding, and crop monitoring have become profitable and widely adopted. Multirotor platforms still dominate, but fixed-wing and hybrid VTOL (Vertical Take-Off and Landing) drones are gaining share for large-area farmland mapping and long-range autonomous missions.

In 2025, more than 30% of large farms worldwide are estimated to be using drones for field operations. Integration of AI vision, multispectral imaging, and precision analytics enables a data-centric farming model that continues to expand. Future growth will rely heavily on linking drone data with smart farming ecosystems and automated agronomic decisions.

Comparison of Battery-Endurance-Payload of Agricultural Spraying Drones. Bubble size indicates payload capacity: larger bubbles represent drones with higher liquid-carrying capacity. Colors denote regions of origin: blue = China, green = United States, orange = Europe. Source: IDTechEx
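
On the analytics side, the “multispectral imaging and precision analytics” pipeline often reduces to vegetation indices, the classic one being NDVI, computed as (NIR - Red) / (NIR + Red). A minimal sketch with toy reflectance values:

```python
# NDVI from co-registered multispectral bands: (NIR - Red) / (NIR + Red).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(denom > 0, (nir - red) / denom, 0.0)

# Toy 2x2 frame: healthy canopy pixels score high, bare soil much lower.
nir = np.array([[0.60, 0.30], [0.55, 0.25]])
red = np.array([[0.08, 0.22], [0.10, 0.20]])
print(ndvi(nir, red).round(2))
```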

Inspection and maintenance becomes the fastest-growing segment

Energy, utilities, and infrastructure operators are rapidly shifting toward automated drone-based inspection of wind turbines, powerlines, pipelines, and oil & gas assets. Equipped with LiDAR, thermal imaging, and AI-powered defect detection, drones are replacing costly and hazardous manual inspections.
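
As a simplified illustration of the thermal portion of that workflow (production systems use trained detectors rather than a fixed offset), hotspot flagging can be sketched as follows:

```python
# Simplified thermal-anomaly check: flag pixels well above the scene median.
import numpy as np

def hotspots(thermal_c: np.ndarray, delta_c: float = 15.0) -> np.ndarray:
    # Flag pixels more than delta_c above the median scene temperature.
    return thermal_c > np.median(thermal_c) + delta_c

frame = np.full((4, 4), 35.0)   # panel surface at ~35 C
frame[1, 2] = 80.0              # simulated overheating joint
print(hotspots(frame))          # True only at the anomalous pixel
```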

From 2025 onward, operators are expected to increasingly adopt fully automated workflows, including drone-in-a-box systems, remote fleet management, and AI cloud analytics. Inspection & maintenance is projected to exceed 25% of all commercial drone revenue by 2030, surpassing agriculture as the leading segment.

Delivery drones mature from trials to regional commercialization

Despite regulatory and logistical challenges, drone delivery is now gaining real commercial traction. Leading companies in the US, Europe, and China are expanding last-mile delivery for e-commerce, food, and medical transport, while mid-range logistics drones are emerging for remote and island supply routes.

Industry progress in automated loading, cold-chain drone logistics, and U-space/UTM (Unmanned Traffic Management) frameworks is paving the way for scaled operations. The long-term trajectory of delivery drones will depend heavily on BVLOS (Beyond Visual Line of Sight) approvals and national UTM deployment.

Security, military, and public safety maintain strong momentum

Government and law enforcement agencies are adopting drones for border patrol, surveillance, traffic management, crowd monitoring, and emergency response.

Hybrid fixed-wing VTOL drones enable long-endurance operations over large areas, while AI-based video analytics enhance situational awareness. Public safety is expected to remain a stable and steadily expanding segment through 2036.

Military drones remain the largest revenue contributor

The military drone sector continues to lead the total drone market in absolute revenue. Since 2022, regional conflicts have accelerated demand for reconnaissance drones, medium-range tactical drones, and loitering munitions.

Armed forces are also moving toward Manned-Unmanned Teaming (MUM-T) concepts, integrating drones with aircraft and armored vehicles. While dual-use technologies are increasingly repurposed for defense, the core military drone segment will continue to be highly profitable and strategically essential.

Disaster response continues to rely on drone capabilities

Drones equipped with thermal, optical, and acoustic sensors play a critical role in night-time search missions, earthquake rescue, wildfire monitoring, and post-disaster assessment.

Advances in multi-drone collaboration and AI-based geolocation algorithms have significantly improved operational efficiency. Though smaller in absolute revenue, this segment has strong government backing and consistent long-term growth.

Global regulations move toward harmonization and risk-based frameworks

Drone regulation is increasingly aligned around risk-based, tiered certification systems. The US (Part 107), EU (C0-C6), UK (CAP722), and China have all established clearer pathways for commercial operations, especially for BVLOS.

Common regulatory themes include:

  • Maximum flight heights around 120 m
  • Mandatory registration and pilot certification
  • Stricter rules for BVLOS and operations over people
  • Airspace access via automated or digital authorization

North America and the EU lead in harmonized frameworks, while Asia-Pacific, Latin America, and MENA remain more fragmented.

Sensor proliferation reshapes drone payload configurations

From 2025 to 2036, commercial drone shipments are expected to grow 2.3×, while sensor shipments are expected to grow 4×, illustrating a major shift toward higher sensor density and more advanced autonomy.

By 2036, many industrial and BVLOS drones are expected to carry 10-15 or more sensors per drone (see the back-of-envelope sketch after this list), driven by:

  • Multi-camera vision systems
  • Higher-performance LiDAR and radar
  • Ultrasonic and pressure sensors for low-altitude control
  • Barometric altimeters
  • Multi-IMU redundancy for high-reliability missions
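
Taken together, the two growth multiples quoted above pin down the implied growth in average sensor count per drone. A minimal back-of-envelope sketch in Python (the 6-sensor 2025 baseline is an illustrative assumption, not an IDTechEx figure):

```python
# Back-of-envelope check using only the growth multiples quoted above:
# 2.3x commercial drone shipments and 4x sensor shipments, 2025 -> 2036.

drone_growth = 2.3    # commercial drone shipment multiple, 2025 -> 2036
sensor_growth = 4.0   # sensor shipment multiple over the same window

# Sensors per drone scale by the ratio of the two multiples.
density_growth = sensor_growth / drone_growth
print(f"Implied growth in sensors per drone: {density_growth:.2f}x")  # ~1.74x

# A platform carrying 6 sensors in 2025 (an assumed baseline) would carry
# roughly 10 by 2036, consistent with the 10-15 sensor configurations
# projected above.
print(f"6 sensors in 2025 -> ~{6 * density_growth:.0f} sensors in 2036")
```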

A fully rebuilt 2026-2036 forecast from IDTechEx

This report offers a comprehensive overview of the global drone industry’s progress across consumer, commercial, and defense sectors, including the regulatory constraints that shape operations and the deployment maturity in different regions. It also examines the full range of sensing and payload configurations used across major applications, from agriculture and inspection to logistics and public safety, explaining how different cost structures and mission requirements drive platform choices. Additionally, it includes a detailed list of representative commercial drone models, their technical specifications, sensor suites, pricing ranges, and market positioning, together with a fully updated 2026-2036 forecast covering revenue, unit shipments, and sensor integration trends.

IDTechEx provides a completely updated ten-year drone market forecast, including:

  • Global revenue projections for consumer & commercial drones
  • Unit shipments by fixed-wing vs rotary platforms
  • Scenario-based forecasts across 8 key commercial applications
  • Detailed sensor-per-drone modeling
  • Drone sensor market size forecasts (2026-2036)

Key Aspects

This report provides critical market intelligence about the global drone industry, covering consumer, commercial, and defense platforms and all major application sectors. This includes:

A review of the context, technology, and regulation behind drone systems:

  • History and context for the global drone market and each major application sector
  • General overview of key drone platform types (multirotor, fixed-wing, hybrid VTOL) and autonomy / navigation stacks
  • Overall look at technology trends in payloads and sensor integration, including multi-sensor configurations for BVLOS and industrial use
  • Review of global regulatory developments and risk-based frameworks shaping commercial drone operations

Full market characterization for each major drone application sector:

  • Agricultural drones, including spraying, seeding, crop monitoring, and integration with digital farming ecosystems
  • Inspection and maintenance drones for energy, utilities, and infrastructure assets, including drone-in-a-box and automated workflows
  • Delivery drones, from last-mile services to mid-range logistics and medical transport, and their UTM / U-space requirements
  • Security, public-safety, and disaster-response drones, including long-endurance hybrid VTOL platforms and AI-driven situational awareness
  • Military and defense drones, including tactical systems, reconnaissance platforms, loitering munitions, and Manned-Unmanned Teaming concepts

Market analysis throughout:

  • Reviews of drone industry players throughout each key sector, including representative commercial models, sensor suites, payload capabilities, and pricing ranges
  • Historic drone market data and deployment trends, together with a fully rebuilt 2026-2036 forecast for global drone revenue and unit shipments
  • Detailed 2026-2036 forecasts for the drone sensor market, including sensor-per-drone modeling, shipment volumes, and revenue projections

Report Metrics

  • Historic Data: 2021-2025
  • CAGR: The global drone market is forecast to reach US$143 billion by 2036, growing at a CAGR of 10%.
  • Forecast Period: 2026-2036
  • Forecast Units: Volume (units), revenue (USD, millions)
  • Regions Covered: Worldwide, Brazil, Europe, China, United Kingdom, United States
  • Segments Covered: Commercial drones, Consumer drones, Fixed-wing UAVs, Rotary UAVs, Agriculture drones, Inspection drones, Logistics drones, Military drones, Search-and-rescue drones, Drone sensor technologies (IMU, cameras, LiDAR, radar, pressure, ultrasonic, altimeters), Autonomy technologies (SLAM, FCU, localisation, swarm control)
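
As a quick sanity check, the stated CAGR and 2036 figure together imply a starting market size, assuming the 10% rate applies uniformly across the 2026-2036 forecast window (IDTechEx's actual base-year figure may differ):

```python
# If the market reaches US$143B in 2036 after ten years of 10% compound
# annual growth, the implied 2026 starting point follows from the CAGR
# formula. Assumes a uniform growth rate; illustrative only.

terminal_value = 143.0   # US$ billions, 2036 forecast
cagr = 0.10
years = 10               # 2026 -> 2036

base_value = terminal_value / (1 + cagr) ** years
print(f"Implied 2026 market size: ~US${base_value:.0f}B")  # ~US$55B
```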

Analyst access from IDTechEx

All report purchases include up to 30 minutes of telephone time with an expert analyst, who will help you link key findings in the report to the business issues you’re addressing. This time must be used within three months of purchasing the report.

Further information

If you have any questions about this report, please do not hesitate to contact our report team at research@IDTechEx.com or call one of our sales managers:

AMERICAS (USA): +1 617 577 7890
ASIA (Japan and Korea): +81 3 3216 7209
ASIA: +44 1223 810259
EUROPE (UK): +44 1223 812300

Technology Analyst, IDTechEx
Senior Technology Analyst, IDTechEx

The post Drones Market 2026-2036: Technologies, Markets, and Opportunities appeared first on Edge AI and Vision Alliance.

]]>
poLight ASA Collaborates with Image Quality Labs on M12-based Raspberry Pi TLens Studio for AI-driven Industrial Machine Vision Applications https://www.edge-ai-vision.com/2025/12/polight-asa-collaborates-with-image-quality-labs/ Thu, 18 Dec 2025 19:09:16 +0000 https://www.edge-ai-vision.com/?p=56284 TØNSBERG, Norway–(BUSINESS WIRE) — poLight ASA (OSE: PLT) and Image Quality Labs (IQL) today announced the development of an M12-based Raspberry Pi TLens® Studio evaluation and development platform, utilizing the new line of TLens® off-the-shelf (OTS) focusing camera lens. This platform enables machine vision design engineers to quickly and easily evaluate high speed, constant field-of-view […]

The post poLight ASA Collaborates with Image Quality Labs on M12-based Raspberry Pi TLens Studio for AI-driven Industrial Machine Vision Applications appeared first on Edge AI and Vision Alliance.

]]>
TØNSBERG, Norway–(BUSINESS WIRE) — poLight ASA (OSE: PLT) and Image Quality Labs (IQL) today announced the development of an M12-based Raspberry Pi TLens® Studio evaluation and development platform, utilizing the new line of TLens® off-the-shelf (OTS) focusing camera lenses. This platform enables machine vision design engineers to quickly and easily evaluate high-speed, constant field-of-view focusing functionality on their existing embedded computing platforms. Initially offered in a 7.5mm focal length, the prototype will be available for private viewing in the poLight suite at CES (6-9 January) and in both the poLight (#6317) and Image Quality Labs (#6716) booths at SPIE (20-23 January). To schedule a CES viewing, contact info@polight.com.

Historically, industrial machine vision OEMs have been forced to rely on fixed-focus cameras with a small aperture to achieve sufficient depth of focus, hindering advanced imaging for factory and warehouse automation, barcode scanners and embedded cameras. Delivering a small, cost-competitive OTS solution with constant field-of-view focusing, based on a standard sensor platform, enables industrial machine vision OEMs to ramp advanced imaging products quickly.

“poLight is committed to building an ecosystem of integrators and solution providers addressing AI-driven imaging challenges,” said Dr. Øyvind Isaksen, CEO of poLight ASA. “By collaborating with Image Quality Labs, we are delivering a key aspect that solves machine vision imaging barriers in a variety of applications.”

“At IQL, our customers are continually looking for ways to push image quality, autofocus performance, and system responsiveness,” said Jason Cope, Founder & CEO of Image Quality Labs. “This Raspberry Pi TLens® Studio platform provides a plug-and-play way to evaluate TLens® behavior in minutes, not months. Partnering with poLight allows us to bring advanced focusing capabilities directly into the hands of machine vision developers.”

The M12-based Raspberry Pi TLens® Studio platform leverages poLight’s ultra-fast (~1 ms), ultra-low-power (~1 mW) TLens®, enabling design engineers to rapidly set and change object/focal distances to accommodate different scenarios. The OTS lenses will be available in 6mm, 7.5mm, 13mm and 19mm focal lengths, reaching production volume by Q1 2026.
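
The announcement doesn’t document the control interface, but tunable lenses in embedded systems are commonly refocused by writing a target code over I2C. The sketch below illustrates only that general pattern; the device address, register, and distance-to-code mapping are invented placeholders, not poLight’s actual interface:

```python
# Illustrative only: the typical pattern for refocusing a tunable lens
# from application code on a Raspberry Pi. The I2C address (0x3C),
# register (0x02), and linear distance-to-code mapping are hypothetical
# placeholders, NOT poLight's documented interface.
from smbus2 import SMBus

TLENS_ADDR = 0x3C   # hypothetical I2C device address
FOCUS_REG = 0x02    # hypothetical focus-control register

def set_focus_distance(bus: SMBus, distance_mm: float) -> None:
    """Map an object distance to a 10-bit focus code and write it."""
    # Hypothetical mapping: 100 mm .. infinity onto codes 1023 .. 0.
    code = max(0, min(1023, int(1023 * (100.0 / max(distance_mm, 100.0)))))
    bus.write_i2c_block_data(TLENS_ADDR, FOCUS_REG, [code >> 8, code & 0xFF])

with SMBus(1) as bus:             # bus 1 is typical on a Raspberry Pi
    set_focus_distance(bus, 250)  # refocus to an object ~25 cm away
```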

The IQL TLens® RPi 5 Evaluation Kit is a new development platform purpose-built to help engineers rapidly evaluate poLight’s TLens® constant field-of-view focusing technology on modern embedded systems. Designed around the Raspberry Pi 5 and an M12 optical architecture, the kit will initially support IQL’s TLens-enabled camera modules built on the Sony IMX462 and Sony IMX900 image sensors, two industry-proven platforms for low-light, high-speed, and machine-vision applications.

The evaluation kit provides a turnkey pathway for developers to test TLens® performance, integrate with their existing software stacks, and accelerate camera-driven innovation in industrial automation, robotics, edge AI, and inspection systems. The IQL TLens® RPi 5 Evaluation Kit, along with the first IMX462 and IMX900-based TLens® camera modules, will be available in limited early-access quantities in Q1 2026.

About poLight ASA

poLight ASA (OSE: PLT) offers a patented, proprietary tunable optics technology, starting with its first product, TLens®, which replicates “the human eye” experience in autofocus cameras used in devices such as smartphones, wearables, barcode scanners, machine vision systems and various medical equipment. poLight’s TLens® enables better system performance and new user experiences due to benefits such as extremely fast focus, small footprint, no magnetic interference, low power consumption and constant field of view. poLight is based in Tønsberg, Norway, with employees in Finland, France, UK, US, China, Taiwan, Japan, and the Philippines. For more information, please visit https://www.polight.com.

About IQL:

Image Quality Labs (IQL) specializes in advanced imaging system engineering, offering custom camera module design, ISP and camera driver integration, and precision image quality tuning for embedded vision applications worldwide. IQL’s vertically integrated approach, from architecture and optics selection through production-ready hardware, software, and test systems, supports global customers across robotics, machine vision, medical imaging, automotive, and AI/ML ecosystems. With design and manufacturing operations in the U.S. and engineering collaborations spanning North America, Europe, and Asia, IQL enables rapid evaluation, optimization, and deployment of next-generation imaging solutions. Learn more at https://www.imagequalitylabs.com.

Contacts

For more information on Image Quality Labs, contact:
Jason Cope, CEO, IQL: +1 (919) 294-8421

For more information on poLight ASA, contact:
Dr. Øyvind Isaksen, CEO, poLight ASA: +47 90 87 63 98

The post poLight ASA Collaborates with Image Quality Labs on M12-based Raspberry Pi TLens Studio for AI-driven Industrial Machine Vision Applications appeared first on Edge AI and Vision Alliance.

]]>
AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications https://www.edge-ai-vision.com/2025/12/ai-on-3-ways-to-bring-agentic-ai-to-computer-vision-applications/ Tue, 16 Dec 2025 09:00:27 +0000 https://www.edge-ai-vision.com/?p=56246 This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis. Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but lack the abilities to explain the […]

The post AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA.

Learn how to integrate vision language models into video analytics applications, from AI-powered search to fully automated video analysis.

Today’s computer vision systems excel at identifying what happens in physical spaces and processes, but they lack the ability to explain the details of a scene and why those details matter, or to reason about what might happen next.

Agentic intelligence powered by vision language models (VLMs) can help bridge this gap, giving teams quick, easy access to key insights and analyses that connect text descriptors with spatial-temporal information and billions of visual data points captured by their systems every day.

Three approaches organizations can use to boost their legacy computer vision systems with agentic intelligence are to:

  • Apply dense captioning for searchable visual content.
  • Augment system alerts with detailed context.
  • Use AI reasoning to summarize information from complex scenarios and answer questions.

Making Visual Content Searchable With Dense Captions

Traditional convolutional neural network (CNN)-powered video search tools are constrained by limited training, context and semantics, making insight extraction manual, tedious and time-consuming. CNNs are tuned to perform specific visual tasks, like spotting an anomaly, and lack the multimodal ability to translate what they see into text.

Businesses can embed VLMs directly into their existing applications to generate highly detailed captions of images and videos. These captions turn unstructured content into rich, searchable metadata, enabling visual search that’s far more flexible — not constrained by file names or basic tags.
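
As a concrete illustration of this captions-as-metadata pattern, the sketch below captions frames with an open captioning model, embeds the captions, and runs free-text search over them. The model choices (BLIP, all-MiniLM-L6-v2) and the `frames/` directory are illustrative stand-ins, not the systems named in this article:

```python
# Minimal sketch: caption images with an open captioning model, embed the
# captions, and run free-text search over them. Models and paths here are
# illustrative assumptions, not the deployments described in the article.
from pathlib import Path
from PIL import Image
import numpy as np
from transformers import BlipProcessor, BlipForConditionalGeneration
from sentence_transformers import SentenceTransformer

captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def caption(path: Path) -> str:
    inputs = processor(Image.open(path).convert("RGB"), return_tensors="pt")
    out = captioner.generate(**inputs, max_new_tokens=40)
    return processor.decode(out[0], skip_special_tokens=True)

# Index: one caption + one unit-norm embedding per image.
paths = sorted(Path("frames").glob("*.jpg"))   # assumed image directory
captions = [caption(p) for p in paths]
index = embedder.encode(captions, normalize_embeddings=True)

def search(query: str, k: int = 5) -> None:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = index @ q                          # cosine similarity
    for i in np.argsort(-scores)[:k]:
        print(f"{scores[i]:.2f}  {paths[i].name}: {captions[i]}")

search("vehicle with a damaged wheel")
```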

For example, UVeye, a provider of automated vehicle-inspection systems, processes over 700 million high-resolution images each month to build one of the world’s largest vehicle and component datasets. By applying VLMs, UVeye converts this visual data into structured condition reports, detecting subtle defects, modifications or foreign objects with exceptional accuracy and reliability for search.

VLM-powered visual understanding adds essential context, ensuring transparent, consistent insights for compliance, safety and quality control. UVeye detects 96% of defects compared with 24% using manual methods, enabling early intervention to reduce downtime and control maintenance costs.


Relo Metrics, a provider of AI-powered sports marketing measurement, helps brands quantify the value of their media investments and optimize their spending. By combining VLMs with computer vision, Relo Metrics moves beyond basic logo detection to capture context — like a courtside banner shown during a game-winning shot — and translate it into real-time monetary value.

This contextual-insight capability highlights when and how logos appear, especially in high-impact moments, giving marketers a clearer view of return on investment and ways to optimize strategy. For example, Stanley Black & Decker, including its Dewalt brand, previously relied on end-of-season reports to evaluate sponsor asset performance, limiting timely decision-making. Using Relo Metrics for real-time insights, Stanley Black & Decker adjusted signage positioning and saved $1.3 million in potentially lost sponsor media value.

Augmenting Computer Vision System Alerts With VLM Reasoning

CNN-based computer vision systems often generate binary detection alerts such as yes or no, and true or false. Without the reasoning power of VLMs, that can mean false positives and missed details, leading to costly mistakes in safety and security, as well as lost business intelligence.

Rather than replacing these CNN-based computer vision systems entirely, VLMs can easily augment them as an intelligent add-on. With a VLM layered on top, detection alerts are not only flagged but reviewed with contextual understanding, explaining where, how and why the incident occurred.
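
One way to picture this add-on layering: the existing detector keeps raising alerts, and each alert passes through a VLM review step before escalation. In the sketch below, `query_vlm` is a placeholder for whichever VLM endpoint a deployment actually uses, and the alert schema and prompt wording are likewise assumptions:

```python
# Sketch of the "VLM as add-on" pattern: the existing CNN detector still
# raises alerts; a VLM reviews each one with scene context before it is
# escalated. `query_vlm` is a placeholder, not a specific vendor API.
from dataclasses import dataclass

@dataclass
class Alert:
    frame_path: str
    label: str        # e.g. "person_in_restricted_zone"
    confidence: float

def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder: send the frame and prompt to a VLM, return its answer."""
    raise NotImplementedError("wire this to your VLM of choice")

def review(alert: Alert) -> dict:
    prompt = (
        f"A detector flagged '{alert.label}' "
        f"(confidence {alert.confidence:.2f}) in this frame. "
        "Answer GENUINE or FALSE_POSITIVE first, then describe where "
        "and why in one sentence."
    )
    answer = query_vlm(alert.frame_path, prompt)
    return {
        "alert": alert,
        "genuine": answer.startswith("GENUINE"),
        "context": answer,   # the where/how/why explanation
    }

# Only escalate alerts the VLM confirms; keep the rest for audit:
# escalated = [r for r in map(review, cnn_alerts) if r["genuine"]]
```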

For smarter city traffic management, Linker Vision uses VLMs to verify critical city alerts, such as traffic accidents, flooding, or falling poles and trees from storms. This reduces false positives and adds vital context to each event to improve real-time municipal response.

Linker Vision’s architecture for agentic AI involves automating event analysis from over 50,000 diverse smart city camera streams to enable cross-department remediation — coordinating actions across teams like traffic control, utilities and first responders when incidents occur. The ability to query across all camera streams simultaneously enables systems to quickly and automatically turn observations into insights and trigger recommendations for next best actions.

Automatic Analysis of Complex Scenarios With Agentic AI

Agentic AI systems can process, reason and answer complex queries across video streams and modalities — such as audio, text, video and sensor data. This is possible by combining VLMs with reasoning models, large language models (LLMs), retrieval-augmented generation (RAG), computer vision and speech transcription.

Basic integration of a VLM into an existing computer vision pipeline is helpful for verifying short video clips of key moments. However, this approach is limited by how many visual tokens a single model can process at once, resulting in surface-level answers that lack longer-term context and external knowledge.

In contrast, whole architectures built on agentic AI enable scalable, accurate processing of lengthy and multichannel video archives. This leads to deeper, more accurate and more reliable insights that go beyond surface-level understanding. Agentic systems can be used for root-cause analysis or analysis of long inspection videos to generate reports with timestamped insights.
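
A sketch of that caption-index-retrieve-reason loop follows, with the model hooks left as caller-supplied functions since the article doesn’t prescribe specific models (none of this is the VSS blueprint’s actual API):

```python
# Pattern sketch for agentic analysis of long video archives: per-clip
# VLM captioning, retrieval over the captions (RAG), then LLM reasoning.
# The caller supplies the model hooks; everything here is illustrative.
from typing import Callable, Sequence
import numpy as np

def analyze_archive(
    clips: Sequence[str],                      # paths to pre-split clips
    question: str,
    caption_clip: Callable[[str], str],        # VLM: clip -> dense caption
    embed: Callable[[str], np.ndarray],        # text -> unit-norm vector
    llm_answer: Callable[[str], str],          # LLM: prompt -> answer
    k: int = 8,
) -> str:
    # 1) Per-clip perception keeps each VLM call under its token budget.
    notes = [(clip, caption_clip(clip)) for clip in clips]

    # 2) Retrieval (RAG): keep only the k clips most relevant to the
    #    question instead of forcing the archive through one context.
    q_vec = embed(question)
    scored = sorted(notes, key=lambda n: float(embed(n[1]) @ q_vec),
                    reverse=True)

    # 3) Reasoning: the LLM answers over the retrieved, clip-tagged notes.
    context = "\n".join(f"[{clip}] {note}" for clip, note in scored[:k])
    return llm_answer(
        f"Video notes:\n{context}\n\nQuestion: {question}\n"
        "Cite the clip in brackets for every claim."
    )
```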

Levatas develops visual-inspection solutions that use mobile robots and autonomous systems to enhance safety, reliability and performance of critical infrastructure assets such as electric utility substations, fuel terminals, rail yards and logistics hubs. Using VLMs, Levatas built a video analytics AI agent to automatically review inspection footage and draft detailed inspection reports, dramatically accelerating a traditionally manual and slow process.

For customers like American Electric Power (AEP), Levatas AI integrates with Skydio X10 devices to streamline inspection of electric infrastructure. Levatas enables AEP to autonomously inspect power poles, identify thermal issues and detect equipment damage. Alerts are sent instantly to the AEP team upon issue detection, enabling swift response and resolution, and ensuring reliable, clean and affordable energy delivery.

AI gaming highlight tools like Eklipse use VLM-powered agents to enrich livestreams of video games with captions and index metadata for rapid querying, summarization and creation of polished highlight reels in minutes — 10x faster than legacy solutions — leading to improved content consumption experiences.

Powering Agentic Video Intelligence With NVIDIA Technologies

For advanced search and reasoning, developers can use multimodal VLMs such as NVCLIP, NVIDIA Cosmos Reason and Nemotron Nano V2 to build metadata-rich indexes for search.

To integrate VLMs into computer vision applications, developers can use the event reviewer feature in the NVIDIA Blueprint for video search and summarization (VSS), part of the NVIDIA Metropolis platform.

For more complex queries and summarization tasks, the VSS blueprint can be customized so developers can build AI agents that access VLMs directly or use VLMs in conjunction with LLMs, RAG and computer vision models. This enables smarter operations, richer video analytics and real-time process compliance that scale with organizational needs.

Esther Lee, Global Product Marketing Manager, NVIDIA

The post AI On: 3 Ways to Bring Agentic AI to Computer Vision Applications appeared first on Edge AI and Vision Alliance.

]]>