Aerospace and Defense - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/applications/aerospace-and-defense/
Designing machines that perceive and understand.

NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions
https://www.edge-ai-vision.com/2026/01/nanoxplore-and-stmicroelectronics-deliver-european-fpga-for-space-missions/
Wed, 28 Jan 2026 17:00:04 +0000

The post NanoXplore and STMicroelectronics Deliver European FPGA for Space Missions appeared first on Edge AI and Vision Alliance.

Key Takeaways:
  • NanoXplore’s NG-ULTRA FPGA becomes the first product qualified to new European ESCC 9030 standard for space applications
  • The product leverages a supply chain fully based in the European Union, from design to manufacturing and test, and delivered by ST
  • Its advanced digital capability enables European customers to develop higher performance, more competitive satellites and space missions

NanoXplore, the European leader in the design of SoC FPGA and radiation-hardened FPGA technologies, and STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, announce the qualification of NG-ULTRA for space applications. This radiation-hardened SoC FPGA has been designed specifically for space applications, including low- and medium-earth orbit constellations, and is set to be used in numerous satellite equipment systems, including flagship missions such as Galileo, Copernicus, and potentially IRIS².

First product certified to ESCC 9030 for the European New Space industry

This qualification marks a major industrial and technological milestone for the European space ecosystem: NG-ULTRA is the first product qualified to ESCC 9030, a new European standard dedicated to high-performance microcircuits in flip-chip organic-substrate or plastic packages. This standard delivers the reliability required for space applications while enabling a transition away from traditional ceramic-packaged solutions – well suited to deep space but heavier and more expensive – marking a key step forward for constellations and higher-volume missions.

The “new space” dynamic (constellations, Low and Medium Earth Orbits, higher volumes) is transforming requirements for onboard digital equipment and driving a shift in scale: there is a simultaneous need for greater computing power, controlled power consumption, and contained costs compatible with large-scale deployments. NG-ULTRA addresses this challenge by enabling more data to be processed directly in orbit (edge computing), thereby limiting transmission bottlenecks between space and ground.

NG-ULTRA targets strategic functions such as on-board computers, data management and routing between sub-systems, image and video processing (real-time compression and encoding), Software Defined Radio (SDR) – enabling remote evolution of communication modes, and onboard autonomy (detection, recognition, supervision).

A secure, European supply chain

Beyond performance, this program embodies a strategic ambition to secure a sovereign and sustainable European supply chain for long-duration missions by reducing critical dependencies. For NG-ULTRA, the industrial framework combines design, manufacturing, assembly, and testing capabilities across European sites, with the aim of reconciling competitiveness, volume production, and space-grade reliability.

In addition to its own R&D and design centers in Paris, Grenoble, and Montpellier, NanoXplore leverages various STMicroelectronics facilities in Europe, including the Grenoble R&D and design center, the 300mm digital fab in Crolles, the space-specialist packaging facility in Rennes (France), the test and reliability sites in Grenoble (France) and Agrate (Italy), and additional redundant qualified sites in Europe.

Technical specifications

With an “all-in-one” SoC (System on Chip) architecture designed specifically for platform and onboard computing applications, NG-ULTRA combines a multi-core processor with programmable hardware on a single chip. This architecture allows for greater design agility, reduces electronic board complexity and component count, and optimizes latency, mass, and power consumption.

NG-ULTRA is built on STMicroelectronics’ 28nm FD-SOI digital technology platform, recognized for its advantages in energy efficiency, resistance to space radiation, and advanced architecture features. Combined with a unique advanced radiation-hardening technology, NG-ULTRA is engineered to survive the thermal cycles, shocks, and vibrations of launch and long-term orbital life, ensuring best-in-class performance and durability in the harsh space environment throughout the mission lifetime.

The NG-ULTRA has been designed to operate reliably in harsh radiation environments, offering a Total Ionizing Dose (TID) tolerance of up to 50 krad (Si) to ensure long-term performance. It also demonstrates strong resilience to single-event effects, with Single Event Latch-up (SEL) immunity tested up to 65 MeV·cm²/mg and Single Event Upset (SEU) immunity validated for Linear Energy Transfer (LET) levels exceeding 60 MeV·cm²/mg.
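As a rough illustration of how a TID rating like this is used in mission planning, the sketch below checks an accumulated lifetime dose against the 50 krad (Si) figure with a radiation design margin applied. The dose rate, mission length, and 2× margin are hypothetical example values, not figures from NanoXplore or ST.

```python
# Illustrative sketch only: checks a hypothetical mission dose budget against
# the NG-ULTRA's stated 50 krad (Si) TID rating. The dose rate and the 2x
# radiation design margin below are made-up example values, not vendor data.
TID_RATING_KRAD = 50.0

def within_tid_budget(dose_rate_krad_per_year, mission_years, margin=2.0):
    """Return True if the margined lifetime dose stays within the TID rating."""
    lifetime_dose = dose_rate_krad_per_year * mission_years * margin
    return lifetime_dose <= TID_RATING_KRAD

# e.g. a hypothetical orbit accumulating 1.5 krad/year over a 15-year mission:
ok = within_tid_budget(1.5, 15)   # 1.5 * 15 * 2 = 45 krad <= 50 -> within budget
```

A harsher hypothetical environment at 2 krad/year over the same mission would exceed the margined budget and call for additional shielding or a shorter mission profile.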

NG-ULTRA integrates a full SoC based on a quad-core Arm® Cortex®-R52 and provides high computational capability (537k LUTs + 32 Mb RAM) to address the most complex onboard computer requirements.

Its streamlined architecture drastically reduces PCB complexity and system mass—two of the most critical constraints in space design. By minimizing the component count, the NG-ULTRA simultaneously lowers total power consumption and project costs while increasing overall system reliability.

In addition, the SRAM-based architecture of the NG-ULTRA enables an adaptive hardware approach, allowing for unlimited on-orbit reconfiguration. This “hardware-as-software” flexibility allows operators to update functionality post-launch, adapt to evolving communication standards, or optimize the chip for different mission phases. The NG-ULTRA thus provides a future-proof platform that extends the operational relevance of assets long after they leave the launchpad.

To facilitate adoption, NG-ULTRA is also available as an evaluation kit — a complete prototyping platform that allows developers to rapidly validate performance and interfaces, reduce integration risks, and accelerate software and onboard logic development prior to flight-board production.

About NanoXplore

NanoXplore is a French fabless company designing radiation-hardened FPGA components for high-reliability environments, specifically space and avionics. The company recently launched the NG-ULTRA, the world’s most advanced radiation-hardened FPGA SoC. With an international presence, NanoXplore is the European leader in the design and development of SoC FPGA technologies and a key partner to the major players in the aerospace sector.

About STMicroelectronics

At ST, we are 50,000 creators and makers of semiconductor technologies mastering the semiconductor supply chain with state-of-the-art manufacturing facilities. An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of cloud-connected autonomous things. We are on track to be carbon neutral in all direct and indirect emissions (scopes 1 and 2), product transportation, business travel, and employee commuting emissions (our scope 3 focus), and to achieve our 100% renewable electricity sourcing goal by the end of 2027. Further information can be found at www.st.com.

Qualcomm’s IE‑IoT Expansion Is Complete: Edge AI Unleashed for Developers, Enterprises & OEMs
https://www.edge-ai-vision.com/2026/01/qualcomms-ie%e2%80%91iot-expansion-is-complete-edge-ai-unleashed-for-developers-enterprises-oems/
Wed, 07 Jan 2026 15:00:23 +0000

The post Qualcomm’s IE‑IoT Expansion Is Complete: Edge AI Unleashed for Developers, Enterprises & OEMs appeared first on Edge AI and Vision Alliance.

Key Takeaways:
  • An expanded set of processors, software, services, and developer tools, including offerings and technologies from the five acquisitions of Augentix, Arduino, Edge Impulse, Focus.AI, and Foundries.io, positions the Company to help meet edge computing and AI needs for customers across virtually all verticals.
  • Completed acquisition of Augentix, a leader in mass-market image processors, extends Qualcomm Technologies’ ability to provide system-on-chips tailored for intelligent IP cameras and vision systems.
  • New Qualcomm Dragonwing™ Q‑7790 and Q‑8750 processors power security-focused on‑device AI across drones, smart cameras & industrial vision, AI TVs/media hubs, and video collaboration systems.

Las Vegas, NV, January 5, 2026 — At CES, Qualcomm Technologies, Inc. today announced its expanded IoT product portfolio, including new Qualcomm Dragonwing™ Q-series processors. Complemented by new services and developer offerings and fueled by the acquisitions of Augentix, Arduino, Edge Impulse, Focus.AI, and Foundries.io in the last 18 months, Qualcomm Technologies is now positioned to address the needs of a much wider spectrum of IoT customers ranging from global enterprises to independent local developers, with the vision to become the provider of choice for core edge compute and AI technology across all industrial and embedded verticals.

“At Qualcomm Technologies, we’re not just introducing new products—we’re launching a comprehensive new approach to help organizations of virtually all sizes, across virtually all verticals, reap the benefits of AI and edge compute in their pursuit for efficiency and new opportunities,” said Nakul Duggal, executive vice president and group general manager, automotive, industrial and embedded IoT, and robotics, Qualcomm Technologies, Inc. “Our expanded Industrial and Embedded IoT portfolio, combined with a robust developer ecosystem, positions us as the ultimate platform for building intelligent, connected business solutions that scale.”

Empowering Developers Across the Revamped Qualcomm® Industrial and Embedded IoT Portfolio

Qualcomm Technologies is redefining its Industrial and Embedded IoT (IE-IoT) business to become a leading provider of edge compute and AI solutions across industrial and embedded sectors. Through an expanded portfolio of advanced processors, software, services, and developer tools, supported by five strategic acquisitions, the Company now offers comprehensive solutions for rapid prototyping, scalable deployment, and superior AI integration at the edge. This transformation introduces distinct product lines with competitive roadmaps and a unified software architecture supporting Linux, Windows, and Android, enabling deployment-ready solutions for multiple verticals. Combined with a superior partner ecosystem and accessible developer platforms like Arduino, Edge Impulse, and Foundries.io, Qualcomm Technologies is lowering barriers to entry and accelerating innovation from prototype to commercialization.

By integrating Arduino and enhancing developer accessibility through Edge Impulse and Foundries.io, Qualcomm Technologies is empowering one of the world’s largest developer communities to innovate faster and more securely. This unified ecosystem merges Arduino’s open-source simplicity with Qualcomm Technologies’ advanced AI, connectivity, and security technologies, while Edge Impulse and Foundries.io provide powerful machine learning and security-focused deployment tools. Together, these resources simplify development, accelerate prototyping, and enable security-rich, scalable solutions, making Qualcomm Technologies’ developer tools more accessible than ever and setting the stage for significant industry expansion and revenue growth.

Dragonwing Q-8750: Advanced On‑Device AI for Drones, Media Hubs, and Multi-Angle Vision Systems

The Dragonwing Q-8750 is Qualcomm Technologies’ most advanced IoT processor to date, engineered for high-performance edge computing and immersive experiences. Its AI engine achieves 77 TOPS with support for INT4/8/16 and FP16 precision, enabling real-time inference and even on-device large language models up to 11 billion parameters, eliminating cloud dependency for critical applications. The processor’s advanced camera architecture supports up to 12 physical cameras and triple 48 MP ISPs, making it ideal for drones, media hubs, and multi-angle vision systems.
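To see why the supported precisions matter for on-device LLMs, a back-of-the-envelope weight-memory estimate for an 11-billion-parameter model is sketched below. The calculation is illustrative, not a Qualcomm specification, and ignores activations, KV cache, and runtime overhead, so real memory use would be higher.

```python
# Rough weight-only memory footprint of an 11B-parameter model at the
# precisions listed for the Q-8750's AI engine. Illustrative arithmetic only;
# activations, KV cache, and runtime overhead are ignored.
PARAMS = 11e9
BYTES_PER_PARAM = {"INT4": 0.5, "INT8": 1.0, "INT16": 2.0, "FP16": 2.0}

for precision, nbytes in BYTES_PER_PARAM.items():
    gb = PARAMS * nbytes / 1e9
    print(f"{precision}: ~{gb:.1f} GB of weights")
# INT4 quantization brings the weights down to ~5.5 GB (vs ~22 GB at FP16),
# which is what makes an 11B model plausible on an embedded device.
```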

Dragonwing Q-7790: Elevating Everyday Devices with AI and Immersive Experiences in Smart Cameras and AI TVs

The Dragonwing Q-7790 brings a new level of intelligence and responsiveness to consumer and industrial IoT devices. With 24 TOPS of on-device AI performance, the Dragonwing Q-7790 enables advanced inference for applications like smart cameras, AI TVs, and collaboration systems, without relying on the cloud. Its multimedia capabilities include dual 4K60 display support, 4K60 encoding, and 4K120 video decoding, including AV1 hardware decode for premium picture quality. Superior security features such as Total Management Engine, Secure Boot, and Qualcomm® Trusted Execution Environment make it ideal for environments where data integrity is paramount.

Expanded Camera Processors Portfolio

Qualcomm Technologies has completed its acquisition of Augentix Inc., a leading Taiwanese semiconductor company specializing in smart imaging and low-power vision processing chips for IP security cameras, smart home devices, and other connected video solutions. This accelerates Qualcomm Technologies’ vision for security-focused, power-efficient edge AI across smart cameras and industrial IoT, integrating Augentix’s advanced multimedia signal processing and high-resolution imaging into Qualcomm Technologies’ product roadmap. The result will be smarter, more secure IoT devices with sharper images, faster performance, and lower energy use, strengthening Qualcomm Technologies’ position in the edge video industry.

Qualcomm Insight Platform: Unlocking Actionable Video Security Intelligence at the Edge

The Qualcomm® Insight Platform is a unified, native AI-powered video intelligence solution delivered as a service for modern security and operations teams. The Insight Platform uses edge AI with an LLM-based conversational engine to turn video into a real-time, profile-aware data plane. Customers can modernize brownfield deployments using Qualcomm® Edge AI boxes or AI-enabled cameras, enabling use cases from enterprise security to protecting critical infrastructure. With flexible hardware options, profile-based querying, and real-time video analytics, the Insight Platform is designed to scale to virtually any industry and use case.

Furthermore, with the acquisition of Augentix, the Qualcomm Insight Platform is poised to offer a broader and more flexible portfolio of smart cameras, empowering system designers to optimize camera selection for every zone while maintaining unified control and cost efficiency. For more information, please visit the Qualcomm Insight Platform Solutions page.

Qualcomm Terrestrial Positioning Service Delivers Accurate and More Precise Positioning Across IoT

Devices across IoT verticals often rely on reliable, accurate, and precise positioning to deliver their services, whether locating devices in open-air settings, underground, offline, or in emergencies. Qualcomm® Terrestrial Positioning Service uses a broad terrestrial signal network of over 9 billion Wi-Fi access points and more than 100 million cellular towers, along with Bluetooth® Low Energy (BLE) beacon-based positioning, to deliver over 6 trillion location results annually without needing GNSS. It can also complement satellite-based positioning systems for enhanced location accuracy and a faster time-to-fix.
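Terrestrial positioning from known Wi-Fi, cellular, or BLE transmitters is often approximated by weighting transmitter locations by estimated proximity. The toy sketch below shows a generic weighted-centroid method, not Qualcomm's actual algorithm; the beacon coordinates and distance estimates are made-up values.

```python
# Toy illustration of beacon-based positioning (generic weighted centroid,
# NOT Qualcomm's algorithm): estimate a device position from known beacon
# locations, weighting each by inverse squared estimated distance.
def weighted_centroid(beacons):
    """beacons: list of ((x, y) beacon position, estimated distance in m)."""
    wx = wy = wsum = 0.0
    for (x, y), dist in beacons:
        w = 1.0 / (dist * dist)   # nearer beacons count more
        wx += w * x
        wy += w * y
        wsum += w
    return wx / wsum, wy / wsum

# Hypothetical layout: three beacons, all estimated equally far away,
# so the estimate falls at their plain centroid.
x, y = weighted_centroid([((0, 0), 10), ((100, 0), 10), ((50, 80), 10)])
```

In practice the distance estimates would come from signal-strength models, and production systems fuse many more signals, but the inverse-distance weighting captures the basic idea.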

Edge Impulse Integration on Dragonwing AI On-Prem Appliance Solution Enables Security-Focused, Scalable Edge AI Deployment

Edge Impulse is now fully integrated into the Qualcomm Dragonwing™ AI On-Prem Appliance. This all-in-one solution enables customers to run world-class inference and training in a sovereign, highly security-focused package, supporting both private networks and fully offline operation. Backed by the Edge Impulse platform, this new offering supports efficient inference for models up to 120B parameters. Users can manage their entire data pipeline, including AI-based synthetic data generation and labeling, directly on the appliance. With innovative resource allocation, the system acts as a Physical AI Agent, capable of handling MLOps training, optimization, and localized model cascades for physical AI use cases, making it the ideal deployment for high-security environments.

For more information and to experience a broad selection of IoT demonstrations powered by Dragonwing, visit the Qualcomm Booth #5001 at CES 2026 from January 6 to 10, or go to qualcomm.com/iot.

About Qualcomm

Qualcomm relentlessly innovates to deliver intelligent computing everywhere, helping the world tackle some of its most important challenges. Building on our 40 years of technology leadership in creating era-defining breakthroughs, we deliver a broad portfolio of solutions built with our leading-edge AI, high-performance, low-power computing, and unrivaled connectivity. Our Snapdragon® platforms power extraordinary consumer experiences, and our Qualcomm Dragonwing™ products empower businesses and industries to scale to new heights. Together with our ecosystem partners, we enable next-generation digital transformation to enrich lives, improve businesses, and advance societies. At Qualcomm, we are engineering human progress.

Qualcomm Incorporated includes our licensing business, QTL, and the vast majority of our patent portfolio. Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of our engineering and research and development functions and substantially all of our products and services businesses, including our QCT semiconductor business. Snapdragon and Qualcomm branded products are products of Qualcomm Technologies, Inc. and/or its subsidiaries. Qualcomm patents are licensed by Qualcomm Incorporated.

Drones Market 2026-2036: Technologies, Markets, and Opportunities
https://www.edge-ai-vision.com/2025/12/drones-market-2026-2036-technologies-markets-and-opportunities/
Mon, 22 Dec 2025 09:00:09 +0000

The post Drones Market 2026-2036: Technologies, Markets, and Opportunities appeared first on Edge AI and Vision Alliance.

This article was originally published at IDTechEx’s website. It is reprinted here with the permission of IDTechEx.

Global Drone Market Set to Reach US$147.8 Billion by 2036, Driven by Commercial Expansion, Regulatory Maturity, and Sensor Proliferation

Over the past decade, drones have moved from experimental tools into critical infrastructure across agriculture, logistics, energy, security, and public-sector operations. By 2036, the global drone market, spanning both commercial and consumer platforms, is forecast by IDTechEx to reach US$147.8 billion, growing from US$69 billion in 2026, with a CAGR of 7.9%. Commercial deployments are accelerating rapidly, with unit shipments expected to surpass 9 million in 2036. This growth reflects increasing regulatory clarity, maturing technology stacks, falling hardware costs, and the transition toward autonomous, data-driven operations.
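The quoted growth rate can be sanity-checked from the endpoint figures: a compound annual growth rate is (end/start)^(1/years) − 1, which for these numbers works out to roughly 7.9%.

```python
# Sanity check of the forecast: the CAGR implied by growing from US$69B
# (2026) to US$147.8B (2036) over ten years.
start, end, years = 69.0, 147.8, 10
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~7.9%, matching the quoted figure
```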

Global Drone Market Revenue Forecast (2026-2036). Source: IDTechEx

Agriculture enters the era of large-scale digital farming

Agricultural drones have evolved from early trials to full commercial maturity, especially in China, the US, and Southeast Asia. Core applications such as spraying, seeding, and crop monitoring have become profitable and widely adopted. Multirotor platforms still dominate, but fixed-wing and hybrid VTOL (Vertical Take-Off and Landing) drones are gaining share for large-area farmland mapping and long-range autonomous missions.

In 2025, more than 30% of large farms worldwide are estimated to be using drones for field operations. Integration of AI vision, multispectral imaging, and precision analytics enables a data-centric farming model that continues to expand. Future growth will rely heavily on linking drone data with smart farming ecosystems and automated agronomic decisions.

Comparison of Battery-Endurance-Payload of Agricultural Spraying Drones. Bubble size indicates payload capacity: larger bubbles represent drones with higher liquid-carrying capacity. Colors denote regions of origin: blue = China, green = United States, orange = Europe. Source: IDTechEx

Inspection and maintenance becomes the fastest-growing segment

Energy, utilities, and infrastructure operators are rapidly shifting toward automated drone-based inspection of wind turbines, powerlines, pipelines, and oil & gas assets. Equipped with LiDAR, thermal imaging, and AI-powered defect detection, drones are replacing costly and hazardous manual inspections.

From 2025 onward, operators are expected to increasingly adopt fully automated workflows, including drone-in-a-box systems, remote fleet management, and AI cloud analytics. Inspection & maintenance is projected to exceed 25% of all commercial drone revenue by 2030, surpassing agriculture as the leading segment.

Delivery drones mature from trials to regional commercialization

Despite regulatory and logistical challenges, drone delivery is now gaining real commercial traction. Leading companies in the US, Europe, and China are expanding last-mile delivery for e-commerce, food, and medical transport, while mid-range logistics drones are emerging for remote and island supply routes.

Industry progress in automated loading, cold-chain drone logistics, and U-space/UTM (Unmanned Traffic Management) frameworks is paving the way for scaled operations. The long-term trajectory of delivery drones will depend heavily on BVLOS (Beyond Visual Line of Sight) approvals and national UTM deployment.

Security, military, and public safety maintain strong momentum

Government and law enforcement agencies are adopting drones for border patrol, surveillance, traffic management, crowd monitoring, and emergency response.

Hybrid fixed-wing VTOL drones enable long-endurance operations over large areas, while AI-based video analytics enhance situational awareness. Public safety is expected to remain a stable and steadily expanding segment through 2036.

Military drones remain the largest revenue contributor

The military drone sector continues to lead the total drone market in absolute revenue. Since 2022, regional conflicts have accelerated demand for reconnaissance drones, medium-range tactical drones, and loitering munitions.

Armed forces are also moving toward Manned-Unmanned Teaming (MUM-T) concepts, integrating drones with aircraft and armored vehicles. While dual-use technologies are increasingly repurposed for defense, the core military drone segment will continue to be highly profitable and strategically essential.

Disaster response continues to rely on drone capabilities

Drones equipped with thermal, optical, and acoustic sensors play a critical role in night-time search missions, earthquake rescue, wildfire monitoring, and post-disaster assessment.

Advances in multi-drone collaboration and AI-based geolocation algorithms have significantly improved operational efficiency. Though smaller in absolute revenue, this segment has strong government backing and consistent long-term growth.

Global regulations move toward harmonization and risk-based frameworks

Drone regulation is increasingly aligned around risk-based, tiered certification systems. The US (Part 107), EU (C0-C6), UK (CAP722), and China have all established clearer pathways for commercial operations, especially for BVLOS.

Common regulatory themes include:

  • Maximum flight heights around 120 m
  • Mandatory registration and pilot certification
  • Stricter rules for BVLOS and operations over people
  • Airspace access via automated or digital authorization

North America and the EU lead in harmonized frameworks, while Asia-Pacific, Latin America, and MENA remain more fragmented.

Sensor proliferation reshapes drone payload configurations

From 2025 to 2036, commercial drone shipments are expected to grow 2.3×, but sensor shipments grow 4×, illustrating a major shift toward higher sensor density and more advanced autonomy.
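The implied change in sensor density follows directly from those two multipliers: if drone shipments grow 2.3× while sensor shipments grow 4×, the average sensor count per drone grows by about 4/2.3 ≈ 1.7×.

```python
# Implied growth in average sensors per drone, 2025 -> 2036,
# derived from the two shipment multipliers quoted above.
shipment_growth = 2.3   # commercial drone shipment multiplier
sensor_growth = 4.0     # drone sensor shipment multiplier
density_growth = sensor_growth / shipment_growth
print(f"Sensors per drone grow ~{density_growth:.2f}x")   # ~1.74x
# e.g. a platform carrying 7 sensors in 2025 would carry ~12 by 2036,
# consistent with the 10-15 sensors per drone figure for industrial platforms.
```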

By 2036, many industrial and BVLOS drones are expected to exceed 10-15 sensors per drone, driven by:

  • Multi-camera vision systems
  • Higher-performance LiDAR and radar
  • Ultrasonic and pressure sensors for low-altitude control
  • Barometric altimeters
  • Multi-IMU redundancy for high-reliability missions

A fully rebuilt 2026-2036 forecast from IDTechEx

This report offers a comprehensive overview of the global drone industry’s progress across consumer, commercial, and defense sectors, including the regulatory constraints that shape operations and the deployment maturity in different regions. It also examines the full range of sensing and payload configurations used across major applications, from agriculture and inspection to logistics and public safety, explaining how different cost structures and mission requirements drive platform choices. Additionally, it includes a detailed list of representative commercial drone models, their technical specifications, sensor suites, pricing ranges, and market positioning, together with a fully updated 2026-2036 forecast covering revenue, unit shipments, and sensor integration trends.

IDTechEx provides a completely updated ten-year drone market forecast, including:

  • Global revenue projections for consumer & commercial drones
  • Unit shipments by fixed-wing vs rotary platforms
  • Scenario-based forecasts across 8 key commercial applications
  • Detailed sensor-per-drone modeling
  • Drone sensor market size forecasts (2026-2036)

Key Aspects

This report provides critical market intelligence about the global drone industry, covering consumer, commercial, and defense platforms and all major application sectors. This includes:

A review of the context, technology, and regulation behind drone systems:

  • History and context for the global drone market and each major application sector
  • General overview of key drone platform types (multirotor, fixed-wing, hybrid VTOL) and autonomy / navigation stacks
  • Overall look at technology trends in payloads and sensor integration, including multi-sensor configurations for BVLOS and industrial use
  • Review of global regulatory developments and risk-based frameworks shaping commercial drone operations

Full market characterization for each major drone application sector:

  • Agricultural drones, including spraying, seeding, crop monitoring, and integration with digital farming ecosystems
  • Inspection and maintenance drones for energy, utilities, and infrastructure assets, including drone-in-a-box and automated workflows
  • Delivery drones, from last-mile services to mid-range logistics and medical transport, and their UTM / U-space requirements
  • Security, public-safety, and disaster-response drones, including long-endurance hybrid VTOL platforms and AI-driven situational awareness
  • Military and defense drones, including tactical systems, reconnaissance platforms, loitering munitions, and Manned-Unmanned Teaming concepts

Market analysis throughout:

  • Reviews of drone industry players throughout each key sector, including representative commercial models, sensor suites, payload capabilities, and pricing ranges
  • Historic drone market data and deployment trends, together with a fully rebuilt 2026-2036 forecast for global drone revenue and unit shipments
  • Detailed 2026-2036 forecasts for the drone sensor market, including sensor-per-drone modeling, shipment volumes, and revenue projections
Report Metrics

  • Historic Data: 2021 – 2025
  • CAGR: The global drone market is forecast to reach US$143 billion by 2036, growing at a CAGR of 10%.
  • Forecast Period: 2026 – 2036
  • Forecast Units: Volume (units), Revenue (USD, millions)
  • Regions Covered: Worldwide, Brazil, Europe, China, United Kingdom, United States
  • Segments Covered: Commercial drones, Consumer drones, Fixed-wing UAVs, Rotary UAVs, Agriculture drones, Inspection drones, Logistics drones, Military drones, Search-and-rescue drones, Drone sensor technologies (IMU, cameras, LiDAR, radar, pressure, ultrasonic, altimeters), Autonomy technologies (SLAM, FCU, localisation, swarm control)

Analyst access from IDTechEx

All report purchases include up to 30 minutes of telephone time with an expert analyst, who will help you link key findings in the report to the business issues you’re addressing. This must be used within three months of purchasing the report.

Further information

If you have any questions about this report, please do not hesitate to contact our report team at research@IDTechEx.com or call one of our sales managers:

AMERICAS (USA): +1 617 577 7890
ASIA (Japan and Korea): +81 3 3216 7209
ASIA: +44 1223 810259
EUROPE (UK): +44 1223 812300

Technology Analyst, IDTechEx
Senior Technology Analyst, IDTechEx

Overcoming the Skies: Navigating the Challenges of Drone Autonomy
https://www.edge-ai-vision.com/2025/12/overcoming-the-skies-navigating-the-challenges-of-drone-autonomy/
Thu, 04 Dec 2025 09:00:44 +0000

The post Overcoming the Skies: Navigating the Challenges of Drone Autonomy appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Inuitive’s website. It is reprinted here with the permission of Inuitive.

From early military prototypes to today’s complex commercial operations, drones have evolved from experimental aircraft into essential tools across industries. Since the FAA issued its first commercial permit in 2006, applications have rapidly expanded—from disaster relief and infrastructure inspection to delivery logistics and environmental monitoring. Yet behind every successful drone deployment lies a sophisticated suite of technologies that must operate reliably, safely, and autonomously—even under challenging conditions. In this article, we explore the key challenges drone manufacturers must overcome to realize the full potential of autonomous flight—and how advanced processing platforms like Inuitive’s are helping lead the way.

The Need for Autonomy in a Dynamic Environment

Commercial drones today are expected to operate in increasingly complex scenarios: flying beyond visual line of sight, navigating dense urban environments, or reacting to unforeseen changes mid-flight. At the heart of this evolution is autonomy—the drone’s ability to understand, interpret, and respond to its surroundings in real time.

Achieving this requires addressing several core technological challenges:

  • Operating Without GPS: Navigating the Unknown
    In environments where GPS is unreliable or unavailable—such as urban canyons, indoor settings, or areas subject to intentional jamming—drones must rely on other means of localization. Simultaneous Localization and Mapping (SLAM) technology, fused with inertial and visual data, plays a crucial role in maintaining stable and accurate navigation in these GPS-denied environments. High-performance vision processors enable SLAM to function in real time, empowering drones to map their surroundings while simultaneously determining their own position within them.
  • Detect and Avoid (DAA): Ensuring Safe Coexistence in the Sky
    Drones increasingly operate in shared airspace with manned aircraft and other UAVs. For safety and regulatory compliance, they must be able to detect and avoid obstacles—whether stationary or moving—without human intervention. DAA systems require advanced 3D sensing, depth perception, and real-time scene understanding. This functionality must be compact, power-efficient, and accurate, even at high speeds or in varying weather conditions.
  • Intelligent Landing Area Scanning: Precision Where It Counts
    Landing is a critical phase of flight. Whether returning to base or delivering a payload, drones must assess the landing area for flatness, stability, and potential hazards. Autonomous landing requires the system to analyze ground features, detect obstacles, and adapt dynamically to changes such as moving people or vehicles. Accurate, low-latency sensing combined with onboard AI processing is essential to make these decisions quickly and safely.
  • Power and Processing Efficiency: Doing More with Less
    Drones are inherently constrained by size, weight, and power (SWaP). The onboard systems that enable autonomy—3D cameras, sensors, and compute units—must deliver high performance while maintaining minimal power draw and thermal output. This is where dedicated vision processors come into play. Platforms like Inuitive’s NU4000/NU4100 integrate stereo depth sensing, SLAM, and AI acceleration in a compact, power-efficient SoC, enabling autonomy without sacrificing flight time or payload capacity.
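The interplay between inertial dead reckoning and visual fixes described under "Operating Without GPS" can be illustrated with a deliberately simplified, hypothetical sketch (a 1-D complementary filter of our own, not Inuitive's implementation): integrating noisy IMU readings alone drifts without bound, while blending in lower-rate visual position fixes keeps the estimate bounded.

```python
import numpy as np

# Hypothetical 1-D illustration of visual-inertial fusion for GPS-denied
# navigation. All constants (rates, noise levels, blend factor) are made up
# for demonstration; real SLAM pipelines estimate full 6-DoF pose.

rng = np.random.default_rng(0)
dt, steps = 0.01, 1000             # 100 Hz IMU samples for 10 s
true_pos = true_vel = 0.0
est_pos = est_vel = 0.0
alpha = 0.3                        # how strongly a visual fix corrects position

for k in range(steps):
    accel = np.sin(k * dt)                    # true acceleration profile
    meas_accel = accel + rng.normal(0, 0.5)   # noisy IMU reading
    true_vel += accel * dt
    true_pos += true_vel * dt
    est_vel += meas_accel * dt                # inertial dead reckoning drifts
    est_pos += est_vel * dt
    if k % 10 == 0:                           # 10 Hz visual (SLAM-style) fix
        vis_pos = true_pos + rng.normal(0, 0.02)
        est_pos += alpha * (vis_pos - est_pos)

error = abs(est_pos - true_pos)
print(f"final position error with fusion: {error:.3f} m")
```

Without the visual correction step, the doubly integrated IMU noise would make the position drift grow steadily; the periodic fix keeps the error small, which is the essence of fusing SLAM output with inertial data.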

A Shift Toward Purpose-Built Innovation

In the past, drone manufacturers often adapted off-the-shelf components developed for unrelated applications. Today, a growing ecosystem of technologies is being purpose-built for autonomous flight. This shift enables better integration, higher reliability, and real-time decision-making at the edge.

Enabling the Next Generation of Drones

The commercial drone sector continues to push boundaries—demanding smarter, safer, and more autonomous solutions. As drones take on critical roles in delivery, inspection, defense, and public safety, the ability to perceive and act in real time becomes a defining capability.

At Inuitive, we’re committed to enabling the autonomy of tomorrow’s drones through edge-AI processing, low-power 3D vision, and real-time scene understanding. The challenges of flight may be complex—but with the right technology, drones can navigate them with intelligence and precision.

The post Overcoming the Skies: Navigating the Challenges of Drone Autonomy appeared first on Edge AI and Vision Alliance.

]]>
FRAMOS Unveils Three Specialized Camera Modules for UAV and Drone Applications https://www.edge-ai-vision.com/2025/10/framos-unveils-three-specialized-camera-modules-for-uav-and-drone-applications/ Tue, 21 Oct 2025 14:00:00 +0000 https://www.edge-ai-vision.com/?p=55674 Munich, Bavaria, Germany – October 21st, 2025 – FRAMOS, the world’s leading vision expert, unveils three new camera modules specially developed for use in drones and unmanned aerial vehicles (UAVs). These modules feature state-of-the-art image sensors from SONY, which offer exceptional precision, high speed, and energy efficiency, creating the ideal conditions for demanding vision systems. […]

The post FRAMOS Unveils Three Specialized Camera Modules for UAV and Drone Applications appeared first on Edge AI and Vision Alliance.

]]>
Munich, Bavaria, Germany – October 21st, 2025 – FRAMOS, the world’s leading vision expert, unveils three new camera modules specially developed for use in drones and unmanned aerial vehicles (UAVs). These modules feature state-of-the-art image sensors from SONY, which offer exceptional precision, high speed, and energy efficiency, creating the ideal conditions for demanding vision systems.

The new FRAMOS UAV camera modules are characterized by outstanding optical properties, high image quality, and modular designs that enable easy integration into a wide variety of drone platforms. They are preset for their intended purposes and are therefore ideally suited for applications such as precise navigation, FPV (First Person View) control, industrial inspection, mapping, surveying, agriculture, and security solutions.

The FSM:UAV-FPV is the ideal module for immersive human-controlled first-person view control of drones. It is equipped with the high-quality SONY Global Shutter sensor IMX900 and offers a wide horizontal field of view of 103°. This enables seamless live transmission with clear, rolling-shutter-artifact-free images for maximum situational awareness and precise control in real time.

The FSM:UAV-NAV is used for autonomous navigation of UAVs. It also uses SONY’s IMX900 global shutter sensor, but with a horizontal 76° field of view to provide reliable visual navigation data. The FSM:UAV-NAV is designed for visual simultaneous localization and mapping (VSLAM) with UAVs and autonomous flight control. The module enables precise, distortion-free imaging in real-world conditions, including high-speed motion and navigation in low-light conditions.

The FSM:UAV-PAY is a payload camera module designed for high-resolution inspection and mapping tasks. It features a 100° horizontal field of view and enables precise image data for applications such as agriculture and security inspections. This 4K payload camera offers excellent image quality for medium to long-range use cases where clarity and precision are important.

All three modules support the PixelMate™ interface standard (MIPI CSI-2) and feature near-infrared (NIR) sensitivity.

“With our new drone camera modules, we offer our customers state-of-the-art technologies that are specifically tailored to the high demands of aerial image processing,” explains Ugur Kilic, Director of Market Strategy and Business Development at FRAMOS. “Our comprehensive solutions help developers quickly bring market-ready products to market while opening up new application possibilities.”

FRAMOS also supports its customers with complementary solutions tailored to their specific applications, such as ISP tuning, thermal stress management, and optical focusing, as well as open-source reference designs and customized consulting solutions in the imaging field. This secures the company’s position as a reliable partner for forward-looking UAV vision systems.

For more information on the new UAV camera modules, visit: https://framos.com/applications/uav-drones/

About FRAMOS

FRAMOS is the leading camera module design and manufacturing expert.

Founded in Munich, Germany, over 40 years ago, FRAMOS has empowered companies worldwide to make drones fly safely, enable robots to see and think, make diagnostics faster and more affordable, boost athletic performance with AI, and countless other innovations.

FRAMOS stands for best-in-class image quality, an open-source approach, and fast, easy integration – helping customers launch products quicker and stay ahead of the competition.

With strong expertise in electronics design, optics, and optical assembly, camera calibration, image tuning, and software integration, FRAMOS provides off-the-shelf camera modules and customized embedded vision solutions. At our advanced technology campus strategically located in the heart of Europe, we combine R&D and manufacturing services under one roof and provide scalable manufacturing capacities up to millions of pieces.

Vision AI starts here – www.framos.com and stay informed on LinkedIn, Facebook, Instagram, YouTube, and X.

The post FRAMOS Unveils Three Specialized Camera Modules for UAV and Drone Applications appeared first on Edge AI and Vision Alliance.

]]>
“Vision-based Aircraft Functions for Autonomous Flight Systems,” a Presentation from Acubed (an Airbus Innovation Center) https://www.edge-ai-vision.com/2025/08/vision-based-aircraft-functions-for-autonomous-flight-systems-a-presentation-from-acubed-an-airbus-innovation-center/ Thu, 28 Aug 2025 08:00:16 +0000 https://www.edge-ai-vision.com/?p=54990 Arne Stoschek, Vice President of AI and Autonomy at Acubed (an Airbus innovation center), presents the “Vision-based Aircraft Functions for Autonomous Flight Systems” tutorial at the May 2025 Embedded Vision Summit. At Acubed, an Airbus innovation center, the mission is to accelerate AI and autonomy in aerospace. Stoschek gives an… “Vision-based Aircraft Functions for Autonomous […]

The post “Vision-based Aircraft Functions for Autonomous Flight Systems,” a Presentation from Acubed (an Airbus Innovation Center) appeared first on Edge AI and Vision Alliance.

]]>
Arne Stoschek, Vice President of AI and Autonomy at Acubed (an Airbus innovation center), presents the “Vision-based Aircraft Functions for Autonomous Flight Systems” tutorial at the May 2025 Embedded Vision Summit. At Acubed, an Airbus innovation center, the mission is to accelerate AI and autonomy in aerospace. Stoschek gives an…

“Vision-based Aircraft Functions for Autonomous Flight Systems,” a Presentation from Acubed (an Airbus Innovation Center)


The post “Vision-based Aircraft Functions for Autonomous Flight Systems,” a Presentation from Acubed (an Airbus Innovation Center) appeared first on Edge AI and Vision Alliance.

]]>
Key Drone Terminology: A Quick Guide for Beginners https://www.edge-ai-vision.com/2025/05/key-drone-terminology-a-quick-guide-for-beginners/ Fri, 23 May 2025 08:00:06 +0000 https://www.edge-ai-vision.com/?p=53789 This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity. As drone technology becomes more accessible and widespread, it’s important to get familiar with the basic terms that define how drones work and how we control them. Whether you’re a hobbyist, a content […]

The post Key Drone Terminology: A Quick Guide for Beginners appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Namuga Vision Connectivity’s website. It is reprinted here with the permission of Namuga Vision Connectivity.

As drone technology becomes more accessible and widespread, it’s important to get familiar with the basic terms that define how drones work and how we control them. Whether you’re a hobbyist, a content creator, or someone working in industrial drone applications, understanding these concepts will help you better navigate the drone ecosystem.

In this blog, we break down essential drone terminology into intuitive categories and explain each term, supported by easy-to-follow infographics. Let’s get started!

Control-Related Terms

This group of terms refers to how we communicate with and control a drone.

  • Bind: The process of linking a drone to its controller.
  • Controller: The handheld device or app used to fly and navigate the drone.
  • First Person View (FPV): A perspective that lets the pilot see from the drone’s viewpoint, often via live video.
  • Return to Home (RTH): A safety feature where the drone automatically returns to its take-off location when the signal is lost or battery is low.
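The Return to Home behaviour above boils down to a simple decision rule. The sketch below is hypothetical (the thresholds and names are ours, not from any specific flight controller):

```python
# Hypothetical RTH trigger: return when the control link drops or the
# battery falls to a reserve threshold. Real controllers also account for
# distance home, wind, and minimum safe altitude.

def should_return_home(link_ok: bool, battery_pct: float,
                       reserve_pct: float = 20.0) -> bool:
    """Trigger RTH on signal loss or low battery."""
    return (not link_ok) or (battery_pct <= reserve_pct)

print(should_return_home(link_ok=True, battery_pct=80))   # False: keep flying
print(should_return_home(link_ok=False, battery_pct=80))  # True: link lost
print(should_return_home(link_ok=True, battery_pct=15))   # True: low battery
```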

Control

Navigation & Positioning

Drones rely on various sensors and systems to understand their surroundings and maintain their position.

  • GPS: Global Positioning System – helps the drone understand where it is on the map.
  • Altitude: The height of the drone above ground level.
  • Yaw: Rotation of the drone left or right around its vertical axis.
  • Throttle: Controls how much power is sent to the motors, affecting height and speed.
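Of these, yaw is the easiest to pin down mathematically: it is a rotation of the drone's horizontal heading about the vertical axis, leaving altitude (which throttle governs) untouched. A small illustrative example of our own, not from the original post:

```python
import math

# Yaw rotates the horizontal heading vector (x, y) by an angle about the
# drone's vertical axis; altitude, controlled via throttle, is unaffected.

def apply_yaw(heading_x: float, heading_y: float, yaw_deg: float):
    """Rotate the heading vector counter-clockwise by yaw_deg degrees."""
    r = math.radians(yaw_deg)
    return (heading_x * math.cos(r) - heading_y * math.sin(r),
            heading_x * math.sin(r) + heading_y * math.cos(r))

# Facing "north" (0, 1), a 90-degree yaw turns the drone to face "west".
hx, hy = apply_yaw(0.0, 1.0, 90.0)
print(round(hx, 6), round(hy, 6))  # -1.0 0.0
```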

Flight Controls

Geospatial & Sensor Technology

These terms are common in industrial, agricultural, or mapping-related drone applications.

  • Geofencing: A virtual boundary that restricts drone flight to a predefined area.
  • Ground Control Station (GCS): A computer system or tablet that manages drone flight remotely.
  • Inertial Navigation System (INS): A navigation method using internal motion sensors when GPS is unavailable.
  • LiDAR: Light Detection and Ranging – a sensor that maps surroundings using laser pulses.
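Geofencing ultimately reduces to a point-in-polygon test: before accepting a position or waypoint, check that it lies inside the permitted boundary. Below is a generic ray-casting sketch (our own illustration, not any vendor's geofencing API):

```python
# Generic ray-casting point-in-polygon test for a geofence boundary.
# A horizontal ray from the point crosses the fence an odd number of
# times if and only if the point is inside.

def inside_geofence(x: float, y: float, fence: list) -> bool:
    """fence is a list of (x, y) vertices of a simple polygon."""
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this fence edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(inside_geofence(5, 5, square))    # True: inside the fence
print(inside_geofence(15, 5, square))   # False: outside the fence
```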

Navigation Systems

Final Thoughts

We hope this guide gives you a clearer picture of the drone landscape. At NAMUGA, we specialize in developing advanced camera modules—including RGB, IR, 3D ToF, and LiDAR—optimized for drone and gimbal integration. Follow us for more insights and solutions in smart imaging technology.

The post Key Drone Terminology: A Quick Guide for Beginners appeared first on Edge AI and Vision Alliance.

]]>
BrainChip Partners with RTX’s Raytheon for AFRL Radar Contract https://www.edge-ai-vision.com/2025/04/brainchip-partners-with-rtxs-raytheon-for-afrl-radar-contract/ Wed, 02 Apr 2025 13:31:50 +0000 https://www.edge-ai-vision.com/?p=53099 Laguna Hills, Calif. – April 1st, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that it is partnering with Raytheon Company, an RTX (NYSE: RTX) business, to service a contract for $1.8M from the Air Force Research Laboratory […]

The post BrainChip Partners with RTX’s Raytheon for AFRL Radar Contract appeared first on Edge AI and Vision Alliance.

]]>
Laguna Hills, Calif. – April 1st, 2025 – BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that it is partnering with Raytheon Company, an RTX (NYSE: RTX) business, to service a contract for $1.8M from the Air Force Research Laboratory on neuromorphic radar signal processing.

Raytheon Company will deliver services and support as a partner with BrainChip for the completion of the contract award. The Air Force Research Labs contract, under the topic number AF242-D015, is titled “Mapping Complex Sensor Signal Processing Algorithms onto Neuromorphic Chips.” The project focuses on a specific type of radar processing known as micro-Doppler signature analysis, which offers unprecedented activity discrimination capabilities.

Neuromorphic hardware represents a low-power solution for edge devices, consuming significantly less energy than traditional computing hardware for signal processing and artificial intelligence tasks. If successful, this project could embed sophisticated radar processing solutions in power-constrained and thermally constrained weapon systems, such as missiles, drones and drone defense systems.

BrainChip’s Akida™ processor is a revolutionary computing architecture that is designed to process neural networks and machine learning algorithms at ultra-low power consumption, making it ideal for edge computing applications. The company’s neuromorphic technology improves the cognitive communication capabilities on size, weight and power & cost (SWaP-C)-constrained platforms such as military, spacecraft and robotics for commercial and government markets.

“Radar signal processing will be implemented on ever-smaller mobile platforms, so minimizing system SWaP-C is critical,” said Sean Hehir, CEO of BrainChip. “This improved radar signal processing performance per watt for the Air Force Research Laboratory showcases how neuromorphic computing can achieve significant benefits in the most mission-critical use cases.”

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)

BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables Edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like TensorFlow/Keras. In enabling effective Edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.

Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc

Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006

The post BrainChip Partners with RTX’s Raytheon for AFRL Radar Contract appeared first on Edge AI and Vision Alliance.

]]>
AI Disruption is Driving Innovation in On-device Inference https://www.edge-ai-vision.com/2025/02/ai-disruption-is-driving-innovation-in-on-device-inference/ Thu, 20 Feb 2025 09:00:02 +0000 https://www.edge-ai-vision.com/?p=52615 This article was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How the proliferation and evolution of generative models will transform the AI landscape and unlock value. The introduction of DeepSeek R1, a cutting-edge reasoning AI model, has caused ripples throughout the tech industry. That’s because its performance is on […]

The post AI Disruption is Driving Innovation in On-device Inference appeared first on Edge AI and Vision Alliance.

]]>
This article was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

How the proliferation and evolution of generative models will transform the AI landscape and unlock value.

The introduction of DeepSeek R1, a cutting-edge reasoning AI model, has caused ripples throughout the tech industry. That’s because its performance is on par with or better than state-of-the-art alternatives, disrupting the conventional wisdom around AI development.

This pivotal moment is part of a broader trend that underscores the innovation in creating high-quality small language and multimodal reasoning models, and how they’re preparing AI for commercial applications and on-device inference. The fact that these new models can run on devices accelerates scale and creates demand for powerful chips at the edge.

Driving this shift are four major trends that are leading to a dramatic improvement in the quality, performance and efficiency of AI models that can now run on device:

  • Today’s state-of-the-art smaller AI models have superior performance. New techniques like model distillation and novel AI network architectures simplify the development process without sacrificing quality, allowing new models to outperform larger ones from a year ago, which could only operate in the cloud.
  • Model sizes are decreasing rapidly. State-of-the-art quantization and pruning techniques allow developers to reduce the size of models with no material impact in accuracy.
  • Developers have more to work with. The rapid proliferation of high-quality AI models means features like text summarization, coding assistants and live translation are common in devices like smartphones, making AI ready for commercial applications at scale across the edge.
  • AI is becoming the new user interface. Personalized multimodal AI agents will simplify interactions and proficiently complete tasks across various applications.

Qualcomm Technologies is strategically positioned to lead and capitalize on the transition from AI training to large-scale inference, as well as the expansion of AI computational processing from the cloud to the edge. The company has an extensive track record in developing custom central processing units (CPUs), neural processing units (NPUs), graphics processing units (GPUs), and low-power subsystems. The company’s collaboration with model makers, along with tools, frameworks and SDKs for deploying models across various edge device segments, enables developers to accelerate the adoption of AI agents and applications at the edge.

The recent disruption and reassessment of how AI models are trained validates the imminent AI landscape shift towards large-scale inference. It will create a new cycle of innovation and upgrade of inference computing at the edge. While training will continue in the cloud, inference will benefit from the scale of devices running on Qualcomm technology and create demand for more AI-enabled processors at the edge.

Quality AI models are now abundant and affordable

Innovations boost model quality and reduce development time and cost

AI has reached the point where the drop in the cost of training AI models, combined with open-source collaboration, is making the development of high-quality models accessible to more people and organizations.

This shift is driven by various technical advancements. Usage of longer context length, along with simplification of some of the training steps, saves computational costs. Newer network architectures ranging from mixture-of-experts (MoE) to state-space models (SSM) are pushing the boundary of what can be accomplished with reduced computational overhead and power consumption.

Newer AI models also integrate advanced methods such as chain-of-thought reasoning and self-verification, enabling them to perform well across various challenging domains like mathematics, coding, and scientific reasoning.

Distillation is a key technique in the development of capable small models. It allows large models to “teach” smaller models, transferring knowledge while maintaining accuracy. The use of distillation has led to a surge in smaller foundation models—many of them fine-tuned for specialized tasks.
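In a common formulation of distillation (sketched below as an illustration, not necessarily the exact recipe behind the models discussed), the student is trained to match the teacher's temperature-softened output distribution via a KL-divergence loss:

```python
import numpy as np

# Toy sketch of the knowledge-distillation objective: the student's
# softened softmax is pulled toward the teacher's via KL divergence.
# Logits and the temperature value here are made up for illustration.

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()              # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return float(np.sum(p_teacher * np.log(p_teacher / p_student)))

teacher = np.array([3.0, 1.0, 0.2])
aligned = np.array([3.1, 0.9, 0.3])   # student close to the teacher
off     = np.array([0.2, 3.0, 1.0])   # student far from the teacher

print(distillation_loss(aligned, teacher))  # small
print(distillation_loss(off, teacher))      # much larger
```

Minimizing this loss over many examples transfers the teacher's "dark knowledge" (the relative probabilities of wrong answers) to the smaller student, which is why distilled models can retain accuracy at a fraction of the parameter count.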

The power of distillation is exemplified in Figure 1, which presents average LiveBench results comparing the Llama 3.3 70B model with its distilled DeepSeek R1 counterpart. The chart shows how distillation significantly enhances performance in reasoning, coding, and mathematics tasks for the same number of parameters.

Figure 1: LiveBench AI average benchmark results comparing Meta Llama 70B model with its distilled counterpart by DeepSeek. Source: LiveBench.ai, Feb. 2025.

Small models achieve big capabilities at the edge

Smaller models are approaching the quality of large frontier models due to distillation and other techniques described above. Figure 2 shows benchmarks for the DeepSeek R1 distilled models compared to leading-edge alternatives. DeepSeek-distilled versions based on Qwen and Llama models show areas of significant superiority, particularly in the GPQA benchmark – achieving superior or similar scores compared to state-of-the-art models such as GPT-4o, Claude 3.5 Sonnet, and o1-mini. GPQA is a critical metric because it involves deep, multi-step reasoning to solve complex queries, which many models find challenging.

Figure 2: Mathematic and coding benchmarks. Source: DeepSeek, Jan. 2025.

Many popular model families, including DeepSeek R1, Meta Llama, IBM Granite, and Mistral Ministral, feature small variants that overdeliver in performance and benchmarks for specific tasks, regardless of their size. The reduction of large, foundational models into smaller, efficient versions enables faster inference, a smaller memory footprint, and lower power consumption – all while maintaining a high bar on performance, allowing deployment of such models within devices like smartphones, PCs, and automobiles.

Further optimizations, like quantization, compression and pruning help reduce model sizes. Quantization lowers power consumption and speeds up operations by reducing precision without significantly sacrificing accuracy, while pruning eliminates unnecessary parameters.
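Both techniques can be sketched in a few lines (an illustrative toy of our own, not any particular toolchain's implementation): affine int8 quantization maps float weights onto 8-bit integers via a scale and zero point, while magnitude pruning zeroes the smallest-magnitude weights:

```python
import numpy as np

# Toy post-training optimizations: affine int8 quantization and
# magnitude pruning. Constants and the 50% pruning fraction are
# illustrative; production toolchains calibrate these per layer.

def quantize_int8(w):
    """Affine quantization of a float array to int8 plus (scale, zero_point)."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0
    zero_point = round(-w_min / scale) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

def prune_by_magnitude(w, fraction=0.5):
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), fraction)
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(0, 1, size=1000).astype(np.float32)

q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print("max quantization error:", float(np.abs(w - w_hat).max()))

w_pruned = prune_by_magnitude(w, 0.5)
print("sparsity:", float((w_pruned == 0).mean()))
```

The reconstruction error stays on the order of the quantization step (the scale), while pruning yields roughly 50% sparsity: a 4x storage reduction from int8 plus a sparse weight tensor, at a small and usually recoverable accuracy cost.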

These technical developments have led to a proliferation of high-quality generative AI models. According to data compiled by Epoch AI (Figure 3), more than 75% of large-scale AI models published in 2024 feature fewer than 100 billion parameters.

Figure 3: Number of large-scale AI models published by year, categorized by number of parameters. Source: Epoch AI, Jan. 2025.

The era of AI inference innovation is here

The abundance of high-quality, smaller models is bringing renewed attention to inference workloads – which is where applications and services make use of the models to provide value to businesses and consumers.

Qualcomm Technologies has worked on the optimization of numerous AI models to support the commercialization of the new generation of AI-oriented Copilot+ PCs. Similarly, the company has collaborated with OEMs such as Samsung and Xiaomi in the launch of flagship smartphones equipped with many AI-enabled features. The proliferation of AI inferencing capabilities across devices has enabled the creation of generative AI applications and assistants. Document summarization, AI-image generation and editing, and real-time language translation are now common features. Camera apps leverage AI for computational photography, object recognition and real-time scene optimization.

Next up is the development of multimodal applications which combine multiple types of data—text, vision, audio and sensor input—to deliver richer, more context-aware and personalized experiences. The Qualcomm AI Engine combines the capabilities of custom-built NPUs, CPUs and GPUs to optimize such tasks on-device, enabling AI assistants to switch between communication modes and generate multimodal outputs.

Agentic AI is positioned at the heart of the next generation of user interfaces. AI systems are capable of decision-making and task management by predicting user needs and proactively executing complex workflows within devices and applications. Qualcomm Technologies’ emphasis on efficient, real-time AI processing allows these agents to function continuously and securely within the devices, while relying upon a personal knowledge graph that accurately defines the user’s preferences and needs, without any cloud dependency.

Over time, these advancements are laying the groundwork for AI to become the primary UI, with natural language and image, video and gesture-based interactions simplifying how people engage with technology.

Looking ahead, Qualcomm Technologies is also positioned for the era of embodied AI, in which AI capabilities are integrated into robotics. By leveraging its expertise in inference optimization, Qualcomm Technologies aims to power real-time decision-making for robots, drones and other autonomous devices, enabling precise interactions in dynamic, real-world environments.

While numerous AI models are trained in the cloud, distilled smaller models are often available to run on devices within weeks or even days. For example, within less than a week, DeepSeek R1-distilled models were running on PCs and smartphones powered by Snapdragon® platforms.

Deploying inference within devices addresses immediacy through reduced latency, enhances privacy, relies on local data to provide additional context and enables continuous functionality of AI features and applications. It also reduces costs for users and/or developers by avoiding fees associated with cloud inference services. All of this creates incentives for software and service providers to deploy AI inference at the edge.

Qualcomm is set to be a leader in the AI inference era

As a leader in on-device AI, Qualcomm Technologies is strategically positioned to advance the AI inference era with its industry-leading hardware and software solutions for edge devices. These solutions encompass billions of smartphones, automobiles, XR headsets and glasses, PCs, industrial IoT devices, and more.

Qualcomm Technologies has a long history of developing custom CPUs, NPUs, GPUs and low-power subsystems, which, when combined with expertise in packaging and thermal design, form the foundation of its industry-leading system-on-chip (SoC) products. These SoCs deliver high-performance, energy-efficient AI inference directly on-device. By tightly integrating these cores, Qualcomm Technologies’ platforms can handle complex AI tasks while maintaining battery life and overall power efficiency—critical for edge use cases.

To unlock the full potential of AI on its platforms, Qualcomm Technologies has built a robust AI software stack designed to empower software developers. The Qualcomm AI Stack includes libraries, SDKs, and optimization tools that streamline model deployment and enhance performance. Developers can leverage these resources to efficiently adapt models for Qualcomm platforms, reducing time-to-market for AI-powered applications. Qualcomm Technologies’ developer-focused approach accelerates innovation by simplifying the integration of cutting-edge AI features into consumer and enterprise products.

Lastly, the company’s collaboration with AI model makers across the globe and its provision of services like the Qualcomm AI Hub are central to its strategy for scaling AI across industries. On the Qualcomm AI Hub, in three simple steps, a developer can:

  1. Pick a model or bring their own model or create a model based on their data;
  2. Pick any framework and runtime, write and test their AI apps on a cloud-based physical device farm; and
  3. Use tools to deploy their apps commercially.

The Qualcomm AI Hub supports major large language and multimodal model (LLM, LMM) families, allowing developers to deploy, optimize, and manage inference on devices powered by Qualcomm platforms. With features like pre-optimized model libraries and support for custom model optimization and integration, Qualcomm Technologies enables rapid development cycles while enhancing compatibility with diverse AI ecosystems. This collaborative approach strengthens Qualcomm Technologies’ position as a leader in enabling scalable, real-time AI applications.

Expanding across all key edge segments

Qualcomm Technologies uses on-device AI to serve many industries, unlocking business value and new user experiences through the enhanced performance, efficiency, responsiveness, and privacy that come from processing AI locally on devices.

Mobile

Snapdragon mobile platforms, such as the latest Snapdragon 8 Elite, are advancing the capabilities of on-device AI by enabling several cutting-edge multimodal generative models and agentic AI to operate natively on smartphones. AI has enhanced smartphone features across various categories such as communication improvement, generative image editing tools, personalization, and accessibility. On-device generative AI is being used to develop more intuitive, user-centric features and to automate tasks on mobile devices.

This trend towards AI-driven functionalities is evident in the latest flagship smartphone releases from major manufacturers utilizing Snapdragon platforms, including Samsung, ASUS, Xiaomi, Oppo, Vivo, and Honor.

PCs

Snapdragon X Series platforms were instrumental in defining the new category of AI PCs, with best-in-class custom NPU cores built from the ground up for high-performance, energy-efficient generative AI inference. This NPU is turbo-charging Windows apps, adding new features, boosting performance, and enhancing privacy and battery life. Developers can run generative AI inference on-device, offering cutting-edge Copilot+ PC features which debuted on the Snapdragon X Series.

Popular third-party apps like Zoom, Affinity, Djay Pro, CapCut, Moises Live, and Blackmagic Design’s DaVinci Resolve take advantage of the NPU to offer specific AI-powered capabilities on Snapdragon X Series platforms.

Automotive

The Snapdragon® Digital Chassis™ solution uses on-device AI in its context-aware intelligent cockpit system designed to enhance vehicle safety and driver experience. This system leverages advanced cameras, biometric and environmental sensors, and state-of-the-art multimodal AI networks to provide real-time feedback and functionality tailored to the driver’s state and environmental conditions.

For automated driving and assistance systems, Qualcomm Technologies has developed an end-to-end architecture which uses large training datasets, fast retraining using real-world and AI-augmented data, over-the-air updates, and a state-of-the-art stack including multimodal AI models and causal reasoning in the vehicle to handle modern automated driving and assistance complexities.

Figure 4: Simplified in-vehicle AI system architecture to support intelligent cockpit and autonomous and advanced driving assistance. Source: Qualcomm Technologies, Jan. 2025.

Industrial IoT

For industrial IoT and enterprise applications, Qualcomm Technologies recently introduced the Qualcomm® AI On-Prem Appliance Solution, an on-premises desktop or wall-mounted hardware solution, and the Qualcomm® AI Inference Suite, a set of software and services for AI inferencing spanning from near-edge to cloud.

This edge AI approach allows sensitive customer data, fine-tuned models, and inference loads to remain on premises, enhancing privacy, control, energy efficiency, and low latency. That’s critical for AI-enabled business applications such as intelligent multilingual search, custom AI assistants and agents, code generation, and computer vision for security, safety, and site monitoring.

Networking

Qualcomm Technologies has introduced an AI-enabled Wi-Fi networking platform – the Qualcomm® Networking Pro A7 Elite. The solution integrates Wi-Fi 7 and edge AI to allow access points and routers to run generative AI inference on behalf of connected devices in the network. It supports innovative applications in areas like security, energy management, virtual assistants, and health monitoring by processing data on the gateway for enhanced privacy and real-time responses.

This networking platform is expected to transform Wi-Fi routers, mesh systems, broadband gateways, and access points into private, local AI-based mini-servers within homes and enterprises.

Conclusion

AI is undergoing a transformative shift driven by falling training costs, rapid inference deployment, and innovations tailored to edge environments. The tech industry’s focus is no longer dominated by the race to build larger models, but by efforts to efficiently deploy them in real-world applications at the edge.

The distillation of large foundation models has unleashed a surge of smarter, smaller, and more efficient models, empowering industries to integrate AI faster and at scale – increasingly within devices themselves.

Qualcomm Technologies is uniquely positioned to lead and benefit from this change through its expertise in power-efficient chip design, advanced AI software stack, and comprehensive developer support for edge applications.

Qualcomm Technologies’ ability to integrate NPUs, GPUs, and CPUs into devices enables high-performance, energy-efficient AI inference across smartphones, PCs, automotive, and industrial IoT sectors. Qualcomm Technologies provides industries with high-performance, affordable, responsive, and privacy-oriented transformative AI experiences.

The company’s ecosystem approach—encompassing its Qualcomm AI Stack, Qualcomm AI Hub, and strategic developer collaborations—accelerates the deployment of adaptive AI technologies. These solutions help meet the demands of industries prioritizing real-time performance, privacy, and efficiency.

As AI innovation explodes at the edge, Qualcomm Technologies’ investments in scalable hardware and software will further solidify its leadership. The company is enabling a new era where AI applications are more accessible, efficient, and integrated into everyday life, driving transformation across multiple sectors globally.

Durga Malladi
SVP and GM, Technology Planning and Edge Solutions, Qualcomm Technologies, Inc.

Jerry Chang
Senior Manager, Marketing, Qualcomm Technologies

Access a PDF of the white paper here.

The post AI Disruption is Driving Innovation in On-device Inference appeared first on Edge AI and Vision Alliance.

]]>
Computer Vision and AI at the Edge with a Thermal Camera Provider and a Toy Manufacturer https://www.edge-ai-vision.com/2025/01/computer-vision-and-ai-at-the-edge-with-a-thermal-camera-provider-and-a-toy-manufacturer/ Fri, 10 Jan 2025 09:00:43 +0000 https://www.edge-ai-vision.com/?p=52195 This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. As the pace of artificial intelligence innovation accelerates, we’re seeing AI and computer vision go from science fiction tropes to enabling highly efficient and compelling applications. This integration is particularly potent at the edge, where devices locally […]

The post Computer Vision and AI at the Edge with a Thermal Camera Provider and a Toy Manufacturer appeared first on Edge AI and Vision Alliance.

]]>
This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica.

As the pace of artificial intelligence innovation accelerates, we’re seeing AI and computer vision go from science fiction tropes to enabling highly efficient and compelling applications. This integration is particularly potent at the edge, where devices locally perform analytics and data processing, rather than relying on centralized servers.

This blog post details two compelling real-world examples that showcase the power of CV plus AI at the edge: training thermal imaging cameras in drones to recognize multiple object types and correcting visual defects in the manufacturing lines of the leading construction brick toy company.


Training systems on a FLIR thermal imaging camera

One key enabler of this transformation is the deployment of sophisticated imaging systems, such as FLIR cameras.

What does FLIR stand for in cameras?

FLIR stands for Forward Looking InfraRed. A FLIR camera is a thermal imaging camera designed to capture thermal radiation, which is emitted by all objects that produce or interact with heat. A thermal imaging camera creates pictures from heat rather than visible light, enabling it to operate in any lighting conditions, including complete darkness.
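As a concrete illustration of "pictures from heat": radiometric thermal files store raw sensor counts, which are mapped to apparent temperatures through a Planck-curve calibration using per-camera constants. The sketch below shows that mapping; the constant values are illustrative placeholders rather than real calibration data, and real pipelines additionally correct for emissivity and atmospheric effects.

```python
import math

def raw_to_celsius(raw, R1=21106.77, R2=0.012545, B=1501.0, F=1.0, O=-7340.0):
    """Map a raw radiometric sensor count to degrees Celsius.

    Uses the Planck-curve calibration form commonly embedded in FLIR
    metadata. R1, R2, B, F, and O are per-camera constants; the defaults
    here are placeholder values for illustration only.
    """
    kelvin = B / math.log(R1 / (R2 * (raw + O)) + F)
    return kelvin - 273.15
```

The mapping is monotonic: a higher count always corresponds to a higher apparent temperature, which is what thresholding-based hot-spot detection relies on.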

What is the use of a FLIR camera?

FLIR cameras are extensively used for surveillance, night vision, and scientific applications. They are invaluable tools for detecting people, machinery, or materials that exhibit temperature differences with their surroundings. In industrial contexts, they help in monitoring equipment to predict failures before they occur by detecting overheating parts.

How can FLIR cameras be used for image detection?

The unique ability to capture thermal data opens doors for image detection tasks beyond the limitations of the visible spectrum. A standard camera relies on reflected light, rendering it ineffective in low-light conditions. A thermal camera, however, can detect heat signatures regardless of ambient light, making it ideal for scenarios where traditional cameras struggle.
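As a minimal sketch of the idea, a heat signature can be isolated from a thermal frame by flagging pixels that stand well above the ambient background. Everything here (the synthetic frame, the sigma threshold) is an illustrative assumption, not the actual detection pipeline:

```python
import numpy as np

def detect_hot_regions(frame, sigma=3.0):
    """Flag pixels significantly hotter than the ambient background.

    frame: 2-D array of temperatures (or radiometric counts).
    Returns a boolean mask and the bounding box (rmin, rmax, cmin, cmax)
    of the flagged pixels, or None if nothing stands out.
    """
    ambient = np.median(frame)       # robust estimate of background level
    spread = frame.std()
    mask = frame > ambient + sigma * spread
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    return mask, (rows.min(), rows.max(), cols.min(), cols.max())

# Synthetic 64x64 frame: cool 20 C background with a warm 6x6 "person".
frame = np.full((64, 64), 20.0)
frame[30:36, 40:46] = 36.0
mask, box = detect_hot_regions(frame)
```

This works in complete darkness precisely because the input encodes emitted heat, not reflected light; a real detector would replace the threshold with a trained model, as described below.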

Training FLIR cameras in drones

Drones equipped with FLIR thermal cameras are revolutionizing industries by performing remote inspections, aerial surveillance, and large-scale monitoring.

Digica recently built a comprehensive system for training FLIR detectors in military-grade drones. The team created, trained, and validated models on datasets of over 2 million images to deliver highly accurate drone, people, vehicle, and airplane detectors. Working with petabytes of 3D data, they developed classifiers for over 1,000 object classes, including drones, vehicles, airplanes, civilians, and people in uniform.

Digica: Drone and aircraft training data for FLIR cameras

Digica: People training data for FLIR cameras

The resulting comprehensive training pipelines now enable drones to be deployed in multiple projects, far from their original military purpose. Likewise, with the new camera control capabilities, subsequent thermal camera detector training will be far simpler and will require fewer data science-oriented personnel, resulting in a massive reduction in time and cost.

By leveraging the power of AI and CV at the edge in a thermal imaging camera, we’re pushing the boundaries of what’s possible in diverse fields. These real-world examples showcase the immense potential for enhanced image detection, improved quality control, and ultimately, a new era of intelligent automation.


Visual Defects in Toy Production Lines

On traditional production lines, it often falls to human operators to identify visual defects. However, in fast-paced environments like a toy manufacturer’s production line, this approach is prone to fatigue, inconsistency, and limitations in speed. Deploying a defect detection system using AI plus CV at the edge revolutionized quality control.

What are visual defects on a production line?

Visual defects on a production line are any anomalies or irregularities in the products that deviate from the intended design or quality standards. In toy brick manufacturing, these commonly appear as color inconsistencies, sizing errors, and physical defects like cracks, scratches, and dents.

Digica: Input -> CNN -> output (heatmap)

What are the criteria for a visual inspection?

The criteria for a visual inspection typically involve parameters like alignment, color accuracy, dimensions, and the absence of physical blemishes. Automated systems are trained to compare each unit against a predefined standard or template to ensure consistency and quality across the production batch.
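One simple way to realize such template-based inspection is a per-pixel difference against a golden image, with a tolerance and an allowed fraction of deviating pixels. The sketch below is a hypothetical illustration; the thresholds are assumed values, not production settings:

```python
import numpy as np

def inspect_against_template(unit, template, tol=12.0, max_bad_fraction=0.002):
    """Compare a captured image of a unit against a golden template.

    A pixel is 'bad' when its absolute difference from the template
    exceeds tol; the unit fails inspection when the fraction of bad
    pixels exceeds max_bad_fraction. Both thresholds are illustrative.
    Returns (passed, bad_fraction).
    """
    diff = np.abs(unit.astype(np.float64) - template.astype(np.float64))
    bad_fraction = float((diff > tol).mean())
    return bad_fraction <= max_bad_fraction, bad_fraction

# Golden template, a unit within tolerance, and one with a scratch-like flaw.
template = np.full((100, 100), 128.0)
good_unit = template + 2.0            # small, uniform deviation: passes
scratched = template.copy()
scratched[50, 10:60] = 30.0           # 50 badly deviating pixels: fails
```

In practice the captured image would first be aligned (registered) to the template and lighting-normalized before differencing; those steps are omitted here.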

How to identify visual defects and correct them on a product line

In advanced manufacturing setups like toy brick production lines, visual defects are identified using high-speed, high-resolution cameras integrated with AI-driven image processing software.

Digica trained computer vision models to recognize three different types of printing errors. The output took the form of heat maps giving each pixel’s probability of belonging to an erroneous area. A threshold on these probabilities determined whether a brick was accepted or rejected.
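That accept/reject step can be sketched as a simple thresholding of the heat map. The probability cutoff and pixel budget below are hypothetical values for illustration, not the ones used in the deployed system:

```python
import numpy as np

def accept_brick(heatmap, pixel_thresh=0.5, max_defect_pixels=25):
    """Decide accept/reject from a CNN defect heat map.

    heatmap: 2-D array of per-pixel probabilities of belonging to an
    erroneous (misprinted) area. The brick is rejected when too many
    pixels exceed the probability threshold. Both thresholds here are
    illustrative assumptions. Returns (accepted, defect_pixel_count).
    """
    defect_pixels = int((heatmap > pixel_thresh).sum())
    return defect_pixels <= max_defect_pixels, defect_pixels

# Clean brick: uniformly low defect probability everywhere.
clean = np.full((80, 80), 0.05)
# Misprinted brick: a 10x10 patch the model is confident about.
misprinted = clean.copy()
misprinted[20:30, 20:30] = 0.9
```

Counting confident pixels rather than averaging the whole map keeps a small but genuine defect from being diluted by the large clean background.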

Digica: The jig in a factory

Technologies used included:

  • PyTorch + PyTorch Lightning + Albumentations + OpenCV – for training Smudge and Offset models
  • Makesense.ai – for making the annotations
  • OpenCV + scikit-learn – for creating the color error procedure
  • Docker + FastAPI – to implement the API server
  • Plotly – for the interactive plots

The solution was trained specifically on toy bricks and was able to detect differences in gradient dyeing and in the printing of the blocks that are not detectable on today’s standard production lines.

Using small AI models at the edge, there was no need to send image data to the cloud for processing. By running analytics such as inference directly on the camera, it was possible to detect defects in real time: small models running on the cameras flagged defective pieces so they could be removed from the production line.


The application of thermal cameras and AI in tasks like training systems and quality control on production lines exemplifies the potent synergy between advanced imaging technologies and edge AI. By leveraging these technologies, industries are not only able to enhance operational efficiency but also push the boundaries of what can be achieved in terms of automation and precision in manufacturing and beyond.

Lawrence Ebringer
Co-Founder, Stealth Startup


]]>