Ambarella - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/provider/ambarella/
Designing machines that perceive and understand.

Ambarella to Showcase “The Ambarella Edge: From Agentic to Physical AI” at Embedded World 2026
https://www.edge-ai-vision.com/2026/02/ambarella-to-showcase-the-ambarella-edge-from-agentic-to-physical-ai-at-embedded-world-2026/
Wed, 18 Feb 2026

Enabling developers to build, integrate, and deploy edge AI solutions at scale

SANTA CLARA, Calif. — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced that it will exhibit at Embedded World 2026, taking place March 10-12 in Nuremberg, Germany. At the show, Ambarella’s theme, “The Ambarella Edge: From Agentic to Physical AI,” will anchor live demonstrations that highlight how Ambarella’s AI SoCs, software stack, and developer tools deliver a competitive advantage across a wide range of AI applications—from agentic automation and orchestration to physical AI systems deployed in real-world environments.

Ambarella’s exhibit will showcase a scalable AI SoC portfolio providing high AI performance per watt, complemented by a software platform that supports rapid development across diverse edge AI workloads, consistent performance characteristics, and efficient deployment at the edge. Live demos will feature stack-level differentiation, partner solutions, and developer workflows across robotics, industrial automation, automotive, edge infrastructure, security, and AIoT use cases.

“Developers are increasingly building AI applications that must operate under strict power, latency, and reliability constraints, while still delivering high levels of performance,” said Muneyb Minhazuddin, Customer Growth Officer at Ambarella. “Here, we are showing how Ambarella’s ecosystem—bringing together performance-efficient AI SoCs with a robust software stack, sample workflows, and engineering resources—accelerates the development of edge AI solutions for a wide range of vertical industry segments.”

Ambarella will also present its Developer Zone (DevZone), giving developers, partners, independent software vendors (ISVs), module builders, and system integrators hands-on access to software tools, optimized models, and agentic blueprints. Together, these elements make it easier for teams to integrate more efficiently and deploy at scale using Ambarella’s technology.

Ambarella’s exhibit will be located in Hall 5, Booth 5-355 at Embedded World 2026. To schedule a guided tour, please contact your Ambarella representative.

About Ambarella
Ambarella’s products are used in a wide variety of edge AI and human vision applications, including video security, advanced driver assistance systems (ADAS), electronic mirrors, telematics, driver/cabin monitoring, autonomous driving, edge infrastructure, drones and other robotics applications. Ambarella’s low-power systems-on-chip (SoCs) offer high-resolution video compression, advanced image and radar processing, and powerful deep neural network processing to enable intelligent perception, sensor fusion and planning. For more information, please visit www.ambarella.com.

Ambarella Contacts

  • Media contact: Molly McCarthy, mmccarthy@ambarella.com, +1 408-400-1466
  • Investor contact: Louis Gerhardy, lgerhardy@ambarella.com, +1 408-636-2310
  • Sales contact: https://www.ambarella.com/contact-us/

Ambarella Launches Powerful Edge AI 8K Vision SoC With Industry-Leading AI and Multi-Sensor Perception Performance
https://www.edge-ai-vision.com/2026/01/ambarella-launches-powerful-edge-ai-8k-vision-soc-with-industry-leading-ai-and-multi-sensor-perception-performance/
Wed, 07 Jan 2026

New 4nm CV7 System-on-Chip Provides Ideal Combination of Simultaneous Multi-Stream Video and Advanced On-Device Edge AI Processing With Very Low Power Consumption

SANTA CLARA, Calif., Jan. 05, 2026 (GLOBE NEWSWIRE) — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced during CES the CV7 edge AI vision system-on-chip (SoC), which is optimized for a wide range of AI perception applications. Examples include advanced, AI-based 8K consumer products (e.g., action and 360-degree cameras), multi-imager enterprise security cameras, robotics (e.g., aerial drones), industrial automation and high-performance video conferencing devices. The CV7 is also ideal for multi-stream automotive designs—especially those running CNNs and transformer-based networks at the edge—such as AI vision gateways and hubs in fleet video telematics, 360-degree surround-view and video-recording applications, and passive driver assistance systems (ADAS). These applications can all leverage the CV7 for its simultaneous processing of multiple video streams up to 8Kp60 and exceptional image quality, in combination with high-performance edge AI and low power consumption.

“Joining our wide portfolio of edge AI SoCs, with more than 39 million shipped to date, the CV7 enables consumer and enterprise security camera developers to deliver the most advanced imaging features and the highest edge AI performance, for improved video analytics and higher image quality in their next-generation products,” said Fermi Wang, President and CEO of Ambarella. “Additionally, this new SoC’s extremely low power consumption reduces thermal-management requirements for smaller form factors and longer battery life across a broad range of AIoT applications, thanks to its 4nm process technology and Ambarella’s proprietary AI SoC architecture, which is purpose-built for the edge.”

Compared to its predecessor, the CV7 consumes 20% less power, thanks in part to Samsung’s 4nm process technology; this is Ambarella’s first SoC on that node. The CV7 is also architected using Ambarella’s algorithm-first design philosophy to efficiently run all processing tasks simultaneously, with extremely high performance and low power consumption—continuing the company’s leadership position for the industry’s best AI performance per watt.

In contrast to competing multi-chip solutions, the CV7 is a highly integrated SoC with multiple functional blocks, resulting in superior performance, smaller form factors, improved time-to-market and reduced bills of materials. The CV7 incorporates Ambarella’s proprietary AI accelerator, image signal processor (ISP), and video encoding, together with Arm® cores, I/Os and other functions, to provide customers with the most powerful and efficient AI vision SoC in its class.

That high AI performance is powered by Ambarella’s proprietary, third-generation CVflow® AI accelerator, with more than 2.5x AI performance over the previous-generation CV5 SoC. This allows the CV7 to support a combination of CNNs and transformer networks, running in tandem.

The CV7 also continues Ambarella’s track record of providing industry-leading image signal processing, including high dynamic range (HDR), dewarping for fisheye cameras, and 3D motion-compensated temporal filtering (MCTF)—all with higher performance and better image quality than its predecessor—using a combination of traditional ISP techniques and AI enhancements. The result is that the CV7 provides impressive image quality in low light, down to 0.01 Lux, as well as improved HDR for video and images with more vivid details in scenes with starkly contrasting bright and dark areas.

Also contributing to the CV7’s advancements is its hardware-accelerated video encoding (H.264, H.265, MJPEG), which boosts encode performance by 2x over the CV5. This improvement enables a maximum video encode of a single 4Kp240 stream, or dual 8Kp30. Additionally, for the next generation of multi-imager enterprise security cameras, the CV7 can concurrently support more than four 4Kp30 streams while also running the latest transformer-based AI networks and vision-language models (VLMs).
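To put those encode figures in perspective, the arithmetic below compares the raw pixel throughput of the quoted modes. This is illustrative math only; it says nothing about how the CV7’s encoder actually schedules streams internally.

```python
# Back-of-envelope pixel throughput for the encode modes quoted above.
# Illustrative arithmetic only, not a description of the CV7's encoder.

RES = {
    "4K": 3840 * 2160,  # pixels per frame (UHD)
    "8K": 7680 * 4320,
}

def throughput(res: str, fps: int, streams: int = 1) -> int:
    """Pixels per second for `streams` concurrent streams."""
    return RES[res] * fps * streams

modes = {
    "1x 4Kp240": throughput("4K", 240),
    "2x 8Kp30": throughput("8K", 30, streams=2),
    "4x 4Kp30": throughput("4K", 30, streams=4),
}

for name, pps in modes.items():
    print(f"{name:<10} {pps / 1e9:.2f} Gpixel/s")
# 1x 4Kp240  1.99 Gpixel/s
# 2x 8Kp30   1.99 Gpixel/s  (dual 8Kp30 matches a single 4Kp240)
# 4x 4Kp30   1.00 Gpixel/s  (headroom left for concurrent AI networks)
```

The equivalence falls out of the resolutions themselves: an 8K frame carries four times the pixels of a 4K frame, so two 8Kp30 streams and one 4Kp240 stream demand the same raw throughput.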

The CV7’s on-chip general-purpose processing was also upgraded to a quad-core Arm Cortex-A73, offering 2x higher CPU performance over the previous SoC. Additionally, its 64-bit DRAM interface provides a significant improvement in available DRAM bandwidth compared to the CV5.

CV7 SoC samples are available now, and it is being demonstrated at Ambarella’s invitation-only exhibition during CES in Las Vegas this week. For more information or to schedule a demo during the show, please contact your Ambarella representative or visit www.ambarella.com/products/aiot-industrial-robotics.

About Ambarella

Ambarella’s products are used in a wide variety of edge AI and human vision applications, including video security, advanced driver assistance systems (ADAS), electronic mirrors, telematics, driver/cabin monitoring, autonomous driving, edge infrastructure, drones and other robotics applications. Ambarella’s low-power systems-on-chip (SoCs) offer high-resolution video compression, advanced image and radar processing, and powerful deep neural network processing to enable intelligent perception, sensor fusion and planning. For more information, please visit www.ambarella.com.

All brand names, product names, or trademarks belong to their respective holders. Ambarella reserves the right to alter product and service offerings, specifications, and pricing at any time without notice. © 2026 Ambarella. All rights reserved.

Ambarella Launches a Developer Zone to Broaden its Edge AI Ecosystem
https://www.edge-ai-vision.com/2026/01/ambarella-launches-a-developer-zone-to-broaden-its-edge-ai-ecosystem/
Tue, 06 Jan 2026

SANTA CLARA, Calif., Jan. 6, 2026 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, today announced during CES the launch of its Ambarella Developer Zone (DevZone). Located at developer.ambarella.com, the DevZone is designed to help Ambarella’s growing ecosystem of partners learn, build and deploy edge AI applications on a variety of edge systems with greater speed and clarity. It provides a collection of optimized models, along with low-code and no-code agentic blueprints, to enable the rapid development of edge AI applications on Ambarella’s large portfolio of purpose-built edge AI systems-on-chip (SoCs) with Cooper development software.

As AI workloads increasingly move towards hybrid edge/cloud architectures, partners need faster, more scalable ways to develop products and services with Ambarella’s full-stack edge AI platform. The DevZone consolidates essential tools, documentation, models and community resources into a single destination, enabling system integrators, distributors, module builders, independent software vendors (ISVs) and other leading global ecosystem partners to engage, prototype and accelerate edge AI solutions for a wide range of vertical industry segments.

“The edge AI markets we serve are evolving rapidly and our partner ecosystem needs a way to stay ahead of the curve,” said Muneyb Minhazuddin, Customer Growth Officer at Ambarella. “The Ambarella Developer Zone reduces friction, while giving partners deeper access. It is a foundational part of opening our ecosystem and supporting broader go-to-market collaboration.”

The DevZone brings the full breadth of Ambarella’s development resources together in one cohesive environment, including:

  • Cooper™ Model Garden: A growing repository of optimized models, validated and ready for testing
  • Learning & Onboarding Library: White papers, blogs, tutorials and sample applications to help users get started immediately
  • Agentic Blueprints: Low-code and no-code templates that enable the rapid design of multi-agent systems through unique, next-generation agentic workflows

Together, these elements create a more intuitive, efficient development experience that helps teams build and validate models rapidly for broader markets.

Through its DevZone, Ambarella is strengthening support for ecosystem partners by expanding access to parts of its software stack through agentic interfaces, sample workflows and development resources. This unified entry point makes it easier for ecosystem partners to evaluate Ambarella’s technology, integrate more efficiently and bring solutions to market faster.

This new AI developer zone is positioning the company to build on its leadership in edge endpoints—with more than 39 million edge AI SoCs shipped to date—by helping to accelerate and scale its expansion into the growing edge AI infrastructure market. The DevZone allows a broader network of ecosystem partners and customers to leverage Ambarella’s leadership technology, which is purpose-built for high-performance, low-power edge AI.

Two ISVs have already leveraged the Ambarella Developer Zone to build and deploy their edge AI models to the N1-655 SoC.

“Running the Cogniac platform on the Ambarella Developer Zone gives our teams a powerful way to push the boundaries of what’s possible with edge AI,” said Quinn Curtis, CEO of Cogniac. “Ambarella’s performance‑efficient architecture aligns perfectly with our mission to deliver scalable, real‑time intelligence, and the Developer Zone makes it faster and easier for innovators to build, test, and deploy production‑ready solutions.”

“Bringing our Viana vision analytics solution into the Ambarella Developer Zone opens the door to an entirely new level of performance for edge-based vision AI. Ambarella’s AI-focused SoCs give us a powerful foundation to deliver real-time insights with extremely low latency and far less power consumption. Together, we’re making it easier for customers to run advanced AI models at scale, so they can better understand what’s happening in their physical spaces and optimize operations as it happens,” said Thor Turrecha, EVP of Global SaaS, meldCX.

Addressing Today’s Edge AI Development Challenges

Developers across both physical AI and edge infrastructure applications—such as robotics, industrial automation, smart cameras, ADAS, on-premise AI boxes and many other sectors—are navigating rising complexity, from larger multimodal models to fragmented toolchains. At the same time, they are facing growing demands for fast and secure inference in latency-sensitive environments. Ambarella’s DevZone directly addresses these challenges by:

  • Reducing friction: Centralizing tools, examples, models, an onboarding library and learning resources in a single, developer-friendly zone
  • Strengthening ISV engagement: Providing a defined path for integration and co-launching solutions
  • Clarifying system design: Showing partners how Ambarella’s HW/SW stack fits into hybrid edge/cloud architectures
  • Preparing for future workloads: Supporting multimodal inference and hybrid AI pipelines

Launch During CES 2026

The new Ambarella Developer Zone will be demonstrated during CES in Las Vegas this week, at Ambarella’s invitation-only exhibition. Attendees will experience live demonstrations of Ambarella’s full-stack edge AI platform and can explore new partnership opportunities. For more information or to schedule a demo during the show, please contact your Ambarella representative or visit www.developer.ambarella.com.

About Ambarella

Ambarella’s products are used in a wide variety of edge AI and human vision applications, including video security, advanced driver assistance systems (ADAS), electronic mirrors, telematics, driver/cabin monitoring, autonomous driving, edge infrastructure, drones and other robotics applications. Ambarella’s low-power systems-on-chip (SoCs) offer high-resolution video compression, advanced image and radar processing, and powerful deep neural network processing to enable intelligent perception, sensor fusion and planning. For more information, please visit www.ambarella.com.

All brand names, product names, or trademarks belong to their respective holders. Ambarella reserves the right to alter product and service offerings, specifications, and pricing at any time without notice. © 2026 Ambarella. All rights reserved.

Ambarella’s CV3-AD655 Surround View with IMG BXM GPU: A Case Study
https://www.edge-ai-vision.com/2025/12/ambarellas-cv3-ad655-surround-view-with-img-bxm-gpu-a-case-study/
Fri, 05 Dec 2025

The CV3-AD family block diagram.

This blog post was originally published at Imagination Technologies’ website. It is reprinted here with the permission of Imagination Technologies.

Ambarella’s CV3-AD655 autonomous driving AI domain controller pairs energy-efficient compute with Imagination’s IMG BXM GPU to enable real-time surround-view visualisation for L2++/L3 vehicles. This case study outlines the industry shift to centralised domain controllers, introduces the CV3-AD family and the mid-range CV3-AD655, and explains what the GPU does and why it matters for driver awareness and trust. It also summarises why Ambarella chose us at Imagination, highlights the key IMG BXM GPU capabilities, and closes with what’s next as the platform moves toward market adoption.

If you want to download a copy of this case study, you can do so here.

The Rise of Autonomous Driving

In recent years, the capabilities of Advanced Driver Assistance Systems (ADAS) have flourished. Nearly half of all cars sold in the USA offer Level 2 capabilities (such as lane keeping and adaptive cruise control) or higher, and China is pushing the market further towards Level 3 (conditional automation with driver oversight) and beyond.

The advanced functionality offered by these ADAS and autonomous systems requires an exceptional vehicle computing architecture to operate safely and in real-time. To support this, in recent years, vehicle processing has started to centralise, moving from multiple smaller zonal controllers into fewer, larger domain controllers. This not only delivers the performance required, but is also helping Original Equipment Manufacturers (OEMs) lower vehicle production costs.

Developing AI Domain Controller SoCs for Autonomy

Ambarella is a leading provider of low-power domain controllers for autonomous systems. Its Artificial Intelligence (AI) Systems on Chip (SoCs) are perfect for handling the perception, fusion and planning processing tasks that allow a vehicle to understand its surroundings and plan a sensible path in real-time, without draining the vehicle’s battery.

The CV3-AD family launched in 2022. These energy-efficient SoCs combine AI and vector processors, CPUs, an Imagination GPU, advanced image signal processing, stereo and dense optical flow engines, a hardware security module, and a safety island for ASIL-D applications. The result is the ideal balance of flagship performance for central processing with industry-leading power efficiency.

The CV3-AD655 is the mid-range product in the CV3-AD family, offering advanced L2+ (also called L2++) and L3 autonomy with enhanced autopilot and automated parking, including support for multiple cameras, radars and other sensors. It includes an IMG BXM GPU to power advanced surround view systems and bird’s eye view applications.

What tasks does the GPU handle in the CV3-AD655?

The integrated IMG BXM GPU enables high-performance rendering and real-time image stitching from multiple camera inputs, delivering a seamless and immersive 360-degree visualisation around the vehicle.
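As a rough illustration of the stitching task described here, below is a minimal CPU-side sketch using OpenCV. The per-camera homographies, canvas size, and simple averaging blend are placeholder assumptions; a production surround-view pipeline would use calibrated fisheye camera models and run the warps on the GPU, as the CV3-AD655 does.

```python
# Minimal bird's-eye stitching sketch (CPU/OpenCV). A production
# surround-view system would use calibrated fisheye camera models and
# GPU warping; the homographies and canvas size here are assumptions.
import cv2
import numpy as np

CANVAS = (1000, 1000)  # output canvas (width, height), assumed

def stitch_birdseye(frames: dict, homographies: dict) -> np.ndarray:
    """Warp each camera frame onto a shared ground-plane canvas, then
    average the overlapping regions for a simple seamless blend."""
    acc = np.zeros((CANVAS[1], CANVAS[0], 3), dtype=np.float32)
    weight = np.zeros((CANVAS[1], CANVAS[0], 1), dtype=np.float32)
    for cam, frame in frames.items():
        warped = cv2.warpPerspective(frame.astype(np.float32),
                                     homographies[cam], CANVAS)
        mask = (warped.sum(axis=2, keepdims=True) > 0).astype(np.float32)
        acc += warped * mask  # accumulate each camera's contribution
        weight += mask        # count overlapping cameras per pixel
    return (acc / np.maximum(weight, 1)).astype(np.uint8)
```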

Why is this important?

With ADAS and lower levels of semi-autonomous driving, where the driver handles complex manoeuvres like parallel parking, a responsive and detailed surround view system gives the driver valuable information about their position relative to their surroundings and helps prevent low-speed accidents.

At higher levels of autonomy, visualisation of a vehicle’s movements in relation to its surroundings plays a key role in enhancing driver awareness of what the system is perceiving. With Level 2++ and Level 3 vehicles, surround view systems and perception 3D renderings make the system’s capabilities more transparent and intuitive, which in turn builds driver trust in the vehicle’s autonomous functionalities.

Why did Ambarella choose Imagination?

Imagination Technologies is a world-class GPU IP provider with a focus on efficiency for edge devices. It is the most popular GPU IP solution for cockpit and in-vehicle infotainment systems and can be found inside the models of nearly all the major car brands. Its flexible, programmable general purpose compute capabilities are also deployed by SoC architects in ADAS and autonomous domain controllers. Its popularity in vehicles stems from its performance efficiency, its flexibility and its performance-conscious safety solutions.

Ambarella is experienced in developing with Imagination’s GPU IP; previous iterations of the CV3-AD SoCs have featured the functionally safe IMG BXS GPU IP.

“By integrating an Imagination GPU inside the CV3-AD655, we are able to achieve exceptional efficiency and visual fidelity, reinforcing Ambarella’s commitment to innovation in intelligent automotive systems.”

Jason Huang, VP of Systems, Ambarella

What does the IMG BXM bring to the CV3-AD?

Given the CV3-AD655’s focus on efficiency, the IMG BXM GPU provided the right mix of graphics performance, compute capabilities and low power consumption:

  • It has the performance to render a surround view system on a 1080p screen at 60 frames per second, with twice the fillrate of competing cores.
  • The innately efficient PowerVR tile-based deferred rendering architecture, with additional geometry and frame buffer compression technologies, keeps power consumption low.
  • Features like programmable, high quality anti-aliasing deliver exquisite visual quality.
  • The arithmetic logic unit (ALU) guarantees high SIMD efficiency for general purpose compute tasks, like image stitching.
  • It has exceptional multi-tasking capabilities, simultaneously processing 2D, 3D, compute and housekeeping tasks via asynchronous computing.
  • Its firmware processor manages fine-grained task switching, workload balancing and power management.
  • It is a quality managed IP suitable for an ASIL-B(D) SoC.

Inside the IMG BXM GPU

What’s next for the CV3-AD655?

The CV3-AD655 with its Imagination GPU is set to bring high performance autonomy to mass market L2++ and L3 systems while helping OEMs lower total system design costs and manage energy consumption inside the vehicle. Ambarella is already partnering with major tier-1s like Continental/AUMOVIO to bring their full-stack vehicle system solutions to cars from 2027. The goal is to enable safer mobility and shape the path towards an autonomous future.

Eleanor Brash
Senior Product Marketing Manager, Imagination

Ambarella Redefines Edge AI Performance with Cadence
https://www.edge-ai-vision.com/2025/10/ambarella-redefines-edge-ai-performance-with-cadence/
Fri, 10 Oct 2025

This blog post was originally published at Cadence’s website. It is reprinted here with the permission of Cadence.

Ambarella stands at the forefront of edge AI processing, pioneering low-power, high-performance systems on chip (SoCs) that power a new generation of smart devices. Ambarella’s mission is to enable intelligence at the edge, from automotive systems that make our roads safer to security cameras that protect our homes and businesses. They create SoCs for devices that perceive, understand, and react to the world in real time, pushing the boundaries of what’s possible in edge infrastructure and physical AI.

The Edge AI Challenge: A Mountain of Data, A Trickle of Power

The world is generating data at an incredible rate, with an estimated 80% originating at the “edge”—in our cars, factories, hospitals, and smart devices. Sending this massive volume of information to the cloud for processing is becoming impractical. It’s too slow for applications requiring split-second decisions, too costly for mass-market devices, and introduces significant privacy and security risks.

Ambarella recognized this shift and set out to build a solution that could bring the power of the cloud directly to the device. Their goal was to create an edge AI SoC capable of handling immense generative AI workloads—models with billions of parameters—while simultaneously processing multiple high-definition video streams. This wasn’t just about adding an AI accelerator; it required a complete, harmonized system-on-chip. The engineering team faced a formidable set of challenges:

  • Massive AI Performance: The SoC needed to run sophisticated vision language models (VLMs) and large language models (LLMs) to provide contextual awareness and natural language interaction, a task typically reserved for power-hungry data centers.
  • Extreme Power Efficiency: Every watt counts for edge devices, especially those battery-powered or in tight enclosures. The chip had to deliver its massive performance within an extremely tight power budget of around 15 watts.
  • High-Speed Data Throughput: Processing multiple 1080p video streams while running AI models demands incredible data bandwidth. The interconnects within the SoC and to other system components had to be lightning-fast, and the chip had to be architected to minimize calls to external DRAM, avoiding bottlenecks that would severely limit real-time performance (see the sketch after this list).
  • Accelerated Time to Market: The AI landscape moves at a breakneck pace. Ambarella needed to move from architectural concept to silicon in hand quickly to maintain its competitive advantage.
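To make the DRAM point concrete, here is a back-of-envelope bound on LLM decode speed: in the memory-bound regime, each generated token must stream the model weights from DRAM at least once. The bandwidth and quantization figures below are illustrative assumptions, not Ambarella specifications.

```python
# Upper bound on LLM decode speed when weight streaming dominates:
# every generated token reads the full model from DRAM at least once,
# so tokens/s <= DRAM bandwidth / model size. Figures are assumptions.

def decode_ceiling(params_b: float, bytes_per_param: float,
                   dram_gb_s: float) -> float:
    """Tokens per second in the memory-bandwidth-bound regime."""
    model_bytes = params_b * 1e9 * bytes_per_param
    return dram_gb_s * 1e9 / model_bytes

print(decode_ceiling(8, 1.0, 100.0))  # 8B params, 8-bit: ~12.5 tok/s
print(decode_ceiling(8, 0.5, 100.0))  # 8B params, 4-bit: ~25 tok/s
```

Halving the bytes per parameter doubles the ceiling, which is why quantization and on-chip data reuse matter so much at the edge.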

Solution: Cadence IP and Solutions

To conquer these challenges, Ambarella knew it needed more than just a tool vendor; it needed a strategic partner with a deep portfolio of world-class IP and end-to-end design solutions. They turned to Cadence, building on a partnership that spanned five years and multiple product generations. Cadence provided both the critical IP and digital implementation tools needed to bring Ambarella’s architectural vision to life and optimize the performance. In parallel, Ambarella’s successful collaboration with Samsung Foundry guided the selection of Samsung’s proven 5nm process technology—offering a solid foundation for the AI acceleration, system integration, and power efficiency essential for running today’s leading multimodal VLMs and LLMs at scale. Together, these collaborations provided a springboard for Ambarella’s groundbreaking N1-655 chip.

For many on-premises and physical AI edge devices, interconnect quality is paramount, and Cadence’s interconnect IP was a key part of the SoC. Ambarella used Cadence’s industry-leading IP for PCIe 5.0 as an ultra-high-speed highway for the massive AI workload data moving between the SoC and other critical components.

Beyond the specific IP, Ambarella employed digital implementation, signoff, and system design solutions from Cadence to design its N1-655 SoC. Cadence’s end-to-end flow enabled Ambarella’s engineers to integrate the high-speed IP for PCIe 5.0 into the complex 5nm process technology and to manage power consumption across the chip to stay within the strict 15W envelope. Key solutions such as the Innovus Implementation System, Genus Synthesis Solution, Conformal Equivalence Checker, Voltus IC Power Integrity Solution, Tempus Timing Solution, Sigrity X Platform, and Clarity 3D Solver are integral to its workflow.

The Result: Redefining Performance at the Edge

The outcome of this powerful partnership is Ambarella’s latest edge AI SoC, the N1-655, a chip that sets a new industry benchmark. It can process LLMs with up to 8 billion parameters while simultaneously decoding 12 streams of 1080p video, all within its remarkable 15W power budget!

This achievement showcases how strategic collaboration accelerates innovation. By pairing Ambarella’s visionary architecture with Cadence’s proven design technologies and Samsung’s cutting-edge process technology, the team successfully delivered a solution that:

  • Slashed development time using a streamlined and predictable design flow.
  • Achieved significant PPA improvements, unlocking new levels of AI performance at record-low power.
  • Ensured mission-critical reliability for demanding applications in factories and security.

Cadence’s best-in-class IP solutions are essential for building the chips that power next-generation edge infrastructure and physical AI applications, with Ambarella shipping over 36 million edge AI SoCs, cumulatively. Ambarella’s N1-655 is more than just a chip; it’s a testament to what’s possible when industry leaders work together to solve the future’s biggest challenges. This isn’t just about performance—it’s about enabling real-time multimodal AI models, scaling VLMs and LLMs at the edge, and delivering industry-leading AI performance per watt. As Ambarella, Cadence, and Samsung Foundry look ahead to new projects utilizing 4nm and 2nm nodes, this story of innovation is just beginning.

Watch the full story now: Ambarella’s Edge AI Breakthrough: Powered by Samsung Foundry and Cadence.

Next-gen Fleet Telematics and Dashcams Shift to On-device AI
https://www.edge-ai-vision.com/2025/09/next-gen-fleet-telematics-and-dashcams-shift-to-on-device-ai/
Mon, 08 Sep 2025

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella.

The role of dashcams has changed significantly over the past decade. What began as a passive recording device has become an active, intelligent safety and operations tool. This evolution is being driven by edge AI—the ability to process and analyze data directly on the device without constant cloud connectivity.

On-device processing enables near-instantaneous analysis, allowing systems to detect and respond to events as they happen. For commercial fleets, that means improved compliance, operational efficiency, and driver safety. For passenger-vehicle drivers, it delivers a more secure and informed driving experience in a compact, easy-to-install format.

Why AI Matters in Fleets and Dashcams

Edge AI is not simply an enhancement to existing systems—it fundamentally changes how dashcams operate and the value they provide. By performing real-time analysis within the device, AI-powered dashcams deliver:

  • Immediate feedback and alerts without cloud-processing delays.
  • Reduced bandwidth, storage, and cloud costs through on-device filtering of non-critical footage.
  • Consistent performance across varied driving conditions, including low-light, inclement weather, and areas with poor connectivity.

These benefits apply to both the fleet and consumer markets. Advanced Driver Assistance Systems (ADAS) features—such as collision warnings, lane departure alerts, and traffic sign recognition—help prevent accidents in any vehicle. Driver Monitoring Systems (DMS) detect distraction, fatigue, and unsafe behaviors. Event-triggered recording ensures incidents and near misses are automatically captured for review.
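To illustrate how on-device filtering keeps non-critical footage off the network, here is a minimal sketch of event-triggered recording with a rolling pre-event buffer. The frame source, event detector, and clip sink are hypothetical placeholders; a real dashcam would wire in its own camera pipeline and AI detector.

```python
# Sketch of on-device event-triggered recording: hold a rolling
# pre-event history in RAM and persist a clip only when a detector
# fires, so routine footage never leaves the device. `frames`,
# `detect_event`, and `save_clip` are hypothetical placeholders.
from collections import deque

PRE_EVENT_FRAMES = 30 * 10  # ~10 s of history at 30 fps (assumed)

def record(frames, detect_event, save_clip):
    history = deque(maxlen=PRE_EVENT_FRAMES)  # rolling pre-event buffer
    remaining, clip = 0, []
    for frame in frames:
        history.append(frame)
        if remaining == 0 and detect_event(frame):
            clip = list(history)          # capture the lead-up
            remaining = PRE_EVENT_FRAMES  # plus ~10 s of aftermath
        elif remaining > 0:
            clip.append(frame)
            remaining -= 1
            if remaining == 0:
                save_clip(clip)  # persist or upload only this clip
                clip = []
```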

Transforming Fleet Telematics with Edge AI

In fleet operations, AI-powered dashcams are already delivering measurable improvements in safety, compliance, and efficiency.

Today’s systems can handle a broad range of tasks, across three main categories:

  • Advanced Driver Assistance Systems (ADAS): lane departure warnings, collision detection, and forward-collision alerts.
  • Driver Monitoring Systems (DMS): detecting distraction, drowsiness, and unsafe behaviors such as phone use or eating while driving.
  • Operational Insights: route optimization, fuel efficiency tracking, and vehicle health monitoring.

Adoption is accelerating due to rising insurance and liability costs, regulatory requirements such as Euro NCAP and driver monitoring mandates, and the expansion of last-mile delivery and service fleets.

We’ve seen this first-hand in large-scale deployments. For example, Samsara and other fleet telematics providers use Ambarella’s SoCs to power AI processing for their fleet management solutions, achieving exceptional AI performance-per-watt to meet the demanding requirements of always-on video telematics.

Enhancing Consumer and Aftermarket Dashcams

Consumer dashcams are beginning to incorporate many of the advanced AI capabilities originally developed for fleet applications.

Compact devices designed for personal vehicles can provide ADAS warnings, monitor driver alertness, and automatically capture critical events. Some models integrate facial recognition for driver authentication or theft deterrence. Selective cloud uploads ensure that only relevant clips are stored online, reducing the cost of bandwidth and cloud services while preserving access to important footage.

These capabilities are becoming more affordable as storage costs drop and SoC efficiency improves. We’re also seeing rising interest in DMS and Occupant Monitoring Systems (OMS) for personal vehicles—particularly for parents monitoring teen drivers or ride-share operators who want an added layer of safety.

The Technology Behind the Transformation

At the heart of this transformation is Ambarella’s portfolio of CVflow® Edge AI SoCs, purpose-built for high-performance computer vision at low power. This balance is essential in automotive applications, where thermal management and form factor constraints are critical.

The third generation of Ambarella SoCs, which integrates the CVflow 3.0 AI accelerator, makes it possible to run Large Language Models (LLMs), Vision Transformers, and Generative AI (GenAI) models entirely on-device. This enables next-generation features such as:

  • Zero-shot learning for recognition and response to rare events without retraining.
  • Conversational driver coaching for personalized safety feedback.
  • Real-time translation for road signs and navigation instructions in multiple languages.

Our broad partner ecosystem further amplifies these capabilities, spanning ADAS and DMS software providers, open-source tools like Linux and Docker, and middleware tailored for fleet management.

Partnering for the Road Ahead

The dashcams of the future will be even more intelligent, connected, and proactive—analyzing, assisting, and protecting drivers in real time without relying on constant cloud connectivity.

Ambarella’s Edge AI SoCs are making this future possible today, delivering industry-leading efficiency, reliability, and performance for both fleet and consumer dashcams.

If you’re ready to advance your dashcam or fleet telematics solutions with cutting-edge AI, contact us to learn how Ambarella can help you deliver the next generation of intelligent automotive safety and efficiency.

Ram Subramanian
Director of Automotive Product Marketing, Ambarella

Collaborating With Robots: How AI Is Enabling the Next Generation of Cobots
https://www.edge-ai-vision.com/2025/08/collaborating-with-robots-how-ai-is-enabling-the-next-generation-of-cobots/
Mon, 11 Aug 2025

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella.

Collaborative robots, or cobots, are reshaping how we interact with machines. Designed to operate safely in shared environments, AI-enabled cobots are now embedded across manufacturing, logistics, healthcare, and even the home. But their role goes beyond automation—they are collaborative partners, built to adapt, understand, and make decisions in real-time.

Unlike legacy robots designed for isolated, repetitive tasks, cobots are purpose-built for fluid engagement in dynamic, unpredictable settings. This demands more than mechanical precision; it requires real-time perception, contextual understanding, and continuous learning, all of which are made possible by AI.

The Limits of Traditional Robots

Traditional robots thrive in structured environments where variables are fixed. But the moment conditions shift—a person walks into the robot’s path, an object changes position, or a task varies—these systems falter. Their behavior is hard-coded, and modifying it often requires manual reprogramming.

Cobots, by contrast, adeptly handle complexity as it arises. They can interpret sensor data, understand spoken commands, identify objects, and make split-second decisions based on human behavior and proximity. Whether adjusting to a worker’s unexpected movement or adapting to a new task on the fly, cobots are designed to handle the messiness of real-world environments—without needing to be reprogrammed for each change.

Safety Through Intelligence

Safety is central to the cobot design philosophy. Working in close proximity to humans means cobots must perceive, anticipate, and respond to potential hazards in real-time. This includes slowing down when a person enters their workspace, pausing operations if a collision seems imminent, or rerouting to avoid obstacles.

AI makes these safety behaviors possible. With on-device computer vision, cobots can detect human limbs, monitor proximity, and adapt accordingly. Reinforcement learning enables them to refine responses over time—reacting faster to familiar situations and adjusting to new ones as they emerge.

In healthcare, this might mean recognizing and responding to a patient’s fall. In industrial settings, it could involve adapting to irregular part placement on a fast-moving assembly line. These capabilities aren’t the product of predefined rules; they’re powered by continual learning and context-aware perception.

Flexibility in Unstructured Environments

One of the most important recent advancements in AI-enabled cobots is on-device, GenAI-enabled versatility. With lightweight vision-language models (VLMs) and speech recognition, cobots can follow natural-language instructions like “bring the red box to station A” without relying on rigid programming.

Perception, speech and vision-language models allow cobots to analyze what they see and hear and convert that analysis into actions. The result is a flexible, low-latency system that can navigate cluttered spaces, handle a wide range of objects, and respond naturally to human input—all while operating online or offline and within limited power constraints.
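As a sketch of this kind of language-to-action loop, the snippet below asks a VLM to ground an instruction in the current camera view and emit a structured plan, which is then dispatched to motion skills. The run_vlm callable, the skill names, and the JSON plan schema are all hypothetical, not a specific Ambarella API.

```python
# Language-to-action sketch: a VLM grounds an instruction against the
# current camera view and emits a structured plan, which is dispatched
# to motion skills. `run_vlm`, the skill names, and the plan schema
# are hypothetical placeholders, not a specific Ambarella API.
import json

SKILLS = {
    "pick":  lambda target: print(f"picking {target}"),
    "place": lambda target: print(f"placing at {target}"),
    "goto":  lambda target: print(f"navigating to {target}"),
}

def execute(instruction: str, image, run_vlm) -> None:
    # Expected VLM output, e.g.:
    # [{"skill": "pick", "target": "red box"},
    #  {"skill": "goto", "target": "station A"},
    #  {"skill": "place", "target": "station A"}]
    plan = json.loads(run_vlm(image, instruction))
    for step in plan:
        SKILLS[step["skill"]](step["target"])

# execute("bring the red box to station A", camera_frame, run_vlm)
```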

This adaptability is critical in applications where workflows shift often, from flexible manufacturing lines to in-home assistance. Cobots remain useful as conditions change, without the need for constant retraining.

Distributed Intelligence in Collaborative Cobot Systems

Cobots don’t just operate alongside humans—they can now operate as coordinated, distributed teams. By marrying edge GenAI advancements with real-time communication and learning transfer, cobots can coordinate tasks, share environmental data, and adjust behaviors based on what their peers observe (i.e., environmental data from outside the range of their own sensors).

For example, if one cobot detects a blocked aisle or hardware issue, it can notify others to reroute or reassign tasks. These local networks function as collaborative meshes—decentralized, resilient, and increasingly autonomous.
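A toy version of that peer-notification pattern is sketched below: each cobot publishes observations to a local bus, and peers fold them into their own world models for rerouting. The message fields and in-process bus are illustrative stand-ins for whatever fleet protocol a real deployment would use.

```python
# Toy peer-notification mesh: each cobot broadcasts observations on a
# local bus, and peers fold them into their own world model. The
# message fields and in-process "bus" stand in for a real protocol.
from dataclasses import dataclass

@dataclass
class Observation:
    sender: str
    kind: str      # e.g. "blocked_aisle", "hardware_fault"
    location: str  # map zone identifier

class Cobot:
    def __init__(self, name, bus):
        self.name, self.bus, self.blocked = name, bus, set()
        bus.append(self)

    def report(self, kind, location):
        for peer in self.bus:  # broadcast to every other cobot
            if peer is not self:
                peer.receive(Observation(self.name, kind, location))

    def receive(self, obs):
        if obs.kind == "blocked_aisle":
            self.blocked.add(obs.location)  # replan around it next cycle

bus = []
a, b = Cobot("amr-1", bus), Cobot("amr-2", bus)
a.report("blocked_aisle", "aisle-7")
print(b.blocked)  # {'aisle-7'}
```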

AI makes this coordination possible, but it also introduces new performance demands: communication must be fast, decision-making must be distributed, and perception and analysis must remain sharp even in the absence of external connectivity.

Why Edge AI Hardware Matters

Running AI at the edge takes more than just sophisticated models—it requires hardware built to handle real-time inference within tight power, space, and thermal constraints. Cobots need to execute multiple workloads in parallel, from perception and motion planning to language understanding and safety checks, all without relying on cloud connectivity.

This is where the robot’s AI system-on-chip (SoC) architecture becomes critical. The ideal SoC must offer high AI performance per watt, integrate smoothly into compact robotic systems, and support the multimodal sensing and GenAI-based reasoning and decision-making tasks that define modern cobot mobility and operation.

Ambarella’s edge AI SoCs are optimized for these needs, combining advanced vision processing, multimodal sensor fusion, and low-latency inference within a compact, energy-efficient design. Whether deployed in mobile platforms (AMRs) or robotic arms, they enable cobots to make intelligent decisions locally and maintain autonomy, even in bandwidth-constrained environments. For mobile cobots, that power efficiency also helps extend battery life, which enables more efficient operations in use cases like factories.

Built for the Real World

Deploying cobots into human environments—e.g., homes, hospitals, and industrial sites—means solving for more than just intelligence. It requires platforms that are compact, energy-efficient, and robust enough to handle edge AI workloads without sacrificing overall system performance.

Customers are leveraging Ambarella’s CV7x and N1x Edge AI SoC families to address this challenge and deliver:

  • High AI performance per watt, optimized for real-time computer vision tasks like semantic segmentation and depth estimation, combined with lightweight GenAI vision-language models
  • Compact form factors, ideal for articulated arms, AMRs, and other space-constrained systems
  • Multi-modal sensor fusion, enabling cobots to leverage data from a diverse range of inputs for a more comprehensive understanding of their environment and surroundings
  • On-device inference, enabling autonomy and safety-critical decision-making without relying on cloud connectivity—including on-device GenAI.

These capabilities make Ambarella’s Edge AI SoCs well-suited to the demands of cobots operating in real-world scenarios—where power, safety, and responsiveness are non-negotiable.

Whether you’re building the next wave of cobots, designing systems that integrate teams of robots, or seeking high-performance AI hardware for similar new product platforms, we invite you to contact us.

Learn more at our AIoT, Industrial & Robotics products page.

Sophie Yang
Director of Edge AI Applications Engineering, Ambarella

Achieving High-speed Automatic Emergency Braking with AI-driven 4D Imaging Radar
https://www.edge-ai-vision.com/2025/07/achieving-high-speed-automatic-emergency-braking-with-ai-driven-4d-imaging-radar/
Mon, 07 Jul 2025

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella.

Across the globe, regulators are accelerating efforts to make roads safer through the widespread adoption of Automatic Emergency Braking (AEB). In the United States, the National Highway Traffic Safety Administration (NHTSA) implemented a sweeping regulation that requires all new light vehicles to include advanced AEB capabilities by 2029. These systems must operate effectively at highway speeds and in a range of real-world conditions—including low light and pedestrian scenarios.

The U.S. isn’t alone. In the European Union, AEB is now mandatory for new vehicle registrations under the General Safety Regulation (GSR), and China has announced that AEB will be required for all new cars starting in 2028. While the technical thresholds vary, the global direction is clear: AEB is no longer a premium feature—it’s a baseline expectation.

This shift marks a significant challenge for the industry. Delivering consistent, reliable AEB performance at higher speeds demands more than incremental improvements to today’s systems. It calls for a rethinking of the perception stack, sensor fusion strategies, and real-time decision-making—especially in scenarios where milliseconds can mean the difference between a near miss and a serious collision.

High-Speed AEB: Where Performance Meets Complexity

Low-speed AEB systems are well understood and increasingly common. They’re effective in urban environments and stop-and-go traffic, where reaction times and stopping distances are more forgiving. But at highway speeds, the margin for error narrows, as the following factors illustrate:

  • Braking distance grows with the square of speed (see the worked example after this list): a vehicle traveling at 120 km/h (~75 mph) needs roughly four times the stopping distance of one traveling at 60 km/h.
  • Detection time shrinks: At highway speeds, the time between initial object detection and the need to apply brakes is measured in milliseconds.
  • Sensor coverage becomes more demanding: Vehicles must detect objects farther away with greater precision, to anticipate threats early enough for safe braking.
  • Environmental complexity increases: Night driving, inclement weather, and occluded objects introduce additional ambiguity that perception systems must resolve.
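Here is the worked example referenced in the first bullet: idealized braking distance is d = v² / (2µg), ignoring driver or system reaction time, with an assumed dry-asphalt friction coefficient.

```python
# Worked example for the first bullet: idealized braking distance
# d = v^2 / (2 * mu * g), ignoring reaction time. The friction
# coefficient is an assumed dry-asphalt value.
MU, G = 0.7, 9.81  # friction coefficient (assumed) and gravity (m/s^2)

def braking_distance_m(speed_kmh: float) -> float:
    v = speed_kmh / 3.6  # km/h to m/s
    return v ** 2 / (2 * MU * G)

for kmh in (60, 120):
    print(f"{kmh} km/h -> {braking_distance_m(kmh):.0f} m")
# 60 km/h -> 20 m
# 120 km/h -> 81 m  (double the speed, four times the distance)
```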

Indeed, perception is at the very heart of this challenge. To perform reliably at high speeds, a vehicle’s AEB system must detect not only large vehicles, but also smaller, less reflective, or fast-moving objects—such as a child running across the road or a fallen motorcycle—often in low-light or degraded conditions.

The Limits of Conventional Sensor Stacks

Most currently available sensor stacks struggle to meet the demands of high-speed AEB. Cameras, while effective for object classification, lack direct distance and relative-speed measurement—especially at night or in adverse weather. Lidar can extend perception farther, but it remains costly and its effective functional range typically maxes out at around 80–100 meters. Traditional 3D radar offers only basic azimuth and range data, and often lacks the angular resolution or detection range needed to detect small objects with confidence.

These limitations are magnified at high speeds, where vehicles must interpret threats quickly and accurately. A child stepping onto the road, a motorcycle veering across lanes, or even a stationary box in the middle of the road all require sensors that can detect, classify, and track objects in real-time.

Pushing AEB into these scenarios reveals the shortcomings of legacy systems—and the need for a more advanced approach to perception.

Why Ambarella’s Oculii™ 4D AI Imaging Radar is Different

High-speed AEB demands precise, long-range perception across a wide range of conditions. Conventional radar often falls short—struggling with resolution, object tracking, and false positives. Oculii™ 4D imaging radar takes a fundamentally different approach, combining adaptive AI waveform processing with hardware efficiency to deliver high-resolution performance without the cost and complexity of massive antenna arrays.

Ambarella’s Oculii radar technology passed a global OEM’s high-speed AEB testing, across a range of objects and scenarios, including this one showing a small child in motion.

In third-party tests conducted by a global automotive OEM, Ambarella’s radar system detected small, low-profile objects—down to the size of a water bottle—at distances beyond 100 meters, including at night. These evaluations included more than ten object types, from a pedestrian dummy and a motorcycle, to a small stuffed puppy, a traffic cone, and a cardboard box. Crucially, our system not only located these objects, but accurately assessed their speed and trajectory—enabling smarter, more selective AEB activation when warranted.

Critically, we also outperformed the other 3D and 4D radar systems tested at suppressing false positives. Ghosting and noise are persistent issues in traditional radar systems that, unlike our Oculii technology, don’t employ AI algorithms to adapt radar waveforms to the environment. When those competing radar systems falsely trigger AEB at high speed, the result can be just as dangerous as a missed detection (think multi-car pileups). With our Oculii radar technology, we reduce these risks through a combination of high angular resolution, enhanced vertical separation, and AI-powered waveform adaptation.

This video shows Ambarella’s high-speed AEB detection of a small stuffed dog, in motion at night.

Our Oculii technology’s AEB performance was validated by the OEM at speeds up to 120 km/h, in both daylight and low-light conditions, on closed-course roads. To our knowledge, no other radar-based AEB system has demonstrated this level of precision and reliability in comparable high-speed tests using radar alone.

A Centralized, Scalable Approach

Our Oculii radar technology is not just high-performing—it’s also built for cost-effective scalability. Paired with Ambarella’s CV3-AD family of AI domain controller SoCs for centralized radar processing, our lightweight, AI-based Oculii software—which can use radar heads with only 6 transmit and 8 receive antennas—enables a centralized radar architecture that delivers high-resolution perception while minimizing hardware overhead. This approach reduces data bandwidth, simplifies integration with other sensors, and lowers overall system complexity (competing 4D radar systems require several times more antennas, generating far more data and consuming more power).
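For intuition on why such a sparse radar head is viable, recall the standard MIMO radar identity: the virtual array has Tx × Rx elements, so angular resolution scales with the product rather than the physical antenna count. The sketch below is generic MIMO arithmetic, not a description of Oculii’s proprietary AI processing, which the text credits with further resolution gains.

```python
# Standard MIMO radar arithmetic: the virtual array has Tx * Rx
# elements, so resolution scales with the product rather than the
# physical antenna count. Generic math, not Oculii's AI processing.
def virtual_elements(n_tx: int, n_rx: int) -> int:
    return n_tx * n_rx

print(virtual_elements(6, 8))    # 48 virtual channels from 14 antennas
print(virtual_elements(12, 16))  # 192 channels need 28 physical antennas
```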

That matters for OEMs building cost-sensitive platforms. High-speed AEB shouldn’t be reserved for premium vehicles. With the right architecture, advanced perception can scale across an OEM’s entire range of makes and models.

The Road Ahead

At Ambarella, our goal is to push the boundaries of what’s possible in vehicle safety, and to do so in a way that’s accessible, scalable, and regulation-ready.

With our Oculii radar technology, we’ve shown that it’s possible to deliver high-speed AEB using radar—without sacrificing performance. We believe this technology will be essential in helping OEMs meet the challenges of the coming decade, while protecting the lives of drivers, passengers, and pedestrians alike.

We invite additional OEMs to conduct their own AEB tests using Ambarella’s radar technology. Safety starts with better perception—and we’re ready to help you bring that to the world’s highways.

Ted Chua, Ph.D.
Director of Radar Technology Marketing, Ambarella

Italian National Automobile Museum Exhibit Honors Legacy and Future of Autonomous Driving
https://www.edge-ai-vision.com/2025/06/italian-national-automobile-museum-exhibit-honors-legacy-and-future-of-autonomous-driving/
Mon, 02 Jun 2025

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella.

Fifteen years ago, before we were acquired by Ambarella and became one of the company’s automotive R&D centers, VisLab sent four driverless vehicles from Parma to Shanghai. Traveling over 15,000 kilometers from July to October of 2010, the vehicles navigated across nine different nations and two continents, testing autonomous driving functions along the way. That journey, part of the VisLab Intercontinental Autonomous Challenge (VIAC), was the world’s first intercontinental autonomous driving experiment, and it remains, to this day, the only one ever completed.

Last month, one of those historic vehicles took its place in autonomous automotive history as part of Spazio Futuro (The Future Unfolds), the new exhibition at the Italian National Automobile Museum (MAUTO). The exhibit celebrates an ever-evolving dialogue between the past, present, and future of mobility, and I'm honored that Ambarella and VisLab were chosen to be a part of this rich history.

Visitors to the exhibit will be able to see our original VIAC prototype, alongside video projections, innovative models and interactive installations illustrating the evolution of autonomous driving. It's a moment of great pride, for me personally and for our team in Parma. What was once a radical experiment has now been recognized as a foundational milestone in the global journey toward increasing levels of autonomy.

Yet, our story didn’t begin with VIAC. Back in 1998, in one of the world’s first autonomous driving experiments ever conducted, we outfitted a Lancia Thema with a PC and a pair of videophone cameras and drove 2,000 kilometers around Italy in semi-autonomous mode on open roads. In 2005, we joined the DARPA Grand Challenge, with our TerraMax vehicle completing a fully autonomous 132-mile route across the Mojave Desert. In 2013, we marked another world first with our BRAiVE vehicle driving autonomously, with no one in the driver’s seat, through downtown Parma.

At every stage, VisLab and Ambarella have continued to innovate and stay ahead of the curve. Since becoming part of Ambarella in 2015, we've brought together VisLab's deep experience in autonomous driving with Ambarella's cutting-edge AI semiconductor designs. Each year, we showcase our continuing progress with new driving demos during the Consumer Electronics Show (CES) in Las Vegas, including a fully autonomous drive during CES 2024 (shown in the original post). We're refining vision and radar-based perception, path planning, sensor fusion and efficient edge computing, preparing for safer, smarter vehicles of the future.

To reach this future, we're pursuing an approach that's lean, scalable, and deeply informed by real-world complexity. We have already demonstrated, on the public roads of multiple continents, that our CV3-AD system-on-chip family, combined with our AD software stack installed in our R&D vehicles, can achieve L2+ to L4 autonomy using one or two power-efficient processing chips. Our AD stack's shift to a fully deep-learning-based design has transformed not only how our systems see the world, but how they understand it. The perception, decision-making and planning that used to rely on traditional algorithms and pre-loaded HD maps are now driven by deep learning AI processing capable of leveraging massive, real-time datasets for increased performance. One of our biggest technical challenges today is behavioral prediction: anticipating what other road users will do. This requires not just physical sensing, but large-scale real-world data and neural networks capable of anticipating human behavior. It's a hard problem, but one we're deeply committed to solving.
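To make the behavioral-prediction problem concrete, here is a minimal sketch of the general technique: encode an agent's recent motion, then regress its future waypoints. This is purely illustrative, assumes PyTorch, and is not Ambarella's AD stack; every layer size, horizon, and rate below is an arbitrary choice.

```python
# Minimal behavioral-prediction sketch: encode observed motion with a GRU,
# then regress a fixed horizon of future waypoints. Illustrative only.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, horizon: int = 30, hidden: int = 128):
        super().__init__()
        self.horizon = horizon
        # Encode the observed (x, y) position history.
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        # Decode a fixed-horizon sequence of future (x, y) waypoints.
        self.head = nn.Linear(hidden, horizon * 2)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, history_len, 2)
        _, h = self.encoder(history)          # h: (1, batch, hidden)
        out = self.head(h.squeeze(0))         # (batch, horizon * 2)
        return out.view(-1, self.horizon, 2)  # (batch, horizon, 2)

# Toy usage: predict 3 s of motion (30 steps at 10 Hz) from 2 s of history.
model = TrajectoryPredictor()
past = torch.randn(4, 20, 2)   # 4 agents, 20 observed positions each
future = model(past)           # (4, 30, 2) predicted waypoints
print(future.shape)
```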

One example, shown as a picture in the original post, illustrates the precise detection and classification of road objects that we've achieved using real-time neural image processing on a single CV3-AD chip.

Even with all the progress we've made, our historic VIAC vehicle will always be special. Not because of what it was, but because of what it proved 15 years ago: autonomy wasn't science fiction; it was achievable. And it could be done with ingenuity, grit, and a team of researchers from Parma who had the courage to follow through on their vision.

I invite you to visit the Spazio Futuro exhibit at MAUTO and see the vehicle that helped the world change how we think about transportation. It’s more than a museum piece—it’s a reminder that the future is built by those willing to experiment early, iterate relentlessly, and bring bold ideas to the road.

Alberto Broggi
General Manager, Ambarella

The post Italian National Automobile Museum Exhibit Honors Legacy and Future of Autonomous Driving appeared first on Edge AI and Vision Alliance.

Advancing Generative AI at the Edge During CES 2025 https://www.edge-ai-vision.com/2025/05/advancing-generative-ai-at-the-edge-during-ces-2025/ Fri, 09 May 2025 08:00:14 +0000

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella.

For this year's CES, our theme was Your GenAI Edge, highlighting how Ambarella's AI SoCs continue to redefine what's possible with generative AI at the edge. Building on last year's edge GenAI demos, we debuted a new 25-stream, multi-channel demo, combining video decoding with visual analytics powered by the CLIP and LLaVA One-Vision models. We also debuted our first automotive and fleet GenAI demonstrations, as well as a real-world vision-language model (VLM) implementation by autonomous trucking customer Kodiak. Over the course of four packed days, we had hundreds of customer, investor, press, analyst and partner meetings that showcased our latest innovations in edge AI. A short recap video is available in the original post.
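For readers unfamiliar with CLIP-style visual analytics, here is a hedged sketch of the core idea behind such demos: score each decoded video frame against free-text labels in a shared image-text embedding space. The model checkpoint, label prompts, and frame path are our own placeholders, and the sketch targets a desktop Python environment with Hugging Face transformers, not an Ambarella SoC.

```python
# Zero-shot frame tagging with CLIP (illustrative; checkpoint, labels, and
# the frame path are placeholders, and this is not Ambarella's demo code).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["an empty street", "a person at the door", "a delivery truck"]
frame = Image.open("frame_0001.jpg")  # stand-in for one decoded video frame

inputs = processor(text=labels, images=frame, return_tensors="pt", padding=True)
with torch.no_grad():
    # logits_per_image: (num_images, num_texts); softmax gives label scores.
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs):
    print(f"{label}: {p:.2f}")
```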

In total, we had nearly 40 demos, breaking our previous record for indoor demos by four. Spanning automotive, security, robotics, videoconferencing and action cameras, our demonstrations covered a wide range of on-device and on-premise applications, showcasing how we deliver high-performance, power-efficient GenAI and vision AI at the edge. Additionally, we debuted our new Partner Showcase room with numerous next-generation products and demonstrations from our continuously growing ecosystem.

Most of our visitors also took drives in our two demo vehicles, both running on a CV3-AD SoC AI domain controller. One vehicle featured our latest Oculii™ centralized-radar demo, running five 4D imaging radars and displaying point clouds of the surrounding environment in real time, with enough detail to identify individual pedestrians walking by! The second showcased both live GenAI VLM scene analysis and traditional-CNN camera + radar perception/fusion for the volume L2+ market, one of the world's first VLM + sensor fusion driving demos (screen capture in the original post).

Some of the other highlights from our exhibition included the first-ever display showing samples of the full Continental/Ambarella Joint ECU Portfolio; robotics fleet telematics and control featuring LLMs and multi-chip cooperation; customer LG's live driver monitoring system (DMS) demo, currently in production with a global automotive OEM; an LLM running on CV3-AD that described automotive scenes; and a CV75 reference design demonstrating transformer-based AI search for home security cameras.
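As general background on what transformer-based AI search typically involves (this is not a description of the CV75 reference design's software): frames are embedded once by a vision encoder, a text query is embedded by the matching text encoder, and frames are ranked by similarity. The sketch below uses random vectors as stand-ins for real encoder outputs.

```python
# Embedding-based video search, the general pattern behind "AI search" for
# camera footage. Random vectors stand in for real encoder outputs here.
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Assume frame_embeddings[i] came from a vision-language image encoder for
# frame i, and query_embedding from the matching text encoder for a phrase
# like "person carrying a package".
rng = np.random.default_rng(0)
frame_embeddings = rng.normal(size=(1000, 512)).astype(np.float32)
query_embedding = rng.normal(size=512).astype(np.float32)

scores = np.array([cosine_sim(query_embedding, f) for f in frame_embeddings])
top5 = np.argsort(scores)[::-1][:5]  # indices of the best-matching frames
print("best-matching frame indices:", top5)
```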

During CES, we announced our new N1-655 edge GenAI SoC, targeting on-premise, multi-channel VLM and NN processing in under 20 watts. This latest member of our N1 family is ideal for on-premise AI boxes, autonomous robotics and smart city security, bringing high-performance GenAI to power- and cost-constrained edge applications—consuming a fraction of the power needed by cloud processors.

We also announced an ecosystem collaboration with DeepEdge to integrate their end-to-end AI development platform with our portfolio of edge AI SoCs. Debuting during CES, the DeepEdge.ai Platform, combined with their Virtual Benchmark Lab, is designed to deliver an accelerated AI developer journey, streamlining the AI lifecycle—from data preparation and model training to optimization, deployment and monitoring—for Ambarella’s entire portfolio of CVflow® edge AI SoCs.

If you missed us this year, we’re offering customers and partners virtual guided tours featuring videos from all of our show demos. Contact your Ambarella representative to schedule.

The post Advancing Generative AI at the Edge During CES 2025 appeared first on Edge AI and Vision Alliance.
