Articles tagged with "sensor-fusion"
German military drones, robots hunt radioactive waste in danger zones
German researchers at the Fraunhofer Institute for Communication, Information Processing and Ergonomics (FKIE) have developed AI-powered drones and robots designed to rapidly detect radioactive waste in hazardous or inaccessible environments. These unmanned aerial systems (UAS) and unmanned ground vehicles (UGVs) integrate gamma detectors with electro-optical and infrared cameras, combined with advanced sensor fusion, automation, and probabilistic search algorithms. The technology can localize radioactive sources within a few feet in minutes, significantly reducing the time needed to find radioactive, chemical, or biological hazards during emergencies. The project is supported by the Bundeswehr Research Institute for Protective Technologies and CBRN Protection (WIS). The detection process involves two phases: an initial exploration phase in which the drone follows a predefined flight path measuring background radiation, and a targeted search phase triggered by anomaly detection. In the search phase, the drone autonomously adjusts its flight path using real-time sensor data and stochastic algorithms that estimate the probable location of radioactive sources. The system generates
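The article does not publish FKIE's algorithms, but the two-phase scheme it describes maps naturally onto a Bayesian grid search. The sketch below is a minimal, hypothetical version: a Poisson anomaly test against background radiation for the exploration phase, and a likelihood update of a source-location grid for the targeted search phase. All numbers (grid size, background rate, source strength) are made up for illustration.

```python
# Minimal sketch (not FKIE's actual code): Bayesian grid search in the spirit
# of the two-phase approach described above. Each gamma reading updates the
# posterior over candidate source cells via a Poisson model with 1/r^2 falloff.
import numpy as np

GRID = 50                      # hypothetical 50 x 50 m search area, 1 m cells
BACKGROUND = 3.0               # assumed background count rate (counts/s)
SOURCE_STRENGTH = 400.0        # assumed source term (counts/s at 1 m)

def expected_rate(drone_xy, cell_xy, altitude=10.0):
    """Expected count rate if the source were located in cell_xy."""
    d2 = (drone_xy[0] - cell_xy[0])**2 + (drone_xy[1] - cell_xy[1])**2 + altitude**2
    return BACKGROUND + SOURCE_STRENGTH / d2

def is_anomaly(counts, dt=1.0, sigma=4.0):
    """Exploration-phase trigger: counts well above the Poisson background."""
    mean = BACKGROUND * dt
    return counts > mean + sigma * np.sqrt(mean)

def update_posterior(posterior, drone_xy, counts, dt=1.0):
    """One Bayesian update of the source-location grid from a gamma reading."""
    xs, ys = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
    lam = np.vectorize(lambda x, y: expected_rate(drone_xy, (x, y)))(xs, ys) * dt
    loglik = counts * np.log(lam) - lam          # Poisson log-likelihood per cell
    posterior = posterior * np.exp(loglik - loglik.max())
    return posterior / posterior.sum()

posterior = np.full((GRID, GRID), 1.0 / GRID**2)       # flat prior
print("anomaly?", is_anomaly(counts=9))
posterior = update_posterior(posterior, drone_xy=(10.0, 12.0), counts=9)
print("most likely cell:", np.unravel_index(posterior.argmax(), posterior.shape))
```

A real search planner would then steer the drone toward the posterior mode, which is the role the summary attributes to the stochastic flight-path adjustment.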
Tags: robots, drones, radioactive-waste-detection, AI-automation, sensor-fusion, hazardous-environment-monitoring, military-technology

Westwood Robotics unveils new humanoid robot that works while walking
Westwood Robotics has introduced Themis Gen 2.5, a full-size humanoid robot that marks a significant advancement in mobile manipulation by enabling simultaneous walking and object manipulation. Unlike traditional humanoid robots that must stop to perform tasks, Themis Gen 2.5 integrates locomotion and manipulation in real time through its new AI-Augmented Humanoid Operating System (AOS). This software framework combines perception, planning, and control using sensor-fusion-based state estimation to maintain balance and precision while navigating dynamic environments. The robot also employs a navigation module with multi-layer mapping and semantic understanding, alongside an Object-Centric Vision-Action Model (OC-VAM) that links visual perception directly to physical actions for efficient task planning. In addition to software innovations, Themis Gen 2.5 features significant hardware upgrades, including a redesigned structure with 40% greater impact resistance and arms with seven degrees of freedom capable of handling payloads over five kilograms. Its lower body incorporates Mountain BEAR actuators in
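Westwood has not published AOS internals, so as a stand-in for what "sensor-fusion-based state estimation" typically means on a walking robot, here is a generic complementary filter that blends gyro integration with an accelerometer tilt estimate to track torso pitch; all values and the helper name are illustrative.

```python
# Illustrative only: a textbook complementary filter fusing angular rate with
# a gravity-based tilt estimate, the kind of estimator used to keep a walking
# robot's balance state drift-free while it manipulates objects.
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyro rate (rad/s) with an accelerometer tilt estimate (rad)."""
    pitch_gyro = pitch + gyro_rate * dt           # fast but drifts over time
    pitch_accel = math.atan2(accel_x, accel_z)    # noisy but drift-free
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

pitch = 0.0
for gyro, ax, az in [(0.02, 0.10, 9.78), (0.01, 0.12, 9.77), (-0.01, 0.09, 9.79)]:
    pitch = complementary_filter(pitch, gyro, ax, az, dt=0.002)
print(f"estimated torso pitch: {pitch:.4f} rad")
```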
Tags: robotics, humanoid-robot, mobile-manipulation, AI-operating-system, sensor-fusion, actuator-technology, industrial-automation

Robot Talk Episode 140 – Robot balance and agility, with Amir Patel - Robohub
In the Robot Talk Episode 140, Claire interviews Amir Patel, an Associate Professor of Robotics and AI at University College London, about his work on designing robots that emulate the agility and maneuverability of cheetahs. Patel’s research integrates robotics techniques such as sensor fusion, computer vision, mechanical modeling, and optimal control to analyze and quantify the locomotion of high-speed predators. The goal is to apply these biological insights to develop bio-inspired robotic machines with enhanced balance and agility. Patel’s background includes founding the African Robotics Unit at the University of Cape Town, highlighting his extensive experience in robotics research and education. The episode delves into how understanding animal movement can inform the creation of more capable autonomous robots, advancing the fields of robotics and artificial intelligence. Robot Talk, as a weekly podcast, continues to explore cutting-edge developments in robotics and autonomous systems through expert conversations like this one.
Tags: robotics, robot-balance, robot-agility, bio-inspired-robots, sensor-fusion, computer-vision, autonomous-machines

This Khosla-backed startup can track drones, trucks, and robotaxis, inch by inch
Point One Navigation, a San Francisco-based startup founded in 2016, specializes in highly precise location technology applicable across various moving vehicles and devices, including drones, autonomous vehicles, agricultural equipment, and wearables. Their positioning engine combines augmented global navigation satellite systems (GNSS), computer vision, and sensor fusion to achieve location accuracy within 1 centimeter under optimal conditions. The technology is primarily delivered via an API integrated into vehicles equipped with necessary hardware, while a chipset is provided for other devices. Initially focused on automotive clients, Point One's technology now supports over 150,000 vehicles from an EV manufacturer and serves sectors such as turf care, last-mile delivery fleets, and bike manufacturing. Recently, Point One raised $35 million in a Series C funding round led by Khosla Ventures, bringing its valuation to $230 million. The company has expanded rapidly since 2021, with a tenfold increase in manufacturers using its platform across automotive, robotics, industrial, and wearable sectors. The new funding
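Point One's positioning engine is proprietary, but the basic idea of fusing GNSS fixes with dead-reckoned motion can be shown with a one-dimensional Kalman-style update; everything below (the noise values and the kalman_1d helper) is a hypothetical sketch, not the company's API.

```python
# Hypothetical sketch: scalar Kalman-style fusion of a noisy GNSS fix with
# dead-reckoned motion (e.g., IMU or wheel odometry), the general recipe for
# tightening a position estimate with multiple sensors.
def kalman_1d(x, p, velocity, dt, gnss_z, q=0.01, r=0.5):
    # predict from dead reckoning
    x_pred = x + velocity * dt
    p_pred = p + q
    # correct with the GNSS measurement
    k = p_pred / (p_pred + r)              # Kalman gain
    x_new = x_pred + k * (gnss_z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
for z, v in [(0.95, 1.0), (2.10, 1.0), (2.98, 1.0)]:
    x, p = kalman_1d(x, p, velocity=v, dt=1.0, gnss_z=z)
print(f"fused position: {x:.2f} m (variance {p:.3f})")
```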
Tags: robot, IoT, autonomous-vehicles, positioning-technology, GNSS, sensor-fusion, precision-navigation

6 trends shaping robotics and AI - The Robot Report
The article from The Robot Report, based on a survey conducted by MassRobotics with support from Lattice Semiconductor, identifies six key trends shaping the robotics and AI industries. Among the most notable is the widespread use of sensor fusion, particularly the combination of LiDAR and cameras, which 75.7% of respondents found most effective for object detection. However, challenges such as high costs, integration complexity, and maintenance needs remain significant barriers, underscoring the industry's demand for more streamlined and affordable multi-sensor solutions. Another major trend is the growing adoption of Edge AI, with half of the surveyed professionals already implementing AI at the sensor level to reduce latency and improve real-time performance. This shift drives demand for low-power AI hardware capable of on-device inference. Motor control also remains critical, with emphasis on real-time responsiveness and power efficiency, highlighting the need for advanced control systems that minimize latency and optimize energy use. Power consumption continues to be a persistent challenge, with moderate satisfaction reported and a strong market
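A concrete flavor of the LiDAR-plus-camera fusion the respondents favor is projecting LiDAR points into the camera image so depth can be attached to visual detections. The sketch below assumes made-up intrinsics and an identity LiDAR-to-camera rotation purely for illustration.

```python
# Minimal sketch of camera-LiDAR fusion: project 3D points into the image
# plane so each pixel detection can be given a depth. Calibration values and
# toy points (camera-style frame, z forward) are made up for illustration.
import numpy as np

K = np.array([[600.0, 0.0, 320.0],       # assumed camera intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # assumed LiDAR-to-camera rotation
t = np.array([0.0, -0.1, 0.2])            # assumed translation (metres)

def project_lidar_to_image(points_lidar):
    """Return (u, v, depth) for each LiDAR point in front of the camera."""
    points_cam = points_lidar @ R.T + t
    in_front = points_cam[:, 2] > 0.1
    points_cam = points_cam[in_front]
    uvw = points_cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return np.hstack([uv, points_cam[:, 2:3]])

points = np.array([[0.5, 0.0, 5.0], [-1.0, 0.3, 12.0], [0.0, 0.0, -2.0]])
print(project_lidar_to_image(points))     # the point behind the camera is dropped
```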
Tags: robotics, artificial-intelligence, sensor-fusion, edge-AI, motor-control, machine-learning, autonomous-systems

Can China’s J-20 Detect the F-35?
The article examines the ongoing technological contest between two advanced stealth fighters: China’s upgraded J-20 “Mighty Dragon” and the U.S. F-35 Lightning II. China asserts that its J-20 can now detect the F-35 at distances exceeding 700 kilometers, leveraging next-generation AESA radar and infrared search systems enhanced by silicon carbide technology. This claim highlights significant advancements in China’s sensor capabilities aimed at countering the F-35’s stealth features. However, the article emphasizes that despite these improvements, the F-35 retains critical advantages through its sophisticated sensor fusion, extremely low radar cross-section, and integrated networked data links. These capabilities collectively enhance the F-35’s situational awareness and survivability in combat. Ultimately, the piece argues that future air engagements will be less about individual aircraft performance and more about the effectiveness of integrated systems and networked warfare, shifting the paradigm from jet-versus-jet dogfights to system-versus-system battles.
Tags: materials, silicon-carbide, radar-technology, stealth-technology, sensor-fusion, aerospace-technology, military-technology

The algorithms steering the future of maritime navigation
The article "The algorithms steering the future of maritime navigation" outlines the transformative shift in maritime engineering from traditional manual navigation methods to advanced autonomous shipping systems powered by artificial intelligence (AI) and sensor integration. Historically reliant on human crews for navigation and decision-making, modern vessels are increasingly equipped with sophisticated control systems that combine radar, LIDAR, GPS, sonar, cameras, and AI to enable real-time environmental awareness and autonomous decision-making. These systems allow ships to plan routes, avoid obstacles, and adjust operations dynamically, while human supervisors monitor performance remotely and intervene when necessary, especially during emergencies. The International Maritime Organization (IMO) categorizes autonomous ships into four degrees of autonomy, ranging from basic onboard automation (Degree 1) to fully autonomous vessels capable of independent navigation and decision-making (Degree 4). Despite technological advances, most autonomous ships in operation today are semi-autonomous (Degrees 1 and 2), with over 95% market share in 2023, reflecting the current preference for
Tags: robot, IoT, energy, autonomous-ships, maritime-navigation, AI-control-systems, sensor-fusion, machine-learning, maritime-robotics

4D1 launches T2 for rugged, millimeter-level 3D indoor positioning - The Robot Report
4D1 has launched the T2, a precise indoor positioning system designed to deliver millimeter-level 3D positioning with six degrees of freedom (6DoF) for industrial environments such as factories and process-centric industries. The T2 system addresses common challenges in indoor positioning like accuracy loss, drift, and bulky hardware by providing drift-free, real-time location tracking that includes full orientation for both robots and human operators. Its rugged, compact design is IP54-rated for dust and water resistance, making it suitable for harsh industrial settings. The system uses advanced sensor fusion, combining ultrasonic signals with an inertial measurement unit (IMU), enabling calibration-free operation and rapid deployment with existing industrial equipment. 4D1 emphasizes that T2 facilitates seamless collaboration between humans, robots, and AI systems, enhancing efficiency, safety, and productivity on the shop floor. The system generates AI-ready operational data that supports task validation, faster workforce upskilling, and actionable insights, contributing to smarter decision-making and AI-driven
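4D1 has not disclosed the T2's algorithms, but ultrasonic positioning of this kind is commonly solved by trilateration from time-of-flight ranges to fixed beacons, with the IMU smoothing motion between acoustic fixes. The beacon layout and the trilaterate helper below are hypothetical.

```python
# Illustrative sketch: linearised least-squares trilateration from ultrasonic
# ranges to beacons at known positions. An IMU would interpolate between fixes.
import numpy as np

beacons = np.array([[0.0, 0.0, 3.0],      # assumed beacon positions (metres)
                    [6.0, 0.0, 2.5],
                    [6.0, 5.0, 3.0],
                    [0.0, 5.0, 2.2]])

def trilaterate(ranges):
    """Solve for the tag position from ranges to four or more known beacons."""
    b0, r0 = beacons[0], ranges[0]
    # Subtracting the first sphere equation linearises the system: A @ x = b.
    A = 2.0 * (beacons[1:] - b0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol

true_pos = np.array([2.0, 1.5, 1.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
print(trilaterate(ranges))                 # recovers ~[2.0, 1.5, 1.0]
```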
Tags: robot, indoor-positioning, industrial-automation, AI, collaborative-robots, sensor-fusion, IIoT

RealSense deepens NVIDIA ties with new D555 depth camera
RealSense, now an independent company, is strengthening its collaboration with NVIDIA, a partnership previously limited due to RealSense’s former ownership by Intel. This collaboration centers on the new RealSense D555 depth camera, which features the v5 Vision Processor with on-chip Power over Ethernet, a global shutter, integrated IMU, and native ROS 2 support. The D555 enables direct streaming into NVIDIA’s Holoscan platform for ultra-low-latency sensor fusion and real-time computing, enhancing obstacle avoidance, safe human-robot interaction, and navigation for autonomous mobile robots (AMRs) and humanoids. The integration leverages NVIDIA’s Jetson Thor platform, powered by the Blackwell GPU architecture, delivering up to 2070 teraflops of AI performance with significantly improved energy efficiency compared to its predecessor, Jetson Orin. This allows robotics developers to run large-scale generative AI models and advanced perception pipelines at the edge. The partnership offers three main benefits: accelerated simulation-to-deployment
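Given the D555's native ROS 2 support, consuming its depth stream can be as simple as the node sketched below; the topic name is an assumption and will depend on how the RealSense driver is launched and remapped.

```python
# Minimal rclpy subscriber sketch for a depth stream; the topic name is an
# assumption, not taken from the article or the driver documentation.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class DepthListener(Node):
    def __init__(self):
        super().__init__("d555_depth_listener")
        self.create_subscription(
            Image, "/camera/depth/image_rect_raw",    # assumed topic name
            self.on_depth, 10)

    def on_depth(self, msg: Image):
        # Log basic frame metadata; a real pipeline would hand the buffer on.
        self.get_logger().info(
            f"depth frame {msg.width}x{msg.height}, encoding {msg.encoding}")

def main():
    rclpy.init()
    node = DepthListener()
    rclpy.spin(node)
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```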
Tags: robotics, depth-camera, NVIDIA-Jetson, AI-perception, sensor-fusion, autonomous-mobile-robots, real-time-compute

e-con Systems adds camera, compute solutions for NVIDIA Jetson Thor
e-con Systems has announced comprehensive support for NVIDIA’s newly launched Jetson Thor modules, which deliver up to 2070 FP4 TFLOPS of AI compute power aimed at next-generation robotics and AI-enabled machines. Their support spans a broad portfolio of vision products, including USB Series cameras, RouteCAM GigE Ethernet cameras with ONVIF compliance, 10G Holoscan Camera solutions, and a compact ECU platform designed for real-time edge AI applications. These solutions leverage multi-sensor fusion, ultra-low latency, and resolutions up to 20 MP, enabling accelerated development of advanced AI vision applications. A key highlight is e-con’s 10G e-con HSB solution that uses Camera Over Ethernet (CoE) protocol with a custom FPGA-based TintE ISP board, allowing direct data transfer to GPU memory with minimal CPU usage. This setup supports various high-quality sensors such as Sony IMX715 and onsemi AR0234, facilitating real-time operation and quicker response times. Additionally, e
Tags: robot, AI, embedded-vision, NVIDIA-Jetson-Thor, camera-solutions, edge-AI, sensor-fusion

How AV developers use virtual driving simulations to stress-test adverse weather - The Robot Report
The article discusses the significant challenges adverse weather conditions pose to autonomous vehicle (AV) systems, highlighting that rain, snow, fog, glare, and varying road surfaces can severely distort sensor inputs and decision-making processes. While AV technologies have advanced in ideal conditions, real-world environments with bad weather introduce complex disruptions that traditional training data often fail to address. Each sensor type—cameras, lidar, and radar—faces unique vulnerabilities: cameras suffer from obscured vision and noise, lidar can be affected by precipitation scattering laser beams, and radar, despite better penetration through fog and rain, experiences reduced resolution and clutter. When multiple sensors degrade simultaneously, overall system performance deteriorates sharply. These sensor challenges lead to perception and prediction failures, where objects may be missed or misclassified, and behavioral predictions become unreliable due to altered pedestrian and vehicle behaviors in bad weather. Such failures can cascade into unsafe planning and control decisions by the AV. Real-world incidents have demonstrated AV prototypes disengaging or misbehaving in adverse weather,
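One way such simulators stress-test perception is by degrading clean sensor data with weather effects before it reaches the perception stack. The sketch below applies a toy rain model to a LiDAR scan: random dropout, range jitter, and near-sensor clutter; the rates are illustrative and not calibrated to any real simulator.

```python
# Toy weather-degradation model for stress-testing perception offline:
# drop returns absorbed by rain, jitter the surviving points, and inject
# spurious near-field returns from droplets. All rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def degrade_lidar(points, drop_prob=0.25, range_sigma=0.05, clutter=20):
    """Simulate heavy rain on a clean LiDAR point cloud."""
    keep = rng.random(len(points)) > drop_prob           # scattered/absorbed returns
    noisy = points[keep] + rng.normal(0.0, range_sigma, (keep.sum(), 3))
    clutter_pts = rng.uniform(-2.0, 2.0, (clutter, 3))   # droplet clutter near sensor
    return np.vstack([noisy, clutter_pts])

clean_scan = rng.uniform(-30.0, 30.0, (1000, 3))
rainy_scan = degrade_lidar(clean_scan)
print(f"{len(clean_scan)} clean points -> {len(rainy_scan)} degraded points")
```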
Tags: robot, autonomous-vehicles, sensor-fusion, virtual-simulation, adverse-weather-testing, perception-systems, self-driving-technology

UK’s war brain tech cuts strike decision time from hours to minutes
The UK Army has introduced ASGARD (Autonomous Strike Guidance and Reconnaissance Device), a cutting-edge digital targeting system designed to drastically reduce strike decision times from hours to minutes and enhance battlefield lethality by tenfold. Developed in response to operational lessons from the Ukraine conflict, ASGARD integrates artificial intelligence, sensor fusion, and secure digital networks to create a real-time battlefield web. This system enables commanders to detect, decide, and engage targets rapidly across dispersed forces, effectively doubling the lethality of British troops. ASGARD has already undergone successful field tests with NATO forces in Estonia and is a key component of the UK’s broader Strategic Defence Review aimed at modernizing combat capabilities by 2027. ASGARD’s rapid development—from contract signing in January 2025 to a working prototype deployed within four months—demonstrates a shift toward faster procurement and modular, digital-first military technology acquisition. The system connects sensors, shooters, and decision-makers across land, sea, air, and
Tags: IoT, military-technology, artificial-intelligence, sensor-fusion, digital-networks, autonomous-systems, battlefield-technology