RIEM News

Articles tagged with "computer-vision"

  • How machine vision is enhancing automation safety and efficiency - The Robot Report

    The article explains how machine vision technologies enhance automation safety and efficiency by enabling automated systems to interpret and understand their environments through image analysis. Machine vision involves extracting meaningful information from images—not limited to visible light but also including infrared, laser, X-ray, and ultrasound imaging. This capability allows robots and automated equipment to identify and manipulate objects in complex settings, such as picking specific parts from a bin of randomly arranged items, regardless of their orientation or distance from the camera. Advanced machine vision systems also support 3D scanning and modeling, which can be used for applications like 3D printing. The article distinguishes machine vision from computer vision, noting that machine vision typically refers to established, efficient mathematical methods for image analysis, while computer vision often involves more computationally intensive approaches, including AI and machine learning. However, the terms can overlap in practice. Key techniques in machine vision include digital image processing (enhancement, restoration, and compression) and photogrammetry (extracting measurements and 3D information from images).
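
    As a rough illustration of the established, efficient methods the article contrasts with heavier AI approaches, the sketch below uses OpenCV to segment parts in an image and recover each one's position and orientation, the core of a simple bin-picking localizer. The file name, blur kernel, and area threshold are illustrative assumptions, not details from the article.

      # Classic machine-vision pipeline: threshold, find contours, and fit an
      # oriented bounding box to each part so localization is rotation-invariant.
      import cv2

      img = cv2.imread("bin.png")                      # hypothetical input image
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      blur = cv2.GaussianBlur(gray, (5, 5), 0)         # suppress sensor noise

      # Otsu's method picks a global threshold from the image histogram.
      _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      for c in contours:
          if cv2.contourArea(c) < 500:                 # ignore specks
              continue
          (cx, cy), (w, h), angle = cv2.minAreaRect(c) # oriented bounding box
          print(f"part at ({cx:.0f}, {cy:.0f}), {w:.0f}x{h:.0f} px, rotated {angle:.1f} deg")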

    robot, machine-vision, automation, industrial-robotics, computer-vision, AI, 3D-scanning
  • Simbe for Merchants suite offers retailers chain-wide visibility - The Robot Report

    Simbe Robotics has launched Simbe for Merchants, a comprehensive suite designed to give retailers real-time, chain-wide visibility into product placement and inventory on store shelves. Central to the offering are “realograms,” automatically generated, to-scale diagrams that accurately depict what is physically present on shelves, alongside real-time planogram dashboards. This technology addresses significant retail challenges: retailers lose an estimated 5.5% of sales and 5% of margin to in-store inefficiencies, and planogram execution averages only 60%. Simbe’s solution enables merchandising teams to monitor shelf conditions daily, ensuring correct product placement, display setups, and vendor alignment, ultimately driving measurable sales and margin improvements. The system leverages Simbe’s Tally robots and Tally Spot shelf-mounted cameras, combining computer vision and RFID capabilities to capture detailed, frequent data across hundreds of stores worldwide. Features like Multi-Store View allow retailers to instantly compare shelf conditions for specific products or categories across multiple locations.

    robot, retail-robotics, inventory-management, computer-vision, RFID-technology, shelf-intelligence, retail-automation
  • Robotic exoskeleton gives YouTuber 63% aim boost, 17ms latency

    YouTuber Nick Zetta, known as Basically Homeless, built a robotic exoskeleton to enhance his aiming performance in the Aimlabs training program. Combining Nvidia Jetson hardware with a YOLO-powered AI vision system, motors, and 3D-printed components, the device physically guides his wrist and fingers to improve target acquisition. Initial tests showed a 20% accuracy drop as Zetta adapted to the system, but after hardware optimizations—such as reducing latency from 50ms to 17ms and increasing motor strength—he achieved a 63% boost in his Aimlabs score, propelling him to second place on the global leaderboard. The exoskeleton attaches to the forearm with 3D-printed hinges; Kevlar lines and gimbal motors control wrist movements, while solenoids manage finger clicks. A high-speed camera feeds real-time target data to the AI, which directs the motors to adjust hand positioning, effectively acting as a physical aimbot.
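
    For readers curious about the architecture, here is a skeleton of the detect-then-actuate loop the summary describes: a camera frame goes to a detector, and the error between the crosshair and the target drives the wrist motors or triggers a click. Every name here (detect_target, move_wrist, fire_solenoid, read_frame) is a hypothetical stand-in, not Zetta's actual code.

      import time

      def detect_target(frame):
          """Stand-in for the YOLO detector on the Jetson; returns (x, y) or None."""
          return None

      def move_wrist(dx, dy):
          """Stand-in for the gimbal-motor command that tugs the Kevlar lines."""

      def fire_solenoid():
          """Stand-in for the solenoid that presses the mouse button."""

      def control_loop(read_frame, center=(960, 540), deadzone_px=4):
          while True:
              t0 = time.perf_counter()
              target = detect_target(read_frame())
              if target is not None:
                  dx, dy = target[0] - center[0], target[1] - center[1]
                  if abs(dx) < deadzone_px and abs(dy) < deadzone_px:
                      fire_solenoid()        # crosshair already on target: click
                  else:
                      move_wrist(dx, dy)     # nudge the wrist toward the target
              # End-to-end time per iteration is the latency the article cites
              # (reduced from ~50 ms to 17 ms in Zetta's build).
              latency_ms = (time.perf_counter() - t0) * 1000.0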

    robotics, robotic-exoskeleton, AI-vision, computer-vision, Nvidia-Jetson, 3D-printing, assistive-technology
  • Self-supervised learning for soccer ball detection and beyond: interview with winners of the RoboCup 2025 best paper award - Robohub

    The article highlights award-winning research on autonomous soccer ball detection by the SPQR team, who received the best paper award at RoboCup 2025, held in Salvador, Brazil. The team addressed a key challenge in robotic soccer: accurate ball detection under varying conditions. Traditional deep learning approaches require large labeled datasets, which are difficult and labor-intensive to produce for highly specific tasks like RoboCup. To overcome this, the researchers developed a self-supervised learning framework that reduces the need for manual labeling by leveraging pretext tasks that exploit the structure of unlabeled image data. Their method also incorporates external guidance from a pretrained object detection model (YOLO) to refine predictions from a general bounding box to a more precise circular detection around the ball. Deployed at RoboCup 2025, the new model demonstrated significant improvements over the team's 2024 benchmark, requiring less training data and exhibiting greater robustness to different lighting and environmental conditions, an adaptability that is crucial given the variability of competition venues.
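
    The box-to-circle refinement is easy to picture in code. The hedged sketch below starts from a generic detector's bounding box and fits a circle inside the crop with a Hough transform, falling back to the box's inscribed circle; the SPQR paper's actual self-supervised pipeline is more involved, and all parameter values here are assumptions.

      import cv2

      def refine_box_to_circle(gray, box):
          """Turn a coarse (x1, y1, x2, y2) box into a (cx, cy, r) circle."""
          x1, y1, x2, y2 = [int(v) for v in box]
          crop = gray[y1:y2, x1:x2]                  # 8-bit grayscale crop
          r_max = max(1, max(crop.shape) // 2)
          circles = cv2.HoughCircles(crop, cv2.HOUGH_GRADIENT, dp=1.2,
                                     minDist=r_max, param1=100, param2=20,
                                     minRadius=r_max // 2, maxRadius=r_max)
          if circles is None:                        # fall back: inscribed circle
              return (x1 + x2) / 2, (y1 + y2) / 2, min(x2 - x1, y2 - y1) / 2
          cx, cy, r = circles[0, 0]                  # strongest circle found
          return x1 + cx, y1 + cy, r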

    robot, autonomous-robots, self-supervised-learning, deep-learning, RoboCup, soccer-robots, computer-vision
  • Symage to spotlight future of vision model training at RoboBusiness

    Symage, a company specializing in physics-based, high-fidelity synthetic image data for AI and computer vision training, will showcase its technology at RoboBusiness 2025, held October 15-16 at the Santa Clara Convention Center. Unlike generative AI approaches, Symage’s platform generates photorealistic synthetic datasets without visual artifacts or model degradation, resulting in faster training, improved accuracy, better edge-case coverage, and reduced bias. CEO Brian Geisel emphasizes that this approach enables robotics teams to develop and test vision models more efficiently and reliably, supporting smarter and safer robotics systems. At RoboBusiness, which attracts over 2,000 robotics professionals and features 100+ exhibitors and numerous educational sessions, Geisel will present on how synthetic data accelerates vision model development, particularly in warehouse automation, agriculture technology, and mobile robotics. Symage’s offerings highlight the potential of physics-accurate synthetic data to train models before hardware is available, covering critical edge cases and improving data quality.

    robotics, AI-training, synthetic-data, computer-vision, robotics-development, automation, robotics-innovation
  • Circus SE completes first production of CA-1 robots in high-volume facility - The Robot Report

    Circus SE, a Munich-based developer of AI-powered autonomous food preparation robots, has announced the start of production for its fourth-generation CA-1 robot at a newly established high-volume manufacturing facility. The factory, designed with an intelligent modular setup, enables industrial-scale production of the complex CA-1, which comprises over 29,000 components—comparable in complexity to a small car. The CA-1 can prepare meals in three to four minutes and integrates advanced features such as smart food silos for inventory tracking, induction cooking for energy-efficient rapid heating, robotic arms for dispensing and plating, AI-driven computer vision for operational monitoring, and a self-cleaning system for low maintenance. Each unit undergoes more than 150 precision tests to ensure enterprise-grade reliability akin to automotive standards. Circus SE is expanding its global presence with support from Celestica, its production partner experienced in engineering and supply chain management, enabling the company to scale production to thousands of units annually.

    robotics, AI, autonomous-systems, food-preparation-robots, industrial-production, computer-vision, energy-efficiency
  • Orchard Robotics, founded by a Thiel fellow Cornell dropout, raises $22M for farm vision AI 

    Orchard Robotics, founded by Charlie Wu—a Cornell computer science dropout and Thiel fellow inspired by his grandparents’ apple farming background—has raised $22 million in a Series A funding round led by Quiet Capital and Shine Capital. The startup develops AI-powered vision technology to help fruit growers more accurately monitor crop health and yield. Using small cameras mounted on tractors, Orchard Robotics captures ultra-high-resolution images of fruit, which are analyzed by AI to assess size, color, and health. This data is then uploaded to a cloud-based platform that helps farmers make informed decisions about fertilization, pruning, labor needs, and marketing. Although computer vision for specialty crops is not a new concept, most large U.S. farms still rely on manual sampling, which yields imprecise estimates of crop conditions. Orchard Robotics aims to close this gap with more precise, scalable data collection and analysis. The company’s technology is already deployed on major apple and grape farms and is expanding to other crops such as blueberries.

    robotics, artificial-intelligence, agriculture-technology, farm-automation, computer-vision, IoT-in-agriculture, precision-farming
  • AI system slashes GPS errors almost 40 times in urban settings

    Researchers at the University of Surrey have developed an AI system called Pose-Enhanced Geo-Localisation (PEnG) that dramatically improves location accuracy in urban environments where GPS signals are often unreliable. By combining satellite imagery with street-level images and using relative pose estimation to determine camera orientation, PEnG reduces localization errors from 734 meters to just 22 meters. The system operates with a simple monocular camera of the kind common in vehicles, making it practical for real-world use, especially in tunnels or dense cities where GPS coverage is weak or unavailable. PEnG offers a GPS-independent navigation solution that could significantly improve the reliability and resilience of autonomous vehicles, robotics, and other navigation-dependent industries such as logistics and aviation. The researchers emphasize that the approach not only improves everyday convenience but also addresses safety concerns linked to GPS outages or interference. Supported by the University of Surrey's PhD Foundership Award, the team is working on a prototype for real-world testing and has made its research openly available.
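
    Relative pose estimation, the geometric step PEnG layers on top of satellite-to-street matching, can be sketched with standard tools. The snippet below recovers the rotation and (unit-scale) translation between two monocular frames via an essential matrix; it is not PEnG's implementation, and the camera intrinsic matrix K is a placeholder the caller must supply.

      import cv2
      import numpy as np

      def relative_pose(img1, img2, K):
          """Estimate camera rotation R and direction-of-travel t between frames."""
          orb = cv2.ORB_create(2000)                 # fast binary features
          kp1, d1 = orb.detectAndCompute(img1, None)
          kp2, d2 = orb.detectAndCompute(img2, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
          pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
          # RANSAC rejects outlier matches while fitting the essential matrix.
          E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
          return R, t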

    robot, AI, autonomous-vehicles, navigation, GPS-alternatives, computer-vision, robotics
  • Secret lighting codes could make spotting deepfake videos easier

    Cornell researchers have developed a light-based watermarking technique to combat the growing threat of deepfake videos, which have become increasingly convincing with advances in generative AI. Unlike traditional digital watermarks, which require cooperation from cameras or AI models, this method embeds nearly invisible codes directly into the lighting environment during video recording. By subtly varying the brightness of light sources—such as computer screens or lamps equipped with small computer chips—the system creates a hidden signature that is imperceptible to the human eye but can later be used to verify video authenticity. This “noise-coded” lighting approach blends into natural light fluctuations, making the embedded codes difficult to detect or remove without knowledge of the secret pattern. Each light source carries a unique code, enabling forensic analysts to identify manipulated or missing footage by comparing the original lighting pattern with recovered “code videos,” which reveal inconsistencies in altered sections. The technique supports multiple simultaneous codes within a scene, increasing the difficulty for adversaries, who would need to replicate all codes consistently.
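
    A toy model makes the verification idea concrete: nudge a light's brightness per frame by a secret pseudorandom code, then check footage by correlating recovered brightness against that code. The real Cornell system hides its codes inside natural light fluctuations; the amplitude, code length, and detection rule below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(seed=42)           # the shared secret
      code = rng.choice([-1.0, 1.0], size=600)       # one chip per video frame

      def embed(brightness, amplitude=0.005):
          """Modulate a light's per-frame brightness by an invisible +/-0.5%."""
          return brightness * (1.0 + amplitude * code[:len(brightness)])

      def verify(observed):
          """Correlate mean-removed brightness with the secret code."""
          x = observed - observed.mean()
          return np.dot(x, code[:len(x)]) / (np.linalg.norm(x) * np.sqrt(len(x)) + 1e-9)

      scene = np.full(600, 200.0) + rng.normal(0, 1.0, 600)   # simulated lighting
      print(verify(embed(scene)), verify(scene))     # high score vs. near zero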

    IoT, lighting-technology, deepfake-detection, video-watermarking, computer-vision, AI-security, digital-forensics
  • Simbe makes Tally more effective in fresh departments with latest update - The Robot Report

    Simbe Robotics has enhanced its Store Intelligence platform with new capabilities designed for fresh grocery departments such as produce, deli, bakery, and prepared foods. These updates combine the Tally autonomous mobile robot (AMR), fixed sensors, RFID technology, and virtual tours to provide near real-time visibility into inventory levels, product locations, pricing, and freshness. This multimodal approach addresses the operational complexity and high shrink rates—averaging 6.6% in perimeter departments—that characterize fresh zones, which now represent 42% of total grocery sales and 41% of online grocery revenue. The expanded platform aims to help grocers reduce shrink, improve product availability, and strengthen shopper trust by automating manual processes and delivering actionable insights. Features include Tally’s daily scans of packaged fresh goods to identify out-of-stocks and pricing errors, Tally Spot’s high-frequency monitoring of fast-selling items, panoramic virtual tours for remote merchandising assessment, and RFID-enabled freshness tracking.

    robot, autonomous-mobile-robot, retail-automation, computer-vision, artificial-intelligence, inventory-management, grocery-technology
  • MIT vision system teaches robots to understand their bodies

    MIT researchers at CSAIL have developed a robotic control system called Neural Jacobian Fields (NJF) that enables robots to learn how their bodies move in response to motor commands purely through visual observation, without embedded sensors or hand-coded models. Using a single camera and random exploratory movements, NJF allows robots—ranging from soft robotic hands to rigid arms and rotating platforms—to autonomously build an internal model of their 3D geometry and control sensitivities. The approach mimics how humans learn to control their limbs by observing and adapting to their own movements, shifting robotics from traditional programming toward teaching robots through experience. NJF's key innovation is decoupling robot control from hardware constraints, letting designers create soft, deformable, or irregularly shaped robots without embedding sensors or modifying structures to simplify modeling. Leveraging a neural network inspired by neural radiance fields (NeRF), NJF reconstructs the robot's shape and its response to control inputs solely from visual data.
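
    The core idea, inferring how commands move the body purely from what a camera sees, can be reduced to a few lines. The sketch below fits a single linear Jacobian from random exploratory commands and observed keypoint motion, then inverts it to pick a command; NJF learns a full neural field over 3D space rather than one matrix, and the simulated "true" Jacobian here is an assumption for demonstration.

      import numpy as np

      rng = np.random.default_rng(0)
      n_motors, n_features = 3, 8                        # e.g. 4 tracked points (x, y)
      J_true = rng.normal(size=(n_features, n_motors))   # unknown body kinematics

      commands = rng.normal(size=(200, n_motors))        # random exploration
      observed = commands @ J_true.T + 0.01 * rng.normal(size=(200, n_features))

      # Least-squares fit of observed ~= commands @ J.T recovers the Jacobian.
      J_fit, *_ = np.linalg.lstsq(commands, observed, rcond=None)
      J_est = J_fit.T
      print("max Jacobian error:", np.abs(J_est - J_true).max())

      # Invert the learned model to choose a command that produces a desired
      # visual motion -- the essence of vision-based control.
      target_motion = np.zeros(n_features); target_motion[0] = 1.0
      command = np.linalg.pinv(J_est) @ target_motion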

    robotics, machine-learning, soft-robotics, robotic-control-systems, neural-networks, 3D-printing, computer-vision
  • Wayve CEO Alex Kendall brings the future of autonomous AI to TechCrunch Disrupt 2025

    At TechCrunch Disrupt 2025, taking place October 27–29 at Moscone West in San Francisco, Alex Kendall, co-founder and CEO of Wayve, will be featured on an AI-focused panel discussing the future of autonomous AI. Kendall, who founded Wayve in 2017, pioneered an approach to autonomous driving that relies on embodied intelligence powered by deep learning and computer vision rather than traditional handcrafted rules or maps. His work demonstrated that machines can interpret their environment and make real-time driving decisions without manual coding, a significant breakthrough in self-driving technology. Kendall is now spearheading development of AV2.0, a next-generation autonomous vehicle architecture designed for global scalability, and as CEO he integrates strategy, research, partnerships, and commercialization efforts to bring intelligent driving systems to market. He holds a PhD in Computer Vision and Robotics and has been recognized on Forbes 30 Under 30.

    robot, autonomous-vehicles, AI, deep-learning, computer-vision, embodied-intelligence, self-driving-systems
  • AI can see whatever you want with US engineers' new attack technique

    US engineers have developed a novel attack technique called RisingAttacK that can manipulate AI computer vision systems to control what the AI "sees." The method targets widely used vision models in applications such as autonomous vehicles, healthcare, and security, where AI accuracy is critical for safety. RisingAttacK works by identifying key visual features in an image and making minimal, targeted changes to those features, causing the AI to misinterpret or fail to detect objects that remain clearly visible to humans. For example, an AI might recognize a car in one image but fail to detect it in a nearly identical, subtly altered image. The researchers tested RisingAttacK against four popular vision models—ResNet-50, DenseNet-121, ViT-B, and DEiT-B—and found it effective against all of them. The technique highlights the vulnerability of deep neural networks to adversarial attacks, in which input data is subtly altered to deceive AI systems. The team is now exploring the technique's applicability to other AI systems.
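
    For context, the snippet below shows the classic fast-gradient-sign (FGSM) attack, the simplest member of the adversarial family the article describes: a tiny, gradient-directed pixel change flips a model's prediction. RisingAttacK itself ranks key visual features and perturbs them minimally; its exact algorithm is not reproduced here, and the model is loaded with random weights (weights=None) so the sketch runs offline.

      import torch
      import torchvision.models as models

      model = models.resnet50(weights=None).eval()   # stand-in vision model
      image = torch.rand(1, 3, 224, 224, requires_grad=True)
      label = torch.tensor([817])                    # hypothetical class index

      # Gradient of the loss w.r.t. the input tells us which pixels to nudge.
      loss = torch.nn.functional.cross_entropy(model(image), label)
      loss.backward()

      epsilon = 2.0 / 255                            # imperceptible per-pixel budget
      adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

      with torch.no_grad():
          print("clean prediction:      ", model(image).argmax().item())
          print("adversarial prediction:", model(adversarial).argmax().item())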

    robot, AI-security, autonomous-vehicles, computer-vision, adversarial-attacks, artificial-intelligence, cybersecurity
  • Digital Teammate from Badger Technologies uses multipurpose robots - The Robot Report

    Badger Technologies LLC recently launched its Digital Teammate platform, featuring autonomous mobile robots (AMRs) designed to work alongside retail store associates to boost productivity and operational efficiency. These multipurpose robots integrate computer vision and artificial intelligence to assist employees by automating tasks such as hazard detection, inventory monitoring, price accuracy, planogram compliance, and security. The platform aims to complement rather than replace human workers, providing data that improves store operations and the customer shopping experience. Badger emphasizes that the robots act as digital teammates, extending staff capabilities and enabling more meaningful human interactions. The platform combines hardware and software, including RFID detection and retail media network advertising, to augment existing retail systems and data analytics, while a mobile app delivers prioritized tasks and insights to all levels of retail staff, from floor associates to executives, supporting data-driven decisions without requiring users to become analysts. The robots help retailers "triangulate" data by comparing expected inventory with actual shelf conditions.

    robot, autonomous-mobile-robots, retail-automation, artificial-intelligence, computer-vision, inventory-management, RFID-technology
  • US unleashes smart rifle scopes that shoot enemy drones on their own

    The US Army has begun deploying the SMASH 2000L, an AI-enabled smart rifle scope developed by Israeli defense firm Smart Shooter and designed to counter small unmanned aerial systems (sUAS). The fire control system integrates electro-optical sensors, computer vision, and proprietary target acquisition software to detect, lock onto, and track small aerial targets such as quadcopters or fixed-wing drones. It permits the rifle to fire only when a guaranteed hit is calculated, removing human error in timing and enabling soldiers to engage drones with high precision. The SMASH 2000L was recently demonstrated during Project Flytrap, a multinational live-fire exercise in Germany, where US soldiers used it mounted on M4A1 carbines. A lighter, more compact evolution of earlier SMASH variants already in use by NATO partners and combat forces, it weighs about 2.5 pounds, fits standard Picatinny rails, and offers real-time image processing.

    robot, artificial-intelligence, smart-rifle-scopes, drone-defense, military-technology, computer-vision, autonomous-targeting
  • How Do Robots See?

    The article "How Do Robots See?" explores the mechanisms behind robotic vision beyond the simple use of cameras as eyes. It delves into how robots process visual information to understand their environment, including determining the size of objects and recognizing different items. This involves advanced technologies and algorithms that enable robots to interpret visual data in a meaningful way. Boston Dynamics is highlighted as an example, demonstrating how their robots utilize these vision systems to navigate and interact with the world. The article emphasizes that robotic vision is not just about capturing images but involves complex processing to enable perception and decision-making. However, the content provided is incomplete and lacks detailed explanations of the specific technologies or methods used.

    robotics, computer-vision, Boston-Dynamics, robot-sensing, machine-perception, artificial-intelligence, robotics-technology
  • Simbe upgrades vision platform with AI-powered capabilities - The Robot Report

    robot, AI, computer-vision, inventory-management, retail-technology, automation, operational-efficiency
  • Anduril is working on the difficult AI-related task of real-time edge computing

    IoT, edge-computing, military-technology, autonomous-systems, computer-vision, data-processing