RIEM News

Articles tagged with "edge-computing"

  • Quadric rides the shift from cloud AI to on-device inference — and it’s paying off

    Quadric, a chip-IP startup founded by veterans of bitcoin mining firm 21E6, is capitalizing on the growing demand for on-device AI inference as companies and governments seek to reduce cloud infrastructure costs and enhance sovereign AI capabilities. Originally focused on automotive applications like driver assistance, Quadric has expanded into laptops, industrial devices, and other markets, leveraging its programmable AI processor IP that customers can embed into their own silicon. This approach, combined with a software stack and toolchain for running models locally, has driven significant growth: Quadric’s licensing revenue surged from about $4 million in 2024 to $15–20 million in 2025, with a target of $35 million in 2026, boosting its valuation to $270–300 million. The shift toward on-device AI is fueled by the rise of transformer-based models and the increasing cost and complexity of centralized AI infrastructure. Quadric’s chip-agnostic technology supports distributed AI setups where inference runs locally on devices such as

    IoT, AI-inference, on-device-AI, chip-IP, automotive-AI, edge-computing, semiconductor-technology
  • Caterpillar rolls out autonomous excavators, trucks, dozers

    At CES 2026, Caterpillar Inc. introduced a new generation of intelligent, fully autonomous construction machines, including excavators, haul trucks, dozers, loaders, and compactors. These machines are designed to enhance safety, precision, and productivity on jobsites by embedding autonomy directly into construction workflows. Caterpillar’s lineup also features connected site systems like Cat VisionLink and Cat MineStar, which enable coordinated, data-driven fleet operations by allowing machines to share information and adapt to changing site conditions in real time. Caterpillar’s autonomous technology reflects over 30 years of research and development, beginning with early collaborations in the 1980s on software, GPS, and perception systems. The company has since advanced sensing, positioning, and control technologies, achieving Level 4 autonomy where machines operate independently. Today, Caterpillar manages one of the world’s largest autonomous mining fleets, which has safely moved more than 11 billion tonnes of material across 380 million kilometers. The autonomous systems leverage AI,

    robot, autonomous-vehicles, construction-technology, AI, machine-learning, edge-computing, mining-automation
  • Nvidia wants to be the Android of generalist robotics 

At CES 2026, Nvidia unveiled a comprehensive robotics ecosystem aimed at becoming the default platform for generalist robotics, analogous to Android’s role in smartphones. This ecosystem includes new open foundation models—such as Cosmos Transfer 2.5, Cosmos Predict 2.5, a vision language model (VLM), and Isaac GR00T N1.6—that enable robots to reason, plan, and adapt across diverse tasks and environments, moving beyond narrow, task-specific bots. Nvidia also introduced Isaac Lab-Arena, an open-source simulation framework designed to safely and efficiently test robotic capabilities in virtual environments, addressing the high cost and risk of physical validation. Supporting this ecosystem is Nvidia OSMO, an open-source command center that integrates workflows from data generation to training across desktop and cloud platforms. To power these innovations, Nvidia launched the Jetson T4000 module, delivering 1200 teraflops of AI compute with efficient power consumption, targeting cost-effective on-device processing. Nvidia is

    robotics, AI, Nvidia, simulation, edge-computing, robot-foundation-models, Jetson-Thor
  • US startup unveils AI supercomputer OMNIA the size of a carry-on

    Californian startup ODINN has introduced OMNIA, an AI supercomputer the size of a carry-on suitcase, designed to deliver data center-level AI performance without the need for building large, on-site facilities. OMNIA targets sectors requiring data privacy and low latency—such as defense, government, finance, and healthcare—where sending sensitive data to cloud-based centers is not viable. The system integrates high-end CPUs, GPUs, memory, and storage into a compact, self-contained unit with a proprietary closed-loop cooling system, enabling quiet operation and rapid deployment in minutes within standard office or secure environments. To address scalability, ODINN developed the Infinity Cube, a modular cluster combining multiple OMNIA units within a single enclosure, allowing organizations to build customizable AI clusters without the complexity and time of traditional data center construction. Complementing the hardware, ODINN’s NeuroEdge software manages job scheduling and deployment, integrating with NVIDIA’s AI ecosystem to optimize performance and reduce operational overhead. At CES

    energy, AI-supercomputer, data-center, cooling-system, scalable-computing, modular-data-center, edge-computing
  • AI Box lets carmakers add an AI brain inside vehicles without redesign

    At CES 2026 in Las Vegas, South Korean semiconductor company BOS Semiconductors will unveil an innovative plug-in AI Box designed to add advanced artificial intelligence capabilities to existing vehicles without requiring redesign or replacement of current infotainment systems. This external AI computing module, powered by the Eagle-N AI accelerator, enables carmakers to rapidly integrate high-performance AI features such as autonomous driving support, software-defined vehicle functions, and real-time physical AI decision-making into both new and refreshed car models. The AI Box connects flexibly with existing vehicle electronics, allowing AI-intensive tasks to be offloaded to the module while preserving the original system functions. The AI Box operates on an on-device AI architecture, processing sensitive data like voice and video within the vehicle itself rather than relying on cloud computing. This approach enhances data privacy, security, and system stability regardless of network connectivity, while also reducing cloud traffic and associated costs over time. By minimizing platform changes and cutting development time and expenses, BOS Semiconductors aims to accelerate the

    IoT, automotive-technology, AI-integration, autonomous-vehicles, edge-computing, semiconductor, smart-vehicles
  • MayimFlow wants to stop data center leaks before they happen

    MayimFlow is a startup focused on preventing damaging water leaks in data centers, a critical issue given the extensive water use in these facilities and the costly downtime leaks can cause. Founded by John Khazraee, who has over 15 years of experience building infrastructure for major tech companies like IBM, Oracle, and Microsoft, MayimFlow combines IoT sensors with edge-deployed machine learning models to detect early signs of leaks. Unlike many data centers that rely on reactive leak detection, MayimFlow aims to provide operators with 24 to 48 hours of advanced warning, potentially saving millions in remediation costs and avoiding service disruptions. The company’s team includes experts such as Jim Wong, chief strategy officer with decades of data center experience, and Ray Lok, CTO specializing in water management and IoT infrastructure. Khazraee’s motivation stems from a personal background valuing efficiency and frugality, which informs the startup’s mission to optimize water use and prevent leaks. While initially targeting data centers,

    IoT, data-centers, water-leak-detection, machine-learning, edge-computing, infrastructure-monitoring, predictive-maintenance
  • New tech can help US Army drones to operate in GPS-denied environments

    A Florida-based company, Safe Pro, has developed advanced AI algorithms integrated into its patented Safe Pro Object Threat Detection (SPOTD) technology, enabling U.S. military drones to operate effectively in GPS-denied environments. SPOTD is a rapid battlefield image analysis platform that identifies and maps small explosive threats such as landmines and ambush drones using high-resolution drone imagery and GPS-tagged geospatial data. The technology, tested in real-world exercises in Ukraine, can create 2D/3D threat models and is designed to function both on the Amazon Web Services (AWS) Cloud and at the Edge, offering up to a tenfold reduction in processing time. The enhanced SPOTD capabilities will be showcased at the U.S. Army 2026 Concept Focused Warfighter Experiment (CFWE) at Fort Hood, Texas. Safe Pro emphasizes that the system provides significant operational advantages in electronic warfare-contested environments by improving situational awareness and actionable intelligence for military reconnaissance, planning, and

    robot, AI, drones, military-technology, computer-vision, edge-computing, GPS-denied-environments
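
    As a rough illustration of the geospatial step described in the item above (turning a detection in a drone image into map coordinates), the sketch below maps a pixel from a nadir-pointing camera to an approximate latitude/longitude using a flat-terrain approximation. It ignores aircraft heading and camera tilt and is not Safe Pro's SPOTD pipeline; all numbers are placeholders.

```python
import math

def pixel_to_latlon(px, py, img_w, img_h, drone_lat, drone_lon,
                    altitude_m, hfov_deg, vfov_deg):
    """Map an image pixel to approximate ground coordinates for a
    nadir-pointing camera (flat-terrain, small-offset approximation).
    Illustrative only; not Safe Pro's SPOTD geolocation code."""
    # Ground footprint covered by the image at this altitude
    ground_w = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    ground_h = 2 * altitude_m * math.tan(math.radians(vfov_deg) / 2)

    # Offset of the pixel from the image centre, in metres
    dx = (px / img_w - 0.5) * ground_w    # east-west
    dy = (0.5 - py / img_h) * ground_h    # north-south (row 0 = top of frame)

    # Convert metre offsets to degrees (valid for small offsets)
    dlat = dy / 111_320.0
    dlon = dx / (111_320.0 * math.cos(math.radians(drone_lat)))
    return drone_lat + dlat, drone_lon + dlon

# Example: a detection at pixel (1800, 400) in a 4000x3000 frame
print(pixel_to_latlon(1800, 400, 4000, 3000,
                      49.84, 24.03, altitude_m=120, hfov_deg=73, vfov_deg=58))
```
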
  • AI from orbit: 6G research explores satellites as moving edge servers

    The article discusses innovative research in 6G networks that envisions satellites as integral components of edge computing, enabling seamless artificial intelligence (AI) services on a global scale. With 6G commercialization anticipated around 2030, researchers from the University of Hong Kong and Xidian University propose a "space–ground fluid AI" framework that integrates satellites into space–ground integrated networks (SGINs). This approach transforms satellites into both communication hubs and computing servers, addressing challenges such as satellite mobility and limited space–ground link capacity. The framework enables AI models and data to flow continuously between satellites and ground stations, extending traditional edge AI architectures into orbit. The space–ground fluid AI framework is built on three core techniques: fluid learning, fluid inference, and fluid model downloading. Fluid learning uses an infrastructure-free federated learning scheme that leverages satellite motion to mix and spread AI model parameters, turning satellite movement into an advantage for faster training convergence and improved accuracy. Fluid inference optimizes real-time AI decision

    IoT, edge-computing, 6G, satellites, AI, space-ground-networks, communication-technology
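
    The "fluid learning" idea above turns satellite motion into a parameter-mixing mechanism. The toy Python sketch below shows the underlying intuition with gossip-style averaging between whichever nodes happen to be in contact at each time step; the visibility schedule and model vectors are invented, and this is not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each satellite holds its own copy of model parameters
# (here just a flat vector) trained on locally collected data.
n_sats, dim = 6, 8
params = [rng.normal(size=dim) for _ in range(n_sats)]

def visible_pairs(t, n):
    """Hypothetical visibility schedule: as satellites move along their
    orbits, different pairs come into contact at each time step."""
    return [(i, (i + t) % n) for i in range(n) if i != (i + t) % n]

for t in range(1, 20):                       # simulated passes
    for i, j in visible_pairs(t, n_sats):
        avg = 0.5 * (params[i] + params[j])  # gossip-style parameter mixing
        params[i], params[j] = avg.copy(), avg.copy()

# Orbital motion keeps changing who meets whom, so local models drift
# toward a common average without any central aggregation server.
centre = np.mean(params, axis=0)
spread = max(np.linalg.norm(p - centre) for p in params)
print(f"parameter spread after mixing: {spread:.4f}")
```
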
  • US: World's smallest AI supercomputer that fits in a pocket unveiled

    US startup Tiiny AI has unveiled the Tiiny AI Pocket Lab, officially recognized by Guinness World Records as the world’s smallest personal AI supercomputer. This pocket-sized device, resembling a power bank, can locally run large language models (LLMs) with up to 120 billion parameters without needing cloud connectivity, servers, or high-end GPUs. The Pocket Lab aims to reduce reliance on cloud infrastructure and GPUs, addressing sustainability concerns, rising energy costs, and privacy risks associated with cloud-based AI. By enabling advanced AI capabilities on a personal device, Tiiny AI seeks to make AI more accessible, private, and energy-efficient. Designed for a wide range of users including creators, developers, researchers, and students, the Pocket Lab supports complex AI tasks such as multi-step reasoning, deep context understanding, and secure processing of sensitive data—all while keeping data stored locally with bank-level encryption. It runs models between 10 billion and 100 billion parameters, covering over 80% of real-world AI tasks,

    AI, supercomputer, energy-efficiency, edge-computing, privacy, deep-learning, low-power-devices
  • Forterra brings in $238M to scale AI platforms for defense applications - The Robot Report

    Forterra, a defense-focused company specializing in scalable autonomous hardware and software, has raised $238 million in a Series C funding round led by Moore Strategic Ventures, with participation from investors including Salesforce Ventures and Franklin Templeton. The company plans to use the capital to advance innovation in communications, command, and control systems, and to expand production capacity for edge computing platforms that serve defense and emerging mission domains. Forterra’s CEO, Josh Araujo, emphasized the critical role of autonomous systems in modern military operations, describing the company’s technology as a “force multiplier” that enhances reach, survivability, and effectiveness across battlespace and industrial applications. Forterra’s product suite includes AutoDrive, a self-driving system for diverse terrains; TerraLink, a platform for real-time vehicle oversight; Vektor, a communication and data-brokering layer optimized for disrupted and low-bandwidth environments; Oasis, an interoperability platform; and goTenna, mesh networking devices for secure off-grid connectivity.

    robot, autonomous-systems, military-robotics, edge-computing, communication-systems, self-driving-technology, drone-swarms
  • The brain may be the blueprint for the next computing frontier

    The article discusses the rapid advancement of neuromorphic computing, a technology that models hardware on the brain’s neurons and spiking activity to achieve highly energy-efficient and low-latency data processing. Unlike traditional deep neural networks (DNNs) that rely on continuous numeric activations and consume significant power, spiking neural networks (SNNs) use asynchronous, event-driven spikes inspired by biological neurons. This approach enables dramatic reductions in energy use and processing time; for instance, Intel’s Loihi chips reportedly perform AI inference 50 times faster and with 100 times less energy than conventional CPUs and GPUs, while IBM’s TrueNorth chip achieves unprecedented energy efficiency at 400 billion operations per second per watt. However, SNNs currently face challenges in accuracy and training tool maturity compared to traditional AI models. The global race to develop neuromorphic hardware is intensifying, with major players like Intel and IBM in the US leading early efforts through chips such as Loihi and TrueNorth, and startups

    energy, neuromorphic-computing, spiking-neural-networks, AI-chips, brain-inspired-hardware, energy-efficiency, edge-computing
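
    To make the contrast with continuous DNN activations concrete, here is a minimal leaky integrate-and-fire neuron in Python: it integrates input, leaks, and emits discrete spike events only when a threshold is crossed. It is a textbook toy, not the neuron model used in Loihi or TrueNorth.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential leaks toward zero,
    integrates its input, and emits a discrete spike when it crosses the
    threshold. A toy model of the event-driven behaviour neuromorphic
    chips exploit."""
    v, spikes = 0.0, []
    for i, current in enumerate(input_current):
        v += dt / tau * (-v + current)      # leaky integration
        if v >= v_thresh:                   # threshold crossing -> spike event
            spikes.append(i)
            v = v_reset                     # reset after firing
    return spikes

rng = np.random.default_rng(1)
current = rng.uniform(0.5, 2.5, size=100)   # noisy input drive
print("spike times:", lif_neuron(current))  # sparse events, not dense activations
```
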
  • New brain-like computer could bring self-learning AI to devices

    Engineers at The University of Texas at Dallas, led by Dr. Joseph S. Friedman, have developed a small-scale brain-inspired computer prototype that learns and processes information more like the human brain. Unlike traditional AI systems, which separate memory and processing and require extensive training with large labeled datasets, this neuromorphic hardware integrates memory and computation, enabling it to recognize patterns and make predictions with significantly fewer training computations and lower energy consumption. The design is based on Hebb’s law, where connections between artificial neurons strengthen when they activate together, allowing continuous self-learning. The prototype uses magnetic tunnel junctions (MTJs)—nanoscale devices with two magnetic layers separated by an insulator—that adjust their connectivity dynamically as signals pass through, mimicking synaptic changes in the brain. MTJs also provide reliable binary data storage, overcoming limitations seen in other neuromorphic approaches. Dr. Friedman aims to scale up this technology to handle more complex tasks, potentially enabling smart devices like phones and wearables to run

    neuromorphic-computing, brain-inspired-AI, magnetic-tunnel-junctions, energy-efficient-AI, edge-computing, self-learning-AI, smart-devices
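
    A minimal software version of the Hebbian rule the article describes ("connections strengthen when neurons activate together") is sketched below. The MTJ hardware realizes this in device physics; the NumPy loop is only the mathematical idea, with made-up activity patterns and learning rate.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1, w_max=1.0):
    """Hebb's law: strengthen the connection between a pre- and
    post-synaptic neuron whenever both are active together. A toy
    software stand-in for what the MTJ synapses do in hardware."""
    weights = weights + lr * np.outer(post, pre)   # co-activation strengthens links
    return np.clip(weights, 0.0, w_max)            # bounded, like a physical device

rng = np.random.default_rng(2)
w = np.zeros((3, 4))                               # 4 inputs -> 3 outputs
for _ in range(50):
    pre = (rng.random(4) > 0.5).astype(float)      # binary input activity
    drive = w @ pre + rng.normal(0.0, 0.5, size=3) # input drive plus noise
    post = (drive > 0.5).astype(float)             # thresholded firing
    w = hebbian_update(w, pre, post)
print(w.round(2))                                  # frequently co-active pairs grew strongest
```
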
  • Advantech introduces edge AI systems for a range of robot embodiments - The Robot Report

    Advantech has launched a new lineup of Edge AI systems powered by NVIDIA’s Jetson Thor platform, targeting real-world robotics, medical AI, and data AI applications. These systems integrate application-specific hardware with pre-installed JetPack 7.0, remote management tools, and vertical software suites like Robotic Suite and GenAI Studio. Built on a container-based architecture, they offer enhanced flexibility and faster development cycles. The NVIDIA Jetson Thor modules deliver up to 2070 FP4 TFLOPS of AI performance, alongside improved CPU performance and energy efficiency. Advantech also collaborates with ecosystem partners on sensor and camera integration and thermal design to facilitate faster and more efficient edge AI application deployment. Advantech’s robotic controllers, ASR-A702 and AFE-A702, are designed for humanoids, autonomous mobile robots (AMRs), and unmanned vehicles, providing real-time AI inference with GPU-accelerated SLAM and support for multi-camera and sensor inputs. These controllers feature hardware

    robot, edge-AI, NVIDIA-Jetson, robotic-controllers, IoT, AI-in-robotics, edge-computing
  • Mbodi will show how it can train a robot using AI agents at TechCrunch Disrupt 2025

    Mbodi, a New York-based startup founded by former Google engineers Xavier Chi and Sebastian Peralta, has developed a cloud-to-edge hybrid computing system designed to accelerate robot training using multiple AI agents. Their software integrates with existing robotic technology stacks and allows users to train robots via natural language prompts. The system breaks down complex tasks into smaller subtasks, enabling AI agents to collaborate and gather the necessary information to teach robots new skills more efficiently. Mbodi’s approach addresses the challenge of adapting robots to the infinite variability of real-world physical environments, where traditional robot programming is often too rigid and time-consuming. Since launching in 2024 with a focus on picking and packaging tasks, Mbodi has gained recognition by winning an ABB Robotics AI startup competition and securing a partnership with a Swiss robotics organization valued at $5.4 billion. The company is currently working on a proof of concept with a Fortune 100 consumer packaged goods (CPG) company, aiming to automate packing tasks that frequently change and are difficult to

    robotics, artificial-intelligence, AI-training, cloud-computing, edge-computing, automation, robotic-software
  • Skyline Nav AI’s software can guide you anywhere, without GPS — find it at TechCrunch Disrupt 2025

    Skyline Nav AI, founded by Kanwar Singh, has developed Pathfinder, an AI-driven vision-based navigation system that can guide users without relying on GPS. The software matches visual inputs—such as buildings, roads, or aerial views—to a database to provide real-time navigation, making it especially useful in environments where GPS signals are blocked, like urban canyons or mountainous terrain. Beyond civilian applications, the technology addresses critical national security concerns by serving as a backup against GPS jamming, a growing threat in modern warfare. This capability has already attracted partnerships with the Department of Defense, NASA, and defense contractor Kearfott, despite Skyline being a small startup with just eight employees. At TechCrunch Disrupt 2025, Singh introduced Pathfinder Edge, a compact edge computing device that runs a streamlined version of Pathfinder, enabling GPS-independent navigation on various platforms without requiring cellular or Wi-Fi connectivity. Singh envisions Skyline’s technology complementing GPS rather than replacing it, similar to how modern communication systems seamlessly

    AI-navigation, edge-computing, GPS-independent-navigation, defense-technology, autonomous-systems, visual-navigation, GPS-jamming-countermeasures
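
    A minimal sketch of the general idea behind vision-based localization (matching a camera frame against stored reference imagery) is shown below using ORB features in OpenCV. The file names are placeholders and the pipeline is generic; it is not Skyline Nav AI's Pathfinder.

```python
import cv2

# Placeholder file names: substitute a live camera frame and a
# georeferenced tile from your own reference database.
query = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_tile.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_q, des_q = orb.detectAndCompute(query, None)
kp_r, des_r = orb.detectAndCompute(reference, None)

# Hamming distance for binary ORB descriptors, with Lowe's ratio test
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des_q, des_r, k=2)
        if m.distance < 0.75 * n.distance]

print(f"{len(good)} good matches")
# A high match count (plus a consistent homography estimated from the
# matched keypoints) would indicate which database tile the camera sees,
# yielding a position fix without GPS.
```
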
  • Starship Technologies obtains funding for autonomous deliveries across the U.S. - The Robot Report

    Starship Technologies, a company founded in 2014 by Skype co-founders Ahti Heinla and Janus Friis, has raised $50 million in a Series C funding round, bringing its total investment to over $280 million. The San Francisco-based firm operates what it claims is the largest autonomous delivery network globally, with more than 2,700 robots completing over 9 million deliveries across 270+ locations in seven countries. Starship plans to expand its robotic delivery services from U.S. university campuses and European cities into broader North American urban markets, aiming to offer sub-30-minute deliveries to millions of consumers. The company emphasizes its progress in achieving SAE Level 4 autonomy, improving robot autonomy by double-digit percentages annually, and addressing challenges such as safety validation, regulatory compliance, all-weather reliability, and profitability at scale. Starship leverages a combination of classical algorithms, computer vision, and neural networks optimized for edge computing to enhance its robots' performance while maintaining rigorous safety standards.

    robot, autonomous-delivery, robotics, urban-logistics, AI, edge-computing, SAE-Level-4-autonomy
  • Canadian drones to operate in swarms for military missions using US tech

    Canadian drone developer Draganfly has partnered with U.S.-based Palladyne AI to integrate advanced autonomy and swarming capabilities into its unmanned aerial vehicles (UAVs). Using Palladyne’s Pilot AI software, which is platform-agnostic and edge-based, Draganfly’s drones will be able to operate in coordinated swarms controlled by a single operator. This technology enables multiple UAVs to collaborate seamlessly, enhancing large-scale coordinated drone operations for military and defense missions. The software leverages sensor fusion to allow drones to independently and collaboratively track, classify, and identify targets while dynamically interfacing with autopilots, enabling autonomous swarm behavior and reducing operator workload. Draganfly’s modular drone platforms, including quadcopters and multirotor drones like the high-endurance Commander model, will benefit from these enhanced autonomy features. The integration aims to expand mission capabilities such as real-time intelligence, surveillance, reconnaissance (ISR), and mission-specific specialization across challenging environments. Draganfly has over

    robot, drone-technology, autonomous-systems, AI-software, UAV-swarm, military-technology, edge-computing
  • AI at the edge: How startups are powering the future of space at TechCrunch Disrupt 2025

    TechCrunch Disrupt 2025, starting October 27 in San Francisco, will feature a dedicated Space Stage focused on how AI is revolutionizing space technology. Leading experts including Adam Maher (Ursa Space Systems), Dr. Lucy Hoag (Violet Labs), and Dr. Debra L. Emmons (The Aerospace Corporation) will discuss the transformative role of AI in orbit. The event highlights the shift from traditional space hardware like rockets and satellites to intelligent edge computing systems that enable autonomous decision-making and real-time data processing in space. This AI-driven approach is enhancing mission speed, efficiency, and resilience, marking a new era of on-orbit intelligence. The featured speakers bring diverse expertise: Dr. Debra Emmons, CTO of The Aerospace Corporation, oversees technology strategy and innovation across multiple labs focused on advancing U.S. space capabilities; Adam Maher, founder and CEO of Ursa Space Systems, specializes in synthetic aperture radar data to improve decision-making; and Dr. Lucy Hoag

    IoT, AI, edge-computing, space-technology, autonomous-systems, satellite-data, aerospace-innovation
  • Edge computing and AI: A conversation with Palladyne AI's Ben Wolff

    In Episode 216 of The Robot Report Podcast, hosts Steve Crowe and Mike Oitzman feature an interview with Ben Wolff, CEO of Palladyne AI, highlighting the company's advancements in AI and robotics. Palladyne AI focuses on simplifying robot programming through an improved user interface, developing autonomous drone swarming technology, and creating hardware-agnostic AI solutions. Wolff underscores the benefits of edge computing and stresses a customer-centric approach to ensure their products are essential and user-friendly. The episode also covers significant industry news, including ABB Group’s sale of its Robotics & Discrete Automation division to SoftBank for $5.375 billion amid declining orders and revenues. The report reviews SoftBank’s varied robotics investments over the years, such as acquisitions and divestitures involving Aldebaran Robotics, Boston Dynamics, and others. Additionally, Boston Dynamics showcased its latest humanoid hand design optimized for industrial durability and affordability, while Figure AI unveiled its Figure 03 humanoid robot aimed at safe, scalable

    robotics, AI, edge-computing, autonomous-drones, robot-programming, humanoid-robots, SoftBank-robotics-investments
  • Intel expands Panther Lake processor edge applications to robotics - The Robot Report

Intel has unveiled detailed architectural information about its Intel Core Ultra Series 3 processor, codenamed Panther Lake, highlighting its expanded edge applications including robotics. To support this, Intel introduced a new Robotics AI software suite and reference board designed to help customers rapidly develop cost-effective robots with advanced AI capabilities for control and perception. Panther Lake, Intel’s first product built on the cutting-edge 18A semiconductor process, is set to begin high-volume production in 2025 at Intel’s new Fab 52 facility in Chandler, Arizona, with initial shipments expected by the end of the year and broad availability in January 2026. The Panther Lake processor leverages Intel’s 18A process, the most advanced semiconductor technology developed and manufactured in the U.S., featuring innovations such as RibbonFET transistor architecture and PowerVia backside power delivery. The processor offers a scalable multi-chiplet design, combining up to 16 performance and efficient cores, a new Intel Arc GPU with up to 12 Xe cores, and

    robotics, Intel-Panther-Lake, AI-processors, semiconductor-technology, edge-computing, AI-acceleration, advanced-manufacturing
  • Intel unveils 18A chips in major push to revive US semiconductor edge

    Intel has unveiled its most advanced processors to date—the Core Ultra series 3 (codenamed Panther Lake) and Xeon 6+—built on its cutting-edge 18A semiconductor process. Panther Lake targets consumer and commercial AI PCs, gaming, and edge computing, featuring a scalable multi-chiplet architecture with up to 16 new performance and efficient cores, delivering over 50% faster CPU performance than its predecessor. It also includes an Intel Arc GPU with up to 12 Xe cores for 50% faster graphics and supports AI acceleration up to 180 TOPS. Additionally, Intel is expanding Panther Lake’s reach into robotics and edge applications through a new AI software suite and reference board. Xeon 6+, Intel’s first 18A-based server processor, is designed for hyperscale data centers and cloud providers, offering up to 288 efficient cores and a 17% increase in instructions per cycle, with availability expected in early 2026. The 18A process represents a

    semiconductors, Intel-18A, AI-chips, robotics, edge-computing, energy-efficiency, materials-engineering
  • Edge-to-cloud robotics: eInfochips teams up with InOrbit - The Robot Report

    eInfochips, an Arrow Electronics company specializing in product engineering and digital transformation, has formed a strategic partnership with InOrbit, a provider of AI-powered robot orchestration. This collaboration aims to deliver scalable, optimized edge-to-cloud robotics solutions for industries requiring large-scale autonomous mobile robot (AMR) deployments, such as warehouses, factories, and industrial hubs. Leveraging eInfochips’ Robotics Center of Excellence, the partnership will support the entire robotics stack—from hardware design and sensor fusion to edge AI and digital twins—while InOrbit’s Space Intelligence platform will provide tools for real-time fleet management, incident response, multi-vehicle orchestration, and continuous performance optimization. The integrated offering is designed to simplify and accelerate the deployment of AMR fleets, enabling businesses to automate repetitive tasks like material handling and sorting with greater flexibility and operational scale. eInfochips brings extensive expertise in AI, hardware integration, and partnerships with platform providers like NVIDIA and Qualcomm, while InOrbit contributes its experience in managing thousands of robots

    robotics, edge-computing, autonomous-mobile-robots, AI, IoT, cloud-robotics, industrial-automation
  • Hance will demo its kilobyte-size AI audio processing software at TechCrunch Disrupt 2025

    Norwegian startup Hance is showcasing its ultra-compact AI-driven audio processing software at TechCrunch Disrupt 2025. The company has developed models as small as 242 kB that run on-device with just 10 milliseconds of latency, enabling real-time noise reduction, sound separation, echo and reverb removal, and speech clarity enhancement. This technology is particularly valuable in high-stakes environments like Formula One racing, where clear communication is critical, and has already attracted clients such as Intel and Riedel Communications, the official radio supplier to F1. Hance’s team, including co-founders with deep audio industry experience, trained their AI models on a diverse range of high-quality sounds, from F1 car roars to volcanic eruptions. Their software’s small size and energy efficiency allow it to operate on various devices without relying on cloud processing, making it suitable for professional applications in sports broadcasting, law enforcement, and defense. The company is actively partnering with chipmakers like Intel to optimize

    AI, audio-processing, energy-efficient-software, edge-computing, neural-processing-units, real-time-audio-enhancement, noise-reduction
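
    The 10-millisecond figure above implies block-based streaming: audio is processed one short frame at a time so output keeps pace with input. The sketch below shows that framing with a crude energy gate standing in for Hance's model; the sample rate, threshold, and the gate itself are assumptions, not details from the article.

```python
import numpy as np

SAMPLE_RATE = 48_000
FRAME = SAMPLE_RATE // 100          # 10 ms of audio per block

def denoise_block(block, threshold=0.02):
    """Stand-in for the real model: a crude noise gate that mutes
    low-energy frames. A trained network would run here instead."""
    rms = np.sqrt(np.mean(block ** 2))
    return block if rms > threshold else np.zeros_like(block)

def stream(samples):
    """Process audio one 10 ms frame at a time, as an on-device
    real-time pipeline would."""
    out = []
    for start in range(0, len(samples) - FRAME + 1, FRAME):
        out.append(denoise_block(samples[start:start + FRAME]))
    return np.concatenate(out)

rng = np.random.default_rng(3)
noisy = 0.005 * rng.normal(size=SAMPLE_RATE)              # faint background noise
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
noisy[24_000:36_000] += 0.3 * np.sin(2 * np.pi * 440 * t[24_000:36_000])  # a louder burst
cleaned = stream(noisy)
print(cleaned.shape,
      float(np.abs(cleaned[:24_000]).max()),      # quiet stretch gets muted
      float(np.abs(cleaned[24_000:36_000]).max()))  # loud stretch passes through
```
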
  • Why you can’t miss the aerospace content at TechCrunch Disrupt 2025

    TechCrunch Disrupt 2025 will feature significant aerospace content presented by the Aerospace Corporation, emphasizing how artificial intelligence (AI) is transforming the space economy beyond traditional hardware like rockets and satellites. The event includes two key sessions on October 27 that highlight startups addressing critical challenges in space exploration, orbital intelligence, and space infrastructure through AI-driven innovations. These startups are developing solutions for automating mission planning, preventing satellite collisions, and optimizing communications and servicing in orbit, showcasing early-stage companies tackling complex, high-stakes problems in the space industry. The second session focuses on "AI at the edge," addressing the unique constraints of space environments such as latency and bandwidth limitations that make cloud computing impractical. It highlights advancements in autonomous systems, resilient computing architectures, and onboard intelligence that enable spacecraft to process data in real-time and operate more safely and efficiently. Together, these sessions provide insight into how AI and cutting-edge technology are converging to redefine space missions and infrastructure, positioning the space sector as a rapidly evolving

    robot, AI, aerospace, autonomous-systems, space-technology, satellite, edge-computing
  • Amazon unveils new Echo devices, powered by its AI, Alexa+

    At its annual hardware event, Amazon unveiled a new lineup of Echo devices powered by its advanced AI assistant, Alexa+. The four new models—the Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11—feature enhanced processing power and memory, enabled by Amazon’s custom-designed AZ3 and AZ3 Pro silicon chips. These chips improve wake word detection, conversation recognition, and support advanced AI models and vision transformers. Notably, the AZ3 Pro devices incorporate Omnisense, a sensor platform that uses cameras, audio, ultrasound, Wi-Fi radar, and other inputs to enable Alexa to respond contextually to events in the home, such as recognizing when a person enters a room or alerting users to an open garage door. The Echo Dot Max ($99.99) offers significantly improved sound with nearly three times the bass, while the Echo Studio ($219.99) boasts a smaller spherical design, spatial audio, Dolby Atmos support, and an upgraded light ring. Both can

    IoT, smart-home, Alexa, AI-assistant, Amazon-Echo, edge-computing, smart-devices
  • AI and the Future of Defense: Mach Industries’ Ethan Thornton at TechCrunch Disrupt 2025

    At TechCrunch Disrupt 2025, Ethan Thornton, CEO and founder of Mach Industries, highlighted the transformative role of AI in the defense sector. Founded in 2023 out of MIT, Mach Industries aims to develop decentralized, next-generation defense technologies that enhance global security by integrating AI-native innovation and startup agility into an industry traditionally dominated by legacy players. Thornton emphasized the importance of rethinking fundamental infrastructure to build autonomous systems and edge computing solutions that operate effectively in high-stakes environments. The discussion also explored the broader implications of AI in defense, including the emergence of dual-use technologies that blur the lines between commercial and military applications. Thornton addressed critical topics such as funding, regulation, and ethical responsibility at the intersection of technology and geopolitics. With rising global tensions and increased defense investments, AI is not only powering new capabilities but also reshaping global power dynamics, security strategies, and sovereignty. The session underscored the growing role of AI startups in national defense and the urgent need to adapt to

    robot, AI, autonomous-systems, defense-technology, edge-computing, military-innovation, startup-technology
  • Data-driven maintenance is changing factory economics

    The article highlights how data-driven predictive maintenance is revolutionizing factory economics by significantly reducing unplanned downtime, which can cost factories millions of dollars annually. Traditional reactive “break-and-fix” approaches are being replaced by smart strategies that leverage IoT sensors and AI to detect equipment faults weeks before failures occur. Studies from the US Department of Energy and industry surveys show that mature predictive maintenance programs can yield a 10× return on investment and reduce downtime by 35–45 percent. Additionally, companies adopting these technologies report substantial cost savings, fewer breakdowns, and extended equipment life, with Deloitte and IBM data supporting reductions of up to 70 percent in breakdowns and 25–30 percent in maintenance costs. The article explains the anatomy of a smart factory’s sensor system, where multiple IoT sensors continuously monitor parameters such as vibration, temperature, and fluid levels. These sensors feed data into edge computing nodes and cloud platforms, where AI algorithms analyze deviations from normal operating baselines to identify early signs of wear

    IoT, predictive-maintenance, smart-factory, AI, industrial-sensors, edge-computing, energy-efficiency
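
    A minimal sketch of the edge-side check described above (comparing live sensor readings against a rolling baseline and alerting on large deviations) is given below; the window size, threshold, and vibration values are arbitrary stand-ins, not figures from the article.

```python
from collections import deque
import random

class VibrationMonitor:
    """Flag readings that drift far from a rolling baseline. A very
    simplified stand-in for edge-node anomaly detection; window size
    and z-score threshold are arbitrary."""
    def __init__(self, window=500, z_threshold=4.0):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def check(self, value):
        if len(self.history) >= 50:            # need a baseline first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5 or 1e-9
            if abs(value - mean) / std > self.z_threshold:
                return True                    # raise a maintenance alert
        self.history.append(value)             # healthy reading extends the baseline
        return False

monitor = VibrationMonitor()
random.seed(4)
readings = [random.gauss(1.0, 0.05) for _ in range(1000)] + [1.6]  # a bearing starts to fail
alerts = [i for i, r in enumerate(readings) if monitor.check(r)]
print("alert at sample(s):", alerts)
```
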
  • Anduril lands $159M Army contract for ‘superhero’ soldier headset

    Anduril Industries has secured a $159 million contract from the U.S. Army to develop a prototype helmet-mounted mixed reality system under the Soldier Borne Mission Command (SBMC) program, the successor to the Army’s earlier Integrated Visual Augmentation System (IVAS). This new system aims to provide soldiers with enhanced battlefield awareness by integrating night vision, augmented reality, artificial intelligence, and real-time intelligence overlays into a single modular platform. The goal is to enable faster decision-making and clearer situational understanding in contested environments, addressing previous IVAS issues such as user discomfort and technical delays. The SBMC system, built on Anduril’s Lattice platform and developed in partnership with companies like Meta, Qualcomm, and Palantir, offers modular hardware components tailored to mission needs and a software architecture (SBMC-A) that unifies helmet displays with edge computing and battlefield sensors. Recent field trials demonstrated capabilities such as soldiers controlling drones over three kilometers away directly from their headsets without dedicated operators.

    robot, augmented-reality, military-technology, wearable-technology, edge-computing, artificial-intelligence, battlefield-sensors
  • How does NVIDIA's Jetson Thor compare with other robot brains on the market? - The Robot Report

    NVIDIA recently introduced the Jetson AGX Thor, a powerful AI and robotics developer kit designed to deliver supercomputer-level artificial intelligence performance within a compact, energy-efficient module consuming up to 130 watts. The Jetson Thor provides up to 2,070 FP4 teraflops of AI compute, enabling robots and machines to perform advanced “physical AI” tasks such as perception, decision-making, and control in real time directly on the device, without dependence on cloud computing. This capability addresses a major challenge in robotics by supporting multi-AI workflows that facilitate intelligent, real-time interactions between robots, humans, and the physical environment. The Jetson Thor is powered by the comprehensive NVIDIA Jetson software platform, which supports popular AI frameworks and generative AI models, ensuring compatibility across NVIDIA’s broader software ecosystem—from cloud to edge. This includes tools like NVIDIA Isaac for robotics simulation and development, NVIDIA Metropolis for vision AI, and Holoscan for real-time processing. The module’s high-performance

    robot, AI, NVIDIA-Jetson, robotics-hardware, edge-computing, physical-AI, AI-inference
  • Buzzy AI startup Multiverse creates two of the smallest high-performing models ever

Multiverse Computing, a leading European AI startup based in Spain, has developed two of the smallest yet high-performing AI models, humorously named after animal brain sizes: SuperFly and ChickBrain. These models are designed to be embedded in Internet of Things (IoT) devices and run locally on smartphones, tablets, and PCs without requiring an internet connection. SuperFly, inspired by a fly’s brain, is a compressed, 94-million-parameter version of Hugging Face’s SmolLM2-135M model, optimized for limited data and voice-command applications in home appliances. ChickBrain, with 3.2 billion parameters, is a compressed version of Meta’s Llama 3.1 8B model and offers advanced reasoning capabilities, outperforming the original in several benchmarks such as MMLU-Pro, Math 500, GSM8K, and GPQA Diamond. The key technology behind these models is Multiverse’s proprietary quantum-inspired compression algorithm called CompactifAI, which significantly reduces model

    IoT, AI-models, model-compression, edge-computing, embedded-AI, quantum-inspired-algorithms, smart-devices
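
    The article credits the size reduction to CompactifAI, a proprietary quantum-inspired (tensor-network) algorithm that is not reproduced here. As a hedged illustration of the general principle of swapping a large weight matrix for a much smaller factorized one, the sketch below uses plain truncated SVD on a synthetic matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in weight matrix with approximate low-rank structure
L_true = rng.normal(size=(1024, 64))
R_true = rng.normal(size=(64, 1024))
W = L_true @ R_true + 0.1 * rng.normal(size=(1024, 1024))

def low_rank_compress(W, rank):
    """Replace W with two thin factors A @ B via truncated SVD: the
    textbook version of 'fewer parameters, similar behaviour'. Multiverse's
    CompactifAI uses tensor networks, which this does not reproduce."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]    # (1024, rank)
    B = Vt[:rank, :]              # (rank, 1024)
    return A, B

A, B = low_rank_compress(W, rank=128)
original, compressed = W.size, A.size + B.size
print(f"parameters: {original:,} -> {compressed:,} "
      f"({compressed / original:.1%} of original)")
print("relative reconstruction error:",
      np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```
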
  • US: 'Microwave brain' chip for ultrafast, wireless computing unveiled

    Cornell University researchers have developed a novel low-power microchip dubbed the ‘microwave brain,’ which functions as a microwave neural network capable of ultrafast, wireless computing. Unlike traditional digital chips that process data sequentially, this chip uses analog microwave signals at tens of gigahertz frequencies, enabling it to handle complex tasks such as radio signal decoding, radar target tracking, and digital data processing in real time while consuming only about 200 milliwatts of power. Its design leverages programmable frequency distortions and special waveguides to detect patterns and learn from data, bypassing many conventional digital signal processing steps. The chip demonstrated high accuracy—88 percent or more—in classifying wireless signal types, matching digital neural networks but with significantly lower power and space requirements. Its probabilistic computing approach maintains accuracy across both simple and complex tasks without the increased circuitry or error correction typical in digital systems. Due to its sensitivity to microwave signals, the chip is well-suited for hardware applications like detecting anomalies in

    IoT, wireless-communication, microwave-neural-network, low-power-microchip, edge-computing, signal-processing, silicon-microchip
  • DigiKey, onsemi discuss the intersection of robotics and physical AI - The Robot Report

    DigiKey and onsemi recently explored how advancements in sensing technologies and physical AI are driving the evolution of autonomous mobile robots (AMRs), which have the potential to transform industrial and commercial sectors. AMRs utilize a variety of sensors—including lidar, cameras, ultrasonic detectors, and radar—to enhance safety, improve productivity, and navigate complex environments. Similar to self-driving vehicles, AMRs employ technologies such as simultaneous localization and mapping (SLAM) to create real-time maps and localize themselves, enabling them to operate beyond controlled indoor settings into more unpredictable outdoor environments. These developments are supported by improvements in sensor integration, edge computing, and AI, which collectively make AMRs more autonomous, adaptive, and capable of performing a wider range of tasks safely alongside humans. The discussion also highlighted the shift in communication protocols within AMRs, moving from traditional CAN (Controller Area Network) to the newer 10BASE-T1S Ethernet-based protocol, led by onsemi. This protocol offers higher data rates (10 Mbps

    robotics, autonomous-mobile-robots, physical-AI, sensors, industrial-robots, edge-computing, AI-integration
  • The new face of defense tech — Ethan Thornton of Mach Industries — takes the AI stage at TechCrunch Disrupt 2025

    At TechCrunch Disrupt 2025, Ethan Thornton, CEO and founder of Mach Industries, highlighted how AI is fundamentally transforming defense technology today, not just in the future. Launching his startup out of MIT in 2023, Thornton aims to develop decentralized, next-generation defense systems that integrate advanced hardware, software, and autonomous capabilities. His approach challenges traditional defense industry norms by leveraging AI-native innovation to enhance national security on a global scale. Mach Industries exemplifies a new breed of startups that bridge commercial technology and military applications, focusing on autonomous systems, edge computing, and dual-use technologies. Thornton’s discussion emphasized the complexities of navigating funding, regulatory environments, and ethical responsibilities at the intersection of technology and geopolitics. With rising global tensions and increased defense tech investments, his session underscored AI’s critical role in reshaping security strategies and the future of sovereignty worldwide.

    robot, artificial-intelligence, autonomous-systems, defense-technology, edge-computing, startup-innovation, military-technology
  • Intel is spinning off its Network and Edge group

    Intel is continuing its business restructuring by planning to spin off its Network and Edge group, which develops chips for the telecommunications industry. The new entity will operate as a standalone business, with Intel remaining an anchor investor while also seeking additional outside capital. This move follows Intel's earlier decision to spin off its RealSense stereoscopic imaging technology business, which secured $50 million in venture funding and became independent during former CEO Pat Gelsinger’s tenure. The Network and Edge group had been a significant part of Intel’s operations, though specific financial details and the timeline for the spinout have not been fully disclosed. Intel’s strategy appears to focus on streamlining its core business and allowing specialized units to grow independently with targeted investment. Further details about the spinout’s plans and schedule are pending as Intel has yet to provide comprehensive information.

    IoT, edge-computing, telecom-chips, network-technology, Intel-spin-off, semiconductor-industry, venture-funding
  • Rethinking global connectivity: Why stratospheric UAVs could outperform satellites - The Robot Report

    The article discusses the emerging role of high-altitude, long-endurance (HALE) unmanned aerial vehicles (UAVs) as a promising alternative to traditional satellite communication networks. With the exponential growth of data generation and the saturation of orbital space, existing satellite infrastructure faces limitations in bandwidth, latency, and flexibility. HALE UAVs, operating in the stratosphere at altitudes between 60,000 and 80,000 feet, offer significant advantages including reduced latency due to their proximity to users, persistent coverage for weeks at a time, and the ability to be rapidly deployed and repositioned in response to dynamic, time-sensitive situations such as natural disasters, agricultural monitoring, and live event coverage. Additionally, HALE UAVs provide operational flexibility through modular payloads that can be swapped to support diverse missions across telecommunications, defense, and environmental monitoring without the need for new hardware designs. Unlike satellite constellations that require extensive redundancy for resilience, stratospheric UAVs can be serviced,

    robot, UAV, stratospheric-drones, IoT-connectivity, solar-powered-UAV, edge-computing, high-altitude-communication
  • Liquid AI releases on-device foundation model LFM2 - The Robot Report

    Liquid AI has launched LFM2, its latest Liquid Foundation Model designed for on-device deployment, aiming to balance quality, latency, and memory efficiency tailored to specific tasks and hardware. By moving large generative models from cloud servers to local devices such as phones, laptops, cars, and robots, LFM2 offers millisecond latency, offline functionality, and enhanced data privacy. The model features a new hybrid architecture that delivers twice the decode and prefill speed on CPUs compared to Qwen3 and outperforms similarly sized models across benchmarks in knowledge, mathematics, instruction following, and multilingual capabilities. Additionally, LFM2 achieves three times faster training efficiency than its predecessor. LFM2’s architecture includes 16 blocks combining double-gated short-range convolution and grouped query attention, enabling efficient operation on CPUs, GPUs, and NPUs across various devices. Liquid AI provides three model sizes (0.35B, 0.7B, and 1.2B parameters) available under an open

    robot, artificial-intelligence, on-device-AI, edge-computing, foundation-models, machine-learning, AI-deployment
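
    The phrase "double-gated short-range convolution" is only named, not specified, in the summary above, so the PyTorch block below is a speculative reading of it: a depthwise causal convolution over a small window with one multiplicative gate on its input and another on its output. It is an assumption for illustration, not Liquid AI's LFM2 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleGatedShortConv(nn.Module):
    """Speculative sketch of a 'double-gated short-range convolution' block:
    a depthwise causal conv over a small window, with one gate applied to
    its input and another to its output. Not Liquid AI's actual LFM2 code."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.in_gate = nn.Linear(dim, dim)
        self.out_gate = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size, groups=dim)  # depthwise, short range
        self.kernel_size = kernel_size

    def forward(self, x):                                # x: (batch, seq, dim)
        gated = x * torch.sigmoid(self.in_gate(x))       # gate 1: on the input
        h = gated.transpose(1, 2)                        # (batch, dim, seq)
        h = F.pad(h, (self.kernel_size - 1, 0))          # causal left padding
        h = self.conv(h).transpose(1, 2)                 # back to (batch, seq, dim)
        h = h * torch.sigmoid(self.out_gate(x))          # gate 2: on the output
        return x + self.proj(h)                          # residual connection

block = DoubleGatedShortConv(dim=64)
y = block(torch.randn(2, 16, 64))
print(y.shape)   # torch.Size([2, 16, 64])
```
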
  • Estonian engineers turn $9 trash phones into pocket-sized data centers

Researchers at the University of Tartu’s Institute of Computer Science in Estonia have repurposed discarded 15-year-old smartphones into low-cost, pocket-sized data centers capable of outperforming popular single-board computers like the Raspberry Pi. By removing batteries from old Google Nexus phones, fitting them with 3-D-printed holders, and powering them externally, the team created clusters costing about €8 (US$9.20) per phone. These clusters run a Linux-based system (PostmarketOS) instead of Android, enabling direct hardware control and enhanced security. The phones, linked as “master” and “worker” nodes, handle tasks such as AI-powered image recognition and website hosting, demonstrating energy-efficient processing in a compact form. The project addresses the environmental issue of e-waste, as billions of smartphones are discarded annually, with most components not properly recycled. By extending the functional life of obsolete devices, the researchers aim to reduce landfill waste and the environmental impact of building new servers.

    IoT, edge-computing, e-waste-recycling, energy-efficient-computing, smartphone-clusters, AI-image-recognition, sustainable-technology
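
    A hedged sketch of how a "master" phone might farm image-recognition jobs out to "worker" phones over the cluster's local network is shown below. The worker addresses and the /classify endpoint are hypothetical, not taken from the Tartu project.

```python
import concurrent.futures
import requests

# Hypothetical worker addresses on the cluster's local network; the
# actual service layout of the Tartu setup is assumed here, not known.
WORKERS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]

def classify_on_worker(worker_url, image_path):
    """Send one image to a worker node's (assumed) /classify endpoint."""
    with open(image_path, "rb") as f:
        resp = requests.post(f"{worker_url}/classify", files={"image": f}, timeout=30)
    resp.raise_for_status()
    return image_path, resp.json()

def run_batch(image_paths):
    """Master node: round-robin images across workers, collect results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        futures = [pool.submit(classify_on_worker, WORKERS[i % len(WORKERS)], p)
                   for i, p in enumerate(image_paths)]
        return dict(f.result() for f in concurrent.futures.as_completed(futures))

if __name__ == "__main__":
    print(run_batch(["frame_001.jpg", "frame_002.jpg", "frame_003.jpg"]))
```
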
  • New Gemini AI lets humanoid robots think and act without internet

    Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables humanoid robots to operate autonomously without internet connectivity. Unlike its cloud-dependent predecessor, this on-device version runs entirely on the robot, allowing for faster, low-latency responses and reliable performance in environments with poor or no connectivity. The model incorporates Gemini 2.0’s multimodal reasoning, natural language understanding, task generalization, and fine motor control, enabling robots to perform complex tasks such as unzipping bags and folding clothes. It is efficient enough to run locally with minimal data—requiring only 50 to 100 demonstrations to adapt to new tasks—and supports fine-tuning through teleoperation, making it highly adaptable across different robotic platforms. The Gemini Robotics On-Device model is designed with privacy and offline performance in mind, processing all data locally, which is particularly beneficial for security-sensitive applications like healthcare. Developers can access the model through Google’s trusted tester program and utilize a full software development kit

    robotics, artificial-intelligence, humanoid-robots, offline-AI, edge-computing, robotics-control, Google-DeepMind
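
    The 50-to-100-demonstration figure above suggests few-shot imitation learning. The sketch below is a generic behavior-cloning loop on a stand-in demonstration set; the observation and action shapes, network, and training schedule are all assumptions and do not come from Google's SDK or model.

```python
import torch
import torch.nn as nn

# Stand-in demonstration set: ~100 (observation, action) pairs collected by
# teleoperation. Real observations would be images plus proprioception.
obs = torch.randn(100, 32)        # 32-dim observation features (placeholder)
actions = torch.randn(100, 7)     # 7-DoF action targets (placeholder)

policy = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 7))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Behaviour cloning: regress the demonstrated actions from the observations.
for epoch in range(200):
    optim.zero_grad()
    loss = loss_fn(policy(obs), actions)
    loss.backward()
    optim.step()

print(f"final imitation loss: {loss.item():.4f}")
```
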
  • World’s first quantum satellite computer launched in historic SpaceX rideshare

    The world’s first quantum satellite computer was launched into orbit on June 23, 2025, aboard a SpaceX Falcon 9 rocket as part of the Transporter 14 rideshare mission. Developed by an international team led by Philip Walther at the University of Vienna, this compact photonic quantum processor is designed to operate approximately 550 kilometers above Earth. The satellite aims to test the durability and performance of quantum hardware in the harsh conditions of space, including extreme temperature fluctuations, radiation, and vibrations. The device was assembled rapidly in a clean room at the German Aerospace Center, marking a significant engineering achievement. This quantum computer’s primary advantage lies in its ability to perform edge computing in orbit, processing data onboard rather than transmitting raw data back to Earth. This capability can enhance applications such as forest fire detection by reducing energy consumption and improving response times. Utilizing light-based optical systems, the processor efficiently handles complex computational tasks like Fourier transforms and convolutions. The system is adaptable for future missions and holds

    quantum-computing, satellite-technology, space-technology, energy-efficiency, edge-computing, Earth-observation, photonic-quantum-computer
  • Uptime Industries wants to boost localized AI usage with an ‘AI-in-a-box’ called Lemony AI

    Uptime Industries has developed Lemony AI, a compact “AI-in-a-box” device designed to run large language models (LLMs), AI agents, and workflows locally on-premise. About the size of a sandwich and consuming only 65 watts of power, each Lemony node can support LLMs with up to 75 billion parameters, hosting both open-source and adapted closed models. Multiple devices can be stacked to form clusters, allowing different models to run simultaneously. The company has partnered with IBM and JetBrains to facilitate customer access to various AI models, including IBM’s proprietary ones. The concept originated from a side project by Uptime’s co-founders, who explored distributing language models on small devices like Raspberry Pis. Recognizing the potential for localized AI to enhance adoption—especially among enterprises wary of cloud-based solutions—they focused on creating a small, privacy-centric device that teams could deploy without extensive organizational approval. This approach appeals particularly to regulated sectors such as finance, healthcare, and law, where data privacy is critical since all data and models remain within the device. Uptime has raised $2 million in seed funding to advance development, plans to extend its Lemony OS software to other hardware platforms, and aims to evolve from single-user to team-based software functionality. Lemony AI is offered at $499 per month for up to five users.

    energy, AI-hardware, edge-computing, on-premise-AI, low-power-devices, AI-clusters, data-privacy
  • Anduril is working on the difficult AI-related task of real-time edge computing

    IoT, edge-computing, military-technology, autonomous-systems, computer-vision, data-processing