RIEM News

Articles tagged with "robotics-development"

  • Foxglove raises $40M to scale its data platform for roboticists

    Foxglove, a San Francisco-based startup founded in 2021 by former Cruise engineers Adrian Macneil and Roman Shtylman, has raised $40 million in a Series B funding round, bringing its total funding to more than $58 million. The company develops a data and observability platform that helps robotics companies collect, analyze, and visualize sensor data from their robots, aiming to accelerate development and improve robot reliability. Foxglove’s platform gives robotics startups infrastructure similar to that used internally by industry leaders like Waymo and Tesla, without requiring large engineering teams. Its customers include Amazon, NVIDIA, Shield AI, and Dexterity, among others. Foxglove’s tools have shown measurable impact: Dexterity, for example, cut tooling and development time by over 20%, saving $150,000 annually. Notably, Shield AI has integrated Foxglove’s platform into its HiveMind autonomy stack, embedding it as part of its software development kit.

    robotics, data-platform, robotics-development, machine-learning, autonomous-robots, robotics-startups, software-development-kit
  • Lessons from robotics successes and failures

    The article, based on Dorota Shortell’s presentation at RoboBusiness 2025, distills key lessons from more than 20 years of experience in robotics hardware and automated systems, supplemented by interviews with leaders from more than a dozen robotics companies. The lessons fall into five categories: product-market fit, funding, development process, team, and scaling. The primary focus is product-market fit, which the talk identifies as the critical factor separating successful robotics ventures from failures. A central takeaway is that robotics companies should understand and solve critical customer problems before developing new technology: build a robot only after confirming that it addresses a meaningful need, avoiding the pitfall of sophisticated but ultimately unwanted technology. Successful companies run fast feedback cycles of ideation, customer validation, and iterative design, often working closely with clients through co-development and service-based models. This approach, exemplified by firms like Intuitive and Dusty Robotics, has engineers collaborating directly with customers to refine the product.

    robotics, robotics-industry, product-market-fit, robotic-systems, automated-robots, robotics-development, robotics-startups
  • What Tesla’s Optimus robot can do in 2025 and where it still lags

    Tesla aims to produce 5,000 Optimus humanoid robots in 2025, positioning the robot as central to its future under a vision of integrating AI into the physical world. CEO Elon Musk has claimed that 80% of Tesla’s future value will derive from Optimus and related AI ventures, signaling a shift from pure automaker to “physical AI” platform. Demonstrations through 2024 and 2025 have shown Optimus performing basic locomotion with improved heel-to-toe walking, simple household chores like sweeping and trash removal, and basic manipulation tasks such as handling car parts. These capabilities are enabled by a unified control policy, a single neural network trained on vision-based inputs and human video data, which Tesla highlights as a scalable approach to skill acquisition. However, Optimus’s current functionality is largely limited to structured or lightly staged environments with known objects and controlled lighting; it lacks robust autonomy in unstructured homes or fully operational industrial settings, even as its full-body coordination grows smoother.

    robot, humanoid-robot, Tesla-Optimus, AI-robotics, automation, neural-networks, robotics-development
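The “unified control policy” idea, one network mapping perception directly to actions, trained by imitating human demonstrations, can be illustrated with a deliberately tiny behavior-cloning sketch. Everything below (the dimensions, the toy “demonstration” mapping, the plain-SGD training loop) is an illustrative assumption; Tesla’s actual architecture and training stack are not public.

```python
# Toy behavior-cloning sketch: a single small network maps an
# observation vector (stand-in for vision features) straight to an
# action vector, trained to imitate a fixed "demonstration" mapping.
# Purely illustrative; not Tesla's actual method.
import math
import random

random.seed(0)
IN, HID, OUT = 4, 8, 2  # observation dim, hidden units, action dim

W1 = [[random.uniform(-0.5, 0.5) for _ in range(IN)] for _ in range(HID)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(HID)] for _ in range(OUT)]

def forward(x):
    """One hidden tanh layer, linear output."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    return h, y

def demo_action(x):
    """Stand-in for a human demonstration: a fixed obs -> action map."""
    return [x[0] - x[1], 0.5 * x[2] + x[3]]

def step(x, lr):
    """One SGD step on squared error; returns the sample loss."""
    h, y = forward(x)
    err = [yi - ti for yi, ti in zip(y, demo_action(x))]
    for o in range(OUT):                      # output-layer gradient
        for j in range(HID):
            W2[o][j] -= lr * err[o] * h[j]
    for j in range(HID):                      # backprop through tanh
        dh = sum(err[o] * W2[o][j] for o in range(OUT)) * (1 - h[j] ** 2)
        for i in range(IN):
            W1[j][i] -= lr * dh * x[i]
    return sum(e * e for e in err)

data = [[random.uniform(-1, 1) for _ in range(IN)] for _ in range(200)]
loss_before = sum(step(x, lr=0.0) for x in data)   # lr=0: measure only
for _ in range(20):                                # 20 epochs of SGD
    for x in data:
        step(x, lr=0.05)
loss_after = sum(step(x, lr=0.0) for x in data)
print(loss_before > loss_after)  # imitation loss drops with training
```

Production systems replace this toy with deep vision backbones and massive demonstration datasets, but the training signal, shrinking the gap between predicted and demonstrated actions, has the same shape.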
  • Symage to spotlight future of vision model training at RoboBusiness

    Symage, a company specializing in physics-based, high-fidelity synthetic image data for AI and computer vision training, will showcase its technology at RoboBusiness 2025, held October 15-16 at the Santa Clara Convention Center. Unlike generative AI approaches, Symage’s platform generates photorealistic synthetic datasets without visual artifacts or model degradation, resulting in faster training, improved accuracy, better edge-case coverage, and reduced bias. CEO Brian Geisel emphasizes that this approach lets robotics teams develop and test vision models more efficiently and reliably, supporting smarter and safer robotics systems. At RoboBusiness, which attracts over 2,000 robotics professionals and features 100+ exhibitors and numerous educational sessions, Geisel will present on how synthetic data accelerates vision model development, particularly in warehouse automation, agriculture technology, and mobile robotics. Symage’s offerings highlight the potential of physics-accurate synthetic data to train models before hardware is available, addressing critical edge cases and improving data quality.

    robotics, AI-training, synthetic-data, computer-vision, robotics-development, automation, robotics-innovation
  • NVIDIA Jetson Thor computer gives humanoid robots 7.5x power boost

    NVIDIA has launched the Jetson AGX Thor developer kit and production modules, delivering a significant leap in AI computing power for robotics applications. Jetson Thor offers up to 2,070 FP4 teraflops of AI compute and 128 GB of memory within a 130-watt power envelope, providing 7.5 times the AI performance and 3.5 times the energy efficiency of its predecessor, Jetson Orin. Powered by NVIDIA’s Blackwell GPU, the system can run multiple AI models simultaneously, including vision-language-action models and large language models, enabling robots to perceive, reason, and act in real time without relying on cloud servers. This makes it suitable for a wide range of applications, from humanoid robots and industrial machines to surgical assistants and precision farming. The Jetson Thor platform is supported by NVIDIA’s comprehensive software stack, including Isaac for robotics simulation, Metropolis for vision AI, and Holoscan for sensor processing. Early adopters include Amazon.

    robot, AI-computing, humanoid-robots, NVIDIA-Jetson-Thor, industrial-robots, edge-AI, robotics-development
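As a sanity check, the announced figures are mutually consistent; the sketch below derives the implied Jetson Orin numbers from the Thor specs. Only the 2,070 TFLOPS, 130 W, 7.5x, and 3.5x values come from the announcement; the derived Orin figures are inferences, not published specs.

```python
# Back-of-envelope check on the announced Jetson Thor figures.
thor_tflops = 2070.0   # FP4 AI compute (announced)
thor_watts = 130.0     # power envelope (announced)
perf_gain = 7.5        # performance vs. Jetson Orin (announced)
eff_gain = 3.5         # energy efficiency vs. Jetson Orin (announced)

orin_tflops = thor_tflops / perf_gain   # implied Orin compute
thor_eff = thor_tflops / thor_watts     # Thor TFLOPS per watt
orin_eff = thor_eff / eff_gain          # implied Orin TFLOPS per watt
orin_watts = orin_tflops / orin_eff     # implied Orin power draw

print(f"Implied Orin: {orin_tflops:.0f} TFLOPS at ~{orin_watts:.0f} W")
# → Implied Orin: 276 TFLOPS at ~61 W
```

The implied ~276 TFLOPS at roughly 61 W lines up with the Jetson AGX Orin’s published figures (275 INT8 TOPS in a 60 W envelope), so the 7.5x and 3.5x claims are internally consistent.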
  • How a once-tiny research lab helped Nvidia become a $4 trillion company

    The article chronicles the evolution of Nvidia’s research lab from a group of about a dozen people in 2009, focused primarily on ray tracing, into a team of more than 400 researchers that has been instrumental in transforming Nvidia from a video game GPU startup into a $4 trillion company driving the AI revolution. Bill Dally, who joined the lab after being persuaded by Nvidia leadership, expanded its focus beyond graphics to include circuit design and VLSI chip integration. Early on, the lab recognized the potential of AI and began developing specialized GPUs and software for AI applications well before the current surge in demand, positioning Nvidia as a leader in AI hardware. Nvidia’s research efforts are now pivoting toward physical AI and robotics, aiming to develop the core technologies that will power future robots. This shift is exemplified by the work of Sanja Fidler, who joined Nvidia in 2018 to lead the Omniverse research lab in Toronto, focusing on simulation models for robotics.

    robot, artificial-intelligence, Nvidia, GPUs, robotics-development, AI-hardware, technology-research
  • ShengShu Technology launches Vidar multi-view physical AI training model - The Robot Report

    ShengShu Technology, a Beijing-based company founded in March 2023 that specializes in multimodal large language models, has launched Vidar, a multi-view physical AI training model designed to accelerate robot development. Vidar, short for “video diffusion for action reasoning,” combines limited physical training data with generative video simulations to train embodied AI models. Unlike traditional methods, which rely heavily on costly, hardware-dependent physical data collection or on purely simulated environments lacking real-world variability, Vidar creates lifelike multi-view virtual training environments. This allows scalable, robust training of AI agents capable of real-world tasks while cutting the physical data required to as little as 1/80 to 1/1,200 of that needed by industry-leading models. Built on ShengShu’s flagship video-generation platform Vidu, Vidar employs a modular two-stage learning architecture that separates perceptual understanding from motor control; in the first stage, large-scale general and embodied video data train the perceptual component.

    robot, embodied-AI, AI-training-model, simulation, generative-video, robotics-development, physical-AI
  • Fundamental Research Labs nabs $30M+ to build AI agents across verticals

    Fundamental Research Labs, an applied AI research company formerly known as Altera, has raised $33 million in a Series A funding round led by Prosus, with participation from Stripe co-founder Patrick Collison. The company has an unusual structure, maintaining multiple teams focused on AI applications across different verticals, including gaming, prosumer apps, core research, and platform development. Founded by Dr. Robert Yang, a former MIT faculty member, the startup aims to be a “historical” company by eschewing typical startup norms, and it is already generating revenue by charging users for its AI agents after a seven-day trial. Its products include a general-purpose consumer assistant and a spreadsheet-based AI agent called Shortcut, which has outperformed first-year analysts from McKinsey and Goldman Sachs in head-to-head evaluations. The company has raised over $40 million to date and is focused on productivity applications as its primary value driver.

    robot, AI-agents, automation, productivity-apps, digital-humans, machine-learning, robotics-development
  • NVIDIA VP Deepu Talla to discuss physical AI at RoboBusiness - The Robot Report

    At RoboBusiness 2025, Deepu Talla, NVIDIA’s vice president of robotics and edge AI, will deliver the opening keynote, titled “Physical AI for the New Era of Robotics.” In the keynote, scheduled for October 15 in Santa Clara, California, Talla will discuss how physical AI, in which models perceive, reason, and act in real-world environments, is transforming robotics from static, rule-based automation into adaptable, intelligent autonomy capable of handling complex, unstructured tasks. NVIDIA is accelerating this shift through simulation-first development, foundation models, and real-time edge deployment, training robots in virtual environments before scaling them into physical applications. The company frames this as a milestone in integrating intelligent machines into the $50 trillion global economy. NVIDIA has positioned itself as a leader in physical AI with recent innovations such as Isaac GR00T N1.5, an updated customizable foundation model for humanoid robot reasoning, and Isaac GR00T-Dreams, a blueprint for synthetic motion data generation. The NVIDIA Isaac platform is widely adopted across the robotics industry.

    robotics, physical-AI, NVIDIA-Isaac, humanoid-robots, edge-AI, autonomous-machines, robotics-development
  • Foxglove includes audio support in latest platform update

    Foxglove has released an updated version of its robotics visualization platform, introducing new features and performance enhancements aimed at simplifying robotics development. The major addition is audio support via a new Audio panel and a RawAudio message schema, letting users zoom, pan, and jump to specific points in audio waveforms, which addresses the need for audio playback in robots equipped with microphones. The 2D follow-mode camera has also been improved to maintain a fixed frame orientation, preventing the previous rolling and pitching behavior and providing a more intuitive top-down view. Another notable update is the ability to control the render order of grid messages in the 3D panel via a new "Draw behind" setting, allowing grids to render before other scene elements or normally with depth testing. Beyond these headline features, the release includes numerous fixes and stability improvements: better app stability, smoother scrubbing performance, more reliable automatic x-value ranges in multi-series plots, and fixes for several playback bugs.

    robot, robotics-development, audio-support, 2D-follow-mode, robot-observability, Foxglove-platform, robotics-tools
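For a sense of what a raw-audio message carries, the sketch below packs a mono 16-bit PCM sine tone into a message-like dict. The field names are illustrative assumptions for this sketch, not Foxglove’s published RawAudio schema, which should be checked in the platform docs.

```python
# Hedged sketch: packing a mono 16-bit PCM tone into a raw-audio
# payload of the kind an audio message schema would carry.
import math
import struct

SAMPLE_RATE = 16_000   # samples per second
FREQ_HZ = 440          # A4 test tone
DURATION_S = 0.1       # a tenth of a second of audio

samples = [
    int(32767 * math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION_S))
]
pcm_bytes = struct.pack(f"<{len(samples)}h", *samples)  # little-endian int16

message = {                    # field names are assumptions, not the schema
    "format": "pcm-s16",
    "sample_rate": SAMPLE_RATE,
    "number_of_channels": 1,
    "data": pcm_bytes,
}
print(len(message["data"]))    # 1600 samples x 2 bytes
# → 3200
```

A visualization panel like the one described would read such a buffer back into samples to draw the waveform users scrub through.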
  • Chinese robot moonwalks straight into the floor in a hilarious fight

    The article covers a recent event in Hangzhou, China, where two humanoid robots fought a boxing match at the ZheBA sports event, showing off human-like movements such as punches, kicks, and a 360-degree spin. Despite their agility, one robot tripped and fell, drawing laughter from the audience before recovering and standing up again. The incident underscores both the progress and the current limitations of humanoid robotics, which is advancing rapidly in China but still prone to occasional mishaps. Beyond this event, the article notes other milestones in Chinese robotics, including a robot-only football tournament called RoBoLeague and a humanoid robot named Shuang Shuang participating in a graduation ceremony with lifelike gestures. While these developments demonstrate significant technological progress and potential benefits for human life, the article also cautions about challenges and risks, citing incidents where humanoid robots posed dangers or behaved unpredictably and required intervention. Overall, the piece emphasizes that as robotics innovation accelerates, excitement must be balanced with caution.

    robotics, humanoid-robots, China-robotics, robot-boxing, robotics-innovation, robot-technology, robotics-development