Articles tagged with "human-machine-interaction"
Waymo is testing Gemini as an in-car AI assistant in its robotaxis
Waymo is reportedly testing the integration of Google’s Gemini AI chatbot as an in-car assistant within its robotaxis, aiming to enhance the rider experience with a helpful, friendly AI companion. According to code uncovered in Waymo’s mobile app by a researcher named Wong, the assistant—referred to internally as the “Waymo Ride Assistant Meta-Prompt”—is designed to answer rider questions, manage certain in-cabin functions such as climate control, lighting, and music, and offer reassurance when needed. The assistant uses clear, simple language, keeps responses brief, and personalizes interactions by addressing riders by name and referencing contextual data such as their trip history. However, it does not control features like volume, route changes, seat adjustments, or windows, and it deflects requests beyond its capabilities with aspirational phrases. The Gemini-based assistant maintains a clear separation between itself and the autonomous driving system, known as the Waymo Driver, avoiding direct commentary on driving performance or incidents.
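As a rough illustration of the capability gating Wong’s discovery describes, here is a minimal Python sketch; every function name, action list, and phrase below is invented for the example and is not Waymo’s actual code or prompt.

```python
# Hypothetical sketch of the capability gating described above; the
# function names, action lists, and phrases are illustrative only.

SUPPORTED_ACTIONS = {"climate", "lighting", "music"}
UNSUPPORTED_ACTIONS = {"volume", "route", "seat", "windows"}

DEFLECTION = ("I can't adjust that just yet, but it's the kind of thing "
              "I hope to help with in the future.")

def handle_request(rider_name: str, action: str, value: str) -> str:
    """Route an in-cabin request, deflecting anything out of scope."""
    if action in SUPPORTED_ACTIONS:
        # A real system would call into the cabin-control API here.
        return f"Sure, {rider_name}. Setting {action} to {value}."
    if action in UNSUPPORTED_ACTIONS:
        return DEFLECTION  # aspirational phrasing, per the meta-prompt
    return f"Happy to help with questions or cabin comfort, {rider_name}."

print(handle_request("Alex", "climate", "21C"))
print(handle_request("Alex", "route", "take the freeway"))
```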
Tags: robot, AI-assistant, autonomous-vehicles, Waymo, in-car-technology, human-machine-interaction, self-driving-cars

US engineers create AI bionic hand that grips objects like a human hand
Engineers at the University of Utah have developed an AI-enhanced bionic hand that mimics the natural grasping ability of a human hand by integrating pressure and proximity sensors with an artificial neural network trained on natural hand movements. This prosthetic hand can intuitively and securely grip objects, allowing users to perform everyday tasks such as picking up small items or drinking from a cup with greater precision and less mental effort, even without extensive practice. The sensors are sensitive enough to detect very light touches, and the AI independently adjusts each finger’s position to form an optimal grasp, resulting in a prosthetic that functions more naturally and reduces cognitive strain. A key innovation is the bioinspired control scheme that balances user intent with AI assistance, allowing the prosthetic to adapt when the user wants to release an object or modify their grip. Tested on four amputee participants, the hand improved performance on standardized assessments and enabled fine motor tasks that were previously difficult, enhancing usability and user confidence.
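As a sketch of the sensor-driven grasp loop this describes, the toy controller below uses a single linear rule in place of the trained neural network; all constants, and the contact model, are invented for illustration.

```python
# Illustrative control loop, assuming per-finger proximity and pressure
# readings nudge each finger toward a secure grip. The linear rule here
# stands in for the trained neural network; constants are made up.

import numpy as np

N_FINGERS = 5
# Close a finger faster when an object is near (high proximity signal),
# but back off as contact pressure rises.
W_PROXIMITY, W_PRESSURE = 0.8, -0.5

def grasp_step(flexion, proximity, pressure, dt=0.01):
    """One control tick: return updated finger flexion angles in [0, 1]."""
    delta = W_PROXIMITY * proximity + W_PRESSURE * pressure
    return np.clip(flexion + dt * delta, 0.0, 1.0)

flexion = np.zeros(N_FINGERS)
proximity = np.array([0.9, 0.8, 0.7, 0.4, 0.2])   # object near thumb side
pressure = np.zeros(N_FINGERS)
for _ in range(100):
    flexion = grasp_step(flexion, proximity, pressure)
    pressure = np.maximum(0.0, flexion - 0.5) * 2.0  # toy contact model
print(np.round(flexion, 2))
```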
Tags: robotics, bionic-hand, AI-prosthetics, neural-networks, sensor-technology, human-machine-interaction, prosthetic-control

World’s first Robot Phone by Honor moves and emotes like 'Wall-E'
Honor unveiled a concept for the world’s first “Robot Phone,” a device that combines AI, robotics, and mobile technology to create a new category of smartphone. Unlike traditional phones, this concept features a gimbal-mounted camera that can move independently, swivel, and express emotion through sounds and movements reminiscent of characters like Wall-E and BB-8. Honor describes the Robot Phone as an “emotional companion” capable of sensing, adapting, and evolving autonomously to enrich users’ lives with emotional engagement, aiming to redefine human-machine interaction. The concept hints at a future where AI is given a visible, expressive form that makes digital assistants more approachable and comfortable to interact with, moving beyond voice commands alone. The device’s robotic camera and personality-driven features build on earlier innovations like flip-up cameras but add a layer of AI-powered motion and emotional expression. For now, the Robot Phone exists only as a CGI concept, with no physical prototype or detailed specs released; Honor plans to share more information at a later date.
Tags: robot, AI, robotics, mobile-technology, human-machine-interaction, emotional-AI, smart-devices

Fundamental XR launches Fundamental Touch for wireless haptics - The Robot Report
Fundamental XR has launched Fundamental Touch, a wireless haptics platform designed to deliver precise, untethered tactile feedback across multiple industries beyond healthcare, including robotics, industrial training, automotive, aerospace, retail, and gaming. The new software removes the physical tether traditionally required by high-fidelity kinesthetic haptic devices, giving users greater mobility while maintaining performance parity with tethered systems. Built on a client-server architecture, Fundamental Touch decouples haptic simulations from visual rendering and user interfaces, allowing sub-100ms latency and scalable, real-time force feedback via a peer-to-peer network layer. The system supports a range of output devices, including XR headsets (e.g., Apple Vision Pro, Meta Quest), robotic platforms (e.g., Boston Dynamics’ Spot), and gaming peripherals. Fundamental XR, formerly FundamentalVR, has a strong track record in healthcare, where its immersive technologies have cut onboarding time by over 60%, improved surgical accuracy by 44%, and increased sales performance by 22%.
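A minimal sketch of the decoupled client-server pattern and sub-100ms budget described here might look like the following; the class names and the toy spring model are assumptions, not Fundamental XR’s actual API.

```python
# Sketch of a haptic "server" that computes force feedback on its own
# clock while a client applies frames only if they meet the latency
# budget. Names and the spring model are illustrative assumptions.

import time
from dataclasses import dataclass, field

@dataclass
class ForceFrame:
    forces_n: tuple            # force per actuator, in newtons
    t_sent: float = field(default_factory=time.monotonic)

class HapticServer:
    """Runs the force simulation independently of any renderer."""
    def step(self, contact_depth_m: float) -> ForceFrame:
        stiffness = 800.0                  # N/m, toy spring model
        force = max(0.0, stiffness * contact_depth_m)
        return ForceFrame(forces_n=(force,))

class HapticClient:
    """Applies frames only if they arrive inside the latency budget."""
    BUDGET_S = 0.100                       # sub-100 ms target

    def apply(self, frame: ForceFrame) -> bool:
        age = time.monotonic() - frame.t_sent
        return age <= self.BUDGET_S        # stale frames are dropped

server, client = HapticServer(), HapticClient()
frame = server.step(contact_depth_m=0.002)
print(frame.forces_n, "applied:", client.apply(frame))
```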
Tags: robot, wireless-haptics, human-machine-interaction, augmented-reality, virtual-reality, precision-kinesthetic-haptics, immersive-technology

Neuralink’s Bid to Trademark ‘Telepathy’ and ‘Telekinesis’ Faces Legal Issues
Neuralink, the brain implant company co-founded by Elon Musk, has encountered legal challenges in its attempt to trademark the terms "Telepathy" and "Telekinesis." The United States Patent and Trademark Office (USPTO) rejected Neuralink’s applications due to prior filings by Wesley Berry, a computer scientist and co-founder of tech startup Prophetic, who submitted trademark applications for "Telepathy" in May 2023 and "Telekinesis" in August 2024. Berry’s applications, filed as “intent-to-use,” describe software analyzing EEG data to decode internal dialogue for device control, though he has not yet commercialized products under these names. Additionally, the USPTO cited an existing trademark for Telepathy Labs, a company offering voice and chatbot technology, in its refusal to advance Neuralink’s application for "Telepathy." Neuralink has been using the name "Telepathy" for its brain implant product designed to enable paralyzed individuals to operate phones and computers via thought.
Tags: robot, brain-computer-interface, neural-implants, wearable-technology, EEG-analysis, assistive-technology, human-machine-interaction

MIT Kitchen Cosmo scans ingredients and prints out AI recipes
MIT’s Kitchen Cosmo is an AI-powered kitchen device developed by Ayah Mahmoud and C Jacob Payne as part of MIT’s Interaction Intelligence course. Unlike conventional smart kitchen appliances that emphasize automation and efficiency, Kitchen Cosmo fosters collaboration, creativity, and play by generating personalized recipes from scanned ingredients, user-set constraints, and emotional inputs. The device uses a webcam to visually scan available ingredients and combines this data with tactile inputs—dials and switches representing time, mood, and dietary preferences—to produce context-specific recipes. These recipes are then printed on thermal paper, reinforcing a screenless, physical interaction that encourages mindful and embodied cooking. Inspired by the retrofuturistic 1969 Honeywell Kitchen Computer, Kitchen Cosmo critiques the history of prescriptive smart devices by offering an improvisational, human-centered alternative. Its bold red cylindrical design doubles as a recipe archive, blending mid-century aesthetics with modern generative AI powered by GPT-4o.
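The scan-constrain-generate-print pipeline could be sketched as below, with the vision step, the GPT-4o request, and the thermal printer all stubbed out; every name and dial range is hypothetical rather than taken from the students’ build.

```python
# Sketch of the pipeline described above. The vision scan, the model
# call, and the printer are placeholders; names are hypothetical.

def scan_ingredients(webcam_frame) -> list[str]:
    """Placeholder for the vision step that recognizes items in view."""
    return ["eggs", "spinach", "leftover rice"]

def build_prompt(ingredients, minutes, mood, dietary):
    return (f"Invent a playful recipe using only {', '.join(ingredients)}. "
            f"Time limit: {minutes} min. Mood: {mood}. Dietary: {dietary}.")

def generate_recipe(prompt: str) -> str:
    # In the real device, this is where the GPT-4o request would be made.
    return f"[model output for: {prompt}]"

def print_thermal(text: str):
    print("-" * 32 + "\n" + text + "\n" + "-" * 32)  # stand-in for printer

# Dials and switches supply the tactile constraints.
recipe = generate_recipe(
    build_prompt(scan_ingredients(None), 20, "cozy", "vegetarian"))
print_thermal(recipe)
```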
Tags: IoT, artificial-intelligence, smart-kitchen, AI-recipes, human-machine-interaction, sensor-technology, kitchen-automation

Neuralink helps paralysed woman write her name after 20 years
Audrey Crews, a woman paralyzed for over 20 years, has successfully written her name using only her mind, thanks to Elon Musk’s Neuralink brain-computer interface (BCI) technology. Crews, who lost movement at age 16, is the first woman to receive the Neuralink implant, a procedure that involves brain surgery to insert 128 threads into her motor cortex. The chip, about the size of a quarter, enables her to control a computer purely through brain signals, marking a significant milestone in BCI development. Crews clarified, however, that the implant does not restore physical mobility but is designed solely for telepathic control of digital devices. Neuralink’s PRIME Study, which tests these implants in human subjects, includes other participants such as Nick Wray, who also shared positive early experiences with the technology. Wray, who lives with ALS, expressed hope and excitement about the potential for digital autonomy and the future impact of BCIs. Neuralink was founded in 2016.
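To make the decoding idea concrete, the toy example below maps per-channel firing rates to a 2-D cursor velocity with a linear decoder; Neuralink’s real pipeline is not public, so every dimension and weight here is illustrative.

```python
# Toy illustration of the general idea behind BCI cursor control:
# firing rates from many electrodes are mapped to a 2-D cursor
# velocity by a learned linear decoder. All values are made up.

import numpy as np

rng = np.random.default_rng(0)
N_CHANNELS = 128                        # matches the thread count above

# Hypothetical decoder weights, normally fit in a calibration session.
W = rng.normal(scale=0.05, size=(2, N_CHANNELS))

def decode_velocity(firing_rates_hz: np.ndarray) -> np.ndarray:
    """Map a vector of per-channel firing rates to (vx, vy) in px/s."""
    return W @ firing_rates_hz

cursor = np.zeros(2)
for _ in range(50):                     # 50 decode ticks at 20 ms each
    rates = rng.poisson(lam=10.0, size=N_CHANNELS).astype(float)
    cursor += 0.02 * decode_velocity(rates)
print("cursor position (px):", np.round(cursor, 1))
```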
Tags: robot, brain-computer-interface, Neuralink, assistive-technology, medical-implant, human-machine-interaction, neurotechnology

The Very Real Case for Brain-Computer Implants
The article discusses the emerging and rapidly advancing technology of brain-computer interfaces (BCIs), focusing on the competitive efforts of companies like Synchron to develop commercial implants that enable direct communication between the human brain and digital devices. These implants allow users to control computers or phones through thought alone, a concept once confined to science fiction but now becoming a tangible reality. The piece highlights the significance of this technology in Silicon Valley’s tech landscape and its potential to transform human-computer interaction. The content is drawn from an episode of WIRED’s podcast “Uncanny Valley,” where hosts and guests explore the implications, challenges, and progress in the BCI field. While the transcript includes casual conversation and podcast logistics, the core takeaway centers on the promise and ongoing development of brain implants as a groundbreaking interface technology, underscoring a heated race among companies to bring effective, user-friendly BCIs to market. The article’s transcript is incomplete and somewhat fragmented, however, limiting detailed insight into technical specifics or broader implications.
Tags: brain-computer-interface, neurotechnology, biomedical-implants, human-machine-interaction, neural-implants, brain-computer-communication, medical-technology

US' new AI assistant will help astronauts tackle emergencies in space
Researchers at Texas A&M University, led by Dr. Daniel Selva, have developed Daphne-AT, a virtual assistant designed to help astronauts quickly diagnose and resolve spacecraft anomalies during long-duration space missions. Daphne-AT continuously monitors critical life support and environmental systems, such as oxygen and carbon dioxide levels, using real-time spacecraft data to detect anomalies and provide clear, step-by-step guidance to astronauts. The system aims to reduce mental workload and improve problem-solving efficiency when immediate expert support is unavailable. Testing involved virtual reality simulations at NASA’s Human Exploration Research Analog (HERA) facility with participants of varying aerospace expertise. Results showed that the assistant helped participants solve anomalies faster and handle more issues without compromising situational awareness. In longer-duration tests with trained professionals at HERA, however, the time to resolve anomalies did not change significantly, likely due to participants’ experience and the limited number of anomalies presented. Beyond space missions, Daphne-AT’s approach could also benefit emergency responders by providing timely, step-by-step guidance when expert help is out of reach.
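A minimal sketch of such a monitoring-and-guidance loop appears below, with invented nominal ranges and procedure text standing in for Daphne-AT’s actual rules.

```python
# Sketch of the monitoring loop described above: compare live
# life-support telemetry against nominal ranges and surface guidance
# when a reading drifts out of bounds. All values are illustrative.

NOMINAL = {                     # (low, high) bounds per sensor
    "o2_pct":  (19.5, 23.5),
    "co2_ppm": (0.0, 5000.0),
}

PROCEDURES = {
    "o2_pct":  ["Check O2 generator status", "Verify cabin fans",
                "Open spare O2 valve"],
    "co2_ppm": ["Inspect CO2 scrubber", "Swap scrubber cartridge",
                "Increase airflow"],
}

def check(telemetry: dict) -> list[str]:
    """Return guidance steps for every out-of-range reading."""
    advice = []
    for sensor, value in telemetry.items():
        low, high = NOMINAL[sensor]
        if not (low <= value <= high):
            advice.append(f"Anomaly: {sensor}={value}")
            advice.extend(f"  step {i}: {s}"
                          for i, s in enumerate(PROCEDURES[sensor], 1))
    return advice

print("\n".join(check({"o2_pct": 18.9, "co2_ppm": 4200.0})))
```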
Tags: IoT, virtual-assistant, space-technology, anomaly-detection, real-time-data, aerospace-engineering, human-machine-interaction

Control A Robot By Sitting In This Chair
The article introduces the Capsule Interface developed by H2L, a groundbreaking device that allows users to control a robot simply by sitting in a specialized chair. This innovation merges virtual reality and robotics, offering an immersive and intuitive way to operate robotic systems. The Capsule Interface captures the user's movements and intentions, translating them into precise robotic actions, potentially revolutionizing how humans interact with machines. While the article hints at the futuristic potential of the Capsule Interface, it does not provide detailed technical specifications or specific applications. However, it suggests that this technology could lead to significant advancements in fields such as remote operation, telepresence, and enhanced VR experiences, opening new possibilities for both entertainment and practical uses in robotics.
Tags: robot, robotics, VR-interface, human-machine-interaction, wearable-technology, control-systems

New capsule lets users teleport full-body motion to robots remotely
H2L, a Tokyo-based company, has developed the Capsule Interface, a novel teleoperation system that uses advanced muscle displacement sensors to capture subtle shifts in muscle tension and intent in real time. Unlike traditional teleoperation methods relying on motion sensors (IMUs, exoskeletons, optical trackers), this muscle-centric approach enables humanoid robots to replicate not only a user’s movements but also the force and effort behind them. This results in more realistic, immersive, and emotionally resonant remote interactions, as robots can mirror the intensity of actions such as lifting heavy objects, enhancing haptic authenticity and a sense of embodiment. The Capsule Interface transforms the user’s body into a remote control for humanoid robots, allowing full-body motion and force transmission while the user remains seated or lying down. Equipped with speakers, a display, and muscle sensors, the system offers a low-effort, natural experience that can be integrated into everyday furniture like beds or chairs, avoiding the need for bulky equipment or extensive training.
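As a rough sketch of this muscle-centric mapping, the example below lets one displacement reading drive both the position and the effort of a robot joint; the calibration constants and joint names are assumptions, not H2L’s interface.

```python
# Sketch of a muscle-centric mapping: a displacement reading per
# muscle group drives both position and effort of the corresponding
# robot joint, so force travels along with motion. Values are made up.

CALIBRATION = {                 # mm of displacement -> (rad, Nm) scaling
    "biceps":  {"joint": "elbow",    "rad_per_mm": 0.12, "nm_per_mm": 1.8},
    "deltoid": {"joint": "shoulder", "rad_per_mm": 0.08, "nm_per_mm": 2.5},
}

def to_joint_commands(displacements_mm: dict) -> list[dict]:
    """Translate muscle displacement readings into robot joint targets."""
    commands = []
    for muscle, mm in displacements_mm.items():
        cal = CALIBRATION[muscle]
        commands.append({
            "joint": cal["joint"],
            "angle_rad": cal["rad_per_mm"] * mm,   # how far to move
            "torque_nm": cal["nm_per_mm"] * mm,    # how hard to push
        })
    return commands

for cmd in to_joint_commands({"biceps": 4.0, "deltoid": 2.5}):
    print(cmd)
```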
Tags: robot, humanoid-robots, teleoperation, muscle-sensors, remote-control, human-machine-interaction, haptic-feedback