Articles tagged with "human-computer-interaction"
OpenAI Invests in Sam Altman’s New Brain Tech Startup Merge Labs
OpenAI has invested in Merge Labs, a neurotechnology startup co-founded by OpenAI CEO Sam Altman that aims to develop brain-computer interfaces (BCIs) connecting human brains to computers via ultrasound. Merge Labs has raised $252 million from investors including OpenAI, Bain Capital, and Gabe Newell. Unlike Elon Musk’s Neuralink, which implants electrodes directly into the brain, Merge plans to read and modulate neural activity non-invasively, using molecules and ultrasound rather than implants. The company envisions interfaces that integrate biology, devices, and AI to create accessible, user-friendly brain-computer connections. AI will be central to Merge’s approach, with OpenAI collaborating on scientific foundation models to interpret neural signals, adapt to individuals, and improve interface reliability despite noisy data. This could enable brain-computer interactions more complex than current capabilities such as controlling cursors or robotic arms. Merge is a spinoff of Forest Neurotech, a nonprofit focused on brain research, particularly mental health…
Tags: IoT, brain-computer-interface, neurotechnology, AI, ultrasound-technology, wearable-devices, human-computer-interaction

OpenAI bets big on audio as Silicon Valley declares war on screens
OpenAI is making a significant strategic shift toward audio AI, consolidating multiple teams to develop advanced audio models in preparation for an audio-first personal device expected to launch around early 2026. This initiative goes beyond improving ChatGPT’s voice capabilities; it aims to create natural-sounding, conversational AI that can handle interruptions and even speak simultaneously with users, mimicking real human interaction. The company envisions a family of devices (potentially including glasses or screenless smart speakers) that function more like companions than traditional tools, reflecting a broader industry trend toward audio-centric interfaces. This shift aligns with a wider movement in Silicon Valley, where major tech players like Meta, Google, and Tesla are investing heavily in voice and audio technologies to replace or supplement screen-based interactions. Meta’s Ray-Ban smart glasses use advanced microphones for directional listening, Google is experimenting with conversational search summaries, and Tesla is integrating large language models into vehicles for voice-controlled assistance. Startups are also entering the space with innovative but varied products…
Tags: IoT, audio-AI, smart-devices, voice-assistants, wearable-technology, natural-language-processing, human-computer-interaction

The phone is dead. Long live . . . what exactly?
True Ventures co-founder Jon Callaghan predicts that smartphones as we know them will become obsolete within five to ten years, replaced by fundamentally different human-computer interfaces. Callaghan argues that current phones are inefficient and disruptive tools for interacting with digital intelligence, prompting True Ventures to invest heavily in exploring alternative interfaces, both hardware and software. This approach reflects the firm’s history of early bets on transformative technologies like Fitbit, Peloton, and Ring, each representing a new, more natural way for humans to engage with technology. The latest embodiment of this vision is Sandbar, a wearable device worn on the index finger and designed to capture and organize thoughts through voice notes, functioning as a “thought companion.” Unlike other wearables focused on health or passive recording, Sandbar aims to meet a core human behavioral need by being an active partner in idea capture, supported by AI and an associated app. True Ventures was drawn not only to the product but also to founders Mina Fahmi and Kirak Hong, whose backgrounds in neural interfaces…
Tags: IoT, wearable-technology, human-computer-interaction, voice-interface, smart-devices, future-technology, hardware-innovation

Skin patch lets users type and read messages through touch
Researchers have developed a soft, skin-like patch that enables users to type and receive text messages through touch, leveraging advances in stretchable electronics, gel-based sensors, and AI. Unlike conventional digital devices that detect only simple taps and swipes, the patch uses an iontronic sensor array embedded in a flexible, stretchable copper circuit layered with silicone to detect subtle pressure changes on the skin. The patch encodes ASCII characters by dividing each character into four two-bit segments, with each sensor registering presses that correspond to segment values. Feedback is provided via vibration patterns, where actuators vibrate a specific number of times to represent each segment, creating a tactile communication system aligned with the ASCII standard. To interpret touch inputs without requiring extensive data collection, the researchers developed a mathematical model simulating pressing behavior, capturing variations in force, duration, and press count. Demonstrations include typing the message “Go!” with tactile confirmation and controlling a racing game in which presses steer the vehicle and vibration intensity indicates proximity…
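The segment scheme described above maps cleanly to a few lines of code. The Python sketch below shows one plausible reading of it: each 8-bit character code is split into four 2-bit segments, and each segment value is rendered as a pulse count. The MSB-first segment order and the “value + 1 pulses” mapping are assumptions for illustration, not details confirmed by the article.

```python
# Hypothetical sketch of the four two-bit-segment encoding described above.
# Assumptions: characters are 8-bit codes split MSB-first into four 2-bit
# segments, and each segment value 0-3 maps to 1-4 presses/pulses on the
# corresponding sensor/actuator. The researchers' exact mapping may differ.

def encode_segments(char: str) -> list[int]:
    """Split a character's 8-bit code into four 2-bit segments (MSB first)."""
    code = ord(char)
    if code > 0xFF:
        raise ValueError("only 8-bit characters are supported")
    return [(code >> shift) & 0b11 for shift in (6, 4, 2, 0)]

def decode_segments(segments: list[int]) -> str:
    """Reassemble four 2-bit segments back into a character."""
    code = 0
    for seg in segments:
        code = (code << 2) | seg
    return chr(code)

def vibration_pattern(char: str) -> list[int]:
    """Map each segment value 0-3 to 1-4 actuator pulses (assumed scheme)."""
    return [seg + 1 for seg in encode_segments(char)]

if __name__ == "__main__":
    # Round-trip the demo message "Go!" and show the pulse counts.
    for ch in "Go!":
        segs = encode_segments(ch)
        assert decode_segments(segs) == ch
        print(ch, segs, vibration_pattern(ch))
```

Running the sketch prints each character of “Go!” with its segment values and the corresponding pulse counts, mirroring the round trip between typed input and vibration feedback.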
Tags: IoT, wearable-technology, soft-materials, human-computer-interaction, tactile-sensors, stretchable-electronics, AI-algorithms

Altman describes OpenAI’s forthcoming AI device as more peaceful and calm than the iPhone
OpenAI CEO Sam Altman and former Apple chief designer Jony Ive have revealed insights into their upcoming AI hardware device, currently in prototype form, emphasizing its simplicity and calm user experience. Altman anticipates that initial reactions to the device will be underwhelming due to its minimalistic design, which contrasts sharply with the complexity and distractions of modern technology. He compared the device’s vibe to “sitting in the most beautiful cabin by a lake,” highlighting its focus on peace, calm, and contextual awareness rather than the flashy, notification-heavy experience typical of current smartphones like the iPhone. Altman criticized existing devices for their overwhelming distractions, likening their use to navigating a noisy, chaotic environment filled with flashing lights and interruptions. In contrast, the new AI device aims to filter information intelligently, presenting it at appropriate times and earning the user’s trust over long-term use. Ive expressed a design philosophy centered on creating products that feel both sophisticated and intuitively simple, encouraging effortless interaction without intimidation. The device…
Tags: IoT, AI-device, consumer-electronics, smart-technology, human-computer-interaction, wearable-technology, ambient-computing

Ultra-thin patch delivers high-precision feel on flat screens
Northwestern University engineers have developed VoxeLite, an ultra-thin, fingertip-worn haptic device that achieves human-level resolution in touch by delivering highly precise tactile sensations on flat screens. Unlike previous haptic technologies that relied on coarse vibrations, VoxeLite uses a dense grid of tactile pixels (small nodes embedded in a stretchable latex sheet) that use electroadhesion to modulate friction and mechanical force on the skin. This allows users to feel virtual textures with clarity matching the spatial and temporal acuity of the human fingertip, enabling realistic sensations such as roughness or smoothness by adjusting voltage levels. VoxeLite supports two operational modes: an active mode in which nodes rapidly tilt up to 800 times per second to generate dynamic virtual textures, and a passive mode that maintains comfort and normal touch interaction without removal. User tests demonstrated high accuracy in recognizing directional cues (87%) and identifying fabric textures like leather and corduroy (81%). The device weighs less than a gram and is designed…
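Since electroadhesive force grows roughly with the square of the applied voltage, driving such a grid amounts to converting a per-node roughness map into drive voltages and streaming frames at the active-mode rate. The Python sketch below illustrates that idea; the grid size, peak voltage, square-root linearization, and the corduroy pattern are illustrative assumptions, not values from the VoxeLite work.

```python
# Minimal sketch (not the authors' code) of driving a grid of electroadhesive
# tactile pixels. Assumptions: friction scales roughly with voltage squared,
# frames stream at up to 800 Hz in "active" mode, and texture arrives as a
# 2D roughness map in [0, 1]. V_MAX and GRID are illustrative, not measured.

import math

V_MAX = 300.0   # assumed peak drive voltage per node (illustrative)
GRID = (8, 8)   # assumed node grid size

def roughness_to_voltage(roughness: float) -> float:
    """Electroadhesive force grows ~V^2, so take a square root to linearize feel."""
    return V_MAX * math.sqrt(max(0.0, min(1.0, roughness)))

def frame_voltages(texture: list[list[float]]) -> list[list[float]]:
    """Convert a roughness map (one value per node) into per-node voltages."""
    return [[roughness_to_voltage(r) for r in row] for row in texture]

def corduroy(t: float) -> list[list[float]]:
    """Illustrative ridged texture that scrolls over time (one active-mode frame)."""
    rows, cols = GRID
    return frame_voltages(
        [[0.5 + 0.5 * math.sin(2 * math.pi * (c / 2.0 - 10 * t))
          for c in range(cols)] for _ in range(rows)]
    )

if __name__ == "__main__":
    frame = corduroy(t=0.0)
    print([f"{v:.0f} V" for v in frame[0]])  # first row of node voltages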
Tags: robot, haptics, wearable-technology, tactile-interface, human-computer-interaction, electroadhesion, digital-touchscreens

Former Meta employees launch Sandbar, a smart ring that takes voice notes and controls music
Former Meta employees Mina Fahmi and Kirak Hong have launched Sandbar, a startup whose smart ring, Stream, is designed to capture voice notes and control music through a discreet, wearable interface. Both founders have extensive backgrounds in human-computer interaction and neural interfaces, having worked at companies like Kernel, Magic Leap, Google, and CTRL-Labs before their time at Meta. Motivated by the challenge of capturing fleeting thoughts without interrupting daily activities or drawing attention, they developed Stream to enable users to record whispered voice notes via a touch-activated microphone embedded in a ring worn on the dominant hand’s index finger. The ring’s companion iOS app transcribes these notes and includes an AI chatbot that helps organize and edit the content, offering personalized voice feedback and haptic confirmation for silent use in public. Beyond voice capture, the Stream ring functions as a media controller, allowing users to play, pause, skip tracks, and adjust volume without needing to access their phone or headphones. Sandbar is opening pre-orders for the Stream ring…
Tags: IoT, wearable-technology, smart-ring, voice-control, AI-assistant, human-computer-interaction, personal-productivity-devices

Former Meta employees launch a ring to take voice notes and control music
Former Meta employees Mina Fahmi and Kirak Hong have launched Sandbar, introducing Stream, a smart ring designed to capture voice notes and control music discreetly. Drawing on their extensive backgrounds in human-computer interfaces and neural tech, the founders created Stream to address the challenge of capturing fleeting thoughts without interrupting daily activities or speaking aloud in public. The ring, worn on the dominant hand’s index finger, features microphones activated by a touchpad gesture, enabling users to record whispers that are transcribed in a companion iOS app. The app includes an AI chatbot that interacts with users during recording, helping organize and edit notes, with a personalized assistant voice that resembles the user’s own. Beyond voice note-taking, the Stream ring functions as a media controller, allowing users to play, pause, skip tracks, and adjust volume without needing to access their phone or headphones. The device provides haptic feedback to confirm inputs and supports private conversations via headphones in noisy environments. Sandbar is opening pre-orders for Stream…
Tags: IoT, wearable-technology, voice-interface, smart-ring, human-computer-interaction, AI-assistant, personal-productivity-devices

You Can Now Feel Touch In VR
The USC Viterbi School of Engineering has developed a new haptic system that enables users to experience the sense of touch within virtual reality environments. This innovation marks a significant advancement in VR technology by adding tactile feedback, allowing users to physically feel interactions in a digital space. The system enhances immersion and could transform how people engage with virtual content, making online interactions more realistic and intuitive. This breakthrough has broad implications for various applications, including gaming, remote collaboration, education, and training, where the ability to feel virtual objects or interactions can improve user experience and effectiveness. While the article does not provide detailed technical specifications or deployment timelines, the introduction of touch sensation in VR represents a major step toward more comprehensive and multisensory virtual experiences.
Tags: robot, haptics, virtual-reality, human-computer-interaction, wearable-technology, sensory-technology, USC-Viterbi

Photos: Meta's new wristband translates hand movements to digital commands
Meta researchers have developed a novel wristband called sEMG-RD (surface electromyography research device) that translates hand gestures into digital commands by interpreting electrical motor nerve signals from muscle movements at the wrist. The device uses 16 gold-plated dry electrodes arranged around the wrist to capture muscle contraction signals at a high sampling rate, enabling real-time gesture recognition without the need for skin preparation or conductive gels. Its modular design accommodates different wrist sizes and muscle configurations, while the heavier processing components are housed in a separate capsule to enhance user comfort. The sEMG-RD supports a wide range of computer interactions beyond simple cursor control, including finger pinches, thumb swipes, thumb taps, and handwriting-like text entry at speeds of about 20.9 words per minute. By employing deep learning models trained on data from many users, the system can decode gestures generically without requiring personalized calibration, facilitating broad usability. The device is designed for ease of use, supporting both left- and right-handed users…
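A decoding pipeline for such a device would typically slice the 16-channel signal into short windows, extract features, and classify each window with a model trained across many users, which is what removes the need for per-user calibration. The Python sketch below illustrates that shape of pipeline; the sampling rate, window length, feature set, gesture list, and the stand-in linear classifier are all assumptions, since the article does not describe Meta’s actual model.

```python
# Illustrative sketch (not Meta's code) of a generic sEMG decoding loop:
# 16 wrist electrodes sampled at a high rate, sliced into short windows,
# featurized, and classified by a cross-user model. Rates, window size,
# features, and gesture names are assumptions for illustration.

import numpy as np

SAMPLE_RATE_HZ = 2000   # assumed; the article only says "high sampling rate"
N_CHANNELS = 16         # gold-plated dry electrodes around the wrist
WINDOW_MS = 50          # assumed window length

GESTURES = ["rest", "pinch", "thumb_swipe", "thumb_tap", "handwriting"]

def window_samples() -> int:
    return SAMPLE_RATE_HZ * WINDOW_MS // 1000

def featurize(window: np.ndarray) -> np.ndarray:
    """Classic sEMG features per channel: mean absolute value and RMS."""
    mav = np.abs(window).mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    return np.concatenate([mav, rms])       # shape: (2 * N_CHANNELS,)

def decode(window: np.ndarray, weights: np.ndarray) -> str:
    """Stand-in for the cross-user deep model: a simple linear classifier."""
    scores = featurize(window) @ weights    # weights: (2*N_CHANNELS, n_gestures)
    return GESTURES[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.normal(size=(window_samples(), N_CHANNELS))   # fake signal
    weights = rng.normal(size=(2 * N_CHANNELS, len(GESTURES)))
    print(decode(window, weights))
```

The key design point carried over from the article is the last step: because the weights are shared across users rather than fit per wearer, a new user can put the band on and type or gesture without a calibration session.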
Tags: IoT, wearable-technology, electromyography, Bluetooth-devices, human-computer-interaction, gesture-recognition, assistive-technology

Meta researchers are developing a gesture-controlled wristband that can interact with a computer
Researchers at Meta Reality Labs are developing a gesture-controlled wristband that enables users to interact with computers through hand gestures, such as moving a cursor, opening applications, and writing messages in the air. The device detects electrical signals generated by muscle activity (sEMG signals) to interpret a user's intended movements, even before the user physically executes them. This technology aims to offer a less invasive and more accessible computer interface, particularly benefiting individuals with motor disabilities. The wristband is designed to assist people with spinal cord injuries who may have limited or no hand mobility but still exhibit some muscle activity. Unlike more invasive alternatives, such as Elon Musk’s neural implants, Meta’s device operates without surgical intervention and functions at a higher frequency than EEG-based systems. The researchers plan to test the wristband with users who have paralysis to validate its effectiveness in enabling computer control through subtle muscle signals.
Tags: IoT, wearable-technology, gesture-control, assistive-devices, human-computer-interaction, muscle-signal-detection, Meta-Reality-Labs

Meta researchers are developing a gesture-controlled wristband that can control a computer
Researchers at Meta Reality Labs are developing a gesture-controlled wristband that enables users to control a computer through hand gestures, such as moving a cursor, opening applications, and writing messages in the air. The wristband detects electrical signals generated by muscle activity (sEMG signals) to interpret user intentions, even before the physical movement occurs. This technology aims to provide a less invasive and more accessible way for people, especially those with motor disabilities or spinal cord injuries, to interact with computers. The device is notable for its ability to detect muscle activity even in individuals with complete hand paralysis, allowing them to perform intended actions without full arm or hand mobility. Unlike other brain-computer interface projects that may require surgical implants, Meta’s wristband operates non-invasively and at a higher frequency than EEG-based systems, offering immediate usability without surgery. The research, published in the journal Nature, highlights the potential of this wristband to improve computer accessibility for people with severe motor impairments.
Tags: robot, wearable-technology, gesture-control, assistive-devices, human-computer-interaction, muscle-signal-detection, Meta-Reality-Labs