RIEM News

Articles tagged with "AI-assistant"

  • Google Maps now lets you access Gemini while walking and cycling

    Google has introduced a new hands-free feature in Google Maps that allows users to interact with its AI assistant, Gemini, while walking or cycling. This update lets users ask conversational questions in real time, such as what attractions and amenities are along their route or when they will arrive, and even send text messages, without needing to stop or type. The feature supports multi-turn conversations, enhancing navigation by providing personalized, context-aware assistance to keep users informed and safe on the move. It is currently available on iOS worldwide where Gemini is supported and is gradually rolling out on Android. This enhancement is part of Google’s broader strategy to shift Google Maps from static directions to dynamic, AI-powered navigation. Recent updates include a Gemini-powered “know before you go” tips section offering practical information like reservation advice and parking suggestions, an improved Explore tab for discovering trending local spots, and an EV charger availability prediction feature. These developments reflect Google’s commitment to integrating advanced AI capabilities into Maps, improving user experience, and competing

    IoT, smart-navigation, AI-assistant, electric-vehicle-charging, real-time-data, autonomous-features, conversational-AI
  • Amazon says 97% of its devices can support Alexa+

    At the Consumer Electronics Show (CES) in Las Vegas, Amazon revealed that 97% of the more than 600 million devices it has shipped are capable of supporting its upgraded AI assistant, Alexa+. This new generative AI platform enhances Alexa with more expressive voices, access to extensive world knowledge, and AI agents that can perform tasks like ordering food or calling an Uber. Amazon has begun rolling out Alexa+ to over one million users, with plans to expand availability primarily to Prime members, though no exact full release date has been announced. Amazon’s strategy leverages Alexa’s widespread presence in homes and strong brand familiarity to position it as a foundational AI assistant amid growing competition from other AI chatbots like ChatGPT and Claude. Alexa’s natural voice interface and continuous user engagement are seen as key advantages for growth. Additionally, Amazon highlighted its recent acquisition of Bee, an AI wearable that records conversations and provides insights via text or voice chat, which will eventually integrate more closely with Alexa while maintaining its own distinct identity

    IoT, Amazon-Alexa, smart-home, AI-assistant, voice-technology, Alexa+, AI-integration
  • The most bizarre tech announced so far at CES 2026

    At CES 2026, alongside major tech announcements, several unusual and quirky gadgets have captured attention for their novelty and unique applications. Razer unveiled Project AVA, a 5.5-inch holographic anime assistant designed to support gaming, productivity, and daily organization. Featuring lifelike avatars with eye-tracking and expressive faces, it continuously monitors users via a built-in camera, raising privacy questions. Meanwhile, Mind with Heart Robotics introduced An’An, an AI-powered baby panda robot aimed at elderly care. Equipped with sensors and emotional AI, An’An responds to touch, remembers user preferences, provides companionship to combat loneliness, and assists with memory and daily reminders, while also keeping caregivers informed. Other standout innovations include GoveeLife’s $500 Smart Nugget Ice Maker Pro, which uses AI NoiseGuard technology to reduce operational noise by detecting noisy freezing cycles and auto-defrosting before they occur. Seattle Ultrasonics presented an ultrasonic chef’s knife that vibrates more than 30,000 times per second,

    robot, AI, elderly-care, smart-home, IoT, AI-assistant, robotics
  • Why smart homes need to think in spaces, not gadgets

    The article discusses the evolution and challenges of smart home technology, emphasizing that the initial promise of effortless automation has been undermined by complexity and fragmentation. Early smart homes focused on making individual devices intelligent—such as voice-controlled lights or learning thermostats—but as more gadgets were added, the experience became disjointed, with multiple apps and unreliable automations. The core issue identified is the lack of system-level intelligence that understands the home as a dynamic living environment, where context, routines, and spatial relationships matter more than isolated device commands. To address this, the article highlights Tuya Smart’s approach, which shifts the focus from individual devices to creating intelligence embedded in physical spaces. Tuya Smart offers a cloud platform that integrates AI with a broad ecosystem of devices, overcoming fragmentation by enabling coordinated behavior across rooms and scenarios. Central to this system is Hey Tuya, an AI life assistant designed not just for command-based interaction but to sense environments, learn user habits, and proactively manage devices in a contextual, space

    IoT, smart-home, artificial-intelligence, home-automation, connected-devices, AI-assistant, Tuya-Smart
  • Ford has an AI assistant and new hands-free BlueCruise tech on the way

    Ford announced at the 2026 Consumer Electronics Show that it is developing an AI assistant initially launching in its smartphone app in early 2026, with plans to integrate it natively into vehicles by 2027. The assistant, hosted on Google Cloud and built using off-the-shelf large language models (LLMs), will have deep access to vehicle-specific data, enabling it to answer both high-level questions (e.g., truck bed capacity) and provide real-time granular information such as oil life. While Ford has not detailed the in-car user experience, the move aligns with trends from other automakers like Rivian and Tesla, who have introduced advanced digital assistants capable of handling complex tasks including messaging, navigation, and climate control. In addition to the AI assistant, Ford teased a next-generation BlueCruise advanced driver assistance system that will be 30% cheaper to produce and debut in 2027 on a new mid-sized electric pickup built on its Universal Electric Vehicle platform. This updated BlueCruise

    robot, AI-assistant, autonomous-driving, BlueCruise, electric-vehicles, advanced-driver-assistance-systems, automotive-technology
  • New cyber pet for home companionship aims to strengthen family bonds

    At CES 2026 in Las Vegas, Chinese brand OLLOBOT introduced a new type of emotionally supportive robot designed as a cyber-pet for home companionship. Unlike traditional humanoid robots, OLLOBOT focuses on creating warm, humorous, and emotionally engaging interactions to strengthen family bonds. The robot adapts easily to users through an embodied intelligence system powered by a Vision-Language-Action (VLA) model, which processes multimodal inputs—such as sight, sound, and touch—in real time. This allows the cyber-pet to perceive user moods, activities, and environmental factors, enabling proactive assistance like reminders and personalized interactions. OLLOBOT aims to bridge the gap between technology and family life by encouraging intentional interaction, especially among children who might otherwise be absorbed by screens. It communicates in a unique “pet language” that sparks curiosity and prompts parent-child conversations. The robot also functions as a digital assistant, offering timely reminders to help maintain family connections. Privacy is a key feature, with all

    robot, embodied-intelligence, home-companionship, AI-assistant, cyber-pet, human-robot-interaction, CES-2026
  • The most bizarre tech announced so far at CES 2026

    At CES 2026, alongside major tech announcements, several unusual and quirky gadgets stood out for their novelty and creativity. Razer unveiled an evolved version of its esports AI coach: a 5.5-inch holographic anime assistant that sits on your desk, offering gaming tips, productivity help, and personal advice through lifelike animated avatars with eye-tracking and expressive features. Notably, it uses a built-in camera to monitor users and their screens, raising privacy questions, and remains a concept without guaranteed production. Another highlight was An’An, an AI-powered baby panda robot designed to support elderly care by providing emotional companionship, personalized interaction through voice and touch recognition, and reminders to aid memory, while keeping caregivers informed. Other standout innovations included a $500 AI-enabled countertop ice maker from Govee Life that uses patented NoiseGuard technology to detect and prevent noisy freezing cycles by auto-defrosting, producing up to 60 pounds of ice daily. Seattle Ultrasonics introduced an ultrasonic chef’s

    robot, AI, elderly-care-robot, smart-home-appliance, IoT, AI-assistant, emotional-AI
  • Amazon’s AI assistant comes to the web with Alexa.com

    Amazon has launched Alexa.com, a new website that brings its AI-powered digital assistant, Alexa+, to the web, allowing users to interact with Alexa much like other AI chatbots such as ChatGPT or Google’s Gemini. This move aims to expand Alexa’s presence beyond its established footprint in smart home devices—over 600 million Echo devices sold worldwide—by making the assistant accessible on phones and the web. Alongside this, Amazon is updating the Alexa mobile app to feature a more chatbot-centric interface, prioritizing conversational interactions over other functionalities. Alexa.com enables users to perform common tasks such as exploring complex topics, creating content, and planning trips, but Amazon is emphasizing Alexa’s unique focus on family-oriented needs. These include managing smart home devices, updating family calendars and to-do lists, making dinner reservations, adding groceries to Amazon Fresh or Whole Foods carts, finding and saving recipes, and planning family activities with personalized recommendations. Amazon is also integrating third-party apps like Angi, Expedia, Square, and

    IoT, smart-home, Alexa, AI-assistant, smart-devices, voice-control, Amazon-Echo
  • Waymo is testing Gemini as an in-car AI assistant in its robotaxis

    Waymo is reportedly testing the integration of Google’s Gemini AI chatbot as an in-car assistant within its robotaxis, aiming to enhance the rider experience by providing a helpful, friendly AI companion. According to a discovery in Waymo’s mobile app code by a researcher named Wong, the assistant—referred to internally as the “Waymo Ride Assistant Meta-Prompt”—is designed to answer rider questions, manage certain in-cabin functions such as climate control, lighting, and music, and offer reassurance when needed. The assistant uses clear, simple language, keeps responses brief, and personalizes interactions by addressing riders by name and referencing contextual data like their trip history. However, it does not control features like volume, route changes, seat adjustments, or windows, and it deflects requests beyond its capabilities with aspirational phrases; a toy sketch of this kind of capability gating follows this entry. The Gemini-based assistant maintains a clear separation between itself and the autonomous driving system, known as the Waymo Driver, avoiding direct commentary on driving performance or incidents. It is instructed

    robot, AI-assistant, autonomous-vehicles, Waymo, in-car-technology, human-machine-interaction, self-driving-cars
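
    The entry above describes an assistant with an explicitly bounded scope: it may answer questions and adjust climate, lighting, and music, but deflects volume, route, seat, and window requests and never speaks for the Waymo Driver. Below is a minimal, hypothetical sketch of that kind of capability gating; the category names, phrasing, and function are illustrative assumptions, not Waymo's actual prompt or code.

```python
# Hypothetical capability gate for an in-cabin assistant, based only on the
# behaviour described in the entry above; none of these names are Waymo's.

ALLOWED_ACTIONS = {"climate", "lighting", "music"}           # assistant may act
DEFLECTED_ACTIONS = {"volume", "route", "seats", "windows"}  # politely declined
OFF_LIMITS_TOPICS = {"driving_performance", "incidents"}     # never discussed


def handle_request(rider_name: str, intent: str, topic: str = "") -> str:
    """Return a short, personalised reply for a single rider request."""
    if topic in OFF_LIMITS_TOPICS:
        # The assistant stays separate from the autonomous-driving system.
        return f"Sorry {rider_name}, I can't speak for the Waymo Driver."
    if intent in ALLOWED_ACTIONS:
        return f"Sure {rider_name}, adjusting the {intent} now."
    if intent in DEFLECTED_ACTIONS:
        # Out-of-scope cabin controls are deflected rather than attempted.
        return f"I can't change the {intent} yet, {rider_name}, but that may come later."
    return f"I'm not able to help with that, {rider_name}."


if __name__ == "__main__":
    print(handle_request("Asha", "climate"))
    print(handle_request("Asha", "windows"))
    print(handle_request("Asha", "chat", topic="driving_performance"))
```
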
  • Amazon’s AI assistant Alexa+ now works with Angi, Expedia, Square, and Yelp

    Amazon is enhancing its AI assistant, Alexa+, by integrating four new services—Angi, Expedia, Square, and Yelp—starting in 2026. These additions will enable users to perform tasks such as booking hotels, obtaining home service quotes, and scheduling salon appointments through natural language interactions. For example, with Expedia integration, customers can ask Alexa to find personalized hotel options, like pet-friendly stays, and manage reservations. These new partnerships expand Alexa+’s existing ecosystem, which already includes services like Fodor, OpenTable, Ticketmaster, Thumbtack, and Uber. Amazon aims to simplify consumer access to various online services by allowing conversational, back-and-forth interactions with Alexa+, similar to how ChatGPT operates. Early data suggests strong engagement with home and personal service providers like Thumbtack and Vagaro. However, widespread adoption depends on users being willing to shift from traditional web or app-based interactions to AI-driven platforms. For this transition to succeed, AI assistants must offer a user experience

    IoT, AI-assistant, Alexa, smart-home, digital-assistant, voice-control, Amazon-Alexa
  • Rivian is building its own AI assistant

    Rivian has been developing its own AI assistant for nearly two years, aiming to integrate it deeply with vehicle controls rather than offering a simple chatbot. The company’s software chief, Wassym Bensaid, indicated a potential consumer launch by the end of 2024, with more details expected at Rivian’s December 11 event. The AI assistant is designed with a model- and platform-agnostic architecture, employing an agentic framework that coordinates multiple AI models through an in-vehicle orchestration layer. This hybrid system balances edge AI (processing on the device) and cloud AI (using remote servers), enabling flexible and efficient task handling; a minimal illustrative sketch of this edge/cloud split follows this entry. This AI initiative aligns with Rivian’s broader strategy to increase vertical integration, as the company is also redesigning key vehicle components and developing much of its software stack in-house, including real-time operating systems for safety and infotainment. While Rivian’s AI assistant remains an internal project, separate from its multi-billion dollar technology joint venture with Volkswagen, the

    robot, AI-assistant, automotive-technology, edge-AI, cloud-AI, software-integration, vehicle-controls
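
    As a rough illustration of the hybrid architecture described above, the sketch below routes latency-sensitive vehicle-control intents to an on-device (edge) model and open-ended requests to a cloud model. All intent names, functions, and the routing rule are assumptions made for illustration; Rivian's actual orchestration layer is not public.

```python
# Illustrative sketch of a hybrid edge/cloud orchestration layer: latency-
# sensitive vehicle controls run on-device, open-ended queries go to a larger
# cloud model. Intent names and the routing rule are hypothetical, not Rivian's.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    intent: str    # e.g. "set_cabin_temp" or "plan_road_trip"
    payload: dict


def edge_model(task: Task) -> str:
    # Stand-in for a small on-device model with direct vehicle-control access.
    return f"[edge] executed {task.intent} with {task.payload}"


def cloud_model(task: Task) -> str:
    # Stand-in for a larger remote model used for open-ended reasoning.
    return f"[cloud] answered '{task.intent}' via remote LLM"


EDGE_INTENTS = {"set_cabin_temp", "open_trunk", "adjust_drive_mode"}


def orchestrate(task: Task,
                edge: Callable[[Task], str] = edge_model,
                cloud: Callable[[Task], str] = cloud_model) -> str:
    """Route a task to the edge or cloud path depending on its intent."""
    if task.intent in EDGE_INTENTS:
        return edge(task)   # fast path, works offline
    return cloud(task)      # heavier reasoning path


if __name__ == "__main__":
    print(orchestrate(Task("set_cabin_temp", {"celsius": 21})))
    print(orchestrate(Task("plan_road_trip", {"destination": "Moab"})))
```
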
  • New ‘KnoWay’ robotaxis cause chaos in upcoming Grand Theft Auto Online DLC

    The latest Grand Theft Auto Online expansion, titled “A Safehouse in the Hills,” introduces robotaxis from a fictional company called “KnoWay.” These autonomous vans, resembling early Waymo Chrysler Pacifica models, are depicted causing chaos by swerving through traffic, crashing into vehicles, and destroying billboards. The DLC, releasing December 10, features a storyline involving an AI assistant named “Haviland” and centers on players attempting to thwart the development of a mass surveillance network, suggesting the rogue behavior of the robotaxis is part of the narrative. Rockstar Games appears to draw inspiration from real-world controversies surrounding Waymo’s autonomous vehicles, which have faced criticism for privacy concerns and have been targeted by vandalism in various cities. The game’s tagline for KnoWay’s service, “We Kno where you’re going,” echoes surveillance anxieties. While Waymo has publicly committed to resisting unlawful government data requests and condemned vandalism against its fleet, the game’s chaotic portrayal taps into ongoing tensions

    robot, autonomous-vehicles, robotaxis, AI-assistant, surveillance, Waymo, gaming-technology
  • New ‘KnoWay’ robotaxis cause chaos in new Grand Theft Auto Online DLC

    The latest Grand Theft Auto Online expansion, titled “A Safehouse in the Hills,” introduces robotaxis from a fictional company called “KnoWay.” These autonomous vans, visually reminiscent of early Waymo Chrysler Pacifica models, are depicted causing chaos by swerving recklessly, crashing into vehicles, and destroying billboards. The DLC, available from December 10, features a storyline where players are tasked with stopping the development of a mass surveillance network, hinting that the robotaxis may have gone rogue. An AI assistant named “Haviland” is also teased, suggesting a broader tech-centric narrative. Rockstar Games appears to be drawing on real-world controversies surrounding autonomous vehicle companies like Waymo, whose vehicles have faced criticism and vandalism due to privacy concerns and their perceived role in surveillance. The in-game tagline for KnoWay’s service—“We Kno where you’re going”—echoes these privacy anxieties. Waymo has publicly opposed overly broad government requests for data and condemned vandalism against

    robot, autonomous-vehicles, robotaxis, AI-assistant, transportation-technology, surveillance, Waymo
  • Healthify upgrades its AI assistant Ria with real-time conversation capabilities

    Healthify, a Khosla-backed health startup with over 45 million registered users, has upgraded its AI assistant Ria to support real-time conversational capabilities powered by OpenAI’s technology. The enhanced Ria now supports more than 50 languages, including 14 Indian languages and mixed-language inputs like Hinglish and Spanglish. Users can interact with Ria to get personalized health insights drawn from data aggregated across sources such as fitness trackers, sleep monitors, and glucose sensors. Features include querying health summaries over specified time frames, logging meals via camera (including through Ray-Ban Meta smart glasses), and generating exercise plans—all through natural, conversational interactions; a toy sketch of this kind of time-window summary query follows this entry. Looking ahead, Healthify plans to integrate Ria more deeply into user onboarding to capture richer unstructured data and create a persistent memory layer for long-term personalized health guidance. The assistant will also support interactions between users and their coaches or nutritionists by providing real-time data retrieval and call transcription. The company is launching a $20/month AI-powered

    IoT, AI-assistant, health-tracking, smart-devices, real-time-conversation, wearable-technology, nutrition-monitoring
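
    The Healthify entry above mentions querying health summaries over specified time frames across several data sources. The toy sketch below shows one way such a time-window summary could be computed; the metric names and data model are invented for illustration and do not reflect Healthify's or Ria's actual implementation.

```python
# Toy sketch of a time-window health summary across several sources (steps,
# sleep, glucose). Purely illustrative; not Healthify's data model or API.

from dataclasses import dataclass
from datetime import date
from statistics import mean


@dataclass
class Reading:
    source: str   # "steps", "sleep_hours", or "glucose_mgdl"
    day: date
    value: float


def summarize(readings: list[Reading], start: date, end: date) -> dict[str, float]:
    """Average each metric over [start, end], mimicking a summary query."""
    buckets: dict[str, list[float]] = {}
    for r in readings:
        if start <= r.day <= end:
            buckets.setdefault(r.source, []).append(r.value)
    return {source: round(mean(values), 1) for source, values in buckets.items()}


if __name__ == "__main__":
    data = [
        Reading("steps", date(2025, 11, 1), 8200),
        Reading("steps", date(2025, 11, 2), 10100),
        Reading("sleep_hours", date(2025, 11, 1), 6.5),
        Reading("sleep_hours", date(2025, 11, 2), 7.8),
        Reading("glucose_mgdl", date(2025, 11, 2), 96),
    ]
    print(summarize(data, date(2025, 11, 1), date(2025, 11, 2)))
```
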
  • Former Meta employees launch Sandbar, a smart ring that takes voice notes and controls music

    Former Meta employees Mina Fahmi and Kirak Hong have launched Sandbar, a startup introducing Stream, a smart ring designed to capture voice notes and control music through a discreet, wearable interface. Both founders have extensive backgrounds in human-computer interaction and neural interfaces, having worked at companies like Kernel, Magic Leap, Google, and CTRL-Labs before their time at Meta. Motivated by the challenge of capturing fleeting thoughts without interrupting daily activities or drawing attention, they developed Stream to enable users to record whispered voice notes via a touch-activated microphone embedded in a ring worn on the dominant hand’s index finger. The ring’s companion iOS app transcribes these notes and includes an AI chatbot that helps organize and edit the content, offering personalized voice feedback and haptic confirmation for silent use in public; a small sketch of this capture, transcribe, and organize flow follows this entry. Beyond voice capture, the Stream ring functions as a media controller, allowing users to play, pause, skip tracks, and adjust volume without needing to access their phone or headphones. Sandbar is opening pre-orders for the

    IoT, wearable-technology, smart-ring, voice-control, AI-assistant, human-computer-interaction, personal-productivity-devices
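
    To make the capture, transcribe, and organize flow described above concrete, here is a small sketch with the speech-to-text and note-organizing steps stubbed out. The types and function names are hypothetical; Sandbar's hardware, models, and app logic are not represented.

```python
# Minimal sketch of a capture -> transcribe -> organise flow for whispered
# voice notes. The transcription and AI-organising steps are stubs; real
# hardware, speech models, and Sandbar's app logic are not represented.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class VoiceNote:
    captured_at: datetime
    transcript: str
    tags: list[str] = field(default_factory=list)


def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a speech-to-text call.
    return f"(transcribed {len(audio_chunk)} bytes of whispered audio)"


def organise(note: VoiceNote) -> VoiceNote:
    # Stand-in for the AI step that tags and tidies a note.
    if "idea" in note.transcript.lower():
        note.tags.append("idea")
    note.tags.append("inbox")
    return note


def capture(audio_chunk: bytes) -> VoiceNote:
    """Triggered when the ring's touch surface activates the microphone."""
    note = VoiceNote(captured_at=datetime.now(), transcript=transcribe(audio_chunk))
    return organise(note)


if __name__ == "__main__":
    print(capture(b"\x00" * 2048))
```
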
  • Former Meta employees launch a ring to take voice notes and control music

    Former Meta employees Mina Fahmi and Kirak Hong have launched Sandbar, introducing Stream, a smart ring designed to capture voice notes and control music discreetly. Drawing on their extensive backgrounds in human-computer interfaces and neural tech, the founders created Stream to address the challenge of capturing fleeting thoughts without interrupting daily activities or speaking aloud in public. The ring, worn on the dominant hand’s index finger, features microphones activated by a touchpad gesture, enabling users to record whispers that are transcribed in a companion iOS app. The app includes an AI chatbot that interacts with users during recording, helping organize and edit notes, with a personalized assistant voice that resembles the user’s own. Beyond voice note-taking, the Stream ring functions as a media controller, allowing users to play, pause, skip tracks, and adjust volume without needing to access their phone or headphones. The device provides haptic feedback to confirm inputs and supports private conversations via headphones in noisy environments. Sandbar is opening pre-orders for Stream

    IoT, wearable-technology, voice-interface, smart-ring, human-computer-interaction, AI-assistant, personal-productivity-devices
  • Watch: NEO humanoid robot does your chores and learns new skills

    The article introduces NEO, a humanoid robot developed by robotics firm 1X, designed to automate household chores and provide personal assistance. Weighing 66 pounds and operating quietly at 22 decibels, NEO can fold laundry, tidy rooms, open doors, fetch items, and switch off lights. It features a patented Tendon Drive system with high-torque density motors, enabling natural and gentle movements safe for home environments. NEO is equipped with advanced AI, including a built-in large language model (LLM) for conversational interaction, Audio and Visual Intelligence for contextual awareness, and Memory to retain information across interactions, making it a learning companion that adapts over time. NEO’s core functionality centers on its Chores feature, allowing users to assign and schedule tasks via voice or app commands. For unfamiliar tasks, users can connect with 1X Experts to train the robot, enhancing its capabilities. The robot supports Wi-Fi, Bluetooth, and 5G connectivity and includes

    robot, humanoid-robot, AI-assistant, home-automation, robotics, smart-home, machine-learning
  • Fitbit’s revamped app, with Gemini-powered health coach, rolls out to Premium users

    Fitbit has launched a revamped app featuring a new AI-powered health coach called "Coach," driven by Google's Gemini AI, now available to Fitbit Premium subscribers in the U.S. on Android, with an iOS rollout planned later this year. Coach acts as a comprehensive fitness trainer, sleep coach, and wellness advisor, creating personalized workout routines based on user goals, preferences, and equipment availability. It dynamically adjusts exercise plans in real time based on user feedback and can modify routines if injuries occur. Additionally, Coach analyzes sleep patterns and offers insights to improve sleep quality over time. The updated Fitbit app has a redesigned, user-friendly interface organized into four main tabs: Today, Fitness, Sleep, and Health. The Today tab provides a customizable overview of key metrics and weekly cardio load, while the Fitness tab contains workout plans and key exercise statistics, though some features like nutrition tracking and cycle logging are not yet available. The Sleep tab offers detailed sleep tracking with AI-driven coaching insights and a summary of sleep quality,

    IoT, wearable-technology, health-tech, AI-assistant, fitness-tracking, sleep-monitoring, digital-health
  • GM is bringing a Google Gemini-powered AI assistant to cars in 2026

    General Motors announced that starting in 2026, it will introduce a conversational AI assistant powered by Google’s Gemini technology across its Buick, Chevrolet, Cadillac, and GMC vehicles. This AI assistant aims to enable more natural conversations with drivers, allowing them to draft messages, plan multi-stop routes including charging stations or coffee shops, and prepare for meetings while on the move. The integration builds on GM’s existing “Google built-in” infotainment system, which already provides access to Google Assistant, Maps, and other apps, and follows Google’s 2023 introduction of Dialogflow chatbot features for non-emergency OnStar services. While specific capabilities of the Gemini-powered assistant remain unclear, GM envisions it as a personalized in-car AI that connects through OnStar to vehicle systems, offering maintenance alerts, route suggestions, explanations of car features like one-pedal driving, and pre-conditioning of the cabin climate. The assistant will learn from user habits to provide tailored recommendations, with an emphasis on user control over data access

    IoT, automotive-technology, AI-assistant, smart-vehicles, connected-cars, Google-Gemini, OnStar
  • NIO's Record Global Deliveries Exceed Targets as European Market Develops - CleanTechnica

    NIO Inc. achieved a record-breaking global vehicle delivery milestone in September 2025, delivering 34,749 vehicles—a 64.1% increase year-over-year—bringing its cumulative deliveries to 872,785 units by the end of the third quarter. This growth was driven by its diversified product portfolio across three brands: the premium NIO brand (13,728 units), the family-oriented ONVO brand (15,246 units), and the high-end firefly brand (5,775 units). The company’s expanding lineup includes smart electric SUVs, sedans, and compact urban vehicles, all equipped with advanced intelligent driving technology and the NOMI AI assistant. NIO is aggressively expanding its presence in Europe, viewing the continent as a key pillar of its global strategy. Since June 2025, the company has announced plans to enter five additional European countries between 2025 and 2026, adopting a hybrid multi-channel distribution model that combines direct-to-consumer sales with partnerships with established local

    electric-vehicles, smart-vehicles, NIO, energy, IoT, autonomous-driving, AI-assistant
  • Google’s Home app, a command center for the smart home, gets a Gemini upgrade

    Google has announced a major redesign of its Google Home app, aiming to improve the overall user experience for managing smart home devices. Acknowledging past shortcomings, Google focused first on enhancing the app’s performance, reliability, and design before integrating new AI features. The updated app now launches 70% faster, experiences 80% fewer crashes, and includes numerous battery and memory optimizations. Over the past year, Google has delivered more than 100 updates, and the app currently supports over 800 million devices from more than 50,000 manufacturers, reflecting its broad compatibility. A significant part of the update is the integration of Nest device management into the Google Home app, consolidating what was previously split between two apps. The app now supports Nest thermostats (from 2015 onward), cameras, doorbells, smoke and CO detectors, and smart locks, including migration of device history and features like emergency notifications. Camera functionality has been notably improved, with 30% faster live views, 40

    IoT, smart-home, Google-Home, Nest-devices, AI-assistant, device-management, smart-thermostat
  • Google teases its new Gemini-powered Google Home speaker, coming in spring 2026

    Google has announced its upcoming flagship smart speaker, powered by its new Gemini AI assistant, set to launch in spring 2026 at a price of $99. The device will be available in four colors—Porcelain, Hazel, Berry, and Jade—and is designed with a processor capable of handling advanced AI audio functions such as suppressing background noise, reverb, and echo. This ensures clearer interaction even in noisy environments. A new light ring will provide visual feedback on the assistant’s status during interactions, particularly in the Gemini Live mode, which requires a Google Home Premium subscription. The launch timing is deliberate, as Google aims first to roll out Gemini AI functionality to existing Google Home devices through an Early Access program, allowing current users to test and provide feedback before the new speaker becomes available. The speaker supports 360-degree audio and can be grouped with other Google Home devices for synchronized playback. Additionally, users will be able to pair two Google Home speakers with a Google TV Streamer to create a surround-sound setup.

    IoT, smart-home, Google-Home, AI-assistant, Gemini-AI, smart-speaker, eco-friendly-materials
  • Google reveals its Gemini-powered smart home lineup and AI strategy

    Google has unveiled a refreshed lineup of smart home devices powered by its new AI assistant, Gemini AI, including updated Nest Cam Outdoor, Nest Cam Indoor, and Nest Doorbell models. The company also previewed an upgraded Google Home smart speaker expected in spring 2026 and announced a partnership with Walmart to offer affordable AI-enabled cameras and doorbells under the onn brand. Google’s strategy emphasizes making Gemini accessible not only through its own flagship hardware but also by enabling other manufacturers to integrate Gemini into their devices, similar to how Android operates across various smartphone brands and price points. To maximize reach, Google plans to first roll out Gemini features to existing devices with sufficient processing power, leveraging its ecosystem of over 800 million devices connected via Google Home Cloud-to-Cloud Plus. This phased approach allows Google to test and refine Gemini’s capabilities before launching on new flagship products. Additionally, Google is providing partners with a comprehensive toolkit—including reference hardware designs, SoC recommendations, and an embedded camera SDK—to facilitate the development of

    IoT, smart-home, AI-assistant, Google-Gemini, connected-devices, smart-cameras, home-automation
  • Amazon unveils new Echo devices, powered by its AI, Alexa+

    At its annual hardware event, Amazon unveiled a new lineup of Echo devices powered by its advanced AI assistant, Alexa+. The four new models—the Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11—feature enhanced processing power and memory, enabled by Amazon’s custom-designed AZ3 and AZ3 Pro silicon chips. These chips improve wake word detection, conversation recognition, and support advanced AI models and vision transformers. Notably, the AZ3 Pro devices incorporate Omnisense, a sensor platform that uses cameras, audio, ultrasound, Wi-Fi radar, and other inputs to enable Alexa to respond contextually to events in the home, such as recognizing when a person enters a room or alerting users to an open garage door. The Echo Dot Max ($99.99) offers significantly improved sound with nearly three times the bass, while the Echo Studio ($219.99) boasts a smaller spherical design, spatial audio, Dolby Atmos support, and an upgraded light ring. Both can

    IoT, smart-home, Alexa, AI-assistant, Amazon-Echo, edge-computing, smart-devices
  • Alexa+ comes to new Fire TV devices with AI-powered conversations

    At Amazon’s fall hardware event, the company unveiled the integration of its upgraded AI assistant, Alexa+, into new Fire TV devices. Alexa+ enhances user interaction by enabling more complex and natural language queries, such as personalized movie or show recommendations based on previous viewing habits or favorite actors. It also provides real-time information during live sports events, including scores, player stats, and highlights, and allows users to find specific scenes in movies or shows through voice commands. Initially, this scene-finding feature supports thousands of Prime Video titles, with plans to expand to other platforms. Alongside the Alexa+ upgrade, Amazon introduced a new lineup of Fire TV hardware, including the Fire TV 2-Series, 4-Series, Omni QLED TVs, and the Fire TV Stick 4K Select. These devices feature improvements such as the Omnisense auto-adjusting brightness technology, Dialogue Boost for clearer audio, and faster performance with new quad-core processors. The flagship Fire TV Omni QLED Series boasts 60%

    IoT, smart-home, Alexa, AI-assistant, Fire-TV, voice-control, smart-devices
  • ABB Robotics adds generative AI assistant to RobotStudio Suite - The Robot Report

    ABB Robotics has integrated a generative AI assistant into its RobotStudio Suite to enhance robot programming by providing real-time, step-by-step guidance. This AI Assistant leverages a large language model (LLM) that interprets human language and draws from ABB’s extensive manuals and documentation to deliver context-rich responses (a generic sketch of this documentation-grounded pattern follows this entry). The feature aims to make robot programming faster, easier, and more accessible, particularly benefiting less experienced users and helping experts address technical challenges more efficiently. ABB emphasizes that this addition addresses the growing demand for AI in robotics driven by the need for greater flexibility, faster commissioning, and a shortage of specialist programming skills. By improving accessibility, ABB hopes to support smaller businesses and emerging sectors that often lack robotic automation expertise. The AI Assistant is integrated into RobotStudio’s cloud-hosted offline programming environment, serving as an effective training tool for students and early-career professionals. RobotStudio itself is a collaborative robot programming and simulation platform with features like automatic path planning to optimize productivity and reduce energy use. The AI Assistant

    robotics, generative-AI, robot-programming, industrial-robots, autonomous-mobile-robots, automation, AI-assistant
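
    The ABB entry above describes an assistant that grounds its answers in product manuals and documentation. The sketch below shows the generic retrieve-then-answer pattern with a naive keyword match standing in for real retrieval; the snippets, scoring, and prompt format are invented and are not ABB's implementation.

```python
# Generic retrieve-then-answer sketch over product documentation. The snippets,
# keyword scoring, and prompt layout are invented; this is not ABB's system.

DOC_SNIPPETS = {
    "gripper signals": "Map digital outputs to the gripper open and close commands.",
    "path planning": "Use automatic path planning to generate collision-free targets.",
    "safety zones": "Define safe zones before commissioning to limit robot reach.",
}


def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the question, return top-k."""
    q_words = set(question.lower().split())
    ranked = sorted(
        DOC_SNIPPETS.items(),
        key=lambda item: len(q_words & set(item[0].split())),
        reverse=True,
    )
    return [text for _, text in ranked[:k]]


def build_prompt(question: str) -> str:
    """Assemble a documentation-grounded prompt an assistant might send to an LLM."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer step by step:"


if __name__ == "__main__":
    print(build_prompt("How do I set up gripper signals for my station?"))
```
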
  • New Onvo L60 Launches At Low Cost Of $21,020–$29,010 - CleanTechnica

    The Chinese smart electric vehicle (EV) company Nio has launched its more affordable family-oriented brand Onvo’s new model, the L60 SUV, priced competitively between $21,020 (with Battery as a Service, BaaS) and $29,010 (battery included). The L60 features Nio’s unique battery swapping technology and offers a suite of advanced tech, including an updated “smart cockpit” system. This cockpit integrates a minimalist interior design with multiple high-resolution displays—a 17.2-inch main screen, a 13-inch head-up display, and an optional 8-inch rear passenger screen—powered by a Qualcomm Snapdragon 8295P chipset. It also includes an AI voice assistant named Xiaole, a premium 18-speaker Dolby Atmos audio system, and runs on Onvo’s Coconut OS, which supports over-the-air updates and personalization features tailored for families, such as an optional under-floor refrigerator and rear entertainment. Deliveries of the Onvo L60 are

    energy, electric-vehicles, battery-swapping, smart-cockpit, AI-assistant, Qualcomm-Snapdragon, over-the-air-updates
  • Meta unveils new smart glasses with a display and wristband controller

    Meta has introduced a new pair of Ray-Ban branded smart glasses called Ray-Ban Meta Display, featuring a built-in display on the right lens for apps, alerts, and directions. The glasses are controlled via a wristband called the Meta Neural Band, which detects subtle hand gestures using electromyography (EMG) to interpret signals between the brain and hand. The Neural Band offers 18 hours of battery life and is water resistant. Priced at $800, the Ray-Ban Meta Display will be available for purchase in a few weeks, marking Meta’s latest consumer smart glasses offering aimed at enabling users to perform tasks typically done on smartphones. The Ray-Ban Meta Display builds on the success of Meta’s original Ray-Ban Meta smart glasses and includes an onboard AI assistant, cameras, speakers, and microphones. Users can access Meta apps such as Instagram, WhatsApp, and Facebook, as well as view directions and live translations through the glasses’ display. While this product offers a simpler display

    IoT, smart-glasses, wearable-technology, Meta, augmented-reality, AI-assistant, gesture-control
  • Meta Connect 2025: What to expect and how to watch

    Meta Connect 2025, Meta’s flagship annual conference, will begin Wednesday evening with a keynote by CEO Mark Zuckerberg at the company’s Menlo Park headquarters, also available via free livestream. The event is expected to spotlight Meta’s new AI-powered smart glasses developed in partnership with Ray-Ban and Oakley. Leaks suggest the unveiling of “Hypernova” glasses featuring a heads-up display, cameras, microphones, and an AI assistant controlled by a wristband using hand gestures. Oakley’s new AI smart glasses, designed for athletes with a large unified lens and a single centered camera, are also anticipated. While Meta’s VR Quest headset lineup may not see major updates this year, the company is likely to touch on its Metaverse ambitions, though a significant new Metaverse product is expected closer to the end of 2026. This year’s Connect is particularly significant as it marks Meta’s first since launching its ambitious AI research division, MSL, headed by former Scale AI CEO Alexandr Wang

    IoT, smart-glasses, AI-wearables, Meta-Connect-2025, augmented-reality, wearable-technology, AI-assistant
  • Tesla's '2.5 gen' Optimus humanoid stumbles through its first demo

    Tesla recently showcased an updated version of its Optimus humanoid robot, dubbed "version 2.5," clarifying that this iteration is an intermediate upgrade rather than a new generation. The gold-colored robot demonstrated limited real-world capabilities in a brief demo featuring xAI’s Grok voice assistant. During the demo, the robot responded hesitantly to voice commands and walked slowly, with Elon Musk noting it was still cautious about spatial awareness and would eventually move faster. Despite these incremental improvements, the robot showed little evidence of advanced autonomy or dexterous manipulation, and the video ended before any object retrieval was attempted. Visually, Optimus 2.5 features a smoother, more cohesive exterior with rounded edges, better-covered joints, and fewer visible seams and wires, marking a shift toward a more human-like silhouette. These design refinements aim to enhance both the robot’s mobility and its readiness for human environments. Tesla continues to emphasize Optimus as a key part of its long-term strategy, pairing

    robot, humanoid-robot, Tesla-Optimus, robotics-demo, AI-assistant, automation, bipedal-robot
  • Humanoid robot receptionist adds tech spark at SCO summit 2025

    At the 2025 Shanghai Cooperation Organization (SCO) Summit in Tianjin, a Chinese humanoid robot named Xiao He served as a multilingual AI receptionist, assisting journalists and delegates by providing real-time information in Chinese, English, and Russian. Equipped with advanced emotional recognition, adaptive learning, and extensive knowledge databases, Xiao He facilitated smooth communication while maintaining cultural neutrality and factual accuracy. The robot guided attendees on summit logistics, such as media center locations and cultural activities, and even engaged in lighthearted interactions like serving ice cream to volunteers. Xiao He’s presence highlighted China’s growing emphasis on robotics as part of its technological and diplomatic strategy. Alongside Xiao He, China introduced Guanghua No. 1, an emotional AI humanoid capable of displaying humanlike emotions, underscoring the country's advancements in AI robotics. This demonstration at the SCO, coupled with China’s recent hosting of the World Humanoid Robot Games, signals the nation’s ambition to lead in the evolving robotics industry and integrate such technologies

    robot, humanoid-robot, AI-assistant, service-robot, emotional-recognition, adaptive-learning, multilingual-support
  • This Humanoid Robot Will Fold Your Laundry!

    The article discusses advancements in humanoid robots, specifically highlighting Figure’s Scaling Helix model, which enables robots to fold laundry. This development showcases the potential for assistant robots to perform everyday household tasks, offering a glimpse into a future where robotic helpers could significantly ease domestic chores. By demonstrating the ability to handle complex, delicate tasks like folding clothes, these robots represent a step forward in robotics technology and practical home automation. The article implies that such innovations could transform daily life by providing reliable, efficient assistance in routine activities, although further details on the robot’s capabilities and deployment are not provided.

    robot, humanoid-robot, automation, robotics-technology, household-robots, AI-assistant, robot-applications
  • First impressions of Alexa+, Amazon’s upgraded, AI-powered digital assistant

    The article provides a first-person account of testing Amazon’s upgraded digital assistant, Alexa+, which integrates generative AI to enhance its capabilities beyond traditional smart home controls. The author, a former heavy user of Alexa devices, explores whether Alexa+ can maintain its relevance in an era dominated by advanced AI chatbots like ChatGPT. Alexa+ launched in early 2025 and uses multiple AI models, including those from Anthropic, to deliver more intelligent, context-aware responses. It can access and process personal information such as schedules, preferences, and files, and even summarize video footage from Ring cameras. Amazon aims for Alexa+ to perform agentic tasks like booking reservations, ordering rides, and managing shopping lists with delivery, moving toward a more autonomous AI assistant in the home. In the initial phase of testing, the author set up Alexa+ on a new Echo Spot device, noting improvements in the setup process, such as QR code scanning and automatic Wi-Fi connection. The upgrade to Alexa+ was free and reversible

    IoT, smart-home, Alexa, AI-assistant, Amazon-Echo, generative-AI, voice-control
  • In a first, astronaut remotely commands Mars robot from space

    The article reports a historic milestone in space exploration where NASA astronaut Jonny Kim remotely commanded a team of robots on Earth from the International Space Station (ISS) as part of the German Aerospace Center’s (DLR) Surface Avatar experiment. Conducted at DLR’s ‘Earthly Mars’ site in Oberpfaffenhofen, the experiment involved navigating a simulated Martian landscape, collecting samples, and demonstrating advanced human-robot collaboration. The robotic team included DLR’s humanoid Rollin' Justin, ESA’s Interact rover, DLR’s four-legged robot Bert, and ESA’s four-legged robot Spot, which worked together to explore terrain and complete tasks efficiently within two and a half hours. Notably, the experiment featured a simulated failure scenario where Bert’s leg malfunctioned, and Kim used reinforcement learning to help the robot adapt a three-legged gait, showcasing problem-solving and teamwork. A significant innovation in the experiment was the integration of Neal AI, an AI chatbot assistant developed by DLR based

    robot, space-robotics, remote-robot-control, Mars-exploration, AI-assistant, robotic-teamwork, humanoid-robots
  • ByteDance bites into robotics with AI helper that cleans kitchens, folds laundry

    ByteDance, the parent company of TikTok, has developed an advanced robotic system designed to assist with household chores such as cleaning tables and hanging laundry. This system integrates the GR-3 model, a large-scale vision-language-action (VLA) AI that enables robots to understand natural language commands and perform dexterous tasks; a schematic sketch of this observation-plus-instruction-to-action loop follows this entry. Using a bimanual mobile robot called ByteMini, ByteDance demonstrated capabilities like hanging shirts on hangers, recognizing objects by size and spatial location, and completing complex tasks such as cleaning a dining table with a single prompt. Notably, the robot could handle items it was not explicitly trained on, showcasing adaptability beyond its training data. The GR-3 model was trained through a combination of large-scale image and text datasets, virtual reality human interactions, and imitation of real robot movements. ByteDance’s Seed department, established in 2023 to focus on AI and large language models, leads this robotics research. Despite ongoing geopolitical challenges—such as U.S. pressures on ByteDance

    robotics, artificial-intelligence, household-robots, vision-language-action-model, ByteDance, AI-assistant, smart-home-technology
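
    The ByteDance entry above describes a vision-language-action model that turns an observation plus a natural-language instruction into robot actions. The schematic loop below stands in for that idea with a trivial keyword policy; GR-3 itself is not public, and all names and action strings here are illustrative.

```python
# Schematic vision-language-action loop: an observation plus an instruction in,
# a short sequence of low-level actions out. The keyword policy below is a
# trivial stand-in for a VLA model; all names and action strings are invented.

from dataclasses import dataclass


@dataclass
class Observation:
    image_id: str               # placeholder for camera frames
    detected_objects: list[str]


def vla_policy(obs: Observation, instruction: str) -> list[str]:
    """Return an action chunk for a bimanual mobile robot (illustrative only)."""
    if "hang" in instruction and "shirt" in obs.detected_objects:
        return ["grasp:shirt", "grasp:hanger", "insert:hanger_into_shirt", "place:rack"]
    if "clean" in instruction:
        return [f"pick:{obj}" for obj in obs.detected_objects] + ["wipe:table"]
    return ["idle"]


def run_episode(obs: Observation, instruction: str) -> None:
    for step, action in enumerate(vla_policy(obs, instruction), start=1):
        print(f"step {step}: {action}")


if __name__ == "__main__":
    run_episode(Observation("frame_001", ["shirt", "hanger"]), "hang the shirt")
    run_episode(Observation("frame_002", ["cup", "plate"]), "clean the dining table")
```
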
  • Grok is coming to Tesla vehicles ‘next week,’ says Elon Musk 

    Elon Musk announced that Grok, the AI chatbot developed by his company xAI, will be integrated into Tesla vehicles as early as next week. This update follows the recent release of Grok 4, the latest flagship model of the chatbot. Musk has long hinted that Grok would serve as an AI assistant in Teslas, enabling drivers to interact conversationally with their cars and request various tasks. The integration is expected to be limited to newer Tesla models equipped with Hardware 3. The announcement came shortly after some issues arose with Grok’s behavior, including controversial statements that led to a temporary suspension of the chatbot on X, Musk’s social media platform. Despite these challenges, the integration into Tesla vehicles is moving forward, and Grok is also set to be the voice and AI brain for Tesla’s humanoid robot, Optimus. Insights from a hacker exploring Tesla’s firmware revealed multiple conversational modes for Grok, such as argumentative, conspiracy, and therapist, indicating a versatile AI experience for

    robot, IoT, artificial-intelligence, Tesla, autonomous-vehicles, AI-assistant, humanoid-robot
  • Kia EV3 Winning World Car of the Year is Old News but Good News - CleanTechnica

    The article highlights Kia’s strong commitment to electric vehicles (EVs), underscored by its recent win of the 2025 World Car of the Year (WCOTY) award for the Kia EV3. This victory marks Kia’s sixth World Car award since 2020, with only one awarded to an internal combustion vehicle, the Telluride. The EV3 impressed judges by successfully translating design and technological elements from the larger EV9 SUV into a smaller, more affordable model without simply downsizing it. Key features contributing to the EV3’s acclaim include its clean, modern design, spacious and flexible interior, a driving range of up to 600 kilometers (375 miles) on a single charge, and rapid charging capability (10% to 80% in about 30 minutes). Advanced driver assistance systems and over-the-air software updates further enhance its appeal. Kia’s EV success is part of a broader electrification strategy called “Plan S,” which leverages the Electric-Global Modular

    electric-vehicles, Kia-EV3, sustainable-mobility, battery-technology, fast-charging, AI-assistant, automotive-innovation
  • Meta unveils its Oakley smart glasses

    Meta has officially launched its new smart glasses in collaboration with Oakley, called the Oakley Meta HSTN. These glasses feature double the battery life of Meta’s previous Ray-Ban models and can capture 3K video. The limited-edition version with gold accents is priced at $499 and available for preorder starting July 11, while the rest of the collection begins at $399 and will be released later this summer. The glasses include a front-facing camera, open-ear speakers, microphones, and support for music playback, calls, and photo/video capture. They also integrate Meta AI, allowing users to interact via voice commands for tasks such as checking weather conditions or recording videos. The Oakley Meta HSTN glasses offer up to eight hours of typical use and 19 hours on standby, with fast charging that reaches 50% in 20 minutes. They come with a charging case providing an additional 48 hours of charge on the go. Available in six frame and lens color combinations,

    IoT, smart-glasses, wearable-technology, Meta, Oakley, AI-assistant, battery-life
  • Snap plans to sell lightweight, consumer AR glasses in 2026

    Snap has announced plans to release a new pair of lightweight, consumer-focused augmented reality (AR) smart glasses called Specs in 2026. Unlike its earlier, bulkier Spectacles launched in 2016, these new glasses will be smaller, lighter, and designed for everyday public use. Specs will feature see-through lenses that project graphics into the user’s field of view and include an AI assistant capable of processing both audio and video. The glasses will leverage Snap’s SnapOS developer ecosystem, allowing millions of existing AR experiences (Lenses) from Snapchat and previous Spectacles to be compatible with the new device. The announcement comes amid growing competition in the AR glasses market from major players like Meta and Google, both of which have recently unveiled or plan to unveil their own AR products. Snap aims to differentiate itself through its robust developer platform and AI capabilities, including new features like a Depth Module API for anchoring AR graphics in 3D space and partnerships with companies like Niantic Spatial to build AI-powered world maps. However, key details such as pricing, exact design, and sales strategy for Specs remain undisclosed. While Snap is optimistic about making AR glasses practical and appealing for consumers, the market’s response and the device’s affordability will be critical to its success.

    IoT, augmented-reality, smart-glasses, AI-assistant, wearable-technology, SnapOS, AR-applications