RIEM News

Articles tagged with "traffic-safety"

  • Tesla Removed Autopilot. The Data Says Safety Wasn’t Lost - CleanTechnica

    The article discusses Tesla’s recent removal of Autopilot and Autosteer as standard features in North America, initially perceived by the author as a potential step back for safety and a move to push the Full Self-Driving subscription. While Autopilot has been widely regarded as a safety-enhancing feature that reduces driver workload and smooths control, the author emphasizes that such assumptions require rigorous testing against large-scale, independent data rather than driver perception or small datasets. Traffic safety outcomes like fatalities are extremely rare events (about one per 100 million miles), making it difficult to draw confident conclusions from limited data due to the “law of small numbers,” where small samples produce misleading results dominated by randomness (the simulation sketch after this entry’s tags illustrates the point). The author highlights the challenge of evaluating Autopilot’s safety using Tesla’s own published statistics, which compare crash rates with and without Autopilot engagement. These statistics are not independently verified and lack normalization for important factors such as road type, driver behavior, and exposure context.

    robot, autonomous-vehicles, Tesla-Autopilot, driver-assistance-systems, traffic-safety, self-driving-technology, automotive-robotics
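
    The rare-event point above is easy to see in a quick simulation. The sketch below is hypothetical and not based on Tesla’s data: it assumes a true fatality rate of one per 100 million miles (the article’s figure) and shows how widely the observed rate swings across fleets that each drive the same number of miles.

        # Hypothetical illustration of the "law of small numbers" for rare crash
        # outcomes; the rate is the article's figure, the fleets are made up.
        import numpy as np

        TRUE_RATE = 1 / 100_000_000          # assumed true fatality rate per mile
        rng = np.random.default_rng(0)

        for miles in (10_000_000, 100_000_000, 1_000_000_000, 10_000_000_000):
            # Draw 1,000 hypothetical fleets, each driving `miles` miles at the same true rate.
            crashes = rng.poisson(miles * TRUE_RATE, size=1_000)
            rates = crashes / miles * 100_000_000   # observed fatalities per 100M miles
            print(f"{miles / 1e6:>8.0f}M miles: observed rate ranges "
                  f"from {rates.min():.1f} to {rates.max():.1f} per 100M miles")

    At 10 million or even 100 million miles, most simulated fleets see zero or one fatality, so their observed rates scatter far from the true value; only at billions of miles do the estimates tighten, which is the author’s argument for large-scale, independent data.
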
  • US grants Tesla more time in Full Self-Driving traffic probe review

    The U.S. National Highway Traffic Safety Administration (NHTSA) has granted Tesla a five-week extension, moving the deadline to February 23 for responding to a federal investigation into alleged traffic law violations involving its Full Self-Driving (FSD) system. Tesla requested the extension to thoroughly review over 8,000 internal records related to incidents potentially linked to FSD use, citing the complexity and volume of documents as well as the strain of managing multiple concurrent federal probes, including investigations into delayed crash reporting and malfunctioning exterior door handles. The NHTSA’s broader review aims to determine whether Tesla’s FSD-equipped vehicles comply with traffic laws and safety standards, with 62 consumer complaints and additional crash reports under analysis. This regulatory development coincides with Tesla’s strategic shift in how customers access FSD: CEO Elon Musk announced that after February 14, Tesla will discontinue outright sales of the software, offering it exclusively via a monthly subscription model.

    robot, autonomous-vehicles, Tesla, full-self-driving, driver-assistance-technology, traffic-safety, subscription-model
  • Feds find more complaints of Tesla’s FSD running red lights and crossing lanes

    The National Highway Traffic Safety Administration (NHTSA) has identified at least 80 instances where Tesla’s Full Self-Driving (FSD) software allegedly violated traffic rules by running red lights or crossing into incorrect lanes. This marks an increase from around 50 violations reported when the agency opened its investigation in October 2025. The complaints include 62 from Tesla drivers, 14 submitted by Tesla itself, and four from media reports. NHTSA’s Office of Defects Investigation (ODI) is examining whether Tesla’s software can reliably detect and respond to traffic signals, signs, and lane markings, and whether it provides adequate warnings to drivers. Tesla’s responses to these inquiries are due by January 19, 2026. The investigation also seeks detailed data from Tesla, including the number of vehicles equipped with FSD, the frequency of software engagement, and any related customer complaints, including those from fleet operators or legal proceedings.

    robot, autonomous-vehicles, Tesla-FSD, driver-assistance-software, traffic-safety, NHTSA-investigation, vehicle-automation
  • Feds ask Waymo about robotaxis repeatedly passing school buses in Austin

    The National Highway Traffic Safety Administration (NHTSA) has requested detailed information from Waymo regarding its self-driving system and operations after the Austin School District reported 19 instances in 2025-26 in which Waymo’s robotaxis illegally passed stopped school buses. This inquiry follows an ongoing investigation initiated in October 2025 by NHTSA’s Office of Defects Investigation (ODI), triggered by footage showing a Waymo autonomous vehicle maneuvering dangerously around a stopped school bus in Atlanta. Waymo acknowledged the incident, attributing it to limited visibility caused by the bus partially blocking a driveway, and subsequently issued a software update aimed at improving safety. Despite this, the Austin School District reported continued violations, including at least five occurrences after the November 17 software update. Waymo maintains that safety is its top priority and claims its robotaxis have significantly reduced injury-related crashes compared to human drivers. The company asserts that its software updates have meaningfully improved performance, surpassing human driver safety in this area.

    robot, autonomous-vehicles, Waymo, self-driving-technology, traffic-safety, software-updates, NHTSA
  • Car Crashes Are A Public Health Crisis. Autonomous Cars Are The Cure. - CleanTechnica

    The article highlights the severe public health crisis posed by motor vehicle accidents in the United States, where nearly 40,000 people die and about 6 million collisions occur annually. Neurosurgeon Jonathan Slotkin, who frequently treats crash victims, analyzed safety data from Waymo, a leading autonomous vehicle company that uniquely publishes comprehensive accident reports. His analysis of nearly 100 million driverless miles across four U.S. cities through mid-2025 found that Waymo’s self-driving cars experienced 91% fewer serious-injury or fatal crashes and 80% fewer injury-causing crashes overall than human drivers on the same roads (the sketch after this entry’s tags shows how such mileage-normalized comparisons are computed). Notably, injury-causing crashes at intersections, a common site of deadly accidents, were 96% lower with Waymo vehicles. Slotkin argues that autonomous vehicles represent a major public health breakthrough because they strictly follow traffic rules, maintain constant awareness, and avoid the distractions and high-speed conflicts that often lead to fatal crashes, while acknowledging that the technology is not flawless, citing minor incidents.

    robot, autonomous-vehicles, self-driving-cars, Waymo, traffic-safety, AI-in-transportation, public-health-technology
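
    For a rough sense of how such percentage reductions are derived, the sketch below computes a mileage-normalized crash-rate comparison. The counts are made up for illustration and are not Waymo’s published figures.

        # Made-up counts, not Waymo's data: how comparing per-mile crash rates
        # yields a "percent fewer crashes" figure.
        def percent_fewer(av_crashes: int, av_miles: float,
                          human_crashes: int, human_miles: float) -> float:
            """Percent reduction in the AV fleet's per-mile crash rate vs. the human benchmark."""
            av_rate = av_crashes / av_miles
            human_rate = human_crashes / human_miles
            return (1.0 - av_rate / human_rate) * 100.0

        # Example: 20 injury crashes over 100 million driverless miles vs. a human
        # benchmark of 1,000 injury crashes over 1 billion miles on comparable roads.
        print(f"{percent_fewer(20, 100e6, 1_000, 1e9):.0f}% fewer injury crashes per mile")
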
  • Tesla’s ‘Full Self-Driving’ software under investigation for traffic safety violations

    The National Highway Traffic Safety Administration (NHTSA) has launched a formal investigation into Tesla’s Full Self-Driving (FSD) software following over 50 reports alleging that the system caused vehicles to run red lights or enter incorrect lanes, with four incidents resulting in injuries. This probe marks one of the first targeted examinations of Tesla’s FSD driver-assistance technology. The investigation comes shortly after Tesla released a new FSD version, which reportedly incorporates data from its limited robotaxi pilot in Austin, Texas. The NHTSA’s Office of Defects Investigation (ODI) has received numerous complaints and media reports detailing failures such as FSD not stopping at red lights, crossing double-yellow lines, entering opposing traffic lanes, and making improper turns despite clear signage. Some incidents were concentrated at a specific intersection in Joppa, Maryland, prompting Tesla to take corrective action there. This investigation follows previous NHTSA inquiries into Tesla’s Autopilot system, including a probe closed in April 2024.

    robot, autonomous-vehicles, Tesla-Full-Self-Driving, driver-assistance-software, traffic-safety, NHTSA-investigation, autonomous-driving-technology
  • Driverless cars can now make better decisions, new technique validated

    Researchers at North Carolina State University have validated a new technique to improve moral decision-making in driverless cars by applying the Agent-Deed-Consequences (ADC) model. This model assesses moral judgments based on three factors: the agent’s character or intent, the deed or action taken, and the consequences of that action (a simplified scoring sketch follows this entry’s tags). The study involved 274 professional philosophers who evaluated a range of low-stakes traffic scenarios, focusing on everyday driving decisions rather than high-profile ethical dilemmas like the trolley problem. The researchers aimed to collect quantifiable data on how people judge the morality of routine driving behaviors to better train autonomous vehicles (AVs) to make ethical choices. The study found that all three components of the ADC model significantly influenced moral judgments, with positive attributes in the agent, deed, and consequences leading to higher moral acceptability. Importantly, these findings were consistent across different ethical frameworks, including utilitarianism, deontology, and virtue ethics, suggesting a broad consensus on what constitutes moral behavior in traffic.

    robot, autonomous-vehicles, AI-ethics, driverless-cars, moral-decision-making, traffic-safety, AI-training
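
    The ADC structure lends itself to a simple rating-and-aggregation sketch. The encoding and weights below are hypothetical illustrations, not the study’s methodology: each scenario gets an agent, deed, and consequence rating, and the three are combined into an overall acceptability score.

        # Hypothetical sketch of an Agent-Deed-Consequences (ADC) style judgment:
        # each factor is rated on [-1, 1] and combined into an acceptability score.
        # Ratings and weights are illustrative, not from the NC State study.
        from dataclasses import dataclass

        @dataclass
        class Scenario:
            agent: float        # character/intent of the driver or AV (-1 bad .. +1 good)
            deed: float         # the action taken, e.g. rolling a stop vs. stopping fully
            consequence: float  # the outcome, e.g. a near-miss vs. no effect on others

        def moral_acceptability(s: Scenario, w_agent: float = 1.0,
                                w_deed: float = 1.0, w_cons: float = 1.0) -> float:
            """Weighted average of the three ADC components; higher is more acceptable."""
            total = w_agent + w_deed + w_cons
            return (w_agent * s.agent + w_deed * s.deed + w_cons * s.consequence) / total

        # A courteous driver (good agent) rolling a stop sign (questionable deed)
        # with no one around (neutral consequence):
        print(moral_acceptability(Scenario(agent=0.8, deed=-0.4, consequence=0.0)))
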
  • Autonomous cars that 'think' like humans cut traffic risk by 26%

    Researchers at the Hong Kong University of Science and Technology (HKUST) have developed a novel cognitive encoding framework that enables autonomous vehicles (AVs) to make decisions with human-like moral reasoning and situational awareness. Unlike current AV systems that assess risks in a limited pairwise manner, this new approach evaluates multiple road users simultaneously, prioritizing vulnerable pedestrians and cyclists through a concept called “social sensitivity” (a simplified risk-ranking sketch follows this entry’s tags). The system ranks risks based on vulnerability and ethical considerations, allowing AVs to yield or stop for pedestrians even when traffic rules permit movement, and anticipates the impact of its maneuvers on overall traffic flow. Tested in 2,000 simulated traffic scenarios, the framework demonstrated a 26.3% reduction in total traffic risk, with pedestrian and cyclist risk exposure dropping by 51.7%, and an 8.3% risk reduction for the AVs themselves. Notably, these safety improvements were achieved alongside a 13.9% increase in task completion speed. The system’s adaptability allows it to be tailored to different regional driving norms and legal frameworks, enhancing its potential for global implementation. This breakthrough addresses critical limitations in current autonomous driving technology, promising safer streets and more socially responsible AV behavior in complex, real-world environments.

    robot, autonomous-vehicles, artificial-intelligence, traffic-safety, human-like-decision-making, social-sensitivity, risk-assessment
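
    One way to picture the “social sensitivity” idea is a vulnerability-weighted risk ranking over all nearby road users rather than a series of pairwise checks. The weights and risk formula below are assumptions for illustration only, not HKUST’s published framework.

        # Hypothetical sketch: rank nearby road users by a vulnerability-weighted
        # risk score so the planner yields to the highest-risk users first.
        # The weights and the risk formula are illustrative, not the HKUST framework.
        from dataclasses import dataclass

        VULNERABILITY = {"pedestrian": 3.0, "cyclist": 2.5, "motorcycle": 2.0, "car": 1.0}

        @dataclass
        class RoadUser:
            kind: str
            time_to_conflict_s: float  # predicted time until this user conflicts with the AV's plan

        def social_risk(user: RoadUser) -> float:
            # Nearer conflicts and more vulnerable users score higher.
            proximity = 1.0 / max(user.time_to_conflict_s, 0.1)
            return VULNERABILITY[user.kind] * proximity

        users = [RoadUser("car", 2.0), RoadUser("pedestrian", 3.0), RoadUser("cyclist", 6.0)]
        for u in sorted(users, key=social_risk, reverse=True):
            print(f"{u.kind:<10} risk={social_risk(u):.2f}")

    Ranked this way, the pedestrian outranks the nearer car, which mirrors the article’s description of yielding to vulnerable users even when traffic rules would permit the AV to proceed.
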
  • Waymo Robotaxis Are Much Safer — Part Deux

    robot, Waymo, robotaxis, autonomous-vehicles, traffic-safety, crash-reduction, pedestrian-safety