
A recent incident involving a self-driving car in California has renewed concerns about the safety of testing these vehicles on public roads. When another vehicle abruptly moved into the self-driving car's lane, the test driver took control to avoid a crash, and in doing so hit and injured a motorcyclist instead. Waymo, the car's manufacturer, released a statement saying that the vehicle's software would have avoided the crash on its own had the test driver not intervened. Unfortunately, this was not the first time that self-driving cars have raised public safety concerns in the Southwestern United States.

This March, a self-driving car being tested by Uber struck and killed a pedestrian who was crossing the street. It was eventually revealed that the collision occurred due to a combination of Uber's software failing to detect the woman and the test driver using her phone instead of watching the road. The story made national headlines and prompted many states to halt self-driving car testing altogether.

In addition to collisions, there is also concern over whether Artificial Intelligence (AI) will be able to incorporate ethical decision-making on the road, especially in life-or-death situations. According to Edmond Awad, a postdoctoral associate at the Massachusetts Institute of Technology (MIT), it is important for driverless car manufacturers to program AI technology with the capability to make ethical decisions in these situations. Awad illustrates what this may look like with an example: “Say a car is driving in the right lane, and there’s a truck in the lane to the left and a bicyclist just to the right. The car might edge closer to the truck to make sure the cyclist is safer, but that would put more risk on the occupant of the car. Or it could do the opposite. Whatever decision the algorithm makes in that scenario would be implemented in millions of cars.”

As of last month, Nevada is one of only five states that permit on-road self-driving vehicle testing. That fact, combined with the number of major companies currently pursuing self-driving initiatives (e.g., Audi, BMW, Ford, General Motors, Mercedes, Nissan, Tesla, Toyota, and Volvo), means that safety should be manufacturers' top priority when testing these vehicles. Moving forward, only time will tell whether manufacturers are more invested in keeping the public out of harm's way than in racing to beat their competitors to market.
