Autonomous Vehicles, Accidents, Liability and Regulation
- Arif Gökçek
- Oct 6
- 3 min read

Autonomous vehicles are shaping the future of transportation with the promise of reducing human error. Although they have the potential to significantly decrease accident rates, zero risk is impossible. This brings us to a critical question: when an autonomous vehicle is involved in an accident, who is responsible? The manufacturer, the software developer, the user, or the public authority managing the road? For this reason, it is essential to establish legal, ethical, and technical frameworks that work in harmony.
Today, countries around the world are taking different steps on this issue. The United Nations' UNECE regulations, such as Regulation No. 157 on Automated Lane Keeping Systems, aim to create common global ground by defining type-approval criteria for automated driving functions. The European Union, through its General Safety Regulation, mandates safety features in vehicles and promotes testing grounds and "regulatory sandbox" schemes. Germany has already built the legal infrastructure allowing Level 4 vehicles to operate in traffic without human intervention in designated areas. In the United States, NHTSA takes a more flexible approach through voluntary guidelines and state-level permissions. For many countries, the situation remains limited to pilot projects and temporary regulations. This fragmented structure creates significant uncertainty, particularly in cross-border operations and compensation proceedings.
When it comes to liability, the picture becomes even more complex. In the event of an accident, the manufacturer can be held directly responsible for sensor or software faults. Software providers are accountable for algorithmic decision-making processes and the integrity of training data, while users can be found at fault for failing to perform updates or operating the vehicle outside its intended limits. Fleet managers—especially in robotaxi applications—may bear responsibility for operational negligence. Public authorities, meanwhile, are directly tied to infrastructure and road safety; if poor markings or insufficient V2X infrastructure contributed to an accident, they too can be included in the chain of liability.
To resolve this complex landscape, technical evidence must be reliable and transparent. Event data recorders (EDRs), sensor outputs, software version histories, and telemetry logs are crucial for determining who did what, and when. However, ensuring that such data cannot be manipulated in the manufacturer's favor is not only a matter of safety but of justice. Black box data should therefore not be accessible only to the manufacturer; it must be stored in a way that allows independent institutions to audit and verify it. UNECE and EU regulations are making progress on standards for recording and accessing such data, but there is still no globally binding audit methodology. The analytical methods used in post-accident investigations must be transparent, accessible, and verifiable to ensure legal certainty.
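To make the auditability requirement concrete, here is a minimal sketch of one well-known technique for tamper-evident logging: each record commits to the hash of the previous one, so any retroactive edit breaks the chain. The `EventLog` class, field names, and sample events are illustrative assumptions, not part of any UNECE or EU standard.

```python
import hashlib
import json

class EventLog:
    """Append-only log in which each record commits to the previous
    record's hash, so retroactive edits are detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self.prev_hash = self.GENESIS

    def append(self, event: dict) -> None:
        record = {"event": event, "prev_hash": self.prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self.prev_hash = record["hash"]
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the whole chain; False means some record was altered."""
        prev = self.GENESIS
        for record in self.records:
            payload = json.dumps(
                {"event": record["event"], "prev_hash": record["prev_hash"]},
                sort_keys=True,
            ).encode()
            if record["prev_hash"] != prev:
                return False
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True

log = EventLog()
log.append({"t": 12.40, "speed_kmh": 48, "mode": "automated"})
log.append({"t": 12.42, "action": "emergency_braking"})
assert log.verify()

log.records[0]["event"]["speed_kmh"] = 30  # attempted manipulation
assert not log.verify()
```

Because verification only recomputes hashes, an independent institution can run it without trusting the party that produced the log; in practice, periodically depositing the latest chain hash with a regulator or notary would strengthen the guarantee further.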
The ethical dimension is even more complex than the technical and legal debates. In an unavoidable crash scenario, should the vehicle prioritize the safety of its passengers or that of pedestrians? MIT’s Moral Machine project revealed that answers to this question vary greatly across cultures. Some societies favor protecting pedestrians, while others prioritize the vehicle’s occupants. Therefore, in practice, regulators tend to prefer approaches grounded in adherence to traffic laws and transparency: the vehicle’s behavior must be pre-tested, documented, and approved by regulatory authorities.
Among the proposed solutions, hybrid models that combine product liability with operational negligence are gaining traction. In this model, if the fault stems from a design or software issue, the manufacturer is held accountable; if it results from maintenance or operational negligence, the user or operator becomes liable. The insurance sector is also adapting to this new era—policies covering manufacturer liability or fleet-based insurance packages are becoming increasingly important.
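The allocation logic of such a hybrid model can be pictured as a simple decision rule. The sketch below is purely illustrative: the fault categories and their mapping to liable parties are assumptions made for this example, not an encoding of any actual statute.

```python
# Illustrative only: categories and mappings are assumed for this example
# and do not reflect the rules of any specific jurisdiction.
LIABILITY_MAP = {
    "design_defect": "manufacturer",
    "software_defect": "manufacturer / software provider",
    "skipped_update": "user",
    "use_outside_odd": "user",  # ODD: operational design domain
    "poor_maintenance": "fleet operator",
    "defective_infrastructure": "public authority",
}

def allocate_liability(fault_findings: list[str]) -> dict[str, list[str]]:
    """Group post-accident fault findings by the presumptively liable party."""
    allocation: dict[str, list[str]] = {}
    for fault in fault_findings:
        party = LIABILITY_MAP.get(fault, "undetermined (further investigation)")
        allocation.setdefault(party, []).append(fault)
    return allocation

print(allocate_liability(["software_defect", "skipped_update"]))
# {'manufacturer / software provider': ['software_defect'], 'user': ['skipped_update']}
```

Real cases will rarely be this clean, of course; the point is that a hybrid regime gives investigators a defined mapping to argue over rather than an open question.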
On the technical side, safety relies on combining multiple sensor modalities, redundant systems, simulation-based verification, and explainable AI. The security of software updates, protection against cyberattacks, and data transparency are no longer purely engineering concerns; they have become legal obligations.
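As a small illustration of what redundancy means in practice, consider a classic median-voting scheme across three independent distance sensors: one faulty reading is outvoted instead of being passed on to the planner. The sensor names and plausibility bounds below are assumed values for the sketch, not figures from any standard.

```python
import statistics

# Assumed plausibility bounds for a forward distance measurement, in metres.
MIN_RANGE_M, MAX_RANGE_M = 0.0, 250.0

def fuse_distance(readings: dict[str, float]) -> float:
    """Median-vote over redundant range sensors, discarding implausible values.

    With three independent sensors (e.g. lidar, radar, camera), a single
    faulty reading is outvoted; if fewer than two plausible readings remain,
    fail safe by raising instead of guessing.
    """
    plausible = [v for v in readings.values() if MIN_RANGE_M <= v <= MAX_RANGE_M]
    if len(plausible) < 2:
        raise RuntimeError("insufficient redundant readings; trigger fallback")
    return statistics.median(plausible)

# A stuck radar reporting 300 m is filtered out; the remaining pair agrees.
print(fuse_distance({"lidar": 41.8, "radar": 300.0, "camera": 42.5}))  # 42.15
```

The design choice worth noting is the explicit failure path: when redundancy degrades below a safe level, the function refuses to produce a number, which is exactly the kind of behavior regulators expect to see documented and tested.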
In conclusion, the safety promise of autonomous vehicles can only be realized when law, technology, and ethics progress together. Neither manufacturers, nor users, nor governments alone can bear this responsibility. Without joint coordination mechanisms, the disputes that arise after an accident could be more destructive than the technology itself. To earn public trust, regulatory harmonization and transparency by both manufacturers and public authorities are essential.
The future of autonomous vehicles will not only depend on how well algorithms make decisions, but on the frameworks within which we accept and supervise those decisions.



