There is an implicit assumption that regulation improves safety. On closer observation, though, there does not seem to be a simple, linear relationship between regulation and safety. If you're building safety systems, the burden of regulation slows down deployment and can reduce the cumulative safety conferred (the total area under a safety-vs-time curve).
Medicine is an existing example. Has it slowed down due to increased regulation? Has that reduced cumulative safety?
We tend to over-regulate because the costs of over-regulation aren't easily visible, while the costs of new deployments are much more apparent. The costs of the seen vs. the unseen.
An extreme example of this bias: a new medicine can cure a fatal disease, saving 99 out of every 100 patients, but 1 in 100 shoot themselves because of the medication. Would you take it? Would you deploy it?
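For concreteness, here is that arithmetic as a tiny Python sketch (the numbers are the hypothetical ones from the thought experiment, not real data):

```python
# Hypothetical numbers from the thought experiment above -- not real data.
patients = 100            # all have the fatal disease
deaths_if_blocked = 100   # unseen cost: the disease runs its course
deaths_if_deployed = 1    # seen cost: 1 in 100 shoots themselves; 99 are cured

net_lives_saved = deaths_if_blocked - deaths_if_deployed
print(f"Deploying saves {net_lives_saved} net lives per {patients} patients,")
print("yet the 1 visible death weighs more than the 99 invisible ones.")
```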
Where in our lives could we be over-regulating due to this bias?
Being aware of this seen-vs-unseen bias may help us place appropriate controls on regulation in our public and private spheres.
AI and self-driving are two emerging industries where these biases will play out.
Musk seems to be a vocal proponent1 and implementer2 of this concept, for example in his decision to deploy Autopilot "early".
There is a related but different concept of weighing the known vs. the unknown. That is not what is being discussed here. Unknown effects from a new medicine may cause more damage than the known effects; this is a valid concern, and in its extreme the issue is not computable.
The seen-vs-unseen trade-off operates entirely within the domain of the known. In the medication example above, only the known positive (lives saved) and the known negative (patients shooting themselves) were discussed.
"
•A new $800,000 four-seat airplane or $5 million turboprop won’t have 1/100th of the intelligence of a $500 DJI drone
•A $27 million certified-in-2018 business jet has nearly every knob, button, and dial as a 1944 B-29. Why not one button “configure yourself for takeoff?”
•There is no such thing as regulatory error.
"
This example exemplifies what you are conveying. The airline regulatory framework is so stringent that it takes more than a decade for new technology to get certified and become commonplace, so all planes operate with decades-old tech. Simple, obvious technology like terrain collision avoidance is missing from contemporary planes.
On the other hand, some of the best-in-class drones today are practically impossible to crash.
Philip Greenspun, who gave these lectures, makes an interesting argument: the entire FAA book of regulations could be replaced with a single mandate requiring you to carry insurance above a certain amount to fly a given plane.
As a result, actuaries would do a better job of figuring out how safe your plane is, and your premium would price that safety directly.
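Here is a toy sketch of how such a mandate would price safety. The linear expected-loss model, the load_factor margin, and all the figures are my assumptions for illustration; they are not part of Greenspun's proposal:

```python
# Toy actuarial pricing: premium ~ expected loss = P(accident) * cost.
# All figures below are made up for illustration.

def annual_premium(p_accident: float, expected_cost: float,
                   load_factor: float = 1.2) -> float:
    """Expected annual loss plus the insurer's margin (load_factor)."""
    return p_accident * expected_cost * load_factor

# A plane with decades-old avionics vs. one with modern collision avoidance.
legacy = annual_premium(p_accident=1e-3, expected_cost=5_000_000)
modern = annual_premium(p_accident=2e-4, expected_cost=5_000_000)

print(f"legacy avionics: ${legacy:,.0f}/yr")   # $6,000/yr
print(f"modern avionics: ${modern:,.0f}/yr")   # $1,200/yr
```

The point is the feedback loop: safer tech lowers the premium as soon as an actuary believes the data, with no decade-long certification lag before the incentive kicks in.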
The current approach is very prescriptive: you must do all of these things exactly this way for us to certify your plane. This can get you stuck in a local minimum, because any new tech you try will initially reduce your safety.
ML is an interesting example of how we got better results as we became less prescriptive. An optimizer and appropriate tests/incentives go a long way, be it for machines or humans 😄
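As a minimal sketch of that idea, suppose we prescribe only a loss function (the "test/incentive") and let plain gradient descent find the "how"; the objective here is made up purely for illustration:

```python
# We prescribe only the objective; the optimizer is free to find any
# parameters that pass it. Made-up one-dimensional example.

def loss(w: float) -> float:
    # The "regulation": judge the outcome, not the method.
    return (w - 3.0) ** 2

def grad(w: float) -> float:
    return 2.0 * (w - 3.0)

w = 0.0                      # an arbitrary starting point
for step in range(100):
    w -= 0.1 * grad(w)       # the optimizer chooses the path

print(f"found w = {w:.4f}, loss = {loss(w):.6f}")  # converges near w = 3
```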
Similarly, declarative programs, by specifying what they want, allow the language to evolve and optimize how that is actually implemented. Imperative programs, by specifying how they want something done, constrain this avenue for optimization.1
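A small contrast in Python (my own example, not from the footnote): the declarative form states what is wanted, leaving the engine free to change how it computes it.

```python
data = [1, 2, 3, 4, 5, 6]

# Imperative: we spell out *how* -- the iteration order, the temporary,
# the accumulation -- leaving the runtime little room to re-plan the work.
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Declarative: we state *what* we want; a smarter engine could, in
# principle, fuse, parallelize, or push this down to a database
# (which is exactly what SQL query planners do with SELECT statements).
total_decl = sum(x * x for x in data if x % 2 == 0)

assert total == total_decl
print(total_decl)  # 56
```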