Yes, you can trust a DNN to drive your car.
I often hear the argument that you cannot trust a DNN to drive your car, since the DNN is a black box and cannot explain how it arrived at a certain decision.
My counter-argument revolves around other examples where we have trusted processes (even medical ones!) to great success without understanding the mechanism involved.
Acetaminophen - The mechanism of action isn't fully understood, yet we use it effectively to manage fever.
Drug repositioning - Several drugs are created for one purpose but are then found to produce an unexpected effect during clinical trials. The drug then gets marketed for that unexpected side effect. E.g.: Rogaine (Minoxidil), initially designed to treat hypertension, was found to cause hair growth; today it's prescribed to treat hair loss. Viagra (Sildenafil) is arguably the most popular example of drug repositioning. In most of these cases, the mechanism of action is only understood retrospectively, after the drug's success. The mechanism of action of Rogaine is still not understood.
ECT (Electroconvulsive Therapy) - It has also been used successfully, yet its mechanism of action remains elusive.
Fire - We have tamed and controlled fire for hundreds of thousands of years, while theories like Phlogiston (https://en.wikipedia.org/wiki/Phlogiston_theory) persisted until the late 18th century.
Great point! I think the argument for correctness stems from the close historical ties between computer scientists and mathematicians, and from a general math-and-science envy[1]. This makes the craft aspects of ML and CS difficult to digest.
Explainability in ML and correctness proofs in software systems are great to have, but should only be required in very specific scenarios.
These fields would grow faster if they let go of such concerns.