Kulraj Smagh, head of Open Innovation Group at EY, shares a human perspective on the technological, legal and moral developments of autonomous vehicles.
In 1982, the fictional Knight Industries Two Thousand, or KITT, sidekick to ‘The Hoff’ in Knight Rider, was one of the very first visions of what an artificially intelligent autonomous vehicle might become. Now, in 2016, that vision is somewhat realised, albeit minus the hairdo, the backchat and the bag of 007-esque trickery.
In the last couple of months we have observed some ground-breaking developments. In late August, the very first driverless taxis hit the streets of Singapore for a trial headed up by an MIT spin-off and its accompanying team of Einstein-level IQs. We also saw very public declarations from some of the biggest players in the automotive game of plans to bring the autonomous vehicle to the masses, as early as 2019 in some cases. And finally, we saw Musk’s ‘Master Plan, Part Deux’ describing, in some form, the method he will adopt to make autonomous vehicles truly mainstream and self-sufficient in energy. This future gives rise to lots of questions around legality, morality and techno-ethics, which are worth digging into a little deeper.
There are legions of designers, developers, data scientists, AI specialists, engineers and Orcs hammering away to bring these autonomous vehicles to life. If you are anything like me, you are imagining a scene similar to the one in The Lord of the Rings, where in the deepest caverns of Middle Earth the Orcs are secretly creating the Uruk-hai, away from the prying eyes of man.
The artificial intelligence serum with which these vehicles will be imbued will be multi-flavoured, drawing on spatial, logical, kinaesthetic and, quite possibly, existential intelligence, amongst a gaggle of other areas.
The individual or corporation coding the algorithms for the AI to self-learn and perform its function will also be coding a little of themselves in there. Whether consciously or subconsciously, an element of their being, their culture and even their bias will find its way into the code.
Coding the AI to react to safety-critical situations, where it must decide between reducing harm to the passenger or the pedestrian, to a toddler or to an octogenarian, manifests in the quaintly named ‘trolley problem’. This is being studied by several groups, with a range of responses, solutions and outcomes being theorised by the greatest minds in techno-ethics.
The irony is that it is most likely the AI itself that will resolve the issue: the ability of the deep-learning algorithms behind the AI to self-learn and recalibrate towards the optimum result is part of their DNA. The conversion of acceptable morality into code will be done by the AI.
If this comes to fruition, we will see a future where people are no longer injured by cars. Musk believes his product will only escape beta-testing when it is statistically 10 times safer than human drivers. The most fascinating part of the story is that the machines need not be perfect, just better than humans. Their ability to share knowledge and experience with each other in nanoseconds is unparalleled. That this will be done through vast deep-learning neural nets self-generating algorithm updates is astounding; that they would share a hive mind, a bit like the Borg in Star Trek, with the Queen being a clever clique of entrepreneurs, PhDs and MBAs, is even more amazing.
But with so many players, someone will have to set the quality standard that defines what is acceptable, translating some of those AI algorithms back into text that we humans can interpret into policy and regulation. Will the highways of the future become ‘autonomous only’, allowing passage only to those who meet a particular algorithm standard? Or will insurers only back a certain algorithmic and cryptographic cybersecurity standard?
The original equipment manufacturers (OEMs) are largely undecided on how to play the legalities surrounding the actions of their automaton products, as the area remains grey, with no definitive policy, regulation or case law. The US government has recently gone as far as setting out a 15-point set of ‘safety assessment’ guidelines for autonomous vehicles, in order to guide development and experimentation. However, there are still many questions to be answered from a moral perspective about who is responsible.
Is it the vehicle owner, the manufacturer, the insurer, the infrastructure provider, the data provider, or the data network provider? In a world where product manufacturers take on liability, what would happen if the owner refused to allow an update, or the car couldn’t update its algorithm set because it had no network connection? Would the owner then be responsible if an incident occurred?
When Karl Marx described the post-revolution world, he sketched the average day along the lines of fishing in the morning, grazing cattle in the afternoon and chilling out in the evening; there was no ‘work’.
If we are to live in a world where, as an array of futurists predict, one in 10 Americans may lose their job to a driverless vehicle, or 30% of all jobs will be overtaken by robots in the next 10 years, then the Marxist vision does not seem too far off. What else will people do in the post-automaton world? Will we be lost in realms of perpetual VR, created by deep-learning machines and their neural nets to stop us from getting bored? Or are we setting ourselves up for Utopia?
The truth is, only time will reveal the reality that befalls us. Control of this automaton future is an illusion. The future is a competition that will likely be won by the machines we create. Our blessing is the ability to imagine the multiverse and its infinite futures, to realise the best version of it, and to keep pushing the boundaries of humankind and our creations. Our curse is to think of the worst possible future, to polarise the array of outcomes, to demonise that which we don’t understand and to lose touch with reality in the windmills of our mind.