Trust in tech governance: the one success factor common to all 4IR technologies
Consider Tesla and self-driving cars, or Facebook and Cambridge Analytica. While policymakers, and much of society, continue to see Tesla cars and Facebook as appealing and important products, recent incidents triggered broad calls for review and scrutiny of their respective governance. A common perception was that, beyond the technologies themselves, it was the rules, procedures, requirements and institutions that had failed to protect data privacy, human health and even human lives.
The following three recommendations can help those crafting and shaping tech governance for emerging 4IR technologies:
Learn from failings
First, it is worth looking across silos at the experience of other 4IR sectors. In biotechnology, the case of genetically modified organisms (GMOs) became a poster child for a significant and persistent lack of trust in governance, especially in Europe. Those seeking to govern AI or CRISPR should study the lessons of the past two decades of GMO governance around the world.
Don’t count on it
Second, beware the flawed assumption that, as long as boxes are ticked on common-sense governance elements such as transparency, participation and communication, trust will fall into place. Insights from behavioural science, including the Nobel Prize-winning work of Daniel Kahneman and Richard Thaler, show that common sense is an unreliable guide for predicting human reactions.
Trust by design
Third, designing the earning of trust into the very governance framework seems like an obvious thing to do. But that does not mean it has been done. Time and again, the drafting and reform of governance frameworks – for example, in the area of GMOs – has exhausted itself in struggles over technical requirements, without dedicating due attention to earning trust in governance. Sure enough, trust did not come.
A Forum project bringing together participants and expertise from biotech, governance, human rights and the behavioural sciences is seeking to develop principles for trust in tech governance. Its aim is to improve the chances of earning trust, compared with historical poster-child cases such as GMOs.
It is important to stress that these principles will be designed to be technology agnostic and outcome agnostic. This is not about finding ways of enabling or impeding the introduction of technologies into society. It is about designing governance that increases the chance of earning political and societal trust, which is important and desirable in its own right.
Identifying such principles, and helping those in charge of designing tech governance actually apply them, is an important task and responsibility. It may indeed be the one common critical success factor for reaping the benefits of emerging 4IR technologies.