AI is no longer the future: it is here, in our pockets, in our houses, and in our cars. It has quickly become ubiquitous as it continues to expand its role in our lives. As this growth continues and leads to ever more ground-breaking achievements, there is an important question we need to answer: what level of trust should we place in AI, and how can we build that trust among users?
This might sound like a straightforward question, but we have all seen the fearmongering headlines about how ongoing developments in advanced AI could lead to machines taking over, stealing our jobs or, worse, enslaving us. While this dystopian scenario is easy for those of us in the know to laugh off, it is a serious concern for regular, non-technical people, who are understandably hesitant.
So, just as trust needs to be established in our personal and professional relationships, it also needs to be established between AI systems and their users if the technology is to continue growing and benefiting our world. Technologies such as autonomous vehicles, for example, will only be possible if there are clear benchmarks for establishing trust in AI; nobody is going to put their life in the hands of a self-driving vehicle if they don’t trust the technology that controls it.
According to IBM, building trust in AI will require a significant effort to instil a sense of morality in it, to operate in full transparency, and to educate businesses and consumers about the opportunities it will create. This effort, IBM says, must be collaborative across all scientific disciplines, industries, and government.
The most obvious way to achieve this would be to instil human values in AI. Indeed, as AI has grown, so has concern over whether we can trust it to reflect human values.
One scenario that has arguably been cited more than any other is the moral decision an autonomous car might have to make to avoid a collision. In this scenario, a bus is heading directly towards the car, which must swerve to avoid being hit. If it swerves left, it will hit a mother and baby. If it swerves right, it will hit an elderly person. What should the car do: swerve left, swerve right, or continue straight ahead? This is, of course, impossible to answer. All three options lead to a terrible outcome, and arguments can be made for and against each course of action.
It is also important to consider the problem of bias affecting the machine’s decision. As Arvind Krishna, Senior VP of Hybrid Cloud and Director of IBM Research, puts it, without proper care in programming, a programmer’s biases can play a part in determining a machine’s decisions and outcomes. There are already several high-profile examples of machines demonstrating bias, and this makes it harder to build trust in AI systems.
Software company DataRobot has organized the concept of trust in AI into three dimensions—performance, operations, and ethics. Each of these categories contains a series of areas that ML teams can look to optimize to start building trust in their own ML models and AI systems.
When evaluating the trustworthiness of AI systems, performance matters. If your model isn’t performing at its optimum, then it isn’t making accurate predictions based on the data it is analyzing, which naturally makes it less trustworthy. Key aspects of performance include the accuracy of the model’s predictions, the quality of the data it is trained on, and its stability over time.
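To make the performance dimension concrete, here is a minimal Python sketch, purely illustrative and not tied to any particular framework or to DataRobot’s methodology, of how a team might compute headline metrics (accuracy, precision, recall) for a binary classifier. The function name and sample data are hypothetical.

```python
# Illustrative only: common performance metrics for a binary classifier,
# computed from ground-truth labels and model predictions (0/1).

def classification_metrics(y_true, y_pred):
    """Return accuracy, precision, and recall for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground truth
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # hypothetical model output
print(classification_metrics(y_true, y_pred))
```

Tracking metrics like these over time, rather than only at deployment, is what turns a one-off score into an ongoing basis for trust.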
Ensuring that best practices are met in ML operations, such as monitoring, governance, and reproducibility, is just as important for building trust as the performance of the model itself.
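As an illustration of one such operational practice, monitoring production data for drift, the following hedged sketch flags a feature whose production mean has moved far from its training mean. The `drift_alert` helper and the two-standard-deviation threshold are assumptions for this example, not an industry standard.

```python
# Illustrative sketch: a very simple data-drift check that compares a
# feature's production mean to its training mean, measured in units of
# the training standard deviation.
import statistics

def drift_alert(train_values, prod_values, threshold=2.0):
    """Flag drift when the production mean moves more than `threshold`
    training standard deviations away from the training mean."""
    mean = statistics.mean(train_values)
    std = statistics.stdev(train_values)
    shift = abs(statistics.mean(prod_values) - mean) / std
    return shift > threshold

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]  # hypothetical training data
print(drift_alert(train, [10.1, 9.9, 10.3]))   # stable production batch
print(drift_alert(train, [14.0, 15.2, 14.8]))  # drifted production batch
```

Real monitoring stacks use richer statistics than a mean shift, but the principle is the same: a model that silently receives data unlike what it was trained on cannot be trusted to keep performing.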
Ethics is a relatively new consideration in the context of AI. However, AI systems and the data they use can have a huge impact, so it is important that they reflect the values of users and stakeholders.
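One simple, widely discussed check for bias, offered here only as a sketch, is demographic parity: comparing the rate of positive predictions (for example, loan approvals) a model gives to different groups. The helper names and data below are hypothetical.

```python
# Illustrative sketch: demographic parity gap, the absolute difference
# in positive-prediction rates between two groups.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between groups."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

group_a = [1, 1, 0, 1, 0]  # hypothetical predictions: 60% approved
group_b = [1, 0, 0, 0, 0]  # hypothetical predictions: 20% approved
print(f"parity gap: {demographic_parity_gap(group_a, group_b):.2f}")
```

A large gap does not prove discrimination on its own, but it is the kind of measurable signal that lets a team investigate whether a model reflects its users’ values rather than simply assuming it does.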
While there is no globally agreed process or standard for building trust in AI, enough forethought and understanding of what trust means can make all the difference in developing a robust—and of course, trustworthy—system that reflects the values of its users.
One factor that AI thought leaders note is more important than any other in building trust between AI systems and users is transparency. To trust the decision of a machine, ethical or otherwise, the user needs to know how it arrives at its conclusions.
Right now, it can be argued that deep learning performs poorly in this regard, but there are AI systems out there that are able to point to documents in their knowledge bases from which they draw their conclusions. Transparency is improving, then, albeit very slowly.
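To illustrate what knowing how a machine arrives at its conclusions can look like at the simplest level, the sketch below explains a linear model’s score by reporting each feature’s contribution (weight times value). The model, weights, and feature names are invented for this example; real explainability tooling for deep learning is far more sophisticated.

```python
# Illustrative only: for a linear model, one transparent way to explain
# a prediction is to report each feature's contribution to the score.

def explain_linear(weights, features, bias=0.0):
    """Return the score and the feature contributions, largest-magnitude
    first, so a user can see what drove the prediction."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

weights = {"income": 0.5, "age": 0.1, "num_defaults": -1.2}   # hypothetical
applicant = {"income": 3.0, "age": 4.0, "num_defaults": 1.0}  # hypothetical
score, ranked = explain_linear(weights, applicant)
print(score)
print(ranked)  # income helps most; num_defaults hurts the score
```

Even this toy breakdown shows the idea behind explainable recommendations: the user sees not just a score but the factors that produced it.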
Rachel Bellamy, IBM Research Manager for human-agent collaboration, reckons that we will reach a point within the next few years when an AI system can better explain why it has made a recommendation. This is something that is needed in all areas where AI is used. Once transparency in AI has been achieved, users will naturally place a much higher level of trust in the technology.
Not everyone in the AI space sees trust, ethics, and transparency as a priority. At Qwak, however, we are committed to helping our clients build, develop, and deploy ML models that meet the three dimensions of trust that we discussed above.
Qwak achieves this by making it possible for our clients to record, document, reproduce, and manage everything in our end-to-end cloud platform, helping them take their first steps towards more transparent and ethical ML model development without having to think about it.
Want to learn more about the power of Qwak and how it could help you deploy better and more powerful ML models? Get in touch for your free demo!