Transparency – A No-Compromise Requirement for the Future of AI

Team Pecan

When IBM's Watson recommended erroneous treatments to cancer patients, when an Amazon Alexa device offered adult content to children, and when a Microsoft chatbot turned into a white supremacist – you know that under the hood (or inside the 'Black Box'), AI is full of flaws whose origins and causes we do not understand.

Before we allow AI to control more areas of the economy and of our lives, it must provide us with full transparency into its logic and decision-making process. Otherwise we stand to lose not only money and emotional well-being, but literally lives. Indeed, there is widespread mistrust of AI and a demand for full transparency; in the near future, the 'Black Box' argument will no longer be tolerated.

According to Gartner, enterprises need artificial-intelligence-specific governance to reduce risks and tolerate the ambiguity intrinsic to AI’s predictive nature. Data and analytics leaders must focus on trust, transparency and diversity to define AI governance in light of their aims and opportunities.

Gartner predicts that by 2023, over 75% of large organizations will hire AI behavior forensics, privacy, and customer trust specialists to reduce brand and reputation risk. By the same year, 60% of organizations with more than 20 data scientists will require a professional code of conduct incorporating the ethical use of data and AI.

Transparency should extend far beyond the spheres of CEOs, data scientists, and other professionals, and reach each and every private user. According to Accenture, AI-based decisions should be understandable to those they affect and should adhere to existing rules and regulations. For example, the Equal Credit Opportunity Act of 1974 has long required that those denied credit be advised of the reasons behind that decision. More recently, the Federal Reserve's 2011 SR 11-7: Guidance on Model Risk Management advised banks to employ active risk management to guard against faulty assessment models. In New York City, the City Council recently created a task force charged with determining which “automated decision systems” used by the city should be subject to further governance procedures, including:

  1. Allowing citizens to request an explanation of how decisions were derived using these systems;
  2. Assessing whether those decisions disproportionately impact people on the basis of age, race, creed, color, religion, national origin, gender, disability, marital status, partnership status, caregiver status, sexual orientation, alienage, or citizenship status.

Recently, the EU pioneered a set of guidelines on how companies and governments should develop ethical applications of artificial intelligence. The guidelines aren’t legally binding, but they could shape any future legislation drafted by the European Union. The EU has repeatedly said it wants to be a leader in ethical AI, and it has shown with GDPR that it’s willing to create far-reaching laws that protect digital rights.

The guidelines put forward seven key requirements that AI systems should meet in order to be deemed trustworthy, along with a specific assessment list that aims to help verify the application of each requirement:

The Pillars of Trusted AI

Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.

Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable, and reproducible. That is the only way to ensure that unintentional harm is minimized or prevented.

Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimized access to data.

Transparency: The data, system, and AI business models should be transparent. Traceability mechanisms can help achieve this (see the sketch after this list). Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholders concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.

Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of disability, and involve relevant stakeholders throughout their entire life cycle.

Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They must therefore be sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.

Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data, and design processes, plays a key role here, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
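
To make "traceability" and "auditability" a little more concrete, here is a minimal sketch in Python of what such a mechanism can look like in practice: a wrapper that records every prediction a model makes in an append-only, tamper-evident log. The class name, log format, and field names are illustrative assumptions for this sketch, not a standard API or the EU's prescribed mechanism.

```python
# A minimal sketch of a traceability/auditability mechanism.
# All names (TraceableModel, the log path, the record fields) are
# illustrative assumptions, not a standard API.
import hashlib
import json
import time

class TraceableModel:
    """Wraps any model exposing predict() and appends every prediction
    it makes to an audit log (one JSON record per line)."""

    def __init__(self, model, model_version, log_path="predictions.log"):
        self.model = model
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, features):
        prediction = self.model.predict([features])[0]
        record = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            "input": list(features),
            "prediction": int(prediction),  # assumes integer class labels
        }
        # A checksum over the record lets an auditor detect later tampering.
        record["checksum"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        with open(self.log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return prediction
```

Such a log makes it possible, after the fact, to reconstruct which model version produced which decision for which input – the raw material for the explanations and audits described above.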

How Can We Make AI More Transparent?

There are essentially two ways to interpret a model: at the global level and at the individual level. An example will help illustrate the difference.

Suppose we wish to predict whether a person watches football based on two features: gender and occupation. In many models it is possible to identify the features that were most informative for the task as a whole. In our example, the gender feature would likely emerge as highly informative, since men are statistically more likely to watch football. This kind of model interpretation is global.

Now consider a woman who works as a sports commentator. For her, the more important feature is clearly occupation. How can we explain to her why the model predicts that she watches football? She would not be satisfied with the general observation that gender matters. Individual-level interpretations try to explain the prediction for a specific sample, and they are, in general, far more complicated and computationally demanding. At Pecan, we provide our clients with state-of-the-art interpretations at both levels, enabling them to gain deep insight into their problems. The sketch below illustrates the distinction.
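
Here is a minimal sketch in Python (scikit-learn) on a toy, synthetic version of the football example. For the individual-level explanation it uses a deliberately simplified technique – flipping one feature at a time and measuring how the predicted probability changes – as a stand-in for principled attribution methods such as SHAP or LIME; it is not Pecan's actual implementation.

```python
# Global vs. individual (local) interpretation on a toy dataset.
# The data, encodings, and the one-feature-flip "contribution" are
# illustrative simplifications, not a production explanation method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: gender (0 = male, 1 = female), occupation
# (1 = sports commentator, 0 = other); label = watches football.
X = np.array([
    [0, 0], [0, 0], [0, 1], [0, 0],   # men: most watch football
    [1, 0], [1, 0], [1, 1], [1, 0],   # women: only the commentator does
] * 25)
y = np.array([1, 1, 1, 0, 0, 0, 1, 0] * 25)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global interpretation: which features matter on average, over all samples?
for name, imp in zip(["gender", "occupation"], model.feature_importances_):
    print(f"global importance of {name}: {imp:.2f}")

# Individual interpretation: why does the model predict that this
# particular woman (a sports commentator) watches football?
sample = np.array([[1, 1]])
baseline = model.predict_proba(sample)[0, 1]
for i, name in enumerate(["gender", "occupation"]):
    perturbed = sample.copy()
    perturbed[0, i] = 1 - perturbed[0, i]  # flip this one feature
    delta = baseline - model.predict_proba(perturbed)[0, 1]
    print(f"contribution of {name} to this prediction: {delta:+.2f}")
```

Running this, the global importances reflect how informative each feature is on average, while the per-feature contributions for the commentator should show occupation – not gender – driving her individual prediction, exactly the gap between the two levels of interpretation described above.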

Conclusion

As awareness of transparency widens and the need comes to be regarded as critical, entrepreneurs and developers should make it a high-priority goal in their new solutions. Transparency is a technical challenge no different from dozens of other challenges facing R&D teams. It should be embedded in AI solutions by design, not stitched on as an afterthought. We should all be prepared for the era of transparent AI.

