Unbiased Human Centric AI Systems: The Basics you Need to Know

by Sanjeev Kapoor 26 Nov 2021

In recent years, there has been a surge of interest in Artificial Intelligence (AI) algorithms and applications. This interest is largely due to the proliferation of data available for building AI systems, as well as to the unprecedented growth in the computing and storage capacities of the systems that manage and analyze these data. As a result, the number of AI deployments is increasing at a rapid pace. Nevertheless, recent experience with the development and deployment of AI systems shows that AI success is not only a matter of developing advanced technology and finding effective business models. It turns out that the ever-important human factors play a decisive role in the success of AI deployments. Future AI systems must exhibit human-centric properties such as transparency, trustworthiness and explainability. These properties ensure that humans understand how AI systems operate in the scope of a specific application context. As such, they are a foundation for ensuring that humans trust the operation of AI systems and are willing to adopt and use them at scale.

Human-centric AI applications must also be unbiased, i.e., they must operate in an objective and fair way that leads to inclusive applications and leaves no citizen behind. For instance, AI systems must not favor any user group over another and must avoid taking decisions that cannot be adequately justified to humans. This is a challenging data science problem, given that bias is a very common issue in the development of AI systems. Bias can be caused by a variety of factors, such as the lack of representative data or the repurposing of an AI system for use in an application context different from the one it was originally trained for. Human intelligence suffers from numerous types of bias, such as the well-known placebo effect, choice-supportive bias and other forms of cognitive bias. Artificial Intelligence systems are no different from humans in this respect: when trained with biased data or in non-representative contexts, they are bound to produce subjective choices and decisions.

 


Understanding Unintended Biases

One of the most common problems with AI bias is that it is in most cases unintended. This means that many data scientists and AI experts build biased systems without understanding the problems involved and the implications of their use. In principle, biased systems can be classified into two very broad categories:

  • Data-biased systems, where AI algorithms become biased because they are trained with non-representative data. This results in systems that make flawed AI-based decisions.
  • Societally biased systems, where AI techniques are developed in ways that incorporate existing biases of our society. In essence, such systems embed these biases in their decision-making, simply because data scientists base their development on legacy biased systems.

In this context, biased systems are unintentionally created in one or more of the following ways:

  • Historical Bias: These are cases where AI systems are developed based on large historical datasets that comprise biased decisions. For instance, training a hiring algorithm for senior managers in tech companies using past hiring data will result in a gender-biased machine learning system. This is because tech enterprises have tended to favor male over female candidates (e.g., less than 10% of the CEOs of deep tech companies are female).
  • Representation Bias: In many cases data scientists train AI systems using data that ignore entire population segments. For instance, training smart city systems for citizens' services based on data from city apps and social media results in algorithms that do not account for the needs of elderly and low-income citizens. The latter citizen groups do not actively use internet apps and are therefore likely to be underrepresented in the collected datasets (a minimal check for this kind of imbalance is sketched after this list).
  • Aggregation Bias: AI systems are sometimes trained on data that aggregate datasets from different sources and population groups. For example, AI algorithms for disease diagnosis and prognosis can be trained over different datasets from US, European and Asian citizen databases. This is a common practice for creating larger datasets that can effectively train deep neural networks. Once this is done, the developed AI system is used for diagnosis or prognosis over any population group, yet the outcomes will be biased towards the group that forms the majority of the aggregated dataset. In this way, aggregation leads to a biased system that is bound to produce problematic decisions.
  • Deployment Bias: This is the case where a system is trained and developed for a certain purpose, yet used for another. For example, imagine an AI system that is trained to predict the future behavior of an imprisoned individual based on data available during his/her trial. It is wrong to use this system to evaluate whether it is appropriate to reduce the sentence of the prisoner three years later, as the system has not been designed and developed with this use in mind.
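To make the representation problem concrete, here is a minimal sketch in Python of the kind of check a data scientist might run before training. The dataset, column name and population shares are purely hypothetical, and the 10% threshold is an arbitrary illustration rather than an established rule.

    import pandas as pd

    # Hypothetical training records for a citizens' service model; the
    # column name and the values below are illustrative only.
    train = pd.DataFrame({
        "age_group": ["18-35", "18-35", "36-55", "18-35", "36-55", "56+"],
    })

    # Share of each age group in the training data...
    observed = train["age_group"].value_counts(normalize=True)

    # ...compared against the (assumed) share of each group in the city.
    population = pd.Series({"18-35": 0.30, "36-55": 0.35, "56+": 0.35})

    # Flag groups that fall well below their population share.
    gap = observed.reindex(population.index).fillna(0) - population
    print(gap[gap < -0.10])  # here the "56+" group is clearly underrepresented

Checks of this kind can be folded into the exploratory data analysis processes discussed below, so that imbalance is caught before a model is ever trained.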

 

Guidelines and Techniques for Unbiased Systems

The above list of biases is non-exhaustive, yet it provides a good starting point for understanding the problem of unintended bias in AI. Having understood these biases, enterprises and their AI experts should take some of the following actions:

  • Employ Bias Detection Processes: It is important to put in place systematic bias detection processes at the system design and development stages. For example, companies must specify and implement exploratory data analysis processes that unveil potential sources of bias, like unbalanced training data. These processes should always be executed as part of the development of an AI system.
  • Bias Mitigation and Removal: Upon the detection of a bias, companies must specify the steps they need to undertake in order to mitigate or remove it. In this direction, a well-thought-out mitigation process must be specified and executed. It could, for example, entail collecting and integrating more data, removing data aggregations, or improving the frequency and quality of measurements taken by some instrument (one common reweighting remedy is sketched after this list).
  • Regulatory Compliance: AI technology is currently being regulated to ensure that AI systems operate in a trustworthy, human-centric and reliable way. This is, for example, the case in Europe, where the European Parliament has recently put into consultation a proposal for an AI Act. Companies must therefore keep an eye on such regulations and make sure that they comply with them.
  • AI Audits: Soon it will become possible to carry out external audits on the trustworthiness, security, and reliability of AI systems. Such audits will be particularly useful in high-stakes environments, where financial assets or even human lives are at risk. External audits can reveal possible biases and suggest artificial intelligence techniques for mitigating them.
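As one concrete illustration of the mitigation step, a common remedy for unbalanced training data is to reweight samples so that underrepresented groups are not drowned out during training. The sketch below uses scikit-learn's balanced sample-weight utility on hypothetical data; it is just one of several possible techniques, and collecting more representative data is often preferable.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.utils.class_weight import compute_sample_weight

    # Hypothetical, heavily imbalanced labels (e.g., past hiring decisions
    # with 90% "hire" and 10% "reject"); features are random placeholders.
    X = np.random.rand(100, 4)
    y = np.array([1] * 90 + [0] * 10)

    # "balanced" assigns each class a weight inversely proportional to its
    # frequency, so the minority class influences training as much as the
    # majority class does.
    weights = compute_sample_weight(class_weight="balanced", y=y)

    model = LogisticRegression()
    model.fit(X, y, sample_weight=weights)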

 

Overall, when developing advanced AI technology, there is no point in ignoring human factors and trustworthiness aspects. Bias detection and removal are among the most important development steps of a human-centric AI system. Modern enterprises must therefore look for potential biases in AI systems to ensure their fair and objective operation. Moreover, they must comply with the emerging regulatory environment for AI systems and applications.
