Keeping ML models on track with greater safety and predictability

by Sanjeev Kapoor 25 Mar 2022

As Machine Learning (ML) systems become more pervasive than ever before, the stakes are getting higher and higher, because these systems are proving to be very valuable. They're in self-driving vehicles, robot helpers, medical systems, and other applications ranging from critical to trivial. There is too much at stake for machine learning failures to remain a common occurrence — losing a few thousand dollars in a poker game is one thing; losing a person's life or the safety of an airplane is another thing entirely. Hence, whether ML models are based on neural networks, Bayesian statistical analysis, or random forests, and however they are trained, they must do their job better than ever.

While many recent advances in ML methods have improved the performance of machine learning applications, mitigating the associated risks has not been a top focus. To ensure that our ML systems are safe and predictable, it is essential to understand and constrain how ML models behave. However, ML models are still not as accurate and predictable as we once believed them to be. This can cause malfunctions, or at least serious risks, in the systems they power, especially when they are trained on noisy data or on large volumes of data of questionable quality. Fortunately, solutions have already been developed and proposed that aim to make these models safer, more trustworthy, and more practical.

ML Transparency and Explainability

One of the main trustworthiness challenges of modern ML systems is that many algorithms are complex and difficult to understand. Therefore, when a model makes an error or does something unexpected, it can be challenging to understand why. This is especially true for deep neural networks, whose internal representations are often hard to interpret — even for experts who designed the models.


To ensure that machine learning models are safe and predictable, they must be transparent and understandable to humans. This is a fundamental pillar of human-centered AI research, and it is where explainable AI comes in. Explainable AI aims to make machine learning systems understandable to humans, which helps build user trust in these systems while enabling engineers to debug them when they fail. For instance, explainable AI can help mitigate the risk of harmful bias being introduced into these systems.

Many approaches have been proposed for explaining models and predictions, including the LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) techniques. But there’s more work to be done to make these tools easier for domain experts to use. There are also technical challenges in training models that improve with human feedback instead of just minimizing error rates. These efforts will help us build ML systems that support human decision-making in fields like health care, transportation and law enforcement while minimizing concerns about bias and opacity in future systems.
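To illustrate the idea behind model-agnostic local explanations such as LIME, the sketch below (using only NumPy, with a made-up black-box function standing in for a trained model) perturbs an instance, queries the black box, and fits a proximity-weighted linear surrogate whose coefficients serve as a local explanation. This is a simplified sketch of the general technique, not the actual LIME library API.

```python
import numpy as np

# A hypothetical black-box model: we can query predictions but not inspect it.
def black_box(X):
    # Nonlinear decision function standing in for a trained classifier.
    return 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1] + X[:, 0] * X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([1.0, 0.5])                     # the instance to explain

# 1) Sample perturbations around the instance and query the black box.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2) Weight each sample by its proximity to x0 (an RBF kernel).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.3 ** 2))

# 3) Fit a weighted linear surrogate; its coefficients are the local explanation.
A = np.hstack([Z, np.ones((len(Z), 1))])      # design matrix with intercept
W = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)

print("local feature weights:", coef[:2])     # per-feature local importance
```

Here the surrogate's coefficients recover the black box's local behavior around `x0`: feature 0 pushes the prediction up and feature 1 pushes it down near that point.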

Explaining Black-Boxes

Explainable AI techniques are particularly important when it comes to using neural networks and deep learning. Deep neural networks are very effective, but they operate as black boxes: it is difficult to explain why a network makes a particular decision. As we rely more and more on deep learning for decision-making, this has serious consequences for fairness, reliability, and safety. Explainable AI can make deep learning more understandable to humans by attributing a network's decisions to the input features that matter most. This helps end-users gain insight into how the models work and why they make certain decisions. Understanding the features that drive ML decisions is also a prerequisite for machine learning feature scaling, which deals with the varying magnitudes of feature values and supports the development of effective applications.
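As a concrete illustration of the feature scaling mentioned above, a common approach is to standardize each feature to zero mean and unit variance so that no single large-magnitude feature dominates distance- or gradient-based learning. A minimal sketch, with made-up example values:

```python
import numpy as np

# Features with very different magnitudes (e.g. age in years, income in dollars).
X = np.array([[25.0,  40_000.0],
              [32.0,  85_000.0],
              [47.0,  52_000.0],
              [51.0, 120_000.0]])

# Standardize each column: subtract the mean, divide by the standard deviation.
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_scaled = (X - mu) / sigma

print(X_scaled.mean(axis=0))  # approximately [0, 0]
print(X_scaled.std(axis=0))   # approximately [1, 1]
```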

In practice, explainable AI is a very challenging problem because of the complexity of modern neural networks and our inability to fully interpret all the information within them. To address this challenge, researchers have focused on decomposing individual predictions made by neural networks using a technique known as attribution. These techniques are still far from perfect, yet they help shed light on black boxes. While perfect explainability and transparency of AI models remain out of reach, such methods bring deep neural networks closer to practical, wider use in real-life settings.
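One of the simplest attribution techniques is the gradient of the output with respect to the input, often multiplied element-wise by the input ("gradient x input"). The sketch below implements this by hand for a tiny two-layer network with arbitrary fixed weights; it is an illustration of the idea, not a production attribution library.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer network with fixed weights standing in for a trained model.
W1 = rng.normal(size=(3, 4))   # input -> hidden
W2 = rng.normal(size=(4, 1))   # hidden -> output

def forward(x):
    h = np.tanh(x @ W1)        # hidden activations
    return (h @ W2).item()     # scalar output

def input_gradient(x):
    # Backpropagate the scalar output to the input by hand (chain rule).
    z = x @ W1
    h = np.tanh(z)
    dz = W2.ravel() * (1 - h ** 2)   # gradient at the hidden pre-activations
    return W1 @ dz                   # gradient w.r.t. the input

x = np.array([0.8, -1.2, 0.3])
attribution = input_gradient(x) * x  # "gradient x input" attribution per feature
print("attribution:", attribution)
```

Each entry of `attribution` estimates how much the corresponding input feature contributed to the output around this particular input, which is exactly the kind of per-prediction decomposition attribution methods aim for.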

Regulation to the Rescue

AI explainability and transparency will be among the main requirements of future regulations on AI. As early as 2018, the European Union's General Data Protection Regulation (GDPR) was widely read as granting individuals a right to meaningful information about automated decisions that affect them. In November 2017, the US Department of Transportation published a set of 'automated vehicle principles' as a guide for state-level policymakers and outside organizations, with the goal of helping federal agencies develop a framework for ensuring the safety of automated vehicle technologies while allowing innovation to occur. Similar efforts have been made by other governments around the world.

In April 2019, a bipartisan group of legislators introduced a bill in the US Senate that would require federal agencies to develop plans for ensuring that their uses of AI are explainable. More recently, in April 2021, the European Union introduced a comprehensive AI regulation proposal, the first law on AI put forward by a major regulator. The proposal takes a risk-based approach: developing and deploying high-risk AI systems comes with a set of stringent obligations for developers and operators, and explainability, transparency, and human oversight are among the mandatory requirements for operating such systems. The requirements and obligations for operating low-risk systems, by contrast, are much less restrictive and more lightweight.

Overall, ML applications have become an indispensable part of our world, from the technology that helps us find our way around town to the tools that recommend products on our favorite websites. Machine learning systems have a tremendous impact on our daily lives and will play an even larger role in the future. Thus, the safe and predictable operation of advanced AI systems will be among our greatest technological challenges, but it is a challenge we must solve. It is the only way to ensure that future AI systems keep delivering the benefits of the technology while taking the negative outcomes off the table, especially the risks of unsafe or unpredictable behavior.
