Using Explainable AI in Decision-Making Applications

2022-10-06
Illustration: © IoT For All

There is no instruction manual for decision-making. Important decisions are usually made by analyzing large amounts of data to find the optimal way to solve a problem. That’s where we truly rely on logic and deduction, and it’s why surgeons dig into a patient’s anamnesis, or businesses gather key people to see the bigger picture before making a turn.


Relying on AI decision-making can significantly reduce the time spent on research and data gathering. But, as the final decision is up to the human, we still need to understand how our support system came up with these insights. So, in this article we’ll discuss AI explainability and why it matters in areas such as healthcare, jurisprudence, and finance, where every piece of information must be justified.

What is Explainability in AI?

Explainability in AI

AI decision-support systems are used in a range of industries that base their decisions on information. This is what’s called a data-driven approach – when we try to analyze available records to extract insights and support decision-makers. 

The key value here is data processing, something a regular person can’t do fast. On the flip side, neural networks can grasp enormous amounts of data to find correlations and patterns or simply search for the required item within an array of information. This can be applied to domain data like stock market reports, insurance claims, medical analysis, or radiology imagery. 

The problem is that, while AI algorithms can derive logical statements from data, it remains unclear what these determinations were based on. For instance, some computer vision systems for lung cancer detection show up to 94 percent accuracy in their predictions. But can a pulmonologist rely solely on a model prediction without knowing if it has mistaken a tumor for fluid?

This concept can be represented as a black box, as the decision process inside the algorithm is hidden and often can’t be understood even by its designer.

Black box concept illustrated

AI explainability refers to techniques by which a model can interpret its findings in a way humans can understand. In other words, it’s a part of AI functionality that is responsible for explaining how the AI came up with a specific output. To interpret the logic of AI decision processes, we need to compare three factors:

  1. Data input
  2. Patterns found by the model
  3. Model prediction

These components are the essence of AI explainability implemented into decision-making systems. The explanation, in this case, can be defined as how exactly the system provides insights to the user. Depending on the implementation method, this can be presented as details on how the model derived its opinion, a decision tree path, or data visualization, for example.
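To make these three factors concrete, here is a minimal sketch (not from the original article) using scikit-learn’s Iris dataset and a decision tree, an inherently interpretable model. It prints the input values, the learned rules the sample passes through on its way to a leaf, and the resulting prediction:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
X, y = data.data, data.target

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[[80]]                                  # 1. data input
path = clf.decision_path(sample)                  # 2. patterns: the rules the sample passes through
prediction = clf.predict(sample)[0]               # 3. model prediction

# Walk the decision path and print the rule checked at each internal node.
feature, threshold = clf.tree_.feature, clf.tree_.threshold
for node_id in path.indices:
    if feature[node_id] != -2:                    # -2 marks a leaf node
        name = data.feature_names[feature[node_id]]
        value = sample[0, feature[node_id]]
        op = "<=" if value <= threshold[node_id] else ">"
        print(f"{name} = {value:.2f} {op} {threshold[node_id]:.2f}")

print("predicted class:", data.target_names[prediction])
```

The printed path is exactly the kind of explanation a decision-support system can surface alongside its output, so the user sees not just the answer but the rules that produced it.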

Production-oriented AI decision-making

Production-oriented systems usually deal with critical decisions that require justification: the person in charge of the final decision is responsible for any possible financial loss or harm. In this case, the AI model acts as an assistant for professional activity, providing explicit information for the decision-maker.

So now, let’s look at the examples of explainability in production-oriented AI systems and how they can support decision-makers in healthcare, finance, social monitoring, and other industries.

Medical Image Processing and Cancer Screening

Computer vision techniques are actively used in the processing of medical images, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Whole-Slide Images (WSI). Typically, these are 2D images used for diagnostics, surgical planning, or research tasks. However, CT and WSI often capture the 3D structure of the human body, which is why they can reach enormous sizes.

For example, a WSI of human tissue can reach 21,500 × 21,500 pixels or more. Efficient, fast, and accurate processing of such images serves as the basis for early detection of various cancer types, such as:

  • Renal Cell Carcinoma
  • Non-small cell lung cancer
  • Breast cancer lymph node metastasis

The doctor’s task, in this case, is to visually analyze the image and find suspicious patterns in the cellular structure. Based on the analysis, the doctor will provide a diagnosis or decide on surgical intervention.

However, since a WSI is a very large image, analyzing it takes a great deal of time, attention, and expertise. Traditional computer vision methods also require too many computational resources for end-to-end processing. So how can explainable AI help here?

In terms of WSI analysis, explainable AI acts as a support system that scans image sectors and highlights regions of interest with suspicious cellular structures. The machine won’t make any decisions, but it will speed up the process and make the doctor’s work easier. This is possible because automated scanning of the WSI is quicker and more consistent than manual review, making it less likely that specific regions will be overlooked. Here is an example of renal clear cell carcinoma on a low-resolution WSI.

Kidney renal clear cell carcinoma is predicted to be high risk (left), with the AI-predicted “risk heatmap” on the right; red patches correspond to “high-risk” and blue patches to “low-risk”.

As you can see in the image on the right, the AI support system highlights the high-risk regions where cancer cells are more likely to be. This eliminates the need to manually analyze the whole image of the kidney and directs the doctor’s expertise and attention to the right places. Similar heatmaps are used for even closer, higher-resolution analysis of kidney tissue.

Example of WSI kidney tissue in high resolution (left), with the AI-predicted attention heatmap on the right; blue indicates low cancer risk zones, red indicates high risk.

The advantage of such systems is not only in operational speed and greater accuracy of the analysis. The image can also be manipulated in different ways to provide a fuller picture for diagnostics and to make more accurate surgical decisions if needed. You can check the live demo of kidney tissue WSI analysis to see how the interface looks and how exactly it works.
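For illustration, below is a minimal, simplified sketch of the patch-scoring idea behind such heatmaps. The `score_patch` function is only a placeholder for a trained risk model (in practice a CNN or attention-based network), and the synthetic array stands in for a real WSI:

```python
import numpy as np

def score_patch(patch: np.ndarray) -> float:
    """Stand-in for a trained classifier that returns a cancer-risk score in [0, 1].
    A real system would run a CNN or attention model on each tile."""
    return float(patch.mean() / 255.0)            # placeholder heuristic, not a real model

def risk_heatmap(slide: np.ndarray, tile: int = 256) -> np.ndarray:
    """Split a (very large) slide image into tiles and score each one,
    producing the kind of low-resolution risk heatmap shown above."""
    h, w = slide.shape[:2]
    rows, cols = h // tile, w // tile
    heat = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        for c in range(cols):
            patch = slide[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            heat[r, c] = score_patch(patch)
    return heat                                    # render with a blue-to-red colormap: blue = low, red = high

# Usage with a synthetic 4096x4096 grayscale "slide":
slide = np.random.randint(0, 256, size=(4096, 4096), dtype=np.uint8)
print(risk_heatmap(slide).shape)                   # (16, 16) tile-level risk scores
```

Because each tile keeps its position, the resulting grid can be overlaid on the original slide, which is what makes the model’s reasoning visible to the doctor rather than hidden in a single yes/no output.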

Text Processing

Text processing is a standalone task, as well as a part of bigger workflows in industries such as jurisprudence, healthcare, finance, media, and marketing. Plenty of deep learning techniques are capable of analyzing text from different angles, including natural language processing (NLP), natural language understanding (NLU), and natural language generation (NLG). These algorithms can classify text by its tone and emotional shades, detect spam, produce a summary, or generate a large contextual text based on a small initial piece.

Besides its use in document processing, machine translation, and communication technologies, explainable AI has other applications.

FINANCIAL NEWS MINING

Another interesting case is the use of financial news texts in conjunction with fundamental analysis to forecast stock values. A prime example of this would be “trading on Elon Musk’s tweets.” Musk’s tweets had a dramatic effect on the value of several cryptocurrencies (Dogecoin and Bitcoin), as well as the value of his own company. The point is that if someone can impact stock prices so much through social media, it is worth knowing about these posts in advance. 

AI algorithms allow you to process a huge number of news headlines and texts in the shortest possible time by:

  • analyzing the tonality of the text
  • identifying the asset or company referred to in the article
  • identifying the name and position of the specialist quoted by the publication

All this allows you to quickly make decisions on a purchase or sale based on the summary of social media information. In other words, such AI applications can support individuals and businesses in their decision by supplying information about recent events that could impact stock prices and bring fluctuations in the market. 

The explainability part is responsible for justifying the data by recognizing company, event, person, or resource entities with named entity recognition technology. This allows the decision-support system user to understand the logic behind the model’s prediction.
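As a minimal illustration of the named entity recognition step, the sketch below uses spaCy’s general-purpose English model; a production system would typically use a model trained on financial entities, but the idea is the same: show the user which entities the summary was grounded on.

```python
import spacy  # requires: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

headline = "Brazil's Real and Stocks Are World's Best as Foreigners Pile In"
doc = nlp(headline)

# Print the entities (countries, companies, people, money, etc.) the model recognized,
# so the user can see what the prediction is based on.
for ent in doc.ents:
    print(ent.text, ent.label_)
```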

What do we need to make the work of AI algorithms with text understandable? We need to uncover the patterns the AI “saw” in the text: which words, and which connections between words, the algorithm relied on to reach its conclusion about the tonality of the text.

How AI Explainability Works in Text Processing

Consider explainable AI using the example of financial news headlines. Below you will see several headlines, and you’re invited to assess their tonality (positive, neutral, negative), as well as to track which words in each title you, as a reader, relied on when assigning it to a particular group:

  • Brazil’s Real and Stocks Are World’s Best as Foreigners Pile In
  • Analysis: U.S. Treasury Market Pain Amplifies Worry about Liquidity
  • Policymaker Warns of Prolonged Inflation due to Political Crisis
  • A 10-Point Plan to Reduce the European Union’s Reliance on Russian Natural Gas
  • Company Continues Diversification with Renewables Market Software Firm Partnership

Now, let’s compare your expectations with how AI classified these headlines and see which words were important for the algorithm to make a decision.

In the pictures below, the words the AI algorithm relied on when making its decision are highlighted. The redder the shade of the highlight, the more important the word was for the decision; the bluer the shade, the more the word tilted the algorithm toward the opposite decision.

1. Brazil’s Real and Stocks Are World’s Best as Foreigners Pile In
Class: Positive

2. Analysis: U.S. Treasury Market Pain Amplifies Worry about Liquidity
Class: Negative

3. Policymaker Warns of Prolonged Inflation due to Political Crisis
Class: Negative

4. A 10-Point Plan to Reduce the European Union’s Reliance on Russian Natural Gas
Class: Neutral

5. Company Continues Diversification with Renewables Market Software Firm Partnership
Class: Positive

How long did it take you to read all these headlines, analyze them, and make a decision? Now imagine that AI algorithms can do the same, but hundreds of times faster!
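To give a sense of what batch classification looks like in practice, here is a minimal sketch using the Hugging Face `transformers` pipeline. FinBERT is used here only as an example of a publicly available financial sentiment model; any positive/neutral/negative text classifier could be substituted.

```python
from transformers import pipeline

# Load a financial sentiment classifier (downloads the model on first run).
classifier = pipeline("text-classification", model="ProsusAI/finbert")

headlines = [
    "Brazil's Real and Stocks Are World's Best as Foreigners Pile In",
    "Analysis: U.S. Treasury Market Pain Amplifies Worry about Liquidity",
    "Policymaker Warns of Prolonged Inflation due to Political Crisis",
    "A 10-Point Plan to Reduce the European Union's Reliance on Russian Natural Gas",
    "Company Continues Diversification with Renewables Market Software Firm Partnership",
]

# Classify the whole batch in one call and print label, confidence, and headline.
for headline, result in zip(headlines, classifier(headlines)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {headline}")
```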

How to Integrate Explainable AI Features Into Your Project

One might think that the integration of explainable AI into a project is quite a sophisticated and challenging step. Let us demystify it.

In general, from the perspective of explainability, AI systems may be classified into two classes:

  • Those which support explainability out-of-the-box
  • Those which need some third-party libraries to be applied to make them explainable

The first class of AI includes instance and semantic segmentation algorithms. Attention-based AI algorithms are also good examples of self-explainable algorithms, as shown above with the WSI kidney tissue cancer-risk images.

The second class of AI algorithms, in general, doesn’t support out-of-the-box self-explainability. But this doesn’t make them completely unexplainable. The text classification shown above is an example of such an algorithm. We can apply third-party libraries on top of these algorithms to make them explainable; libraries such as LIME or SHAP are usually easy to use for this. However, explainability requirements may differ depending on the project or user objectives, so we suggest enlisting the support of an AI expert team for proper consulting and development.
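As a rough sketch of the LIME approach, the example below trains a tiny toy text classifier (only so there is something to explain; substitute your real model) and then asks `LimeTextExplainer` which words pushed the prediction toward or away from the predicted class, the same signal behind the red/blue highlighting shown earlier.

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A toy classifier trained on a handful of made-up headlines, only so that there is
# a predict_proba function for LIME to probe; swap in your real model here.
train_texts = [
    "Stocks rally as earnings beat expectations",
    "Shares plunge amid recession fears",
    "Company announces quarterly report date",
    "Record profits lift investor confidence",
    "Bankruptcy filing wipes out shareholders",
    "Regulator schedules routine review",
]
train_labels = [2, 0, 1, 2, 0, 1]                 # 0 = negative, 1 = neutral, 2 = positive
class_names = ["negative", "neutral", "positive"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

headline = "Analysis: U.S. Treasury Market Pain Amplifies Worry about Liquidity"
predicted = int(model.predict([headline])[0])

explainer = LimeTextExplainer(class_names=class_names)
explanation = explainer.explain_instance(
    headline, model.predict_proba, labels=[predicted], num_features=6
)

# (word, weight) pairs: positive weights pushed the model toward the predicted class,
# negative weights pushed it away -- the raw material for red/blue word highlighting.
print("predicted:", class_names[predicted])
print(explanation.as_list(label=predicted))
```

LIME works by perturbing the input text and fitting a simple local model around the prediction, so it can be layered on top of almost any classifier that exposes class probabilities.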

Enhanced Trust

One of the key factors that prevent the mass adoption of AI is the lack of trust. Decision-support tools in data-driven organizations already perform routine tasks of reporting and analysis. But at the end of the day, these systems are still not capable of providing a justified opinion on business-critical or even life-critical decisions. 

Explainable AI is a step further in data-driven decision-making, as it breaks the black box concept and makes the AI process more transparent, verifiable, and trusted. Depending on the implementation, a support system can provide more than just a hint: as we track the decision process of an AI algorithm, we can uncover new ways to solve our tasks or be offered alternative options for the decision.

  • Artificial Intelligence
  • Data Analytics
  • Health and Wellness
  • Healthcare

