Visualized learning

12/14/2023

Machine learning has firmly established itself as a valuable and ubiquitous technique in commercial applications. It enables businesses to make sense of their data and make predictions about future events. Besides increasing accuracy, there is currently a strong demand for understanding how specific models operate and how certain decisions are made. Understanding models is particularly important in high-impact domains such as credit, employment, and housing, where the decisions made using machine learning impact the lives of real people. The field of eXplainable Artificial Intelligence (XAI) aims to help experts understand complex machine learning models. In recent years, various techniques have been proposed to open up the black box of machine learning. However, because interpretability is an inherently subjective concept, it remains challenging to define what a good explanation is. We argue we should actively involve data scientists in the process of generating explanations, and leverage their expertise in the domain and in machine learning. Interactive visualization provides an excellent opportunity to both involve and empower experts.

In this dissertation, we explore interactive visualization for machine learning interpretation from different perspectives, ranging from local explanation of single predictions to global explanation of the entire model.

We first introduce ExplainExplore: an interactive explanation system to locally explore explanations of individual predictions. It leverages the domain knowledge of the data scientist to determine whether an explanation makes sense and is appropriate. In a use case with data scientists from the debtor management department at Achmea, we show the participants could effectively use explanations to diagnose a model and find problems, identify areas where the model can be improved, and support their everyday decision-making process.

Next, we propose the Contribution-Value plot as a new building block for interpretability visualization. It shows how much different features in the data contribute towards a prediction, but also which feature values are most relevant. In an online survey with 22 machine learning professionals and visualization experts, we show our visualization increases correctness and confidence and reduces the time needed to obtain an insight compared to previous techniques.

Finally, we introduce StrategyAtlas: an interactive system that provides a global understanding of complex machine learning models by showing whether a model treats different customers differently (in other words, whether the model has multiple prediction strategies). We show that our different visualizations enable data scientists to check for and ascertain the validity of these strategies. We then applied the system in a real-world project for automatic insurance acceptance at Achmea, which showed that professional data scientists were able to understand a complex model and improve their production model based on these insights.
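To make the idea of per-feature contributions concrete: the simplest case is a linear model, where a prediction decomposes exactly into one contribution per feature (coefficient times feature value) plus the intercept. This is only a minimal sketch of the concept that contribution-based explanations generalize to complex models; the toy data and scikit-learn model below are illustrative assumptions, not the systems or data from the dissertation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data: predict a score from three hypothetical features.
X = np.array([[1.0, 2.0, 0.5],
              [2.0, 0.5, 1.5],
              [0.5, 1.0, 2.5],
              [1.5, 2.5, 1.0]])
y = np.array([3.0, 4.0, 5.0, 4.5])

model = LinearRegression().fit(X, y)

# For a linear model, the contribution of each feature to one prediction
# is its coefficient times its value; the contributions plus the
# intercept recover the prediction exactly.
x = X[0]
contributions = model.coef_ * x
reconstructed = model.intercept_ + contributions.sum()
assert np.isclose(reconstructed, model.predict([x])[0])
```

For non-linear models, techniques such as SHAP or LIME produce analogous per-feature contribution values, which is the kind of input an interpretability visualization can then display per feature and per feature value.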