It aims to address one of the biggest challenges with AI: that it is often difficult to understand why a model has arrived at any particular prediction.

What can SageMaker Clarify do?

Clarify serves as an additional set of capabilities for the broader SageMaker fully managed machine learning service. These tools will be integrated with the web-based development environment SageMaker Studio, as well as with other services such as SageMaker Data Wrangler, SageMaker Experiments, and SageMaker Model Monitor.

Several capabilities will stand out for customers, chief among them Amazon's claim that the service helps data scientists detect bias in datasets both before and after training models. Detecting bias is the first step, and the firm claims that SageMaker can simply examine your dataset and produce a set of bias metrics. Beyond this, users will be able to measure bias using a host of metrics, as well as detect bias drift over time. This information can then be used to add your own bias-reduction techniques to the data-processing pipeline.

Once the AI model is trained, Clarify can also run a bias analysis (including automatic deployment) and compute another set of bias metrics. These metrics include the difference in positive proportions in labels, the difference in positive proportions in predicted labels, accuracy difference, and the counterfactual flip test.

One of the biggest features, however, is explaining AI model predictions, which SageMaker Clarify claims to do via support for a popular technique called SHapley Additive exPlanations (SHAP). SHAP analyses the individual contribution of feature values to the predicted output for each data instance and represents each contribution as a positive or negative value. The example Amazon provides is a credit scoring system, with an overall score influenced positively or negatively by variables such as employment status or income level.
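To make the credit-scoring example concrete, here is a minimal, self-contained sketch of the Shapley-value idea that SHAP builds on, computed exactly by enumerating feature coalitions. The feature names, weights, and "applicant" values are invented for illustration; this is not Amazon's model or the SHAP library's optimised implementation.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear "credit scoring" model over three features:
# employment status (0/1), income (in $10k), debt (in $10k).
WEIGHTS = {"employment": 20.0, "income": 3.0, "debt": -5.0}

def score(features):
    """Credit score: weighted sum of feature values."""
    return sum(WEIGHTS[f] * v for f, v in features.items())

def shapley_values(instance, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    For each feature, average its marginal contribution to the score
    over every subset of the remaining features; features absent from
    a coalition take their values from the baseline instance.
    """
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (instance[g] if g in subset or g == f else baseline[g])
                          for g in names}
                without_f = {g: (instance[g] if g in subset else baseline[g])
                             for g in names}
                total += weight * (score(with_f) - score(without_f))
        phi[f] = total
    return phi

baseline = {"employment": 0.5, "income": 5.0, "debt": 2.0}   # "average" applicant
applicant = {"employment": 1.0, "income": 8.0, "debt": 4.0}

phi = shapley_values(applicant, baseline)
# Additivity: the per-feature contributions sum to the gap between
# this applicant's score and the baseline score.
assert abs(sum(phi.values()) - (score(applicant) - score(baseline))) < 1e-9
```

Each value in `phi` is the kind of signed per-feature contribution the article describes: here employment and income push the score up while debt pulls it down. Real SHAP implementations approximate this average rather than enumerating all coalitions, which is exponential in the number of features.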
With SageMaker Clarify, then, Amazon has launched a bias-detection tool that offers customers increased transparency when running machine learning models.
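The "difference in positive proportions" metrics mentioned above have a simple core idea, sketched below with invented loan-approval data. The group labels and numbers are hypothetical, and this is only the intuition behind such metrics, not Clarify's implementation.

```python
def positive_proportion(labels):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(labels) / len(labels)

def positive_proportion_difference(group_a, group_b):
    """Gap in favourable-outcome rates between two groups.

    Applied to dataset labels, this is a pre-training bias check
    (difference in positive proportions in labels); applied to model
    predictions, it is the post-training analogue (difference in
    positive proportions in predicted labels).
    """
    return positive_proportion(group_a) - positive_proportion(group_b)

# Hypothetical loan-approval labels (1 = approved) for two groups.
labels_a = [1, 1, 1, 0, 1, 0, 1, 1]   # 6/8 approved
labels_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3/8 approved
print(positive_proportion_difference(labels_a, labels_b))   # 0.375

# The same gap computed on model predictions instead of labels:
preds_a = [1, 1, 0, 0, 1, 0, 1, 1]    # 5/8 predicted approved
preds_b = [1, 0, 0, 0, 0, 0, 0, 1]    # 2/8 predicted approved
print(positive_proportion_difference(preds_a, preds_b))     # 0.375
```

A value near zero suggests the two groups receive favourable outcomes at similar rates; tracking this number over time is one way to detect the bias drift the article mentions.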