Manifold: Visual Debugging Tool for Machine Learning at Uber


Uber Research has introduced Manifold, its framework for machine learning debugging and interpretation. According to Uber, Manifold was developed to make the model iteration process more informed and actionable.

Manifold is a model-agnostic visual debugging tool that allows researchers and engineers to look beyond overall summary metrics for their machine learning models. The framework is designed to offer transparency into the black box of ML model development while remaining scalable.

“With Manifold’s design, we turned the traditional ML model visualization challenge on its head. Instead of inspecting models, we inspect individual data points, by (1) identifying the data segments that make a model perform well or poorly, and how this data affects performance between models, and (2) assessing the aggregate feature characteristics of these data segments to identify the reasons for certain model behaviors,” explain Lezhi Li and Yang Wang from Uber Labs.
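
To give a rough sense of the segment-level analysis described in the quote, the sketch below clusters data points by their per-model losses so that each cluster becomes a data segment where the compared models behave similarly. This is only an illustration of the idea, not Manifold's actual API; the function names and the `model_probs` input are hypothetical.

```python
# Illustrative sketch of segment-level performance analysis, in the
# spirit of Manifold's inspection view; this is NOT Manifold's API.
import numpy as np
from sklearn.cluster import KMeans

def per_instance_log_loss(y_true, y_prob, eps=1e-12):
    """Binary log-loss computed separately for every data point."""
    p = np.clip(y_prob, eps, 1 - eps)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

def performance_segments(y_true, model_probs, n_segments=4, seed=0):
    """Cluster data points by their per-model losses; `model_probs`
    maps a (hypothetical) model name to its predicted probabilities."""
    losses = np.column_stack(
        [per_instance_log_loss(y_true, p) for p in model_probs.values()]
    )
    segments = KMeans(n_clusters=n_segments, random_state=seed).fit_predict(losses)
    # Mean loss per model inside each segment, for side-by-side comparison.
    summary = {
        seg: {name: float(losses[segments == seg, i].mean())
              for i, name in enumerate(model_probs)}
        for seg in range(n_segments)
    }
    return segments, summary
```

Segments with a large gap in mean loss between two models are exactly the data slices worth inspecting further.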

By developing Manifold, researchers at Uber wanted to accomplish several tangible goals in machine learning modeling: debug code errors in ML models, understand the strengths and weaknesses of a model both in isolation and in comparison with other models, compare and ensemble different models, and incorporate insights gathered through inspection and performance analysis into model iterations.

The framework follows an analysis workflow of three main steps: Inspection, Explanation, and Refinement. A simple user interface provides visualizations and useful insights at each step.
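
To illustrate what the Explanation step involves, the sketch below compares a feature's distribution inside a poorly performing segment against the rest of the data and ranks features by a simple divergence score. The helpers and column handling here are hypothetical and are not Manifold's implementation.

```python
# Rough sketch of the "Explanation" idea: rank features by how much their
# distribution shifts inside a problematic data segment. Hypothetical
# helpers, not Manifold's implementation.
import numpy as np
import pandas as pd

def feature_divergence(df, segment_mask, feature, bins=10):
    """KL divergence of one feature's histogram inside the segment
    versus the rest of the data."""
    edges = np.linspace(df[feature].min(), df[feature].max(), bins + 1)
    p, _ = np.histogram(df.loc[segment_mask, feature], bins=edges)
    q, _ = np.histogram(df.loc[~segment_mask, feature], bins=edges)
    p = (p + 1e-9) / (p.sum() + bins * 1e-9)
    q = (q + 1e-9) / (q.sum() + bins * 1e-9)
    return float(np.sum(p * np.log(p / q)))

def rank_features(df, segment_mask, features):
    """Features whose distributions differ most inside the segment are
    the most likely explanations for its performance gap."""
    scores = {f: feature_divergence(df, segment_mask, f) for f in features}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Manifold's interface presents this kind of comparison visually, overlaying the feature distributions of selected data segments so differences can be spotted at a glance.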

More about Manifold can be found on the official page, where the visualizations and workflow designs are also described. The paper has also been published on arXiv.

