Interactive Tool Lets Users See and Control How Automated Model Searches Work

Scientists from MIT and other universities have built an interactive tool that, for the first time, lets users see and control how automated machine-learning systems work. The aim is to boost confidence in these systems and to discover ways to improve them.

Researchers from MIT and elsewhere have developed an interactive tool that, for the first time, lets users see and control how increasingly popular automated machine-learning (AutoML) systems work. (Image: Chelsea Turner, MIT)

Developing a machine-learning model for a particular task — such as diagnosing disease, classifying images, or predicting stock prices — is a difficult, time-consuming process. Practitioners first choose from among many different algorithms to build the model around. Then they manually tweak “hyperparameters” — which determine the model’s overall structure — before the model starts training.
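
To make that manual workflow concrete, here is a minimal scikit-learn sketch; the dataset, the algorithm choice, and every hyperparameter value are illustrative assumptions, not drawn from the paper.

```python
# A minimal sketch of the manual workflow described above: pick an
# algorithm, hand-tune its hyperparameters, then train and evaluate.
# The dataset and every hyperparameter value here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Step 1: choose an algorithm class (here, a random forest).
# Step 2: manually set the hyperparameters that fix the model's structure.
model = RandomForestClassifier(n_estimators=200, max_depth=8, random_state=0)

# Step 3: train and evaluate; in practice, the practitioner repeats
# steps 1 and 2 by hand until the score is acceptable.
score = cross_val_score(model, X, y, cv=5).mean()
print(f"mean cross-validation accuracy: {score:.3f}")
```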

Recently developed automated machine-learning (AutoML) systems iteratively test and modify algorithms and those hyperparameters, and select the best-performing models. But the systems operate as “black boxes,” meaning their selection methods are hidden from users. As a result, users may not trust the results and can find it hard to tailor the systems to their search needs.

In a paper presented at the ACM CHI Conference on Human Factors in Computing Systems, researchers from MIT, the Hong Kong University of Science and Technology (HKUST), and Zhejiang University describe a tool that puts the analysis and control of AutoML methods into users’ hands. Called ATMSeer, the tool takes as input an AutoML system, a dataset, and some information about a user’s task. It then visualizes the search process in a user-friendly interface, which presents in-depth information on the models’ performance.
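
The article does not spell out how ATMSeer is invoked, so the sketch below is purely hypothetical: the `atmseer` module, the `launch` function, and all of its parameters are invented here solely to illustrate the three inputs just described.

```python
# Purely hypothetical sketch: the atmseer module, the launch() function,
# and every parameter name below are invented to illustrate the three
# inputs described above; this is not ATMSeer's actual API.
import atmseer  # hypothetical import

atmseer.launch(
    automl_system="atm",             # the AutoML backend to visualize
    dataset="patients.csv",          # any tabular dataset
    task={"type": "classification",  # basic information about the task
          "target": "diagnosis"},
)
```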

“We let users pick and see how the AutoML system works,” says co-author Kalyan Veeramachaneni, a principal research scientist in the MIT Laboratory for Information and Decision Systems (LIDS), who leads the Data to AI group. “You might simply choose the top-performing model, or you might have other considerations or use domain expertise to guide the system to search for some models over others.”

In case studies with science graduate students who were new to AutoML, the researchers found that about 85% of participants who tried ATMSeer were confident in the models chosen by the system. Nearly all participants said that using the tool made them comfortable enough to use AutoML systems in the future.

“We found people were more likely to use AutoML as a result of opening up that black box and seeing and controlling how the system operates,” says Micah Smith, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in LIDS.

Data visualization is an effective approach toward better collaboration between humans and machines. ATMSeer exemplifies this idea. ATMSeer will mostly benefit machine-learning practitioners, regardless of their domain, [who] have a certain level of expertise. It can relieve the pain of manually selecting machine-learning algorithms and tuning hyperparameters.

Qianwen Wang, Lead Author, HKUST

Partnering with Smith, Veeramachaneni, and Wang on the paper are: Yao Ming, Qiaomu Shen, Dongyu Liu, and Huamin Qu, all of HKUST; and Zhihua Jin of Zhejiang University.

Tuning the model

At the center of the new tool is a custom AutoML system called “Auto-Tuned Models” (ATM), developed by Veeramachaneni and other researchers in 2017. Unlike traditional AutoML systems, ATM fully catalogs all of its search results as it tries to fit models to the data.

ATM takes as input any dataset and an encoded prediction task. The system randomly selects an algorithm class — such as decision trees, neural networks, random forests, or logistic regression — and the model’s hyperparameters, such as the size of a decision tree or the number of neural-network layers.

Then, the system tests the model against the dataset, iteratively adjusts the hyperparameters, and computes performance. It uses the information it has learned about that model’s performance to select another model, and so on. Eventually, the system outputs numerous top-performing models for a task.
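
ATM’s actual selection strategy is more sophisticated than pure chance, but a stripped-down random-search loop that catalogs every trial gives a rough sense of the process; the dataset and search space below are illustrative assumptions.

```python
# A stripped-down illustration of the loop described above: randomly
# pick an algorithm class and its hyperparameters, evaluate the model,
# and catalog every trial. ATM itself uses smarter selection informed
# by earlier results; this pure random search is only a sketch.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

SEARCH_SPACE = {
    "decision_tree": lambda: DecisionTreeClassifier(
        max_depth=random.randint(2, 12)),
    "random_forest": lambda: RandomForestClassifier(
        n_estimators=random.choice([50, 100, 200]),
        max_depth=random.randint(2, 12)),
    "logistic_regression": lambda: LogisticRegression(
        C=10 ** random.uniform(-3, 2), max_iter=1000),
}

trials = []  # the full catalog of every model tried
for _ in range(20):
    name = random.choice(list(SEARCH_SPACE))  # pick an algorithm class
    model = SEARCH_SPACE[name]()              # pick its hyperparameters
    score = cross_val_score(model, X, y, cv=3).mean()
    trials.append({"algorithm": name,
                   "params": model.get_params(),
                   "score": score})

best = max(trials, key=lambda t: t["score"])
print(best["algorithm"], round(best["score"], 3))
```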

The trick is that each model can essentially be viewed as one data point with a handful of variables: the algorithm, its hyperparameters, and its performance. Building on that insight, the researchers engineered a system that plots those data points and variables on dedicated charts and graphs. From there, they developed a separate technique that also lets them reconfigure that data in real time. “The trick is that, with these tools, anything you can visualize, you can also modify,” Smith says.
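
Continuing the sketch above (it reuses the `trials` catalog), each model becomes one row in a table that can be ranked and filtered, much as the article describes:

```python
# Continuing the sketch above (reuses the `trials` catalog): each model
# becomes one row (one data point) with its algorithm, a hyperparameter,
# and its score, which can then be ranked and filtered.
import pandas as pd

df = pd.DataFrame(
    [{"algorithm": t["algorithm"],
      "max_depth": t["params"].get("max_depth"),  # None for models without it
      "score": t["score"]} for t in trials]
)

# A "leaderboard" view: top-performing models in descending order.
print(df.sort_values("score", ascending=False).head())

# Reconfiguring the view: restrict attention to one algorithm class.
print(df[df["algorithm"] == "random_forest"].describe())
```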

Similar visualization tools are typically tailored to examining a single machine-learning model, and allow only limited customization of the search space. “Therefore, they offer limited support for the AutoML process, in which the configurations of many searched models need to be analyzed,” Wang says. “In contrast, ATMSeer supports the analysis of machine-learning models generated with various algorithms.”

User control and confidence

ATMSeer’s interface consists of three parts. A control panel lets users upload datasets and an AutoML system, and start or pause the search process. Beneath that is an overview panel that displays basic statistics — such as the number of algorithms and hyperparameters searched — and a “leaderboard” of top-performing models in descending order. “This might be the view you’re most interested in if you’re not an expert diving into the nitty-gritty details,” Veeramachaneni says.

ATMSeer also includes an “AutoML Profiler,” with panels containing detailed information about the algorithms and hyperparameters, all of which can be tweaked. One panel represents each algorithm class as a histogram — a bar chart that shows the distribution of the algorithm’s performance scores, on a scale of 0 to 10, depending on the hyperparameters. A separate panel shows scatter plots that capture the tradeoffs in performance for different hyperparameters and algorithm classes.
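
A rough matplotlib analogue of those two views, again continuing the running example and reusing the `df` table built in the previous sketch:

```python
# A rough analogue of the profiler views, continuing the running example
# (reuses the `df` table): a histogram of scores per algorithm class and
# a scatter plot of one hyperparameter against performance.
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram view: distribution of performance scores per algorithm class.
for name, group in df.groupby("algorithm"):
    ax1.hist(group["score"], bins=10, alpha=0.5, label=name)
ax1.set_xlabel("cross-validation score")
ax1.set_ylabel("number of models")
ax1.legend()

# Scatter view: performance tradeoff for one hyperparameter.
with_depth = df[df["max_depth"].notna()]
ax2.scatter(with_depth["max_depth"], with_depth["score"])
ax2.set_xlabel("max_depth")
ax2.set_ylabel("cross-validation score")

plt.tight_layout()
plt.show()
```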

Case studies with machine-learning professionals who had no AutoML experience showed that user control does help improve the efficiency and performance of AutoML selection. User studies with 13 graduate students in varied scientific fields — such as finance and biology — were also enlightening. The results show that three key factors — system runtime, the number of algorithms searched, and finding the top-performing model — determined how users modified their AutoML searches. That information can be used to tailor the systems to individual users, the researchers say.

We are just starting to see the different ways people use these systems and make selections. That’s because now this information is all in one place, and people can see what’s going on behind the scenes and have the power to control it.

Kalyan Veeramachaneni, Study Co-Author and Principal Research Scientist, LIDS, MIT

The team’s paper is titled “ATMSeer: Increasing Transparency and Controllability in Automated Machine Learning.”
