
The Data Daily

Building Optimized Models in a few steps with AutoML
Loren Shure
Jun 13 · 8 min read
or: Optimized Machine Learning without the expertise
Today I'd like to introduce Bernhard Suhm, who works as Product Manager for Machine Learning here at MathWorks. Prior to joining MathWorks, Bernhard led analyst teams and developed methods for applying analytics to optimize the delivery of customer service in call centers. In today's post, Bernhard discusses how AutoML makes obtaining optimized machine learning models much easier and faster. Instead of requiring significant machine learning expertise and a lengthy iterative optimization process, AutoML delivers good models in a few steps.
Software requirements: Executing this script requires MATLAB version R2020a.
Contents
What is AutoML?
Example Application: A Human Activity Classifier
Step 1: Extract Initial Features Automatically by Applying Wavelet Scattering
Step 2: Automated Feature Selection
Step 3: Obtain an Optimized Model in One Step
Summary
Appendix: Wavelet scattering function
What is AutoML?
Building good machine learning models is an iterative process (as shown in the figure below). Achieving good performance requires significant effort: choosing a model (and then choosing a different model), identifying and extracting features, and even adding more or better training data.
Even experienced machine learning practitioners go through a lot of trial and error before arriving at a performant model.
Today, I'm going to show you how you can use AutoML to automate one (or all) of the following phases:
Identifying features that have predictive power yet are not redundant
Reducing the feature set to avoid overfitting and (if your application requires) fit the model on hardware with limited power and memory
Selecting the best model and tuning its hyperparameters
Figure 1 above shows a typical machine learning workflow. The orange boxes represent the steps we will automate with AutoML, and the following example will walk through the process of automating the feature extraction, model selection, hyperparameter tuning, and feature selection steps.
Example Application: A Human Activity Classifier
We'll demonstrate AutoML on the task of recognizing what kind of activity you are doing based on data from the accelerometer in your mobile device. For this example, we are trying to distinguish five activities: standing, sitting, and walking (upstairs, downstairs, and on level ground). Figure 2 below shows the accelerometer data for these activities. The classifier processes buffers of 128 samples, representing 2.56 seconds of activity, with windows overlapping by half that length. We'll use a training set comprising 25 subjects and 6873 observations, and a test set comprising 5 subjects and about 600 observations.
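As an aside, here is a minimal sketch of how such overlapping buffers can be produced with the buffer function from the Signal Processing Toolbox; the random signal is a stand-in for illustration, not the actual data:
fs   = 128/2.56;              % 128 samples per 2.56 s window implies 50 Hz
rawX = randn(10*fs, 1);       % stand-in for one raw accelerometer channel
bufX = buffer(rawX, 128, 64); % 128-sample windows with 50% (64-sample) overlap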
Let's first prepare our workspace and load the example data.
warning off;
Load accelerometer data, divided into training and test sets
load dataHumanActivity.mat
Count the samples available for each subject and activity
unbufferedCounts = groupcounts(unbufferedTrain,{'subject','activity'});
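Before going further, it's worth a quick sanity check. Assuming unbufferedTrain is a table with subject, three acceleration channels, and activity variables (as the appendix function implies), you can peek at the first rows and the group counts:
head(unbufferedTrain)     % expect columns: subject, X/Y/Z acceleration, activity
head(unbufferedCounts)    % samples available per subject and activity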
Now we are ready to discuss the three primary steps of applying AutoML to signal and image data, and apply them to our human activity classification problem:
Extract initial features by applying wavelet scattering
Automatically select a small subset of features
Select and optimize a model
Step 1: Extract Initial Features Automatically by Applying Wavelet Scattering
Machine learning practitioners know that obtaining good features can take a lot of time and be outright daunting without the necessary domain knowledge, such as signal processing expertise. AutoML provides feature generation methods that automatically extract high-performing features from sensor and image data. One such method is wavelet scattering, which derives, with minimal configuration, low-variance features from signal (and image) data for use in machine learning and deep learning applications.
You don't need to understand wavelet scattering to apply it successfully, but in brief: wavelets transform small deformations in the signal by separating variations across different scales. For many natural signals, the wavelet transform also provides a sparse representation. It turns out that the filters in the initial layers of fully trained deep networks resemble wavelet-like filters; the wavelet scattering network represents such a set of pre-defined filters.
To apply wavelet scattering you only need the sampling frequency, the minimum number of samples across the buffers in your data set, and a function that applies the wavelet transformation, using the scattering object's featureMatrix function, across a set of signal data. We included one way to do so in the appendix; alternatively, you can apply featureMatrix across a datastore.
N = min(unbufferedCounts.GroupCount);
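% The scattering network "sf" used below can be created with the
% waveletScattering function from the Wavelet Toolbox. This is a minimal
% sketch, assuming a 50 Hz sampling rate (128 samples per 2.56 s buffer
% implies 50 Hz); the original post does not show this step.
sf = waveletScattering('SignalLength',N,'SamplingFrequency',50);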
trainWavFeatures = extractFeatures(unbufferedTrain,sf,N);
testWavFeatures = extractFeatures(unbufferedTest,sf,N);
On this human activity data, we obtain 468 wavelet features. That is quite a lot, but automated feature selection will help us pare them down.
Step 2: Automated Feature Selection
Feature selection is typically used for the following two main reasons:
Reduce the size of large models so they fit on memory- and power-constrained embedded devices
Prevent overfitting
Since wavelet scattering typically extracts hundreds of features from a signal or image, the need to reduce them is more pressing than with a few dozen manually engineered features.
Many methods are available to automatically select features. Here is a comprehensive overview of what's available in MATLAB. In our experience, Neighborhood Component Analysis (NCA) and Maximum Relevance Minimum Redundancy (MRMR) deliver good results with limited runtime. Let's apply MRMR to our human activity data and plot the first 50 ranked features:
[mrmrFeatures, scores] = fscmrmr(trainWavFeatures, 'activity');
stem(scores(mrmrFeatures(1:50)), 'bo');
Once we have all the features ranked, we need to decide how many predictors to use. It turns out that a fairly low number of the wavelet features provides good performance. For this example, to be able to compare the performance of the AutoML model with previous versions of the human activity classifier, we pick the same number of features that remained after removing low-variance features from the more than 60 manually engineered ones. Optimizing accuracy on cross-validation suggests a modestly higher number of features, between 16 and 28, depending on which feature selection method you apply.
topFeaturesMRMR = mrmrFeatures(1:14);
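If you want to run that cross-validation sweep yourself, here is a minimal sketch; the simple tree learner and the grid of feature counts are illustrative assumptions, not the exact setup used above:
kGrid = 10:2:30;                      % candidate numbers of top-ranked features
cvAcc = zeros(size(kGrid));
for k = 1:numel(kGrid)
    Xk = trainWavFeatures(:, mrmrFeatures(1:kGrid(k)));
    cvMdl = fitctree(Xk, trainWavFeatures.activity, 'CrossVal','on', 'KFold',5);
    cvAcc(k) = 1 - kfoldLoss(cvMdl);  % cross-validated accuracy
end
plot(kGrid, cvAcc, '-o'); xlabel('Number of features'); ylabel('CV accuracy')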
Step 3: Obtain an Optimized Model in One Step
There is no one-size-fits-all in machine learning: you need to try multiple models to find a performant one. Further, optimal performance requires careful tuning of the hyperparameters that control the algorithm's behavior. Manual hyperparameter tuning requires expert experience, rules of thumb, or brute-force search over numerous parameter combinations. Automated hyperparameter tuning makes it easy to find the best settings, and the computational burden can be minimized by applying Bayesian optimization. Bayesian optimization internally maintains a surrogate model of the objective function and, in each iteration, determines the most promising next parameter combination, balancing progress toward a (possibly local) optimum against exploration of areas that have not yet been evaluated.
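This kind of automated tuning is built into the individual fitting functions via the 'OptimizeHyperparameters' argument. Here is a minimal sketch for a single model family; the tree learner and the 20-evaluation budget are illustrative choices, not part of the workflow above:
treeMdl = fitctree(trainWavFeatures(:,topFeaturesMRMR), trainWavFeatures.activity, ...
    'OptimizeHyperparameters','auto', ...
    'HyperparameterOptimizationOptions', struct('MaxObjectiveEvaluations',20));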
Bayesian optimization can also be applied to identifying the type of model. Our new function fitcauto, released with R2020a, uses a meta-learning model to narrow the set of models that are considered. The meta-learning model identifies a small subset of candidate models that are well suited for a machine learning problem, given various characteristics of the features. The meta-learning heuristic was derived from publicly available datasets, for which pre-determined characteristics were computed and associated with a variety of models and their performance.
With fitcauto, identifying the best combination of model and optimized hyperparameters essentially becomes a one-liner, aside from defining a few basic parameters that control execution, such as limiting the number of iterations to 50 (to keep runtime to a few minutes).
opts = struct('MaxObjectiveEvaluations',50, 'ShowPlots',true);
modelAuto = fitcauto(trainWavFeatures(:,topFeaturesMRMR), ...
    trainWavFeatures.activity, 'Learners','all', ...
    'HyperparameterOptimizationOptions',opts);
After 50 iterations, the best-performing model on this data set is an AdaBoost ensemble of decision trees, which achieves 88% accuracy on held-out test data with just 14 features. That compares favorably to the best models you can obtain with manually engineered features and manual model tuning!
predictionAuto = predict(modelAuto, testWavFeatures);
accuracy = 100*sum(testWavFeatures.activity == predictionAuto)/size(testWavFeatures,1);
round(accuracy)

ans = 88
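To see which activities the model confuses with one another, you can plot a confusion chart (confusionchart requires R2018b or later):
confusionchart(testWavFeatures.activity, predictionAuto);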
Summary
In summary, we have described an approach that reduces building effective machine learning models for signal and image classification to three simple steps: first, automatically extract features by applying the wavelet scattering technique; second, automatically select a small subset of features with little loss in accuracy; and third, automatically select and optimize a model whose performance gets close to that of models manually optimized by skilled data scientists. AutoML empowers practitioners with little or no machine learning expertise to obtain models that achieve near-optimal performance.
This article provided a high-level overview of the AutoML capabilities available in MATLAB today. As the product manager of the Statistics and Machine Learning Toolbox, I'm always curious to hear about your use cases and the expectations you have for AutoML. Here are some resources for you to explore.
And please leave a comment here to share your thoughts.
Appendix: Wavelet scattering function
Here’s the function that applies wavelet scattering over a buffer of signal data.
function featureT = extractFeatures(rawData, scatterFct, N)
% EXTRACTFEATURES - Apply wavelet scattering to raw data (3-dimensional
% signal plus "activity" label), using scatterFct on signals of length N

% Group the rows by subject and activity; activityTrain holds one activity
% label per group (the grouping variables were undefined in the original
% listing, so we derive them here with findgroups)
[gTrain, ~, activityTrain] = findgroups(rawData.subject, rawData.activity);

% Extract X, Y, Z from the raw data (columns 2-4)
signalData = table2array(rawData(:,2:4));

% Apply the wavelet scattering featureMatrix to all rows from each group.
% We get back 3-dimensional matrices (#features x #time-windows x 3 signals)
waveletMatrix = splitapply(@(x) {featureMatrix(scatterFct, x(1:N,:))}, signalData, gTrain);

featureT = table; % feature table we'll build up
% Loop over each of the wavelet matrices created above
for i = 1:size(waveletMatrix,1)
    oneO = waveletMatrix{i}; % process this observation
    % Stack the features of the three signal channels on top of each other
    thisObservation = [oneO(:,:,1); oneO(:,:,2); oneO(:,:,3)];
    % Transpose so features become columns and time windows become rows
    thisObservation = array2table(thisObservation');
    featureT = [featureT; thisObservation]; %#ok<AGROW>
end

% Get labels by duplicating each group's activity label for every
% time-window row of wavelet features obtained for that subject x activity
featureT.activity = repelem(activityTrain, size(waveletMatrix{1},2));
end
Published with MATLAB® R2020a
