


From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification
Andre Martins, Ramon Astudillo
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1614-1623, 2016.
Abstract
We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, we show how its Jacobian can be efficiently computed, enabling its use in a network trained with backpropagation. Then, we propose a new smooth and convex loss function which is the sparsemax analogue of the logistic loss. We reveal an unexpected connection between this new loss and the Huber classification loss. We obtain promising empirical results in multi-label classification problems and in attention-based neural networks for natural language inference. For the latter, we achieve a similar performance as the traditional softmax, but with a selective, more compact, attention focus.
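For readers who want to experiment, below is a minimal NumPy sketch of the sparsemax transformation described in the abstract: a Euclidean projection of the score vector onto the probability simplex, computed by sorting and thresholding, plus the Jacobian-vector product that the paper notes can be computed efficiently for backpropagation. The function names and coding style here are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def sparsemax(z):
    """Project a score vector z onto the probability simplex.

    Unlike softmax, coordinates whose scores fall below the threshold tau(z)
    come out exactly zero, giving a sparse probability vector.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                 # scores in decreasing order
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    # support size: largest k with 1 + k * z_(k) > sum of the top-k sorted scores
    k_z = k[1 + k * z_sorted > cumsum][-1]
    tau = (cumsum[k_z - 1] - 1.0) / k_z         # threshold tau(z)
    return np.maximum(z - tau, 0.0)

def sparsemax_jvp(p, v):
    """Jacobian-vector product of sparsemax at output p = sparsemax(z), applied to v.

    The Jacobian is Diag(s) - s s^T / |S(z)|, where s indicates the support of p,
    so the product reduces to masking v and subtracting its mean over the support.
    """
    s = (p > 0).astype(float)
    v_hat = (s * v).sum() / s.sum()
    return s * (v - v_hat)

As a quick check, sparsemax([3.2, 1.1, 0.7, 0.1]) returns [1.0, 0.0, 0.0, 0.0], zeroing out the low-scoring entries, whereas softmax would assign every entry a strictly positive probability.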
Cite this Paper
BibTeX
@InProceedings{pmlr-v48-martins16,
  title     = {From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification},
  author    = {Martins, Andre and Astudillo, Ramon},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1614--1623},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/martins16.pdf},
  url       = {https://proceedings.mlr.press/v48/martins16.html},
  abstract  = {We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, we show how its Jacobian can be efficiently computed, enabling its use in a network trained with backpropagation. Then, we propose a new smooth and convex loss function which is the sparsemax analogue of the logistic loss. We reveal an unexpected connection between this new loss and the Huber classification loss. We obtain promising empirical results in multi-label classification problems and in attention-based neural networks for natural language inference. For the latter, we achieve a similar performance as the traditional softmax, but with a selective, more compact, attention focus.}
}
Endnote
%0 Conference Paper
%T From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification
%A Andre Martins
%A Ramon Astudillo
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-martins16
%I PMLR
%P 1614--1623
%U https://proceedings.mlr.press/v48/martins16.html
%V 48
%X We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, we show how its Jacobian can be efficiently computed, enabling its use in a network trained with backpropagation. Then, we propose a new smooth and convex loss function which is the sparsemax analogue of the logistic loss. We reveal an unexpected connection between this new loss and the Huber classification loss. We obtain promising empirical results in multi-label classification problems and in attention-based neural networks for natural language inference. For the latter, we achieve a similar performance as the traditional softmax, but with a selective, more compact, attention focus.
RIS
TY  - CPAPER
TI  - From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification
AU  - Andre Martins
AU  - Ramon Astudillo
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-martins16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 1614
EP  - 1623
L1  - http://proceedings.mlr.press/v48/martins16.pdf
UR  - https://proceedings.mlr.press/v48/martins16.html
AB  - We propose sparsemax, a new activation function similar to the traditional softmax, but able to output sparse probabilities. After deriving its properties, we show how its Jacobian can be efficiently computed, enabling its use in a network trained with backpropagation. Then, we propose a new smooth and convex loss function which is the sparsemax analogue of the logistic loss. We reveal an unexpected connection between this new loss and the Huber classification loss. We obtain promising empirical results in multi-label classification problems and in attention-based neural networks for natural language inference. For the latter, we achieve a similar performance as the traditional softmax, but with a selective, more compact, attention focus.
ER  -
APA
Martins, A. & Astudillo, R. (2016). From Softmax to Sparsemax: A Sparse Model of Attention and Multi-Label Classification. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1614-1623. Available from https://proceedings.mlr.press/v48/martins16.html.
