Nonlinear Computation in Deep Linear Networks

We've shown that deep linear networks — as implemented using floating-point arithmetic — are not actually linear and can perform nonlinear computation. We used evolution strategies to find parameters in linear networks that exploit this trait, letting us solve non-trivial problems.

Neural networks are typically built from stacks of a linear layer followed by a nonlinearity such as tanh or a rectified linear unit. Without the nonlinearity, consecutive linear layers would in theory be mathematically equivalent to a single linear layer. So it's a surprise that floating point arithmetic is nonlinear enough to yield trainable deep networks.
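To make the collapse concrete, here is a minimal NumPy sketch (the matrices and sizes are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4).astype(np.float32)
W1 = rng.standard_normal((4, 4)).astype(np.float32)
W2 = rng.standard_normal((4, 4)).astype(np.float32)

# Two stacked linear layers with no nonlinearity in between...
two_layers = W2 @ (W1 @ x)
# ...reduce, in exact arithmetic, to a single layer whose weight matrix is the product.
one_layer = (W2 @ W1) @ x

print(np.allclose(two_layers, one_layer))   # True (up to float rounding)
```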

The numbers used by computers aren't perfect mathematical objects, but approximate representations built from a finite number of bits. Floating point numbers are the most common such representation: each one is stored as a combination of a sign, a fraction, and an exponent. In IEEE's float32 standard, 23 bits are used for the fraction, 8 for the exponent, and one for the sign.
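The bit layout can be inspected directly; here is a small sketch using Python's standard struct module:

```python
import struct

def float32_bits(x):
    """Unpack a float32 into its sign bit, 8 exponent bits, and 23 fraction bits."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF    # 8-bit exponent field
    fraction = bits & 0x7FFFFF        # 23-bit fraction field
    return sign, exponent, fraction

print(float32_bits(1.0))     # (0, 127, 0):  1.0 = +1.0 x 2^(127 - 127)
print(float32_bits(-0.5))    # (1, 126, 0): -0.5 = -1.0 x 2^(126 - 127)
```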

As a consequence of these conventions and the binary format used, the smallest normal non-zero number (in binary) is $1.0..0 \times 2^{-126}$, which we refer to as min going forward. The next representable number is $1.0..01 \times 2^{-126}$, which we can write as $min + 0.0..01 \times 2^{-126}$. The gap between these first two numbers is a factor of $2^{23}$ smaller than the gap between 0 and min. In float32, numbers smaller than min get mapped to zero. Because of this 'underflow', all floating point computation around zero becomes nonlinear.
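The size of this gap can be checked directly (a NumPy sketch; note that NumPy itself keeps subnormal numbers, unlike the flush-to-zero setting discussed next):

```python
import numpy as np

min_normal = np.finfo(np.float32).tiny    # 1.0 x 2^-126, the "min" above
gap_below = min_normal                    # distance from 0 to min
gap_above = np.spacing(min_normal)        # distance from min to the next float32

print(gap_below / gap_above)              # 8388608.0 == 2**23
```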

One exception to this behavior is denormal (subnormal) numbers, which can be disabled on some computing hardware. While the GPU and cuBLAS have denormals enabled by default, TensorFlow builds all of its primitives with denormals off (with the flush-to-zero flag set). This means that any non-matrix-multiply operation written in TensorFlow has an implicit nonlinearity following it (provided the scale of the computation is near 1e-38).
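One way to probe this on a given installation is a sketch like the following (TensorFlow 2.x, eager mode; the printed value depends on how the kernels were built and on the hardware):

```python
import tensorflow as tf

min_normal = 1.1754944e-38                 # ~ smallest normal float32
x = tf.constant([min_normal], dtype=tf.float32)

# Halving min gives a subnormal result. A kernel built with flush-to-zero
# returns exactly 0.0 here; full IEEE arithmetic would return ~5.9e-39.
print((x * 0.5).numpy())
```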

So, while in general the difference between a "mathematical" number and its nearest normal float representation is small, around zero there is a large gap and the approximation error can be very significant.

This can lead to some odd effects where the familiar rules of mathematics stop applying. For instance, $(a + b) \times c$ becomes unequal to $a \times c + b \times c$.

For example, if you set $a = 0.4 \times min$, $b = 0.5 \times min$, and $c = 1 / min$.

Then: $(a+b) \times c = (0.4 \times min + 0.5 \times min) \times 1 / min = (0.9 \times min) \times 1 / min = 0 \times 1 / min = 0$, because $0.9 \times min$ underflows to zero. However: $(a \times c) + (b \times c) = 0.4 \times min \times 1 / min + 0.5 \times min \times 1 / min = 0.4 + 0.5 = 0.9$.

In another example, we can set $a = 2.5 \times min$, $b = -1.6 \times min$, and $c = 1 \times min$.

Then: $(a+b) + c = 0 + 1 \times min = min$, because $a + b = 0.9 \times min$ underflows to zero. However: $(b+c) + a = 0 + 2.5 \times min = 2.5 \times min$, because $b + c = -0.6 \times min$ also underflows to zero.

At this smallest of scales, the fundamental operation of addition has become nonlinear!
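Both examples can be reproduced with a short NumPy sketch. Since NumPy keeps subnormal numbers by default, a hypothetical ftz helper is used below to simulate the flush-to-zero behavior described above:

```python
import numpy as np

MIN = np.float32(np.finfo(np.float32).tiny)    # smallest normal float32, the "min" above

def ftz(x):
    """Hypothetical helper: simulate flush-to-zero by mapping subnormal results to 0."""
    x = np.float32(x)
    return np.float32(0.0) if 0 < abs(x) < MIN else x

# Distributivity breaks: (a + b) * c != a*c + b*c
a, b, c = 0.4 * MIN, 0.5 * MIN, 1.0 / MIN
print(ftz(ftz(a + b) * c))              # 0.0:  a + b = 0.9*min underflows to 0
print(ftz(ftz(a * c) + ftz(b * c)))     # ~0.9: a*c and b*c are normal numbers

# Associativity breaks: (a + b) + c != (b + c) + a
a, b, c = 2.5 * MIN, -1.6 * MIN, 1.0 * MIN
print(ftz(ftz(a + b) + c))              # min:     a + b underflows to 0
print(ftz(ftz(b + c) + a))              # 2.5*min: b + c underflows to 0
```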

We wanted to know if this inherent nonlinearity could be exploited as a computational nonlinearity, as this would let deep linear networks perform nonlinear computations. The challenge is that modern differentiation libraries are blind to these nonlinearities at the smallest scale. As such, it would be difficult or impossible to train a neural network to exploit them via backpropagation.

We can use evolution strategies (ES) to estimate gradients without having to rely on symbolic differentiation, and with ES we can indeed exploit the near-zero behavior of float32 as a computational nonlinearity. When trained on MNIST, a deep linear network trained via backpropagation achieves 94% training accuracy and 92% test accuracy. In contrast, the same linear network achieves >99% training accuracy and 96.7% test accuracy when trained with ES, provided the activations are kept small enough to lie in the nonlinear range of float32. This increase in training performance comes from ES exploiting the nonlinearities in the float32 representation. These powerful nonlinearities allow any layer to generate novel features that are nonlinear combinations of lower-level features.
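A minimal sketch of an ES gradient estimator of the kind involved (NumPy; the antithetic-sampling variant, function names, and hyperparameters here are illustrative assumptions, not the exact training setup used):

```python
import numpy as np

def es_gradient(loss_fn, theta, sigma=0.02, n_samples=100, rng=None):
    """Estimate the gradient of loss_fn at theta with evolution strategies:
    perturb the parameters with Gaussian noise and weight each perturbation
    by the loss difference it produces. loss_fn is only evaluated, never
    differentiated, so autodiff never has to 'see' the float32 nonlinearity."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        eps = rng.standard_normal(theta.shape).astype(theta.dtype)
        grad += eps * (loss_fn(theta + sigma * eps) - loss_fn(theta - sigma * eps))
    return grad / (2 * sigma * n_samples)

# Hypothetical usage: theta -= learning_rate * es_gradient(batch_loss, theta)
```

Because the loss is only ever evaluated, never symbolically differentiated, the underflow behavior of float32 contributes to the estimated gradient just as an explicit activation function would.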

Beyond MNIST, we think other interesting experiments could include extending this work to recurrent neural networks, or exploiting nonlinear computation to improve complex machine learning tasks such as language modeling and translation. We're excited to explore this possibility together with our fellow researchers.
