So much of math seems to be unrelated bits and pieces. Learn this for the test, then forget it to make room for the next thing. But this is an illusion fostered by the frequently atrocious high school curriculum. Mathematics has structure, ideas that start simple and then are extended and combined. Here, we're focusing on two of them.
First, functions transform inputs into outputs. You can group similar functions by how many inputs they take and how many outputs they produce. For our purposes, each input or output is just a number. We'll call the inputs x and the outputs y, adding subscript numbers once we get more than one. Each input or output is called a dimension.
Second, we're looking at functions that use addition and multiplication only to turn inputs into outputs. These are known as linear equations because when graphed, they're just lines. Curves like polynomials, exponentials, and sine waves will wait for another day. We'll see that geometrically, addition is translation and multiplication is scaling.
As we explore these ideas, we'll come across many ways of seeing, writing, and interacting. The diagrams that follow are highly dynamic; mouseover and grab everything.
A linear function of a single input, called x, scales that input by multiplying it by a scale factor, called m. Then it translates the result by adding another value, called b. The output we call y.
You can adjust these values in the equations, in the text below, or on the graph itself directly. Hover over a red circle to see the exact value.
This is the classic linear function from middle school, and it has one input and one output. You learned that m is the slope, but you may not have made the connection to scaling the identity function y = x by some factor. Also, when x is 0 the scaling contributes nothing, so y = b: that's why b is the y-intercept.
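If you'd rather poke at those two steps in code than on the graph, here's a minimal sketch (Python, using the same m, b, x, and y names; it's ours, not part of the diagram):

```python
# y = m*x + b, read as: scale x by m, then translate by b.
def linear(x, m=2.0, b=1.0):
    scaled = m * x           # multiplication = scaling
    translated = scaled + b  # addition = translation
    return translated

print(linear(0.0))  # 1.0 -- at x = 0 the scaling contributes nothing, so y = b, the y-intercept
print(linear(3.0))  # 7.0 -- the slope m says y rises 2 for every 1 step in x
```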
It's natural to ask, what happens if we put two of these functions together, side-by-side? The simplest approach is to have two outputs – y1 and y2 – depend on a single input, still called x. We can double down on linear equations, treating this new function as two copies of what we just saw. Or, we can think of these outputs as forming a vector, which basically pairs up all the operations.
Similarly, we could just plot two linear functions on the same graph, but instead, we're going to lay out three number lines side-by-side-by-side, and show the transformations from one to the next. Once again, change the numbers by dragging them:
The vector equation uses the two fundamental operations of linear algebra: scalar multiplication and vector addition. Scalar multiplication (a scalar is just a single number, instead of a vector) multiplies each component of the vector by the scalar. Vector addition adds two vectors (of equal length) by component: the first component of the new vector is the sum of the first components of the two vectors being added, and so on down the line.
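Here's a tiny sketch of those two operations in code (plain Python lists stand in for vectors; the m1/m2 and b1/b2 names are our own labels, not necessarily the diagram's):

```python
def scalar_multiply(s, v):
    # multiply each component of the vector v by the scalar s
    return [s * component for component in v]

def vector_add(u, v):
    # add two vectors of equal length, component by component
    return [a + b for a, b in zip(u, v)]

m = [2.0, -1.0]   # the two slopes, m1 and m2
b = [1.0,  3.0]   # the two intercepts, b1 and b2
x = 2.0           # a single input

y = vector_add(scalar_multiply(x, m), b)
print(y)  # [5.0, 1.0] -- that is, y1 = m1*x + b1 and y2 = m2*x + b2
```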
This diagram only shows certain values that have been transformed. Unlike a traditional Cartesian plot, we're confined to samples (i.e. examples) of the inputs and outputs. Although sampling isn't considered part of linear algebra, depictions of higher dimensions benefit from using it, opting to show samples of many dimensions rather than a continuous surface in fewer.
Which brings us to how the x number line is distorted to become y1 and y2. Our samples of x are equally spaced, and the samples of the output are as well. Although scaling changes the spacing, it affects all samples equally. Linear transformations stretch the number line in consistent ways, as we'll continue to see.
In most cases every point in x goes to a different point in y1. If you could sample y1 as you please, you could get back to any point in x. The one exception is when the m for y1 is 0; in that case, the function is not invertible. Although we won't go into it further, there's a deep connection between these topics.
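If you want to see that invertibility claim concretely, here's a sketch of running one of these functions forward and backward (our own illustration, using the usual y = mx + b names):

```python
def forward(x, m, b):
    return m * x + b

def inverse(y, m, b):
    # undo the translation, then undo the scaling -- only possible when m != 0
    if m == 0:
        raise ValueError("m = 0 collapses every x to the same y; nothing to invert")
    return (y - b) / m

x = 4.0
y = forward(x, m=3.0, b=-2.0)
print(inverse(y, m=3.0, b=-2.0))  # 4.0 -- we recovered the original x
```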
That wraps up functions with one input and two outputs, which are known as parametric equations.
Next, we flip things around: two inputs and a single output. This will require a change of perspective. We're used to plotting the x-axis horizontally and the y-axis vertically. Now that we want to consider two inputs, we have to use the vertical axis for x2. The output we encode as the area of circles, either positive or negative. This type of diagram is known as a scalar field.
Bear in mind that this is a sample; the true scalar field is a continuous surface "above" and "below" the plane of the screen, a tilted flat plane. If you ignore the color of the circles, you're effectively looking at the absolute value, two half-planes meeting at a V-shaped rift. You might see this rift as a "wave" of whiteness moving across the field as you change the additive constant. It's tempting to see the wave moving perpendicular to itself, but this is an illusion – it's always moving vertically.
The vector equation on the right uses the dot product of two vectors to concisely represent the repeated multiplication and addition operations on the left. The dot product multiplies each pair of components together, and then adds them all up, taking two vectors to a single scalar.
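A small sketch of that dot-product bookkeeping (illustrative names only: m holds the two multipliers, b the additive constant):

```python
def dot(u, v):
    # multiply each pair of components together, then add them all up
    return sum(a * b for a, b in zip(u, v))

m = [1.5, -0.5]   # one multiplier per input dimension
b = 2.0           # the additive constant
x = [4.0, 2.0]    # a sample point (x1, x2) in the input plane

y = dot(m, x) + b
print(y)  # 7.0 -- a single scalar output, drawn as a circle's area in the field
```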
It's time to put things together and have two inputs and two outputs. We will finally abandon systems of equations and instead use a matrix to compactly store all the parameters. A matrix is really just a description of a function; terse, but useful once you know how to think of it. The scale factors go down the main diagonal, top left to bottom right. The translations are in the right column. The bottom row is always zero-zero-one.
It's the remaining numbers where things get interesting: they control the "crosstalk" between the first and second function. The first input can influence the second output, and vice versa, a phenomenon known as shear. Despite carefully examining all the ingredients in isolation, we couldn't have predicted this would happen when we put them together.
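Here's roughly how those parameters could sit in a three-by-three matrix, following the layout just described (NumPy for convenience; the specific numbers are made up):

```python
import numpy as np

scale_1, scale_2 = 2.0, 0.5          # main diagonal: each output scaled by its own input
shear_12, shear_21 = 0.3, -1.0       # off-diagonal: crosstalk between the two functions
translate_1, translate_2 = 4.0, 1.0  # right column: the additive constants

M = np.array([
    [scale_1,  shear_12, translate_1],
    [shear_21, scale_2,  translate_2],
    [0.0,      0.0,      1.0],       # bottom row is always zero-zero-one
])
print(M)
```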
Graphically, a matrix's function is a vector field. Instead of having one number at every point in the plane, we now have two. Like a scalar field, a vector field starts with samples of the 2D input space. For each of those samples, we draw a vector whose horizontal component is proportional to y1 and whose vertical component is proportional to y2. By "proportional", we mean that the vectors are shrunken down so they don't overlap when repeated. (You can isolate one output by hovering over the output vector.) If you think of the vectors as showing the velocity of a particle at each point, you can see patterns of swirls, attraction, and repulsion. Each arrow is a weather vane, and the matrix controls the wind.
You may have been taught to write the input vector to the right of the matrix, but it's shown above it here. This intentional break from pencil-and-paper notation is meant to emphasize how matrices work. To compute the output vector (i.e. to apply the function), multiply each column of the matrix by the input above it, and then add up the columns (think of squishing them together horizontally).
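In code, that column-by-column reading might look like this (a sketch with a made-up matrix, checked against the ordinary matrix-vector product):

```python
import numpy as np

M = np.array([[2.0,  0.3, 4.0],
              [-1.0, 0.5, 1.0],
              [0.0,  0.0, 1.0]])

x = np.array([3.0, 2.0, 1.0])  # input (x1, x2), padded with a trailing 1

# Multiply each column of M by the input entry above it, then add the columns together.
columns = [x[i] * M[:, i] for i in range(3)]
y = columns[0] + columns[1] + columns[2]

print(y)      # roughly [10.6, -1.0, 1.0]
print(M @ x)  # the same result, via the usual matrix-vector product
```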
In pure mathematics, you'll typically only see the top-left two-by-two of our three-by-three matrix. The right column allows us to add constants (one in each output dimension) in addition (literally) to what we get by multiplying the inputs. However, in computer graphics we want to distinguish between points, which can be scaled and translated, and vectors, which can only be scaled. Points are padded with a 1, causing one copy of the last column, the translation, to be added to the output. Vectors are padded with a 0, which zeroes out the translation column's contribution. This means we can use the same matrix on points and vectors, which is particularly handy if the matrix took a long time to compute.
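A sketch of that point-versus-vector padding, using the same kind of made-up matrix:

```python
import numpy as np

M = np.array([[2.0,  0.3, 4.0],
              [-1.0, 0.5, 1.0],
              [0.0,  0.0, 1.0]])

point  = np.array([3.0, 2.0, 1.0])  # padded with 1: scaled, sheared, AND translated
vector = np.array([3.0, 2.0, 0.0])  # padded with 0: the translation column drops out

print(M @ point)   # roughly [10.6, -1.0, 1.0]
print(M @ vector)  # roughly [ 6.6, -2.0, 0.0]
```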
If you focus on the lengths of the vectors, you'll see that they get longer as they move away from the origin. This makes sense, since you're multiplying by larger numbers. But this appearance of growth is deceptive. Pick any of the straight lines of gray points, and then focus on the arrowheads (not the shafts) that come from it. You'll find that they still form a straight line, just at an odd angle. Do this with two parallel gray-dot-lines and the blue-arrowhead-lines are still parallel. Do it with three and they will be evenly spaced. It works for vertical and horizontal lines (indeed, lines at any starting angle, if we were to draw them). If you look at four arrowheads that started as a square, they still form a parallelogram, and this parallelogram is tiled repeatedly across every former square. The matrix, then, converts to another coordinate system (or grid): scaled, translated, and stretched, but consistent. It's like a superpower: the ability to bend and distort space, but in a useful, orderly way.
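You can spot-check the straight-lines-stay-straight (and evenly spaced) claim numerically; here's a sketch with three collinear samples:

```python
import numpy as np

M = np.array([[2.0,  0.3, 4.0],
              [-1.0, 0.5, 1.0],
              [0.0,  0.0, 1.0]])

# Three collinear, evenly spaced input points (a horizontal line of samples), padded with 1.
p0, p1, p2 = np.array([0., 1., 1.]), np.array([1., 1., 1.]), np.array([2., 1., 1.])
q0, q1, q2 = M @ p0, M @ p1, M @ p2

# Still collinear and evenly spaced: the step from q0 to q1 equals the step from q1 to q2.
print(q1 - q0)  # [ 2. -1.  0.]
print(q2 - q1)  # [ 2. -1.  0.]
```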
It's worth noting that a vector field is not the only way to think about this function; two overlapping scalar fields would also work. Nicky Case has a demo where the two dimensions of input double up to also show the two dimensions of output. In other words, the vectors aren't shrunken down like they are here. You can see where a point starts and where it ends relative to the same pair of axes, and a vector showing the change in position. The bad news is that you can't see as much of the overall structure. All of these are valid tradeoffs, given the inherently tricky problem of trying to depict four-dimensional functions using only a 2D plane.
However, once you've gotten your head around that, it's relatively simple to understand six dimensions depicted in a 3D volume (which is then projected into the two-dimensional space of your screen).
A 3D vector field might look like a forest of blue arrows, but it's easier to read once you adjust the matrix and watch how the arrows change – predictably, in lines and at right angles. There is symmetry in the motion, each left counterbalanced by a right wherever a sign is flipped. Down the main diagonal, two half-volumes attract or repel; shearing looks like tectonic plates sliding past each other.
By modifying the matrix, you're getting a glimpse into the space of all possible matrices, a twelve-dimensional space. Do we need fifteen inputs (twelve matrix entries plus the three coordinates of x) to specify three outputs? Not for a single function, no, but breaking things apart allows us to see adjacent functions in many dimensions. If something isn't right, we have a lot of knobs to fix it.
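Counting those knobs for the 3D case (a sketch; we're assuming the four-by-four layout mirrors the three-by-three one above, with a fixed bottom row):

```python
import numpy as np

# A 3D affine matrix: a 3x3 block of scales and shears, a translation column,
# and a fixed bottom row of zero-zero-zero-one.
M = np.array([[1.0, 0.2, 0.0,  5.0],
              [0.0, 1.5, 0.3,  0.0],
              [0.4, 0.0, 0.8, -2.0],
              [0.0, 0.0, 0.0,  1.0]])

free_parameters = M[:3, :].size  # only the top three rows are adjustable
print(free_parameters)           # 12 -- the twelve-dimensional space of such matrices
```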
Perhaps more importantly, you should have some idea of what each of those knobs does, thanks to the presentation. Color, notation, and interactivity have been chosen very carefully to give a consistent design. Just as importantly, each stage built incrementally on y = mx + b. There, we saw that looking at x = 0 shows us the addition alone, and that's still true in the 3D vector field, for the very simple reason that scaling zero by anything is always zero.
We've added more numbers, and seen the surprising effect of shear between them, but the fundamental concepts have stayed the same. Addition and multiplication, translation and scaling, served as guideposts as we hoisted ourselves ever higher.