Nietzsche, The Birth of Tragedy

‘The periphery of the circle of science has an infinite number of points, and while there is no telling how this circle can ever be completely measured, the noble person, before the middle of his life, inevitably comes into contact with those extreme points of the periphery where he stares at the inexplicable. When he here sees to his dismay how logic coils round itself at these limits and finally bites its own tail…’

Chaos theory, like relativity or quantum physics, enjoys a reputation that precedes it. There are countless pop-culture references to it (the film ‘The Butterfly Effect’, starring Ashton Kutcher, being a particularly notable one) and it sounds cool and mysterious. It’s also quite easy to understand conceptually, the classic example being that if you went back in a time machine, stepped on an ancient bug and came back to the present day, the world would look very different.

But if you think for a minute about the butterfly effect or the squished primordial bug, it seems to contradict any predictive abilities we have in maths. If something so small and insignificant changes things so unpredictably, how can we predict anything? Are these examples overblown? If I stepped on a bug when I walked out of the time machine, surely things would look mostly the same when I came back, with maybe a tiny difference?

No! These examples are actually accurate and the daunting truth is that most situations are too unpredictable for maths to handle. That dead bug really would change the course of history. The butterfly really could cause a tornado.

We think we’re good at seeing into the future because we can predict how man-made systems (like machines or experiments), which are designed to be predictable, will behave. But that is a testament to good design, not predictive power. It’s like calling yourself a talented artist because you’re good at filling in colouring books. When we hold up our predictive tools, honed over centuries, to the real world, we fall laughably short. For instance, one of the most studied systems in human history is the solar system. We know exactly how the eight planets and the Sun interact with one another. So we can model and predict where these planets will be for the rest of time (barring an outside event), right? Nope. We have a good idea of what’s going to happen for the next 5 million years, but beyond that we can’t tell. Compared to the age of the universe, that horizon is not impressive. We’re at the mercy of chaotic systems, and they’re all around us.

I’m writing this blog because if you scratch below the surface of chaos you can gain a much deeper appreciation of it, and that appreciation can make you view the world differently. With high-school maths you can see how maths breaks down and frays at the edges. It is humbling, as it shows us that we are not enlightened masters of our environment. Instead we drift in the dark sea of chaos, illuminated by our flickering torch of maths, trying to figure out where the next wave will come from.

This post is written for someone who wants to approach chaos from the ground up, without getting too bogged down in maths. While there is maths in this post, I’ve made it as simple as possible and much of it you can skim over while still getting the point (though I do recommend trying to follow along).

There is quite a bit to get through, so I’ve split this post into two parts. This first part is a rundown of dynamical systems and linearity, and introduces the phase plane. It’s an overview of the types of system we can predict using maths. The mass-spring system is the groundwork we need to cover before we journey to the limits, and hopefully it makes you see this school physics problem in a new light.

To get the most out of this post, you should know about differentiation, especially over time (velocity and acceleration), e, sin and cos.

**Dynamics and solving linear systems**

Chaos theory is an area of **dynamics**, which is concerned with things that move, for instance a ball on a hilly surface. In dynamics we analyse **systems**, which is the mathematical **model** of everything going on with the things that move in the real world.

Because dynamics studies movement, we deal with derivatives, mostly velocity and acceleration. A lot of dynamics is solving equations that relate position, velocity and acceleration to one another (how do the acceleration and velocity of the ball change at different positions on the hill?). The position, velocity and acceleration of an object in a model is its **state**. The state of a system changes over time because things move.

This is all a bit abstract so let’s look at the classic school physics problem, the mass and spring:

The spring with the weight on the end is the **system** we’re analysing. The arrow with x=0 shows the spring is at its resting point. If you dust off the cobwebs on balancing forces you may remember the equation for this system is:
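Balancing the spring’s restoring force −kx against Newton’s second law (force = mass × acceleration) gives:

```latex
m x'' = -k x \quad\Rightarrow\quad x'' = -\frac{k}{m}\,x
```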

This is the mathematical **model** of the system, which is pretty simple. It is an equation that relates the position of the weight (x) to its acceleration (x’’); it relates position to a derivative of itself. Equations like this are called **differential equations** and they are the bread and butter of dynamics. The goal is to solve these equations for a system so that we can say ‘where will the weight be at some point in the future?’. In other words, an equation that takes in time and spits out position. Once we have that we have solved the system, which is the ultimate goal.

Luckily for us differential equations are pretty easy to solve, as long as they’re linear. What does linear mean? It means that if we put the differential equation on a graph it’d be a straight line. Look at the spring equation again:
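In the form acceleration = constant × position:

```latex
x'' = -\frac{k}{m}\,x
```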

The spring constant k and the mass m don’t change, so -k/m is a constant number. A graph of x’’ against x looks like this:

See how it’s a straight line? That means it’s a linear equation. If the graph isn’t a straight line it’s non linear.

Linear differential equations are easy to solve because of e. The number e has the great property that it differentiates to itself, multiplied by the constant in its exponent:
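Concretely, differentiating e^{at} with respect to t:

```latex
\frac{d}{dt}\,e^{at} = a\,e^{at}, \qquad \frac{d^2}{dt^2}\,e^{at} = a^2\,e^{at}
```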

This property makes e the key to solving linear differential equations. For instance, in the spring equation, if we assume that x takes the form e^{at} we can solve it quickly (remember acceleration is position differentiated twice):
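Substituting x = e^{at} into the spring equation and cancelling the common factor of e^{at}:

```latex
a^2 e^{at} = -\frac{k}{m}\,e^{at} \quad\Rightarrow\quad a^2 = -\frac{k}{m} \quad\Rightarrow\quad a = \pm\, i\sqrt{\frac{k}{m}}
```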

Subbing for a:
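Taking the positive root:

```latex
x = e^{\,i\sqrt{k/m}\;t}
```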

Easy! This is now solved and we have an equation where we can put in a time (t) and it gives us back the position of the mass (x). We call this an **analytical solution** to a system and once we have this we can predict what will happen to the system in the future for the rest of time.

People with more advanced maths will recognise that the analytical solution for x in the spring can be rewritten using Euler’s formula:
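Euler’s formula, with θ = √(k/m)·t:

```latex
e^{i\theta} = \cos\theta + i\sin\theta \quad\Rightarrow\quad x = \cos\!\left(\sqrt{\frac{k}{m}}\,t\right) + i\,\sin\!\left(\sqrt{\frac{k}{m}}\,t\right)
```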

We can ignore the imaginary sin part here (for our real-world starting conditions only the real part survives), leaving the less intimidating solution:
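Keeping only the real cos part (an initial displacement x₀ simply scales it to x₀ cos(√(k/m) t)):

```latex
x = \cos\!\left(\sqrt{\frac{k}{m}}\,t\right)
```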

Which is easy to digest. The gif below shows how position changes over time:

A perfect cosine wave. Ignore the weight on the right for now, as we haven’t taken damping or friction into account. The weight will oscillate back and forth, reaching an equal distance away from its resting point every oscillation. This is what we set out to find in the beginning: an equation that takes in time and gives us back position.
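We can sanity-check the analytical solution against a direct numerical integration of the spring equation. A minimal sketch, with illustrative parameter values of my own choosing:

```python
import math

# Illustrative parameter values (my own choices, not from the post)
k, m = 4.0, 1.0            # spring constant and mass
omega = math.sqrt(k / m)   # sqrt(k/m), the oscillation frequency
x0 = 1.0                   # initial displacement, released from rest

def analytical(t):
    """The analytical solution: x(t) = x0 * cos(sqrt(k/m) * t)."""
    return x0 * math.cos(omega * t)

def numerical(t_end, dt=1e-4):
    """Step x'' = -(k/m) x forward in small time increments."""
    x, v = x0, 0.0
    for _ in range(int(t_end / dt)):
        v += -(k / m) * x * dt  # update velocity from acceleration
        x += v * dt             # update position from velocity
    return x

# The two approaches should agree closely at any time t
print(analytical(2.5), numerical(2.5))
```

The analytical solution answers ‘where is the mass at time t?’ in one step, while the numerical loop has to grind through every intermediate moment to get there.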

But why did x need to be in the form e^{at}? It didn’t. If you could find a version of x that satisfied the equation:
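That is, the spring constraint itself:

```latex
x'' = -\frac{k}{m}\,x
```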

It would be a valid solution to the system. There’s no general, methodical way of solving differential equations. It boils down to taking a guess at what x is and then seeing if it satisfies all the system’s equations. This is why the system’s equations are sometimes called constraints: before equations are introduced x could be anything, but the moment you start adding differential equations, they constrain what x could be. For linear systems it just so happens we’ve done the guesswork already, so we know the form that x will take ahead of time.

The fact that solving differential equations comes down to guesswork should start ringing alarm bells. Maths works best when there’s methodology behind solutions; it’s not so great at just guessing them. It’s also quite disappointing. How are the best minds in maths unable to solve such simple equations without guesswork?

**Phase planes**

Before we leave the mass-spring system I want to introduce a useful visualisation that’s used in dynamics called the phase plane. It sounds fancy, but it’s just a graph of how different states of the system move over time.

Often a phase plane shows position (x) against velocity (x’). What we plot on a phase plane is the trajectory of these two states over time, starting from an initial position. You need to follow the direction of the lines to see how the states evolve over time. We know already that:
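From the analytical solution:

```latex
x = \cos\!\left(\sqrt{\frac{k}{m}}\,t\right)
```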

The speed of the mass is just x differentiated with respect to time, which is:
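Differentiating the cosine (the chain rule brings down a factor of √(k/m)):

```latex
x' = -\sqrt{\frac{k}{m}}\,\sin\!\left(\sqrt{\frac{k}{m}}\,t\right)
```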

So as time increases the x axis is cos and the y axis is -sin, so the result is an oval (or circle, depending on how you scale the axes) that goes clockwise with time, like below:

There are a couple of points to mention here:

- The graph above shows 2 different trajectories for the mass spring system. The mass on the outer circle starts further away from the resting point than the mass on the inner circle. It has more energy.
- These are perfect circles because we’re modelling perpetual motion. There is no friction in our model so if our weight starts a certain distance from the resting point it’ll keep returning there because it doesn’t lose any energy. This leads to the closed loops in the diagram above.
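A quick way to see why the loops close is to check that the total energy (kinetic plus spring potential) is the same at every point on a trajectory. A minimal sketch, with illustrative parameter values of my own choosing:

```python
import math

# Illustrative parameter values (my own choices, not from the post)
k, m = 4.0, 1.0
omega = math.sqrt(k / m)
x0 = 1.0  # initial displacement sets how big the loop is

def state(t):
    """A point on the phase-plane trajectory: (position, velocity)."""
    return x0 * math.cos(omega * t), -x0 * omega * math.sin(omega * t)

def energy(x, v):
    """Total energy: kinetic (1/2 m v^2) plus spring potential (1/2 k x^2)."""
    return 0.5 * m * v**2 + 0.5 * k * x**2

# Sample the trajectory: with no friction the energy never changes,
# which is exactly why the loop closes on itself
energies = [energy(*state(0.1 * i)) for i in range(100)]
print(max(energies) - min(energies))  # ~0
```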

The diagram below shows where the mass spring system is at different points on the circles. Notice where the mass is in relation to the resting point.

For completeness’ sake, here’s how time relates to x and x’ at certain points on the trajectory:
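Writing ω = √(k/m) for the frequency, the quarter-period snapshots are:

```latex
\begin{array}{c|c|c}
t & x = \cos(\omega t) & x' = -\omega\sin(\omega t) \\ \hline
0 & 1 & 0 \\
\pi/(2\omega) & 0 & -\omega \\
\pi/\omega & -1 & 0 \\
3\pi/(2\omega) & 0 & \omega \\
2\pi/\omega & 1 & 0
\end{array}
```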

Any other trajectories would look similar to the ones above: a circle/oval with a radius that’s dependent on how much energy the system starts with (the further out you pull the mass, the more energy you’re giving the system, and the bigger the circle on the phase plane).

While phase planes are not used that much for linear systems, they become an indispensable tool when we start to look at non linear systems, as they help to visualise how the system evolves over time in different scenarios.

So far we’ve been studying a perpetual motion machine. This is because when we modelled the system with maths we didn’t take into account friction or anything else that could take energy out of the system. Obviously in the real world this doesn’t exist. Let’s look at the gif above again, but this time focus on the right-hand side:

In the real world we release the mass, it oscillates around its resting point, but gradually loses energy so that after some time it returns to its resting position. I won’t go through the maths here, but even without the maths you should intuitively be able to sketch out what the phase plane would look like for such a trajectory:

Now we see what real-world phase planes look like. Notice how this trajectory isn’t a closed loop like before. When the mass is released from its initial position it loses energy, so it won’t quite return to its original position after one oscillation. Instead we gradually spiral in to the resting point, where position, velocity and acceleration are 0. In dynamics we call this resting point an **equilibrium**. The equilibrium in a mass spring system is called a **stable equilibrium** because all trajectories eventually end up there. Wherever you start from, the mass will eventually end up at its resting point. The term ‘stable equilibrium’ implies the existence of an ‘unstable equilibrium’, but I’ll go over that in the next post.
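Although the maths of the damped case is skipped here, a simple friction term proportional to velocity (−(c/m)x’, with c a damping constant I’ve assumed for illustration) is enough to see the spiral numerically:

```python
# Damped spring: x'' = -(k/m) x - (c/m) x'
# The friction term (c/m) x' is an assumed simple damping model;
# parameter values are my own illustrative choices.
k, m, c = 4.0, 1.0, 0.5
x, v = 1.0, 0.0   # released from rest, away from the resting point
dt = 1e-3

for _ in range(int(30 / dt)):         # simulate 30 seconds
    a = -(k / m) * x - (c / m) * v    # acceleration now drains energy
    v += a * dt
    x += v * dt

# The trajectory spirals in: both position and velocity shrink
# towards the stable equilibrium at (0, 0)
print(x, v)
```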

**Conclusion**

This post has shown how we can use dynamics to find the solution to linear systems. When a system is linear, like the mass spring, we can apply lots of analytical tools to dissect the problem and deeply understand what’s going on. When I said in the introduction that we are good at predicting things designed to be predicted this is what I was referring to: linear systems.

In engineering we strive to make dynamic systems linear because they’re easy to analyse. We design systems with linearity at the forefront of our minds. If part of the system isn’t linear, we **linearize** it so that we can analyse it. Linearizing means finding a part of a non linear function that looks straight, and then pretending that it is linear. We do this because, as you will read in the next blog post, our analytical capabilities start breaking down with non linear systems. If we run into non linear differential equations we desperately try to make them linear in whatever way we can, because that’s what we know.
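As a small numerical illustration (my own example, not one from the post): near x = 0 the non linear function sin(x) is well approximated by the straight line x, and the approximation error collapses as you zoom in:

```python
import math

# Linearizing: near a chosen point, replace the curved function with
# its tangent line and pretend it's straight. Here: sin(x) ~ x near 0.
def linearization_error(x):
    """How far the straight-line stand-in x is from the true sin(x)."""
    return abs(math.sin(x) - x)

for x in [0.5, 0.1, 0.01]:
    print(x, linearization_error(x))  # shrinks fast as we zoom in
```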

Now that we’ve covered the basics of how maths can help us dissect a linear man made system, we can begin looking at how maths stutters when confronted with non linear real world systems. This will be the subject of the next post, along with how chaos theory plays into all of this and what chaos looks like mathematically.