- riccardo

# Mechanical Occam's chaos: Lorenz' butterfly

Updated: Mar 20

Many may be familiar with the concept of the “butterfly effect”: small things potentially determining major events without a clear causal relationship. Some may also know Chaos Theory, the bigger theory behind the expression of the butterfly effect. I first experienced the problem while working in a robotics lab; years later I received an intuitive quantitative clarification through a financial application, and I recently went over those arguments again while analyzing a statistical model. I found the resulting connections among domains particularly interesting, and they all converge in the final simulation of this post, showing the butterfly effect in action. To support that, we will go over a collapsed bridge, some probability-based design heuristics from the 1300s, and the geometry of nature, which is not our usual Euclidean one (probably a sub-set of it); they all contribute to an infinite number of mathematical solutions within a finite space ... phase-space.

__Note: complex problems and arguments should probably not be reduced to simple discussions; even when we appreciate short summaries, we should respect the underlying complexity and clearly communicate that.__ Therefore, I must stress that this post condenses my activities and processes while documenting them, but it does not want to reduce the underlying math and concepts. If some mathematical steps or concepts are not clear, please do not worry too much, just push through the reading of the 3 sections. By the end, everything should come together through logical connections, the main objective, and we will be able to appreciate the fascinating conclusion.

**1/3) Rise of the problem - Years ago, in a mechatronics lab (somewhere in Rome)**

I was working in a lab on a mechatronic system composed of a mechanical device coupled with a controller. Simplifying, we can picture an electric motor attached to a computer intended specifically to control the device (Figure 1).

*Figure 1: Simplified mechatronics system: an electromechanical part (e.g. electric motor), handled by a controller (the reader can assume the controller is handling both the low-power circuit, which is the logical electronics, and the high-power circuit, which is the actual power source for the electric motor).*

While designing such a system, an engineer must build a quantitative model describing the functioning of the device. That mathematical characterization is part of the overall design; it is a separate activity from the structural calculation - while still related - and it usually aims at controlling the dynamics of the system in any configuration and at any moment. While intended to control the system, the math is not only for electronic control: it can also be a way to know the natural dynamic response of the system under operating conditions; the reader can think of investigating the response of a plane to vibrations - a famous and fascinating example of a response not properly characterized is this video of the __collapse of the Tacoma bridge__.

In general, the tough part of the characterization involves finding ways around the __non-linearities__, always present and tough to handle. We are not too good with non-linearities, and we often get by through simplified mathematical models involving some linearization. Non-linearities, something like an x^2 term within the math, are tough because they alter the relationship between what goes in and what goes out of the system. Some domains allow us to neglect non-linearities, and the intuition is the following: say we have a small quantity *x*; its non-linear evolution *x^2* is even smaller, therefore probably negligible. When we are forced to include non-linearities in our model, we do solve some of those problems - like first-order non-linear equations - but in general, and especially when put together in systems of non-linear equations, they are tough to solve through closed-form solutions (i.e. exact analytical solutions). In those cases we rely on [approximate] numerical solutions - see Euler's method, Runge-Kutta, etc. The reader can get an idea of the complexity of non-linear dynamics through the famous __Three Body Problem__, which some may relate to interplanetary motion. The critical point, however, is the following: as we will see in the last section, even when we overcome the problem of modeling and even solving non-linearities, we still have to face the tough task of handling them.
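As a quick illustration of what those [approximate] numerical solvers do, here is a minimal sketch of Euler's method on the toy first-order equation x' = -x (a problem of my choosing, not the lab system), compared against its exact solution e^(-t):

```python
import numpy as np

def euler(f, x0, t):
    """Explicit Euler integration of x' = f(x, t) over the time grid t."""
    x = np.empty_like(t)
    x[0] = x0
    for k in range(len(t) - 1):
        # step forward along the local slope f(x, t)
        x[k + 1] = x[k] + (t[k + 1] - t[k]) * f(x[k], t[k])
    return x

t = np.linspace(0.0, 5.0, 5001)        # fine time grid (step = 0.001)
x = euler(lambda x, t: -x, 1.0, t)     # integrate x' = -x with x(0) = 1
exact = np.exp(-t)                     # closed-form solution for comparison
print(abs(x - exact).max())            # global error shrinks with the step size
```

Runge-Kutta schemes follow the same idea with better slope estimates per step, which is why libraries default to them.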

Going back to my project, after designing and building a prototype of the machine, I approached the characterization through a simple initial mathematical model - generally a good starting point according to Occam’s razor (Ockham). My initial equation is represented by __equation 1__ - __figure 2__ shows a possible system identified by that equation.

*Equation 1: non-homogenous linear equation*

*("Sign" is just the sign or direction of x', it is not the function sin, which would make the equation non-linear)*
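Since the equation appears only as an image, here is my reconstruction of equation 1 from the parameters discussed below; it should be read as a plausible form, not the original rendering: mass m times acceleration, a viscous term in c, and a dry-friction term of magnitude Fa acting against the direction of motion, balanced by the applied force F.

```latex
m\,\ddot{x} + c\,\dot{x} + F_a\,\mathrm{sign}(\dot{x}) = F
```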

*Figure 2: possible system described by equation 1 (Fa term not shown)*

The cart-type representation of __figure 2__ should not be misleading: the model can represent multiple forms of structural elasticity and dynamics. Similar models and equations are usually deployed to model bridges, planes, suspensions of vehicles, etc. The equation above is a general representation of systems with one degree of freedom, which in our case is the movement *x* of the motor’s shaft (linear or rotary depending on the motor), described in terms of its derivatives *x’* and *x”* (speed and acceleration). I’ll not bother the reader with the full characterization; I’ll just say that to complete the model, we can find the parameters *c* and *Fa* through a test at constant velocity on a prototype of the machine. The term *mx”* would vanish, the acceleration being zero in those conditions, and *F* would be known from the input current applied to the motor during the test. The resulting equation would be the equation of a line, with *c* as the slope and *Fa* as the intercept on the y-axis when plotting the results of the tests. So, __equation 1__ becomes __equation 2__ after running some tests with an applied force of 14 newtons moving a mass of 6.27 kg - different values can be used.

*Equation 2:*

*14 is in newtons (equal to 1.14 amperes from the controller), Fa is also in newtons, and 52.97 (c) is in N·s/m (which, multiplied by a speed x’, gives newtons again). The mass 6.27 is in kg (which, multiplied by an acceleration x’’, gives a force in newtons).*
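The constant-velocity identification described above is just a line fit on (speed, force) test points. A minimal sketch with made-up data follows; the test speeds and the friction value Fa = 2 N are hypothetical, only the slope c = 52.97 N·s/m comes from the text:

```python
import numpy as np

# hypothetical constant-velocity test points obeying F = c*v + Fa
c_true, Fa_true = 52.97, 2.0                   # N*s/m and N; Fa value is illustrative
v = np.array([0.05, 0.10, 0.15, 0.20, 0.25])   # made-up test speeds in m/s
F = c_true * v + Fa_true                       # forces inferred from the input current

# fitting a line: the slope estimates c, the intercept estimates Fa
c_fit, Fa_fit = np.polyfit(v, F, 1)
print(c_fit, Fa_fit)
```

With real, noisy measurements the same fit would return least-squares estimates of c and Fa rather than exact values.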

Here comes the fun. After obtaining a first mathematical model, we can now use different inputs and compare the results from __equation 2__ with results from a broad range of tests run on the prototype. If the results match, we will know the mathematical model is correct, and it could then be used to predict and control the system in any possible scenario. Skipping the complete characterization, which also involved a frequency analysis and different kinds of applied forces, __figure 3__ shows some results I obtained from the comparison of the analytical solution with real tests on the prototype.

*Figure 3*

In short, we can say that it is possible to see deviations of real data (red dots) from the predictions of the mathematical model (solid line). I had found non-linearities; my linear model was not enough to fully characterize the system. Considerations could follow on whether that was critical to the specific project, or whether non-linearities could be neglected in that case considering their magnitude within the range of functioning of the machine. Still, I was introduced to the limits we get to face when we “linearize” problems.

**2/3) Clarification of the problem - Fast forward several years, in a financial office (somewhere in New York City)**

I was working on a financial project, conducting a market analysis, and again I had a mathematical hesitation: years after being thrown a little off by the non-linearities of that robotic system, I was again in trouble with the characteristics of the system I was dealing with. Specifically, I got to know better the concept of fractals after being introduced to one of the main authors on the subject, Benoit Mandelbrot. To gain an intuitive idea of fractals, the reader can go online to Yahoo Finance, choose a ticker s/he likes (e.g. AAPL, TSLA, etc.), and focus on the chart of the price over time. The reader should then pass from the daily chart to the yearly one, trying to understand whether it would be possible to differentiate one chart from the other by just looking at the shape of the chart (without being told what period was represented). The answer would probably be “not that easily”, especially if presented with multiple random daily and yearly charts. __Figure 4 a/b__ shows what IBM would look like - if some readers think they can spot actual differences, they should try to trade on them.

*Figure 4a: IBM 1 day chart*

*Figure 4b: IBM 1 year chart*

Prices of stocks give an idea of fractals: something which scales, which stays similar at different scales - to be more rigorous, we should mention and distinguish “self-similar” fractals, probably ideal, from the other fractals, the ones we have in nature. We have plenty of fractals around us: clouds, fluid-dynamic turbulence, stock prices, etc. That is not only a visual effect; there are also important mathematical implications that can be summarized to some extent by the “fractal dimension”, which we will now discuss through the coastline of Britain.

In the book “The Fractal Geometry of Nature” and in one of his papers, among the several applications discussed, Mandelbrot deals with the following question: how long is the coast of Britain? It turns out we cannot easily calculate it. Say we use a ruler with 1 meter as the base unit: we obtain a measure down to the meter, cutting over details along the coast which are smaller than that. Say we refine the measure down to the centimeter; the measure would increase because we would capture more details and curvatures, but we would still miss some details along the coastline. Note, this does not happen in Euclidean geometry, where a scaled ruler would give an exact magnification of the count while keeping the final measure constant - clarified immediately below. We could repeat the process to the limit without arriving at a final measure, obtaining at each step an ever-increasing length (Figure 5).

*Figure 5: calculating the length of the coast of Britain*

*Source: https://computationallegalstudies.com/2010/10/18/how-long-is-the-coastline-of-the-law-additional-thoughts-on-the-fractal-nature-of-legal-systems/*

The material I went through even mentions some atomic limits of the argument above, and while I think it may be interesting to ask some scientists whether some uncertainty principles would validate some aspects of that at small scales too, we cannot and should not speculate further on that here. I will stick with the documentation of my experience and mental process.

In __figure 5__, the reader may notice some proportionality between the lengths of the rulers and the resulting lengths; in fact, that is related to the *fractal dimension*, which is intuitively a measure of the roughness or the folding of the contour. Let us clarify the concept through an exercise trying to figure out the fractal dimension of IBM stock (Figure 6).

*Figure 6: calculation of the fractal dimension of IBM stock (from the yearly chart). Red, green, and yellow rulers are respectively of measure 1, ½, and ¼*

In the picture above I tried to fit the pattern with lines of [non-dimensional] measure: G0 = 1 (red), G1 = ½ (green), and G2 = ¼ (yellow). The resulting lengths are: L0 = 5.25x1 -> 5.25 total, L1 = 12.6x0.5 -> 6.3 total, and L2 = 28.5x0.25 -> 7.12 total. The reader can verify that the fractal dimension D relates the magnification of the resulting length to the scaling of the ruler. In detail, D links the resulting length L to the measuring ruler G through the following: L = L0 * G^(1-D); to get to D, we take the logarithm of both sides, so that (1-D) becomes the slope of log L against log G. In our case, the relationship yields D ≈ 1.23. Note, I also checked that the logs of G and L for all cases yield pretty much a straight line on a chart, which is the mathematical relationship we would wish for.
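Using the ruler sizes and total lengths listed above, that log-log fit takes only a couple of lines; fit a line through (log G, log L) and read D from the slope:

```python
import numpy as np

G = np.array([1.0, 0.5, 0.25])    # ruler sizes from the exercise
L = np.array([5.25, 6.3, 7.12])   # resulting total lengths

# log L = log L0 + (1 - D) * log G, so the slope of the fit is (1 - D)
slope, _ = np.polyfit(np.log(G), np.log(L), 1)
D = 1.0 - slope
print(D)                          # close to the 1.23 quoted in the text
```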

Applying the relationship above to common [ideal] Euclidean geometry would result in D being an integer: a segment measuring 5 with a unitary ruler would require 10 rulers of size ½, therefore still 10x0.5 -> 5 total, and repeating the calculation above would result in D = 1 (lines are one-dimensional in our school geometry); the length would not change with the size of the ruler because there would be no further details to discover along the line. A fractal dimension of 1.23 says, to some extent, that our fractal geometry sits between a line (D=1) and a plane (D=2): it covers space more than a line, though not quite as much as a square.

**3/3) Resolution - Putting everything together (somewhere within Chaos)**

The evolution of systems like the mechatronic project I initially mentioned can be described through differential equations like __equations 1 and 2__. While we have already mentioned that much of the difficulty in mathematically approximating real characteristics probably lies in the non-linearities, the real problem is not modeling them, but handling them.

__Important note:__ the arguments below are the subject of ongoing research by established researchers and professionals; this post does not want to risk passing the idea that this is a sufficient reduction fully framing those concepts. While I always discuss topics through my projects, to give readers a base to relate their possible subsequent research to, I do not want to theorize nor give comprehensive teachings. Once we have agreed on that and clarified our intellectual honesty - and while I hope this clarification was unnecessary because implicit - we can keep going through the conclusion.

Say we are working on a project and we have obtained an approximate equation fairly describing the dynamics of a system. Say also we managed to reduce the approximation by including some non-linearities in our model through some *x^2* term, and that those non-linear terms fairly describe the non-linear response of our system. Can we be sure we can control our system? Let us try to answer that question by solving a famous non-linear problem. E. N. Lorenz (a major Chaos Theory contributor) came up with a system of 3 differential equations while studying phenomena like the weather and the motion of fluids. Lorenz's system is composed of three equations similar to our __equation 1__, but they have non-linear terms, and they are put together in a system of 3 (figure 7).

*Figure 7: Lorenz's non-linear system (non-linear terms being the xz, xy, and the interdependencies among the equations)*
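For readers who cannot see the image in figure 7, the system (as also encoded in the Python code at the end of the post) is the standard Lorenz system:

```latex
\begin{aligned}
\dot{x} &= \sigma\,(y - x)\\
\dot{y} &= x\,(\rho - z) - y\\
\dot{z} &= x\,y - \beta\,z
\end{aligned}
\qquad \sigma = 10,\quad \rho = 28,\quad \beta = \tfrac{8}{3}
```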

We can solve that system through a couple of lines of code, and we can plot the corresponding solution for specified initial conditions that we can simply choose to be close to the zeroes - I'll provide the code at the end of the post. The solution we obtain is shown in the phase-space representation in __figure 8__ - the phase-space is a representation useful in describing solutions of dynamical systems; we will clarify immediately below.

*Figure 8: Lorenz solution sample - Python generated plot through odeint()*

*For sake of this post, we can think the two attracting regions in the picture (those two circular regions) as two states to which the system non-periodically returns in some statistical consistency*

The 3D representation above is the phase-space representation of Lorenz's system: say the weather model of Lorenz was fully identified by temperature, pressure, and humidity; the three axes of the plot would then represent the evolution in time of those three parameters, starting from their values at time zero (e.g. outside our house or apartment right now). The line was obtained through numerical integration, having already mentioned the difficulty of getting exact analytical solutions for non-linear problems.

The problem with the solution in __figure 8__ is not that Lorenz arrived at that phase-space representation through simplified equations (probably not including all the parameters governing real weather); in fact, we can even pretend that Lorenz’s system exactly describes reality. The actual problem is in using that solution. Even if the figure above does not have great resolution, it might seem easy to clearly follow the evolution of a particular weather starting from known initial conditions (e.g. the same parameters of the air outside our house or apartment right now). Problem is, that would be wrong, by a lot. Hinting at the problem before showing our simulation, we can say that Lorenz’s solution (the plotted line) is __kind of fractal__, and it presents problems similar to the one we had in measuring the coast of Britain. However, it is not about the length of the line, but about how [infinitely] densely different solutions - corresponding to slightly different initial conditions - are packed among themselves. To show that concept, we will plot two solutions of Lorenz's system of differential equations, obtained by varying the initial conditions just a tiny bit between the two. The two different sets of initial conditions for x, y, and z will be the following:

· Initial condition 1 = [1.0000000, 1.0, 1.0] (these are the same used to obtain __figure 8__)

· Initial condition 2 = [1.0000001, 1.0, 1.0] (just a tiny variation in x from condition 1 - we will comment below on the actual accuracy of that difference)
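Beyond eyeballing plots, the divergence can also be quantified by integrating both initial conditions over a longer horizon and tracking the distance between the two trajectories; the time span and the two print-outs below are my choices, while the equations and parameters are the ones from figure 7:

```python
import numpy as np
from scipy.integrate import odeint

rho, sigma, beta = 28.0, 10.0, 8.0 / 3.0

def f(state, t):
    x, y, z = state
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

t = np.arange(0.0, 40.0, 0.001)
a = odeint(f, [1.0000000, 1.0, 1.0], t)   # initial condition 1
b = odeint(f, [1.0000001, 1.0, 1.0], t)   # initial condition 2

d = np.linalg.norm(a - b, axis=1)         # separation between the trajectories
print(d[t <= 5.0].max(), d.max())         # tiny at first, attractor-sized later
```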

Integrating the system in __figure 7__ numerically, and using the two initial conditions above, after a few instants we would obtain the following evolution (figure 9):

*Figure 9*

In __figure 9__ the end-states of the two solutions are shown in blue and orange. We can see only the blue final state because the orange one is exactly below it; the two systems have basically behaved identically so far. So, it would seem our tiny alteration of the initial conditions does not have a considerable impact, and the two weather evolutions are proceeding pretty much along the same solution over time. However, let’s check it properly and keep the simulation going. From the video below, it is possible to see how the two solutions keep staying pretty much the same for the first 10 seconds (we can see at times only the blue marker and at other times only the orange one), to then diverge and act in two completely different and random-looking ways (the blue and orange dots become clearly distinguishable, in two completely different places and on two different paths):

*Video: simulation of the evolution of two systems with slightly different initial conditions*

The simulation shows that solutions starting from almost identical initial conditions then diverge chaotically along their evolutions - along very different paths and non-periodically, therefore without a known and predictable period. That is allowed by that finite phase-space accommodating infinite solutions, tightly packed among themselves in a fractal geometry.

*Please, again, remember the call to skepticism we made above and the invitation to further research and connect ideas; one reason above all: I found current papers still trying to completely characterize the fractality of Lorenz's solution. Moreover, while the theory would call for variations even smaller than the ones we used between the two initial conditions - even non-identifiable differences, fractals being involved - the actual accuracy of the difference between our two initial conditions should be discussed by also involving the specific algorithms and number representations we are using. Therefore, those two numbers were chosen mainly for the sake of presentation, while still effective in passing the idea.*

What we just went through would be the quantitative intuition of the butterfly effect: initial conditions just a bit different resulting in completely different outcomes over time. That is in big part why we cannot properly predict the weather beyond a few days. Within that, non-linearities, fractals, and more … they all play complex and fascinating roles.

*riccardo@m-odi.com*

**Python code to solve Lorenz' system (the first part with the simple single solution is widely available online too):**

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from time import sleep

# chaotic parameters
rho = 28.0
sigma = 10.0
beta = 8.0 / 3.0

# Lorenz system
def f(state, t):
    x, y, z = state  # unpack the state vector
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z  # derivatives

for i in range(1, 50):
    state0 = [1.0000000, 1.0, 1.0]
    state1 = [1.0000001, 1.0, 1.0]
    t = np.arange(0.0, i, 0.001)  # start, stop, step

    states = odeint(f, state0, t)
    states1 = odeint(f, state1, t)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")  # replaces the deprecated fig.gca(projection="3d")
    ax.plot(states[:, 0], states[:, 1], states[:, 2], color='grey')
    ax.plot(states1[:, 0], states1[:, 1], states1[:, 2], color='grey')
    ax.scatter3D(states[-1, 0], states[-1, 1], states[-1, 2])     # blue end-state
    ax.scatter3D(states1[-1, 0], states1[-1, 1], states1[-1, 2])  # orange end-state
    plt.show()
    sleep(0.150)
```
