# Does Number Encoding Affect Logistic Map Results?

## The Logistic Map

The Logistic Map provides a well-known demonstration of chaos theory and, specifically, the butterfly effect. It is defined as follows:

```
x(n+1) = r * x(n) * (1 - x(n))        (eq 1)

where x(n) lies in the range (0, 1)
```

The Logistic Map is a discrete variant of the Logistic Function, often used to model populations. The mapping was popularized by Robert May in 1976.

Here, the equation is applied iteratively, with `x(n+1)` calculated from the result of the previous iteration. In this way, we can generate a sequence of values starting from an initial `x(0)` in the range (0, 1) and a chosen value for the constant `r`. The interesting thing is that tiny differences in `x(0)`, no matter how small, will eventually give rise to radically different sequences further down the line.
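This sensitivity is easy to demonstrate. The sketch below (class and method names are mine, not from the original program) iterates the map from two starting points that differ by one part in a trillion; after a few dozen iterations the two trajectories bear no resemblance to each other:

```csharp
using System;

public class ButterflyDemo
{
    const double R = 3.9999;

    // Iterate the logistic map n times from a given starting point
    public static double Iterate(double x, int n)
    {
        for (int i = 0; i < n; ++i)
            x = R * x * (1 - x);
        return x;
    }

    public static void Main()
    {
        // Two starting points differing by one part in a trillion
        double a = Iterate(0.3, 60);
        double b = Iterate(0.3 + 1e-12, 60);

        Console.WriteLine("a = {0}", a);
        Console.WriteLine("b = {0}", b);
        Console.WriteLine("|a - b| = {0}", Math.Abs(a - b));
    }
}
```

The separation roughly doubles with each iteration, so an initial difference of 1e-12 grows to order 1 within about 40 iterations.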

## A Question of Encoding

I suspected, therefore, that the numeric encoding used to perform the calculation may have a dramatic effect on the results, owing to the rounding of values that cannot be represented exactly. If, for example, we used a hand calculator to crank through the numbers, our encoding would effectively be decimal with, say, 10 digits of accuracy. If, however, we were to automate the calculation programmatically, using the IEEE 754 64-bit `double` type, we might expect very different results after a given number of iterations.
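One way to see the encoding at work is to run the same iteration side by side in binary and decimal arithmetic. A minimal sketch (my own illustration, not part of the original test program): note that `0.3m` is represented exactly in `decimal`, whereas `0.3d` is only the nearest binary `double`, so the two trajectories differ from the very first step and then round differently at every subsequent one.

```csharp
using System;

public class EncodingDemo
{
    // Run n iterations of the map in both encodings side by side
    public static (double, decimal) Run(int n)
    {
        double xd = 0.3d;   // binary: nearest representable double to 0.3
        decimal xm = 0.3m;  // decimal: represents 0.3 exactly

        for (int i = 0; i < n; ++i)
        {
            xd = 3.9999d * xd * (1 - xd);
            xm = 3.9999m * xm * (1 - xm);
        }
        return (xd, xm);
    }

    public static void Main()
    {
        var (xd, xm) = Run(40);
        Console.WriteLine("double : {0}", xd);
        Console.WriteLine("decimal: {0}", xm);
        Console.WriteLine("diff   : {0}", Math.Abs(xd - (double)xm));
    }
}
```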

I constructed a trivial C# program, shown below, to print the first 200 results of the Logistic Map. As written, the number type is `double`, but we can easily substitute `float` or, in the case of C#, the 28–29 significant-digit `decimal` type. Here, we use a value of 3.9999 for the constant `r`, which puts us in the chaotic domain, and an initial start value of 0.3 for `x(0)`.

```csharp
using System;

namespace ConsoleApp1
{
    class Program
    {
        const double ConstR = 3.9999d;

        static void Main(string[] _)
        {
            // Initial x(0)
            double x = 0.3d;

            for (int n = 0; n < 200; ++n)
            {
                Console.WriteLine("{0}: {1}", n, x);

                // Logistic map: x(n+1) = r * x(n) * (1 - x(n))
                x = ConstR * x * (1 - x);
            }
        }
    }
}
```

## Results

I ran this for the three numeric types and obtained the following results. Note that the iteration numbers `n` in the left column are not consecutive.

Shown in red are the approximate points where, by visual inspection to 2 decimal places, the sequences diverge. We can see that the 64-bit `double` and `decimal` sequences track each other closely to around 50 iterations, whereas the `float` results diverge much earlier, at around iteration 20. Given that the `float` and `double` types give roughly 6–7 and 15–16 significant digits respectively, compared with the `decimal`'s 28–29, this is to be expected.
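The divergence point can also be located programmatically rather than by eye. The sketch below (again my own helper, not part of the original program) runs `float` and `double` trajectories in lockstep and reports the first iteration at which they disagree in the second decimal place:

```csharp
using System;

public class DivergenceDemo
{
    // Return the first iteration at which the float and double trajectories
    // differ by more than 0.01, or -1 if they stay together for the full run
    public static int FirstDivergence(int maxIter)
    {
        double xd = 0.3d;
        float xf = 0.3f;

        for (int n = 0; n <= maxIter; ++n)
        {
            if (Math.Abs(xd - xf) > 0.01)
                return n;
            xd = 3.9999d * xd * (1 - xd);
            xf = 3.9999f * xf * (1 - xf);
        }
        return -1;
    }

    public static void Main()
    {
        Console.WriteLine("float diverges from double at n = {0}",
            FirstDivergence(200));
    }
}
```

Since `float` carries roughly seven significant digits and the separation roughly doubles per step, divergence to two decimal places should appear within a few dozen iterations.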

By Andrew Thomas

Andrew Thomas is a software author and writer in the north of England. He holds a degree in Physics and Space Physics and began a career in spacecraft engineering but later moved into programming and telecommunications.
