One of the central lessons of complexity/chaos theory is that prediction is subject to very specific limits: with anything but a VERY simple system, we must give up the possibility of long-term predictive certainty. This is because very small differences can lead to extremely large differences further down the line, and we simply can’t account for the effects of all these very small differences.

The problem is that one never knows in advance whether a small difference will actually make a difference or not. And in any case, in any non-trivial system the small differences are complexly interrelated with larger-scale changes, with much cross-feedback.
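The sensitivity described above can be sketched in a few lines of code. The logistic map below is a standard toy chaotic system (my choice of illustration, not one the text names): two starting values differing by one part in ten billion remain indistinguishable for a while, then diverge completely.

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r*x*(1-x), a minimal chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two initial conditions differing by one part in ten billion.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)

diffs = [abs(x - y) for x, y in zip(a, b)]
print(f"difference after 5 steps:   {diffs[5]:.2e}")   # still microscopic
print(f"largest difference overall: {max(diffs):.2e}")  # the trajectories have fully decorrelated
```

The tiny initial gap grows roughly exponentially, which is exactly why "we simply can’t account for the effects of these very small differences" over long horizons.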

So SOME kinds of predictions are possible — it’s the RANGE that’s the problem. For example, we can predict with high confidence that certain climatic patterns will continue far into the future — but we can’t predict with any accuracy whether a given future date will be cloudy and rainy or sunny, because those detailed specifics are precisely what is most affected by tiny shifts in today’s weather.
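The climate-versus-weather distinction can also be illustrated with a toy chaotic system (again the logistic map, an illustrative choice on my part): the long-run statistics of a trajectory are stable and predictable even though its individual future values are not.

```python
def logistic_mean(x0, r=4.0, steps=100_000):
    """Long-run average of the logistic map x -> r*x*(1-x), started from x0."""
    x, total = x0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        total += x
    return total / steps

# Different starting points give wildly different step-by-step values
# ("weather"), yet the long-run average ("climate") is the same, ~0.5.
m1 = logistic_mean(0.2)
m2 = logistic_mean(0.7)
print(f"{m1:.3f}  {m2:.3f}")  # both close to 0.5
```

The average is the analogue of a climatic pattern: robust against the tiny shifts that make any particular future value unpredictable.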

Then there is a completely separate but equally serious blow to our predictive abilities. If the only problem were the butterfly effect, we might hope to build a computer powerful enough to predict far-future events with accuracy, because it could take everything into account — after all, the universe follows its own laws, right? We just tell the computer what those laws are and what the initial conditions are, give it enough time to calculate, and we should have an accurate and complete answer, right?

Nope. The problem is two-fold. On the one hand, we simply don’t know how to account for (measure) all the very small initial states that such a computer would require to do its job. It’s simply too huge a task in any practical sense. In the case of weather — a good example of a chaotic system — you would need to know the pressure and temperature of the air… but of WHAT air? Can you generalize and look only at average values for large volumes of air, say, those the size of a city? Or do you need distinct values for every cubic meter of air on the planet? You can see the practical problems this raises. Other complex systems, such as the body, behave in the same way — hormone levels in the blood, the distances between dendrites and axons, the thickness of their myelination, etc. etc. — there is no practical way to get the information needed.
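To put a number on the cubic-meter thought experiment above, here is a back-of-envelope count. The radius and layer depth are my own rough assumptions, not figures from the text:

```python
import math

earth_radius_m = 6.37e6   # mean radius of the Earth, roughly
layer_depth_m = 1.0e4     # sample only the lowest ~10 km of atmosphere

surface_area_m2 = 4 * math.pi * earth_radius_m ** 2
n_cells = surface_area_m2 * layer_depth_m   # one measurement cell per cubic meter

print(f"{n_cells:.1e} cubic-meter cells to measure")  # on the order of 5e18
```

Even at one byte per cell, that is exabytes of initial data per measured variable, before we even ask how such measurements could actually be made.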

BUT WAIT, IT GETS WORSE!

Unfortunately for our predictive abilities, it isn’t even just a question of the practical difficulty of obtaining such information; such information is, IN PRINCIPLE, not obtainable. This is because, as quantum mechanics shows, we cannot help but disturb a system when we measure it, and that disturbance always introduces an uncertainty in some other aspect of the system that we can never – IN PRINCIPLE – simultaneously account for, so we can never know the COMPLETE state of the system at any given instant. We have to do all our predictive work on the basis of PROBABILITIES, not certainties – at least when we are talking about the quantum level.

But here’s the rub: NOBODY knows exactly when it is okay to ignore quantum effects and stick with the much more manageable (and predictable) ‘classical’ rules. It’s a real philosophical and practical problem, because we see more and more that the quantum world and the world of macro-scale qualities are not separate. It would be quite bizarre if quantum-scale effects simply stopped at some well-defined limit, beyond which different rules took over that needed no reference to the quantum rules. And yet this is precisely how most scientists actually work — either they work in particle physics or other fields that study very small scales, in which case they use the rules that describe the quantum realm, or they work in engineering or biology and worry about quantum effects only when they have to, otherwise using classical rules as much as possible.
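One rough way to see why engineers usually get away with ignoring quantum effects is to plug numbers into the Heisenberg bound Δx·Δp ≥ ħ/2. The particular masses and precisions below are illustrative assumptions on my part:

```python
# Pinning down a position to within dx forces a minimum velocity
# uncertainty dv = hbar / (2 * m * dx), from Heisenberg's relation.
hbar = 1.054571817e-34   # reduced Planck constant, J*s (CODATA value)

def min_velocity_uncertainty(mass_kg, dx_m):
    return hbar / (2 * mass_kg * dx_m)

# An electron localized to an atom-sized region (~0.1 nm):
electron = min_velocity_uncertainty(9.109e-31, 1e-10)
# A ~microgram dust grain localized to a micron:
dust = min_velocity_uncertainty(1e-9, 1e-6)

print(f"electron:   {electron:.2e} m/s")  # hundreds of km/s -- enormous
print(f"dust grain: {dust:.2e} m/s")      # utterly negligible
```

The bound never switches off; it just becomes numerically invisible for large masses, which is why the classical/quantum boundary is a matter of working tolerance rather than a sharp line in nature.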

Chaos theory itself is fractal — its laws span all scales. We cannot ignore the potential and actual effects that cross over from one scale to another, decreasing our certainty at each level. A change on one level of a phenomenon may very well cause a change on a completely different level — but in predicting that change to arbitrary accuracy we run into practical as well as principled obstacles that we cannot overcome. Thus, the built-in indeterminacy at the quantum level can, through the cross-scale linking effects described by chaos theory, connect single quantum-scale processes to shifts at the macro scale.

We can always make some predictions, and in many situations and circumstances the above problems are ignorable. But in other situations — those near criticality, for example — the full effects of both quantum indeterminacy and chaos theory may make all the difference.