Norton’s Dome

In 2003 John Norton came up with a paradox known as Norton’s Dome, which purports to show that Newtonian physics is non-deterministic, since an object placed at the top of the dome need not stay there forever but may instead, at some arbitrary moment of time, slide down without apparent cause. The paradox follows from the math of the ordinary differential equations that we use to describe Newton’s laws, which we all thought were well understood. The equations of motion describing an object at the apex of the dome admit two solutions: 1) the particle rests at the top of the dome indefinitely; 2) the particle starts sliding down the dome at some arbitrary moment of time, apparently without cause.

There does not appear to be an error in the math; nevertheless, the unexpected second solution is widely viewed as ‘unphysical’, though so far there is no consensus among the critics as to why.

The point of Norton’s paradox, however, is to show that there is more to Newton’s laws than meets the eye, and we do not understand them as well as we thought we did. This is a huge problem, since Newton’s laws are as basic and as fundamental as they get, and if our understanding of them is incomplete then perhaps the whole of modern physics is built on a very shaky foundation…

I hereby join Norton and present another – and in my view far more nefarious – problem with Newton’s laws, which pertains to circular motion. I argue that the mathematical formulation of Newton’s laws has a fundamental problem describing circular motion. Here is why.

Circular Motion

Consider an object P moving about the center O on a circular trajectory as shown in Fig. 1.

Fig. 1. Circular motion.

This is a textbook problem of the simplest kind, which is taught to physics students as follows: the object P at the moment of time t1 (denoted as P(t1) in Fig. 1) has a velocity vector V1, which is perpendicular to its radius-vector R1 linking the object P(t1) with the center of rotation O. So far so good. Because the object P is locked on the circular orbit around O, we conclude that an acceleration a1 (directed along the radius-vector R1, toward the center O) is acting on the object P(t1), altering its velocity V1.

Likewise, at the moment of time t2, the object P(t2) has a velocity vector V2, which is once again perpendicular to its radius-vector R2, and an acceleration a2, directed along the radius-vector R2, is now acting on the object P(t2). And so on – or so we think when we examine this problem abstractly, without invoking the actual Newtonian calculus that we must use to calculate V2 from V1 and a1.

So let’s delve into the mechanics of such a calculation. Newton teaches us that we can use simple vector addition to find the velocity of the object P at the time t1 + dt if we know the velocity V1 and the acceleration a1 at the time t1. Specifically, V(t1+dt) = V1 + a1dt. This calculation is illustrated in Fig. 2, and therein lies the problem.

Fig. 2. The Newtonian velocity addition.
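The update rule V(t1+dt) = V1 + a1dt can be iterated numerically. Below is a minimal Python sketch of this forward-stepping scheme (a forward Euler integrator) for circular motion; the unit radius, unit speed, and unit acceleration are illustrative assumptions, not anything mandated by the argument:

```python
import math

# Step V(t+dt) = V + a*dt, then advance the position with the old
# velocity (forward Euler). Assumptions: unit radius, |V| = 1, |a| = 1.
dt = 1e-3
x, y = 1.0, 0.0        # position P on the circle
vx, vy = 0.0, 1.0      # velocity V, tangent to the circle

for _ in range(1000):
    ax, ay = -x, -y                          # acceleration toward O
    x, y = x + vx * dt, y + vy * dt          # position uses the old V
    vx, vy = vx + ax * dt, vy + ay * dt      # V(t+dt) = V + a*dt

print(math.hypot(vx, vy))  # the speed; it starts out as exactly 1.0
```

Running this loop shows the speed creeping above 1 with every step, which is the numerical face of the inequality discussed next.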

The Impossibility of Circular Motion

The problem arises when we add the vector a1dt to the vector V1: since a1dt is perpendicular to V1, we must use the Pythagorean theorem to compute the magnitude of V1 + a1dt, as |V1 + a1dt| = √(|V1|² + |a1dt|²). The predicament is that |V1| < |V1 + a1dt| no matter how small dt is – as long as dt is non-zero, even infinitesimally small, the inequality holds. Therefore |V1| < |V2|, and by the same logic |V(tn)| < |V(tn+1)| for every moment of time tn. As such, in the framework of Newtonian calculus, it is impossible to maintain a uniform circular motion characterized by |V(t)| = const as long as we insist that the acceleration vector a is perpendicular to the velocity vector V.
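The inequality is easy to check numerically for a single step. A Python sketch, with |V1| = 1 and |a1| = 1 assumed purely for illustration:

```python
import math

# One application of V2 = V1 + a1*dt with a1 exactly perpendicular to V1.
# Assumed magnitudes: |V1| = 1, |a1| = 1 (illustrative units).
V1 = (1.0, 0.0)
a1 = (0.0, 1.0)                    # perpendicular to V1

for dt in (1e-1, 1e-3, 1e-6):
    V2 = (V1[0] + a1[0] * dt, V1[1] + a1[1] * dt)
    print(dt, math.hypot(*V2))     # strictly greater than 1 for each dt
```

However small dt is made, the printed magnitude stays strictly above |V1| = 1, exactly as the Pythagorean relation dictates.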

To preserve the magnitude of the velocity |V| under vector addition, the acceleration vector a must form an angle with the velocity vector V that is slightly less than 90 degrees, such that V1 and V2 form the equal sides of an isosceles triangle with the acceleration vector times dt (a1dt) at its base – Fig. 3.

Fig. 3. The velocity-magnitude preserving acceleration.

Now that a1 is no longer perpendicular to V1, the vector addition preserves the magnitude of the velocity vector, ensuring that |V(t)| = const at all times. This also means that either the velocity vector V1 is no longer tangential to the circular trajectory, or the acceleration vector a1 is no longer radial, or both – Fig. 4.

Fig. 4. The velocity magnitude preserving circular motion.
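The construction of Fig. 3 amounts to rotating V1 through the small angle dθ = |a|dt/|V| rather than stretching it. A Python sketch, again assuming |V| = |a| = 1 for illustration (so dθ = dt):

```python
import math

# Rotate V by dtheta = |a|*dt/|V| each step instead of adding a
# perpendicular increment; the chord V2 - V1 then forms an isosceles
# triangle with the equal sides V1 and V2, as in Fig. 3.
# Assumption: |V| = |a| = 1, hence dtheta = dt.
dt = 1e-3
vx, vy = 0.0, 1.0
c, s = math.cos(dt), math.sin(dt)

for _ in range(1000):
    vx, vy = c * vx - s * vy, s * vx + c * vy   # pure rotation

print(math.hypot(vx, vy))  # stays at 1.0 up to rounding error
```

Unlike the perpendicular vector addition, a thousand such steps leave the speed unchanged to within floating-point rounding.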

The Problem with the Infinitesimal

So what do we make of it? To me, this simple exercise illuminates a significant limitation of the calculus-driven Newtonian mechanics – one a lot more severe than Norton’s paradox. However, both problems stem from the same root cause: the infinitely small yet non-zero (a.k.a. infinitesimal) differential, which is the foundational building block of Newton’s calculus. We think of continuity in terms of infinitesimal change, yet, as in the case of Zeno’s paradox, we run into a logical problem. Perhaps the problem is that we ascribe two irreconcilable attributes to an infinitesimal quantity: being zero-like and yet non-zero at the same time. This is a folly.

Separating Effects from Causes

The really big question we must ask is: which way does Nature work? Is Nature fundamentally computational, in the sense that cause and effect are separated by a finite interval of time, dt? If so, then it would be fair to say that we live in a ‘simulation’, and we must rewrite the laws of physics from the foundation up, superseding the calculus-based Newton’s laws with their causal ‘computational’ counterparts.

Indeed, the familiar equation F = ma, which is often expressed as dP/dt = m·d²r/dt², tells us nothing about cause and effect. This is an equation, and all it says is that the force equals the mass times the acceleration. Basically, it tells us that the force is the same thing as the acceleration (save for a proportionality factor). This notion is absurd, since this is not how we think Nature works. We think that force causes acceleration. Thus, a causal formulation of Newton’s second law should state that the force F at the moment of time t causes the acceleration a at the later moment of time t+dt. We can express it as F(t) → ma(t+dt), or dP(t)/dt → m·d²r(t+dt)/dt², where the → symbol denotes causation.

I must emphasize the delay between the cause and the effect. The force cannot cause acceleration instantaneously; the acceleration must come at a later moment. Once we accept the inevitability of the delay between cause and effect, we essentially adopt a ‘computational’ way of thinking. When we compute, the result of a calculation (the effect) always comes after the act of calculation (the cause).
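One way to render F(t) → ma(t+dt) in code is to let the force computed at one step take effect only at the next. A toy Python sketch for a mass on a spring – the one-step delay, the spring force, and all names here are my illustrative assumptions, not a standard scheme:

```python
# Causal reading of the second law: the force evaluated at step n
# (the cause) produces the acceleration applied at step n+1 (the effect).
m, k = 1.0, 1.0          # mass and spring constant (illustrative)
dt = 1e-3
x, v = 1.0, 0.0
force = -k * x           # cause: F(t), computed from the current state

for _ in range(1000):
    a = force / m        # effect: acceleration acting at t + dt
    v += a * dt
    x += v * dt
    force = -k * x       # compute the force that will act next step

print(x)
```

The loop never uses a force and the acceleration it causes at the same instant; the pending force is always one step old, making the cause-before-effect ordering explicit.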

Incorporating Causality in Math

I submit that there is no other way to comprehend math in general (even pure math) than by way of computational thinking. Here I equate the term ‘computational’ with the term ‘causal’. The two are the same. Any computation relates effects (results) to causes (calculations). Any computation involves time, and there is always a delay between the calculation and the result. I am emphasizing this because I argue that we cannot comprehend math by abstracting from causality and discarding time.

It is unfortunate that a vast body of math – including calculus and ODEs – deals with a-causal equations. These equations merely equate their left-hand side to their right-hand side without telling us what came first or what caused what. Such equations cause paradoxes and confusion because, in their very structure, they ignore what must not be ignored: causality and time.

If you think that causality is important only for physics, you’d be wrong. Even in pure math, a-causal thinking is a folly that, among other things, leads to Quine’s paradox. That paradox can be readily resolved once we introduce the notion of time and adopt the computational approach to math, noting that computations (including logical inferences) do not all happen at the same time. We need time to separate results from calculations, for without this separation we cannot make sense of anything – be that abstract logic or the physical world we live in.

Switching to Causal Formulations

Fortunately, all numerical computations are causal in their implementation, even if the underlying formulations of the physical laws they model are not. Thus, it should not be hard to abandon the a-causal calculus, ditching its equations in favor of causal formulations. We sort of do it already when we deal with electromagnetic waves, but, as I have shown above, we must do it for everything, including Newton’s laws, because even something as simple and fundamental as circular motion cannot be understood within the framework of the a-causal Newtonian calculus.
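As a hint of what step-by-step computation can do, here is the circular-motion loop from earlier with a single change of ordering: the freshly computed velocity feeds the position update (the semi-implicit Euler scheme; the unit values remain illustrative assumptions):

```python
import math

# Semi-implicit Euler: the position update uses the velocity that the
# current step has just produced, not the old one.
dt = 1e-3
x, y = 1.0, 0.0
vx, vy = 0.0, 1.0

for _ in range(10000):
    ax, ay = -x, -y
    vx, vy = vx + ax * dt, vy + ay * dt
    x, y = x + vx * dt, y + vy * dt    # new velocity, not the old one

print(math.hypot(x, y))  # the radius stays close to 1 instead of growing
```

With this ordering the orbit no longer spirals outward over ten thousand steps, which suggests that the choice of computational formulation, and not merely the step size, decides what physics the stepping scheme produces.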

Is it possible that Nature is not fundamentally computational? I think not, for all our experience tells us that causes precede effects. We experience time, and we are deeply aware of time’s irreversibility: causes are not equivalent to effects. The problem is with the math that we use: the ODEs are time-reversible, as the equations do not separate causes from effects.

Potential for ‘New Physics’

What also follows is that perfectly circular motion does not actually exist; it is an impossible abstraction that we erroneously believe in. You may say: wait a minute, but the wheels do turn and the planets do orbit! To which I reply: yes, but any such circular motion must be imperfect. The wheels and planets may turn in a non-Newtonian manner. For example, planetary and stellar orbits may grow with time (per the |V1| < |V1+a1dt| conclusion), and the rotation of a wheel may harbor some hitherto unrecognized physics involving internal stresses, skips, or other peculiar effects. Allow me a jape: a circle may be more ‘square’ than we think; why else would it be characterized by pi-R-squared?
