Buridan's Principle
You’re walking down the street and realize, to your chagrin, that you’re on a collision course with another person walking toward you. You swerve to the right, only to realize that your counterpart is doing the same. Hesitating for a second, you try to figure out which way to go, but your legs somehow keep moving you forward, bringing you inexorably closer to a collision. Finally, when you’re only an arm’s length apart, you do the obligatory awkward dance and find a way to go around each other. What happened here? If you ask Leslie Lamport, you just fell afoul of Buridan’s Principle:
A discrete decision based upon an input having a continuous range of values cannot be made within a bounded length of time.
The name comes from an ancient thought experiment cheekily referred to as Buridan’s ass. It imagines a donkey standing precisely equidistant between two equally large and attractive piles of hay: having no reason to choose one over the other, it ends up starving to death through indecision. Early versions of the tale existed among the ancient Greeks, and the scenario got its name as a satire of the moral determinism advocated by the 14th-century philosopher Jean Buridan. Buridan believed that human beings, when faced with a choice, would invariably choose the option they judge to be best. But this leaves open the possibility that two options are judged equally good:
Should two courses be judged equal, then the will cannot break the deadlock, all it can do is to suspend judgement until the circumstances change, and the right course of action is clear.
Donkeys don’t actually starve to death from an inability to choose between food sources, which is precisely the satire. But this doesn’t mean there isn’t an important lesson to be had from the parable. Here’s the first example Lamport gives, which is surprising and even a bit disturbing:
You drive up to an ungated railroad crossing, stop at the sign, and now have to decide whether to go onward or wait. Suppose the train comes at time \(t=0\) and you arrive at time \(s\). Your position at time \(t\) is a function \(x(t,s)\) of both \(t\) and the time you arrive at the crossing \(s\). If \(x\) is continuous, so is the function taking \(s\) to \(x(0,s)\). That is, your position when the train arrives depends continuously on the time you arrive at the crossing. If you arrive long before the train, you should clearly cross before it arrives, so \(x(0,s)\) for \(s \ll 0\) must be on the far side of the track. On the other hand, if you arrive well after the train passes, you will obviously be on the near side at \(t=0\), so \(x(0,s)\) is on the near side for \(s \gg 0\). Now we apply the intermediate value theorem: there must be some \(s\) such that \(x(0,s)\) is in the middle of the track. No matter what you do, there’s a possibility that you get hit by the train!
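To compress the argument into symbols, write \(f(s) = x(0,s)\) for your position at the moment the train arrives, and say the track occupies the interval \([a,b]\) along your path (the interval is just a label for “on the track”):

\[
f(s) > b \ \text{ for } s \ll 0 \quad \text{(arrived early, already crossed)}, \qquad
f(s) < a \ \text{ for } s \gg 0 \quad \text{(arrived late, still waiting)}.
\]

If \(f\) is continuous, the intermediate value theorem hands us some \(s^*\) with \(a \leq f(s^*) \leq b\): an arrival time that leaves you on the track just as the train shows up.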
Well, you say, this relies on the assumption that \(x\) was continuous. The slices \(x(t,s)\) for fixed \(s\) must be continuous in \(t\), because this is simply your position over time; you can’t jump from one side of the track to the other. But why should your position at \(t=0\) be continuous with respect to your arrival time \(s\)? Why can’t you pick some threshold \(S \lt 0\): if \(s \leq S\), follow a continuous path crossing before the train arrives; if \(s \gt S\), wait for the train to pass first? Most of the time this works. But say you arrive at the crossing very close to \(S\). How long does it take to determine which side of the threshold you are on? If you’re close enough, it might take longer than you have to figure out whether you have enough time. Any completely safe decision process will again rely on some discontinuity in the function \(x\), which is simply impossible to physically implement in a bounded amount of time.[1]
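Here’s a toy sketch of why the comparison takes unbounded time near the threshold. Nothing about it is physics; the measurement loop and its precisions are stand-ins for whatever process you use to judge your arrival time:

```python
# Deciding which side of a threshold S an arrival time s falls on,
# when s is only known to finite (but improvable) precision.

def which_side(s, S, initial_precision=1.0):
    """Refine the measurement of s until the uncertainty interval no
    longer straddles S. Returns the decision and the number of steps."""
    precision = initial_precision
    steps = 0
    while abs(s - S) <= precision:  # can't yet distinguish s from S
        precision /= 2              # spend more time, measure more finely
        steps += 1
    return ("cross" if s <= S else "wait"), steps

S = 0.0
for gap in [1e-1, 1e-3, 1e-6, 1e-9]:
    decision, steps = which_side(S + gap, S)
    print(f"gap={gap:.0e}: decided {decision!r} after {steps} refinements")
```

The number of refinements grows like \(\log(1/|s-S|)\), without bound as \(s\) approaches \(S\); at \(s = S\) exactly, the loop never terminates.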
Does this actually happen? Well, maybe. There was a train crash in 2015 that looks like it might have Buridanian influences. From the NTSB accident report:
On February 3, 2015, at 6:26 p.m. eastern standard time, a 2011 Mercedes Benz ML350 sport-utility vehicle (SUV) driven by a 49-year-old woman (driver), traveled northwest on Commerce Street in Valhalla, New York, toward a public highway-railroad grade crossing….The driver moved beyond the grade crossing boundary (stop line) and stopped adjacent to the railroad tracks. The grade crossing warning system activated and the gate came down, striking the rear of her vehicle. She then exited her vehicle and examined the gate. The driver then returned to her vehicle and moved forward on to the tracks.
There are many ways to explain this driver’s behavior. Perhaps she felt she was already committed to crossing the tracks; perhaps she was just running on autopilot and did not comprehend the danger facing her. Buridan’s principle was not the only, or perhaps even the determining, factor in this accident. But the driver’s behavior does look a bit like what would happen if she were having trouble judging whether she had enough time to cross.
A more familiar and perhaps more plausible example of the Buridan phenomenon is the decision of whether to stop or speed through a traffic light that has just turned yellow. It’s easy to waffle between the two options, only coming to a conclusion by slamming on the brakes or screeching through the intersection, risking running a red light. Again, there are other interpretations of this behavior, but the difficulty of figuring out in time whether you have enough time is a real one.
And this is a general problem, not limited to determining safe paths for vehicles. Any rule for deciding between two discrete options induces a decision boundary on the parameter space of inputs to the decision. An input very close to the decision boundary may be easily confused with one on the other side. It can take longer than expected to figure out what side of the boundary you are on. And in the time it takes to carefully measure where you are, the time for making the decision may pass.
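The same effect shows up if you model the measurement as noisy sampling rather than refinement. In this sketch (the noise level and confidence bound are arbitrary choices), a decider accumulates noisy observations of its input until the evidence is convincingly on one side of the boundary:

```python
import random

def decide(true_value, noise=1.0, confidence=3.0, seed=0):
    """Sample until the accumulated evidence clears a significance bound.
    Returns the chosen side and the number of samples it took."""
    rng = random.Random(seed)
    total, n = 0.0, 0
    while True:
        n += 1
        total += true_value + rng.gauss(0, noise)
        if abs(total) > confidence * noise * n ** 0.5:
            return ("right" if total > 0 else "left"), n

for d in [1.0, 0.1, 0.01]:  # distance from the decision boundary at 0
    side, n = decide(d)
    print(f"distance {d}: decided {side!r} after {n} samples")
```

The number of samples needed blows up roughly like \(1/d^2\) as the input approaches the boundary: the closer the call, the longer the measurement.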
What can you do about this problem? Well, the mathematics should be sufficient to convince you that you can’t avoid it entirely, but you can make it less likely. For example, you could take more parameters into account, reducing the likelihood that you end up near the decision boundary. An extreme version of this is using a coin flip to make the decision if you’re near the boundary. This is a sort of deadlock detector: a second-level system that is meant to step in when it senses that the first level is having trouble making a decision. Of course, this just replaces the problem with that of deciding whether you’re close enough to the boundary to use the coin flip. By Buridan’s principle, any deadlock detector could itself deadlock,[2] but as you add more layers, this can become less and less likely.
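A minimal sketch of the two-level scheme, with an assumed scalar input and a hand-picked margin:

```python
import random

def decide_with_coin(x, boundary=0.0, margin=0.05, rng=random.Random()):
    """Deterministic away from the boundary; a coin flip near it."""
    if x > boundary + margin:
        return "right"
    if x < boundary - margin:
        return "left"
    return rng.choice(["left", "right"])  # deadlock detector: punt to chance
```

Notice that the first two comparisons are themselves discrete decisions on continuous inputs: the hard boundary at 0 has been traded for two boundaries at \(\pm\)margin, where the question “am I close enough to flip the coin?” can itself stall. Each added layer shrinks the problem without eliminating it.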
You can also give yourself more time to make the decision. Suppose you’re driving and see an obstacle in the road ahead. You can either go around it to the right or the left, depending on where it is. In the extreme cases where the obstacle is not actually in the road but off to one side, you would obviously go to the side you’re already on. But there’s a discrete decision to be made based on the continuous parameter of the obstacle’s position, so Buridan’s principle applies. Here, though, you have another trick up your sleeve: if it’s taking a long time to decide, you can put on the brakes, up to and including stopping the car completely. You can give yourself an arbitrarily long amount of time to make the decision. Even if you’re going at a fixed speed, if that speed is low enough, the probability of not making a decision in time can be very small.[3]
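A back-of-the-envelope version of that last claim, under the purely assumed model that the time you need to decide has an exponential tail \(P(T > t) = e^{-t/\tau}\):

```python
import math

tau = 0.5   # assumed decision-time scale, seconds
d = 50.0    # assumed distance to the obstacle, meters

# With d / v seconds available before reaching the obstacle,
# P(no decision in time) = exp(-d / (v * tau)).
for v in [30.0, 15.0, 5.0]:  # speed, m/s
    print(f"v = {v:>4} m/s -> P(fail) = {math.exp(-d / (v * tau)):.1e}")
```

Under this model, halving your speed squares the failure probability. It never reaches zero, but it can be driven as small as you like.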
This is the approach often taken in computer design. In order to interact with the outside world, a microprocessor needs to observe some external inputs, and it does so at fixed points in its clock cycle. Most of the time an observation will be unambiguously a zero or a one. But if the input is not synchronized with the clock, it’s possible to catch it right in the middle of a transition. What happens then? The circuit has left the regime of digital logic, with its clear 0s and 1s, and entered a realm of metastability and undefined behavior. In a previous paper, Lamport showed that it is theoretically possible for this period of ambiguity to last arbitrarily long before the system settles into a well-defined digital state. In practice, system designers add intentional delays, pushing the probability of the state staying undefined for that long very close to zero. In essence, rather than trying to look at what the state is right away, the processor looks at what the state was several clock cycles ago. This makes the probability of decision failure so small that it’s orders of magnitude less likely than other sorts of spontaneous failures.
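The standard first-order model of metastability makes the arithmetic concrete. Every device parameter below is an assumed ballpark figure, not a datasheet value:

```python
import math

# Mean time between synchronization failures, per the usual model:
#   MTBF = exp(t_r / tau) / (T0 * f_clk * f_data)
# where t_r is the settling time allowed before the sampled value is used.
tau    = 20e-12   # metastability decay constant, seconds (assumed)
T0     = 50e-12   # width of the vulnerable window, seconds (assumed)
f_clk  = 500e6    # clock frequency, Hz (assumed)
f_data = 100e6    # rate of asynchronous input transitions, Hz (assumed)

for extra_cycles in [0, 1, 2]:
    t_r = extra_cycles / f_clk  # settling time bought by the extra waiting
    mtbf = math.exp(t_r / tau) / (T0 * f_clk * f_data)
    print(f"{extra_cycles} extra cycle(s): MTBF ~ {mtbf:.1e} s")
```

Each added cycle of waiting multiplies the mean time between failures by \(e^{T_{\mathrm{clk}}/\tau}\), an astronomically large factor. Failure never becomes impossible, just absurdly rare.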
Coming back to the original image of a donkey hesitating between two piles of hay, we can now take a better look at how our brains deal with this kind of problem, and at what it might actually mean for questions about free will and moral determinism.
There’s a famous set of brain studies purported to show that conscious thought does not dominate decision making. These studies, introduced by Benjamin Libet, ask subjects to sit in silence and, of their own volition and at any time, take some small action like tapping a finger. Meanwhile, their brains are monitored via EEG, fMRI, or some other measurement, and they are asked to report the time at which they chose to act. Researchers have found that there is typically an increase in brain activity—called the Bereitschaftspotential—beginning significantly before subjects report deciding to act. This increase in activity is taken to be the “real” cause of the action, rather than the conscious decision to act.
Recent work has cast this interpretation into doubt. Rather than indicating some subconscious process dictating your every move, this precursor activity might actually be a mechanism to avoid deadlocks:
Neuroscientists know that for people to make any type of decision, our neurons need to gather evidence for each option. The decision is reached when one group of neurons accumulates evidence past a certain threshold. Sometimes, this evidence comes from sensory information from the outside world: If you’re watching snow fall, your brain will weigh the number of falling snowflakes against the few caught in the wind, and quickly settle on the fact that the snow is moving downward.
But Libet’s experiment, Schurger pointed out, provided its subjects with no such external cues. To decide when to tap their fingers, the participants simply acted whenever the moment struck them. Those spontaneous moments, Schurger reasoned, must have coincided with the haphazard ebb and flow of the participants’ brain activity. They would have been more likely to tap their fingers when their motor system happened to be closer to a threshold for movement initiation.
This would not imply, as Libet had thought, that people’s brains “decide” to move their fingers before they know it. Hardly. Rather, it would mean that the noisy activity in people’s brains sometimes happens to tip the scale if there’s nothing else to base a choice on, saving us from endless indecision when faced with an arbitrary task. The Bereitschaftspotential would be the rising part of the brain fluctuations that tend to coincide with the decisions. This is a highly specific situation, not a general case for all, or even many, choices.
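Schurger’s account models this as a leaky, noise-driven accumulator. Here’s a sketch of the idea (the parameters are illustrative, not fit to any data):

```python
import random

def time_to_act(threshold=10.0, noise=1.0, leak=0.01, seed=None):
    """Leaky accumulator driven by noise alone (zero evidence).
    Returns the number of steps until activity crests the threshold."""
    rng = random.Random(seed)
    activity, t = 0.0, 0
    while activity < threshold:
        activity += -leak * activity + rng.gauss(0, noise)
        t += 1
    return t

print(sorted(time_to_act(seed=i) for i in range(20)))
```

With nothing to base the choice on, the crossing times are haphazard: whenever the wandering noise happens to crest the threshold, the finger taps. Averaged over many trials and aligned to the moment of crossing, the slow rise just before threshold is what a Bereitschaftspotential-like signal would look like.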
The Bereitschaftspotential might be evidence of a mechanism keeping Buridan’s ass from starving, and research subjects from being paralyzed due to insufficient reason to act. Perhaps these fluctuations in some sense provide the ability to do otherwise when all the relevant facts are held constant, at least when nothing is at stake. Maybe this is nature’s answer to the problem Buridan poses.
[1] This has something to do with the fact that all computable functions are continuous.

[2] Lamport makes this point about a proposed election deadlock detector in a comment on his webpage: “Another amusing example occurred in an article by Charles Seif titled Not Every Vote Counts that appeared on the op-ed page of the New York Times on 4 December 2008. Seif proposed that elections be decided by a coin toss if the voting is very close, thereby avoiding litiginous disputes over the exact vote count. While the problem of counting votes is not exactly an instance of Buridan’s Principle, the flaw in his scheme will be obvious to anyone familiar with the principle and the futile attempts to circumvent it. I submitted a letter to the Times explaining the problem and sent email to Seif, but I received no response.”

[3] In a sense, the decision you make here is not discrete, since you can vary your speed continuously.