# The Most Bogus Sort

## Random fun with the esoteric bogosort ‘algorithm’

Surely, there can be nothing worse than permutation sort in terms of performance? Don’t be so sure. And that’s the essence of bogosort: uncertainty.

Permutation sort, which I explored previously, has (staggeringly) poor performance, but it does at least offer two guarantees bogosort does not: (1) it is deterministic and (2) it is guaranteed to terminate. Bogosort is free of such constraints.

## Do the random shuffle

The essence of bogosort is to shuffle the values randomly until they are sorted. In Python, following the idiom that the built-in sorted function returns a sorted copy of the input rather than modifying the input by sorting in place:

```python
from random import sample

def bogosorted(values):
    result = values[:]
    while not is_sorted(result):
        result = sample(values, len(values))
    return result
```

A variant of this would be to feed the result generated each time into the input of the next random sampling, as opposed to drawing from the given values each time. The values available will be the same in each case, but the sorted outcome will be reached by a different path:

```python
def bogosorted(values):
    result = values[:]
    while not is_sorted(result):
        result = sample(result, len(values))
    return result
```

The is_sorted algorithm was defined previously as

```python
from itertools import islice

def is_sorted(values):
    pairs = zip(values, islice(values, 1, None))
    return all(i <= j for i, j in pairs)
```

This implementation compares each adjacent value pair in values, returning true iff all pairs are in non-descending order.

There is a variant of bogosort called bogobogosort that uses a wilfully convoluted recursive approach to determine if values are sorted. In one sense it might have a claim to be a more bogus sort than bogosort, but it’s not a particularly interesting esoteric algorithm in the sense its poor performance comes from an intentional pessimisation rather than a simple algorithmic premise: it’s just bogosort with messy detours thrown in. If you want to implement is_sorted inefficiently, go ahead. You can do that for many other algorithms, but it doesn’t change their essence — bogobogosort is just bogosort done badly rather than a fundamentally different, deep or delightful approach.
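For a flavour of the detours involved, here is a loose Python sketch of the kind of wilfully recursive sortedness check bogobogosort uses (the naming and the simplification are mine, not the canonical specification): the prefix must itself be bogosorted before the last element is even considered.

```python
from random import sample

def bogobogo_is_sorted(values):
    # Base case: sequences of fewer than two elements are trivially sorted.
    if len(values) < 2:
        return True
    head = list(values[:-1])
    # 'Verify' the prefix by bogosorting a copy of it...
    while not bogobogo_is_sorted(head):
        head = sample(head, len(head))
    # ...then declare the whole sorted only if the prefix came out unchanged
    # and the last element tops the (now sorted) prefix.
    return head == list(values[:-1]) and values[-1] >= head[-1]
```

Plug this in where is_sorted was used and you have something bogobogosort-shaped: the same answer as before, reached via a thicket of redundant shuffling.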

## Shuffle first

I first came across bogosort (a.k.a. bogo-sort) in the New Hacker’s Dictionary, the 1991 revision of the 1983 edition of the Hacker’s Dictionary (which I also have, but which does not include an entry for bogo-sort). Its specification has remained unchanged in the Jargon File since book publication:

> Bogo-sort is equivalent to repeatedly throwing a deck of cards in the air, picking them up at random, and then testing whether they are in order.

If we take this definition, we can see there is an element of certainty we have accidentally introduced into our Python implementation: for an already-sorted sequence of values, the implementation is guaranteed to have O(n) complexity, succeeding in the first call to is_sorted without a hint of card throwing or factorial runtime. This oversight is common in many published implementations of bogosort.

We can (un)fix this easily:

```python
def bogosorted(values):
    while not is_sorted(result := sample(values, len(values))):
        pass
    return result
```

This elimination of a best-case execution, reduction of determinism and use of the not-very-Pythonic walrus operator, along with the work-heavy loop condition and empty loop body, seems more in keeping with the spirit of bogosort. Je ne regrette rien.

## Shuffle with a twist

A C++ implementation looks quite innocent by comparison:

```cpp
void bogosort(auto begin, auto end)
{
    do
        random_shuffle(begin, end);
    while (!is_sorted(begin, end));
}
```

The function is defined over an iterator range, from the begin element up to but not including the end element. Such an iterator range may span an array or any other sequence container supporting the right kind of iteration (linked lists and binary trees need not apply).

Although is_sorted is available out of the box in C++, I should add that random_shuffle was deprecated in C++14 and officially removed in C++17. In the interests of backward compatibility, however, it continues to linger in many implementations. The recommended approach is to use shuffle explicitly parameterised with a random number generator of your choice:

```cpp
void bogosort(auto begin, auto end)
{
    mt19937 randomness;
    do
        shuffle(begin, end, randomness);
    while (!is_sorted(begin, end));
}
```

This forces an extra decision onto developers: which RNG to use? This choice demands more understanding of the machinery and design of the standard C++ random number library, which is far more comprehensive than off-the-shelf RNG libraries in most languages. While this versatility and control is welcome in many application areas — cryptography being perhaps the most obvious example — it is an obstacle for casual use in others — such as shuffling questions in a classroom quiz or, say, a simple demo implementation of bogosort. There’s no appetiser menu for random numbers: you have to order and eat the whole five-course enchilada.

In the code shown, mt19937 refers to a 32-bit implementation of the Mersenne Twister algorithm based on the Mersenne prime 2¹⁹⁹³⁷ – 1.

## Rand discorporation

It’s worth pausing a moment to reflect on three quite distinct audiences for random number libraries:

1. Users who want numbers that exhibit statistical randomness but also want repeatability. For example, developers of simulations want to be able to rerun simulations identically, but want to be sure the quality of randomness is high enough that simulation results do not reflect artefacts of non-randomness. This audience wants a choice of pseudo-random number generators with a lot of control.
2. Users who want numbers that are as indistinguishable from true randomness as possible. In the context of security, predictability means discoverability, and so discoverability and determinism are negative qualities. This audience wants true random numbers as well as sophisticated pseudo-random number generation.
3. Users who just want to mix things up a little. They are not particularly interested in the statistical quality of the randomness, and repeatability is less typically a requirement. Their prime concern is batteries-included ease of use.

The C++ <random> library caters to the first two audiences, but raises a barrier to entry for the third.
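For a sense of what that third audience is after, Python's random module is about as batteries-included as shuffling gets: one import, one call, and no generator decisions to make.

```python
import random

# Audience 3: shuffle quiz questions with the module-level default RNG.
questions = ["Q1", "Q2", "Q3", "Q4", "Q5"]
random.shuffle(questions)   # in-place shuffle; no RNG choice required
print(questions)            # some permutation of the five questions
```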

> Simple things should be simple, complex things should be possible.

A strong motivation for the removal of random_shuffle is that it’s typically implemented using rand. That it be implemented using rand is not a requirement of the C++ standard, so implementers of the library have a free hand. That said, another motivation for its removal is that it’s not specified tightly enough to be portable across platforms, i.e., there is no guarantee that for the same seed you always get the same values wherever you build and run. Both the de facto specification in terms of rand and the de jure non-specification are understandable concerns.

Similarly, although there is no standard requirement that C’s rand be poorly implemented, it has such a history of being poorly implemented that poor implementation has itself become the de facto standard — many have come to expect a poor implementation instead of asking for something better.

Rather than addressing the root cause, such as improving their quality of implementation or lobbying for tighter specification, many vendors and platforms have knowingly chosen to preserve poor implementations of both random_shuffle and rand for decades. Given that software is an industry where updates are routinely issued for bugs, security holes and performance improvements, this was clearly a fixable problem. A vendor wanting to support backward compatibility for poor RNGs, while also offering improved implementations, could have done so via many mechanisms.

Letting the issue fester to the point that deprecation and removal of a convenience feature seemed like the best option can be considered a shame. That it was done so without offering a replacement of comparable convenience can be considered unfortunate. That rand was a root cause of all this and is still not even deprecated — let alone removed — can be considered shamefully unfortunate.

## Redemption from a state of sin

As any pedant will be only too happy to point out, we have so far been using pseudo-random number generation in the code rather than truly random numbers.

> Any one who considers arithmetical methods of producing random numbers is, of course, in a state of sin.

How do we define randomness? It is the absence of something, and that something is determinism. A pseudo-random number generator offers a repeatable cyclic sequence of numbers based on a seed value. A good PRNG is one that has a long cycle that is, to all intents and purposes, statistically indistinguishable from a truly random sequence.
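That repeatable cyclic behaviour is easy to demonstrate in Python with seeded generator instances:

```python
from random import Random

# Two PRNGs seeded identically replay exactly the same 'random' sequence.
a, b = Random(19937), Random(19937)
first = [a.random() for _ in range(5)]
second = [b.random() for _ in range(5)]
assert first == second

# Reseeding rewinds the cycle back to its start.
a.seed(19937)
assert a.random() == first[0]
```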

In its favour, the C++ standard library does offer access to non-deterministic random number generation, subject to availability from the underlying platform (e.g., via /dev/random or CryptGenRandom). If you want to be certain that nothing is certain, you can query the entropy of the source: if it’s 0, the source is deterministic, i.e., it is pseudo-random. Note that if it’s not 0 the source may still be biased rather than evenly distributed, but at least it won’t be deterministic.
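Outside C++, the nearest Python counterpart is SystemRandom, which draws on the operating system's entropy source rather than a seeded algorithm:

```python
from random import SystemRandom

# SystemRandom draws from the OS entropy source (e.g., /dev/urandom),
# so there is no cycle to replay: seeding is accepted but has no effect.
trng = SystemRandom()
card = trng.randrange(52)   # non-deterministic: a different draw every run
assert 0 <= card < 52
```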

You can use a random_device directly with the shuffle algorithm:

```cpp
void bogosort(auto begin, auto end)
{
    random_device randomness;
    do
        shuffle(begin, end, randomness);
    while (!is_sorted(begin, end));
}
```

Or you can use it to randomly seed a PRNG:

```cpp
void bogosort(auto begin, auto end)
{
    random_device seed;
    mt19937 randomness(seed());
    do
        shuffle(begin, end, randomness);
    while (!is_sorted(begin, end));
}
```

Note that the second approach is the recommended one, for reasons of both efficiency and quality of randomness. Assuming the random_device’s entropy() is not zero (i.e., a non-deterministic implementation is actually available), the entropy source may become depleted with repeated use.

Nothing in life is certain except death, taxes and the second law of thermodynamics.

## But is it an algorithm?

One of the interesting observations about bogosort is that, in the strictest sense of the word, it is not an algorithm.

The word algorithm comes in for a lot of abuse and misuse, most commonly in connection with machine learning and AI:

> There are those who find the loosening of the term algorithm to include AI unhelpful. “Now people use ‘algorithm’ to mean almost anything,” says Martin Dyer at the University of Leeds, UK. “I’ve become so annoyed at people misusing it.”

The algorithms behind machine learning are mostly concerned with matrix multiplication and statistical operations. While some may consider them tedious, these algorithms are not particularly nefarious, devious or socially problematic in and of themselves. The issues are more with the data than the algorithms, from the story behind the data (Where did it come from? What is it being used to represent? What does it actually represent?) to the story behind how results are applied (What groups or predispositions are advantaged and disadvantaged?). Thus, most popular objections to ‘algorithms’ are in truth objections to data provenance, methodology and business practice.

To reclaim the word, what then is an algorithm?

Ask a computer scientist this question and they will tell you it is a sequence of instructions that takes an input, performs some repeatable computation on it and provides an output.

That’s pretty much it.

The idea that an algorithm is a specific sequence of steps and that its computation is repeatable, however, is where bogosort stumbles. The random element is neither specific nor is it, when truly random, repeatable.

It is, however, possible to restructure the code to mitigate this objection. We can lift randomness out of the function body and pass it in as a parameter. This means the code is now strictly a function of its inputs, which previously only covered the data to sort but now also includes the source of values that drive the shuffle, i.e., the randomness. The code no longer has a direct dependency on whether this confounding source of values is truly random or not.

```cpp
void bogosort(auto begin, auto end, auto randomness)
{
    ...
}
```

Perhaps the simplest way to supply a sequence of random numbers in C++ is to model randomness as an input iterator — in other languages, such as Java, a stream would be the moral equivalent. This means that randomness could be a pointer to the start of an array of numbers, an iterator that wraps a Mersenne Twister, a wrapper for random_device, a wrapper for rand to annoy your crypto-sensitive colleagues, an istream_iterator taking values laboriously copy-typed into the console, and so on, ad nauseam et ad infinitum.
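The same inversion can be sketched in Python, modelling randomness as any iterator of numbers (the function and generator names here are mine):

```python
from itertools import islice

def is_sorted(values):
    pairs = zip(values, islice(values, 1, None))
    return all(i <= j for i, j in pairs)

def bogosorted(values, randomness):
    # A shuffle driven entirely by the external number supply, then a test.
    result = list(values)
    while True:
        for i in range(len(result)):
            # As in the C++ version: pick an element from the unvisited tail.
            j = i + next(randomness) % (len(result) - i)
            result[i], result[j] = result[j], result[i]
        if is_sorted(result):
            return result

from random import Random

def endless(rng):
    # An unbounded supply of numbers from a seeded PRNG.
    while True:
        yield rng.randrange(2 ** 32)

print(bogosorted([3, 1, 4, 1, 5, 9], endless(Random(2501))))   # → [1, 1, 3, 4, 5, 9]
```

Any source will do: a seeded generator for repeatability, a wrapper around a hardware source for purity, or itertools.cycle over a hand-typed list for mischief.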

The structural inversion gives us the benefit of parameterisation from above, i.e., from the caller, but it also means we have to get our own shuffle on. The resulting code makes bogosort look like some kind of messed-up selection sort:

```cpp
void bogosort(auto begin, auto end, auto randomness)
{
    do
    {
        for (auto sorting = begin; sorting != end; ++sorting)
        {
            auto selected = sorting + *randomness++ % (end - sorting);
            swap(*sorting, *selected);
        }
    }
    while (!is_sorted(begin, end));
}
```

For comparison, here is an unmessed-up selection sort:

```cpp
void selection_sort(auto begin, auto end)
{
    for (auto sorting = begin; sorting != end; ++sorting)
    {
        auto selected = min_element(sorting, end);
        swap(*sorting, *selected);
    }
}
```

The loop progresses from the beginning to the end, exchanging the value at the current position with the lowest from the remaining part of the range. The outer loop, checking whether the sequence is sorted yet, is unnecessary: by the time the sorting iterator reaches the end, we know it has left a sequence of sorted values in its wake.

A variation of bogosort that doubles down on the paired-exchange approach is bozosort. Where bogosort randomly shuffles all the values before checking if they are sorted, bozosort just swaps two elements at random before checking. Even though the roles are not quite as they were, keeping the same variable naming scheme affords clearer comparison:

```cpp
void bozosort(auto begin, auto end, auto randomness)
{
    if (begin != end)
    {
        do
        {
            auto sorting = begin + *randomness++ % (end - begin);
            auto selected = begin + *randomness++ % (end - begin);
            swap(*sorting, *selected);
        }
        while (!is_sorted(begin, end));
    }
}
```

From one point of view, the main difference between bogosort and bozosort is the size of the shuffle, i.e., all elements versus a random pairing. From another, the main difference is frequency of checking, i.e., an all-element shuffle can be seen as composed of many pair shuffles so that bogosort checks for completion less often than bozosort. In terms of its value movement and control flow, bozosort can be considered simpler than bogosort.

Either way, whether we consider bogosort or bozosort, we have shown code that is more specific in terms of steps and dependencies. For a given sequence of randomising values it is repeatable. So… it’s an algorithm, right?

## Stopping distance

Well… there’s still an issue.

Remember I said that ‘randomness could be a pointer to the start of an array of numbers’? How long would such an array have to be? Is it possible to define a maximum number of random numbers that guarantees, for any input, that the sorted condition will be reached? No, it is not possible to define a maximum length for such an array such that all sorts will succeed. Put another way, the maximum is not finite, which means there are cases in which the loop runs forever.

Ignoring the constraints of native integers and array storage, a simple proof by contradiction demonstrates there is no N such that N is the maximum length for a random number supply that will guarantee bogosort sorts. If we imagine N does exist, then what happens if all the array entries are 0? In the code shown above, 0 is the identity element for the shuffling operation: it leaves the values to be sorted unchanged. Thus, if all N values are 0, no shuffling will occur and bogosort will not reach the sorted condition. Therefore, N is not the maximum length that guarantees sorting — QED.
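The identity claim is easy to check in Python (shuffle_pass is my own name for a single pass of the parameterised shuffle shown above):

```python
from itertools import repeat

def shuffle_pass(values, randomness):
    # One pass of the supply-driven shuffle used by the parameterised bogosort.
    result = list(values)
    for i in range(len(result)):
        j = i + next(randomness) % (len(result) - i)
        result[i], result[j] = result[j], result[i]
    return result

# An all-zero supply swaps every element with itself: the identity shuffle.
assert shuffle_pass([3, 1, 4, 1, 5], repeat(0)) == [3, 1, 4, 1, 5]
```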

Returning to the earlier definition of an algorithm, one of the important characteristics of an algorithm is that it ‘provides an output’. That there is a point at which there is a result means that, by definition, algorithms terminate. This implication is made explicit by Hopcroft and Ullman:

> A procedure which always terminates is called an algorithm.

Bogosort is not guaranteed to always terminate.

The probability that bogosort never reaches a sorted outcome is vanishingly small… but it hasn’t vanished completely. For shuffles generated from a truly random source, there is no guarantee all values in the output domain will ever be visited and, hence, no guarantee that all possible permutations will ever be visited, including the sorted one. For the more typical case of a pseudo-random number generator, whether all the possible values in the output range are ever produced is a quality of implementation issue rather than a property you can assume. For a known generator implementation, the probability of termination may be known to be 1; without this knowledge, however, a terminal state cannot be assumed.

The question of whether or not a piece of code qualifies as an algorithm also came up in the first channel-based implementation of dropsort, which had no terminating condition. That dropsort implementation ran as a process on a potentially unbounded stream, so its possibility of non-termination was an artefact of that architectural choice; other implementations were shown that were bounded and, therefore, terminated. Whether or not bogosort terminates with a result, however, is not a property of paradigm; it is an inherent property of the non-systematic generate-and-test approach it employs.

Another distinction is that a channel-based dropsort acts as a stream filter, so it produces results against its input continuously. Bogosort follows a more classic procedural model: all input must be completed before it is processed; all processing must be completed before there is any output. In this sense, bogosort is, curiously, more like a conventional sorting algorithm than dropsort is!

## Can’t decide

What started as a bit of fun with an esoteric procedure for sorting has strayed into a minefield of undecidability, a class of problems concerning computability that includes Gödel’s Incompleteness Theorems, the Halting Problem, the Entscheidungsproblem, the Post Correspondence Problem, and others.

Gödel proved that any consistent axiomatic system rich enough to express arithmetic contains true statements that cannot be proven within that system. Put another way, there are statements of a system that are true but cannot be proven to be true without stepping outside that system. From individual functions to whole codebases, this applies to code. For example, there are function preconditions that cannot be written and enforced at the level of a function to guarantee its correct execution. Note that this is not a judgement about whether it is practical to do so, but whether it is possible even if we were to embrace the impractical. The Halting Problem tells us there is no way to write a general algorithm that will determine whether or not an arbitrary piece of code will terminate. The Entscheidungsproblem is related, and is notable for having given us both the Turing machine and Church’s lambda calculus, as well as a more academic and sesquipedalian way of saying decision problem.

Consider the following implementation of bogosort in Groovy:

```groovy
def bogosort(values)
{
    def randomness = new SecureRandom()
    do
        values.shuffle(randomness)
    while (!isSorted(values))
}
```

The standard Java SecureRandom class is used to produce a non-deterministic random number sequence. Following the same principle as the is_sorted implementation shown for Python, we can implement isSorted as

```groovy
def isSorted(values)
{
    (1..<values.size()).every {values[it - 1] <= values[it]}
}
```

If we are thinking formally about bogosort as an algorithm, we should be able to characterise it in terms of a precondition — what requirement must be met for the function to execute successfully — and a postcondition — what promise must be kept by the function when it executes successfully. What could we assert at the start and end of the function?

Clearly, isSorted(values) is part of the postcondition, although it is not the whole of it:

The full postcondition is that “The result of sorting a sequence of elements is a sequence of the original elements sorted in non-descending order.”

For the precondition, we can assert on values being non-null, indexable and its values being of comparable types. But we cannot write a precondition that always determines whether or not the function will terminate — even if we were to make the random number stream a parameter as we did with the C++ example.
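The postcondition, at least, is mechanically checkable. Here is a hedged Python sketch (checked and wrapper are my own names), exercised with the built-in sorted rather than bogosort so that it is guaranteed to return:

```python
from collections import Counter
from itertools import islice

def is_sorted(values):
    pairs = zip(values, islice(values, 1, None))
    return all(i <= j for i, j in pairs)

def checked(sort):
    # Wrap any sorting function with the full postcondition: the result is in
    # non-descending order AND is a permutation of the original elements.
    def wrapper(values):
        original = Counter(values)            # multiset snapshot of the input
        result = sort(values)
        assert is_sorted(result), "not in non-descending order"
        assert Counter(result) == original, "not a permutation of the input"
        return result
    return wrapper

print(checked(sorted)([3, 1, 4, 1, 5]))   # → [1, 1, 3, 4, 5]
```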

## Halting? No problem!

But is all this going to stop us from demonstrating that our implementations of bogosort are correct? Of course not! The difference between theory and practice is far greater in theory than in practice. I noted that ‘the probability that bogosort never reaches a sorted outcome is vanishingly small’. What this means in practice is that it is more likely our system will crash as a result of hardware failure than that bogosort will pursue a journey into infinity. We also lack the patience to wait until the day after tomorrow for results, let alone beyond the end of time — we will reach for Ctrl+C long before we contemplate the infinite.

In practice, it’s easy enough to write a test for bogosort:

```groovy
@Test
void "To infinity and beyond!"()
{
    def values   = [3, 1, 4, 1, 5, 9]
    def expected = [1, 1, 3, 4, 5, 9]
    bogosort(values)
    assert values == expected
}
```

Assuming no other intervention, there are three possible outcomes, which mirror the possible verdicts (not guilty, guilty, not proven) in a trial under Scots law:

1. The test passes.
2. The test fails.
3. The test never terminates, i.e., never passes or fails.

In practice, the test terminates — the odds are ever in your favour. In theory, it doesn’t have to. We might want to insure against the third outcome. If we cannot do so by proof, can we do it by test?

```groovy
@Test(timeout=POSITIVE_INFINITY)
void "To infinity and beyond!"()
{
    def values   = [3, 1, 4, 1, 5, 9]
    def expected = [1, 1, 3, 4, 5, 9]
    bogosort(values)
    assert values == expected
}
```

In other words, if it reaches the end of time, fail the test because it hasn’t terminated. You may notice a problem with this, and not simply because timeout in JUnit 4 takes long rather than double.

In practice, however, nothing lasts forever. There are no infinities in the physical universe — infinity is a mathematical concept, not a physical one. Everything in the universe is bounded, either by physics or by our patience. We establish limits that are practical and sufficient according to our needs. Rather than gazing long into an abyss, we use place-holder symbols, such as the word ‘infinity’, a lemniscate (∞) or the POSITIVE_INFINITY constant defined in Double.

```groovy
@Test(timeout=1000)
void "To infinity and beyond!"()
{
    def values   = [3, 1, 4, 1, 5, 9]
    def expected = [1, 1, 3, 4, 5, 9]
    bogosort(values)
    assert values == expected
}
```

This is how we generally address questions of non-guaranteed completion, from accidental occurrences of the Halting Problem and its kin to waiting on messages that may never arrive. We guarantee termination by introducing non-determinism: we convert any operation that can be characterised as ‘if it terminates, it returns a result’ to ‘it terminates with a result or it times out’. We trade uncertainty of result for a guarantee in time. We could even use this technique in our implementation, signalling an error in the case of failure to terminate in time.
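Applied to our Python implementation, that trade might look like the following sketch (the timeout parameter and the choice of TimeoutError are mine):

```python
import time
from itertools import islice
from random import sample

def is_sorted(values):
    pairs = zip(values, islice(values, 1, None))
    return all(i <= j for i, j in pairs)

def bogosorted(values, timeout=1.0):
    # Terminates with a sorted result, or raises once the deadline passes:
    # uncertainty of result traded for a guarantee in time.
    deadline = time.monotonic() + timeout
    while not is_sorted(result := sample(values, len(values))):
        if time.monotonic() > deadline:
            raise TimeoutError('bogosort timed out; the values remain unsorted')
    return result
```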

Because success or failure of the test is not solely a function of the code’s correctness, bogosort is not technically unit testable according to strict definitions of unit testability. It provides a service whose outcome falls outside our control, as if it had an external resource dependency.

> The chief task in life is simply this: to identify and separate matters so that I can say clearly to myself which are externals not under my control, and which have to do with the choices I actually control.
>
> — Epictetus

## Terminating

As with sleep sort, dropsort and permutation sort, putting bogosort under the microscope is about fun and learning, not about practical code you can drag and drop into the next commit on your current codebase.

Esoteric algorithms — even the ones that aren’t algorithms — belong to a class of algorithms that are to regular algorithms as esolangs are to more conventional programming languages intended for actual work. The focus here is on esoteric algorithms implemented conventionally rather than conventional algorithms implemented esoterically.

Like a good esolang, a good esoteric algorithm is not simply esoteric for its own sake. These unconventional takes on a conventional requirement are not simply challenging for the sake of being challenging: they are about the delight and insight that come from an intellectual surprise or subversion of the expected.

Bogosort is eyebrow-raising for its unsystematic approach and shocking attitude to efficiency. Look closer, though, and it spotlights random number generation, requirements, testing and the fundamentals of computability. In theory, using bogosort means never having to say you’re done.
