Sometimes good performance seems to be in direct conflict with programmer-understandable types. This post is about ways to get fast parity checking on natural number types, while also maintaining as many invariants as possible automatically. The example here is definitely contrived—if you wanted fast numeric parity calculations you could just check the lowest bit. However, it’s a pretty good stand-in for more complicated encodings, and for how we can play with them to make them both performant and easy to understand. Along the way we’ll briefly touch on some cool Haskell language extensions: type families, GADTs, and data kinds.
Let’s start with a tiny subset of the Prelude, and we’ll also define a Bool
type as well as the not
operation on it which we can use later.
import Prelude((++), Show(..))
data Bool = True | False
  deriving Show
not :: Bool -> Bool
not False = True
not True = False
Hopefully this is straightforward. Here’s our stand-in for an existing type with a focus on understandability by human programmers. We’re encoding natural numbers as Peano numerals, where Z
means 0, and S
means successor. Let’s also throw in a helpful succ
function for finding successors.
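Something like this will do:
data Nat = Z | S Nat

-- Z is zero; S n is the successor of n.
succ :: Nat -> Nat
succ = S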
Our version of succ
here only needs to provide the S
constructor, which means it’s simple to write and easy to understand.
For whatever reason, the other constraint on our type is that we want fast parity checking, i.e. a way to tell whether a number is even or odd. Let’s write a version of that now.
even :: Nat -> Bool
even Z = True
even (S Z) = False
even (S (S n)) = even n
odd :: Nat -> Bool
odd n = not (even n)
Here we’re deconstructing our Nat
and recursively finding out whether it’s even by reducing the number by two, and checking the parity of that. Intuitively, this works, because we always end at one of the base cases and are making the number smaller at each step. However, it’s rather slow. We’re taking around \(n/2\) steps to check the parity of a number \(n\), which means the runtime of this function grows linearly with the size of its input. Let’s see if we can do better.
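For instance, checking four steps down through two and zero before hitting a base case:
-- even (S (S (S (S Z))))
--   = even (S (S Z))
--   = even Z
--   = True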
Here’s another idea: we can encode odd and even numbers separately. This way, there’s an easy way to check a number’s parity from its encoding in a single step. Maybe our new data type looks like this:
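-- ZZ is zero; SO builds odd numbers, SE builds even ones.
data ParityNat = ZZ | SO ParityNat | SE ParityNat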
ZZ
is the equivalent of Z
in Nat
, but now we have both an SO
(“odd successor”) and an SE
(“even successor”). This makes the succ
function a bit more complicated now:
succPar :: ParityNat -> ParityNat
succPar ZZ = SO ZZ
succPar n@(SO _) = SE n
succPar n@(SE _) = SO n
We want the SE
s and SO
s to alternate, which we can do as shown. However, note that now there’s a higher burden on people writing functions dealing with ParityNat
as opposed to just plain Nat
s. Along with additional cases to handle, there are more places for implementations to contain errors. We’re also relying on people to use succPar
now, rather than just the plain data constructors. At least our new parity checking functions are speedy:
evenPar :: ParityNat -> Bool
evenPar ZZ = True
evenPar (SO _) = False
evenPar (SE _) = True
oddPar :: ParityNat -> Bool
oddPar n = not (evenPar n)
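For instance, three built with succPar comes out as SO (SE (SO ZZ)), and a single pattern match settles its parity no matter how big the number gets:
-- succPar (succPar (succPar ZZ))  =  SO (SE (SO ZZ))
-- evenPar (SO (SE (SO ZZ)))       =  False
-- oddPar  (SO (SE (SO ZZ)))       =  True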
There’s another encoding option, which puts more burden on the author of the encoding, but hopefully automatically enforces more constraints for later users of the library. We can push parity checking into the types. First, let’s make a new type for the parity itself.
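data Parity = Even | Odd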
If this seems weirdly familiar, that’s because it’s isomorphic to Bool
which we defined earlier. We can also define a “not
” operation on Parity
s, but first let’s take one step back. If we want to enforce parity in types, having Even
and Odd
terms isn’t that helpful. We need to promote Parity
to a kind, and promote Even
and Odd
to types. While we’re at it, let’s also add the type families extension so we can define the equivalent to our not
function, but at the type level instead of the term level. All in all, we’ll want the following extensions enabled:
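-- DataKinds promotes Parity to a kind, TypeFamilies gives us the type-level
-- not, and GADTs is for the Natural definition further down. KindSignatures
-- (already implied by TypeFamilies) covers the kind annotations.
{-# LANGUAGE DataKinds      #-}
{-# LANGUAGE TypeFamilies   #-}
{-# LANGUAGE GADTs          #-}
{-# LANGUAGE KindSignatures #-}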
Now we can define the type-level equivalent of not
, but for Parity
instead of Bool
. Let’s call it “Opp
”. Here’s its definition:
type family Opp (n :: Parity) :: Parity
type instance Opp 'Even = 'Odd
type instance Opp 'Odd = 'Even
This may look bizarre, but it’s more or less just a different syntax for defining a function. Opp
takes a Parity
called n
and returns a Parity
. The opposite of even is odd, and the opposite of odd is even. The ticks in front of 'Even
and 'Odd
remind us that these have been promoted from terms to types.
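If you want to watch the reduction happen, GHCi’s :kind! command forces type families to evaluate; the output looks roughly like this:
ghci> :kind! Opp 'Even
Opp 'Even :: Parity
= 'Odd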
We can now define our constructors analogous to Z
and S
. This stuff here is the reason we needed GADTs.
data Natural :: Parity -> * where
  Zero :: Natural 'Even
  Succ :: Natural p -> Natural (Opp p)
instance Show (Natural p) where
  show Zero = "Zero"
  show (Succ n) = "(Succ " ++ show n ++ ")"
Breaking this down a bit further, a Natural
is a type constructor which takes something of kind Parity
and gives us back a normal type (something of kind *
). Zero is even, and the Succ
of any Natural
has the opposite parity as that Natural
.
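Concretely, the parity rides along in the type of every value we build:
-- Zero              :: Natural 'Even
-- Succ Zero         :: Natural 'Odd
-- Succ (Succ Zero)  :: Natural 'Even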
The successor function is again trivial to write.
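Call it succNat (just to keep it apart from the earlier versions); it is nothing more than the Succ constructor with its type written out:
succNat :: Natural p -> Natural (Opp p)
succNat = Succ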
Weirdly, it now almost doesn’t make sense to have even
and odd
functions. Because this information is encoded in the types, it’s already sort of carried along with every Natural
. However, just for the sake of completeness we can write something like this:
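-- One way to do it (the names here avoid clashing with the even and odd we
-- wrote for Nat): restrict the input type, and the answer is a constant.
evenNat :: Natural 'Even -> Bool
evenNat _ = True

oddNat :: Natural 'Odd -> Bool
oddNat _ = True
Passing an odd number to evenNat no longer produces False; it simply doesn’t type check, and the mistake is caught at compile time.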
This was a brief tour of a few possible encodings for a small bit of data with additional outside constraints on it. The first encoding was very simple, but runtime for operations we care about a lot (even
and odd
) was too slow. We switched to the more performant ParityNat
, but this came at the cost of ease of use when writing functions using that type. Finally, we sort of pushed the problem up to the type level so that anything of type Natural 'Even
has even parity, and likewise any Natural 'Odd
has odd parity. This did away with the need for parity checking as functions, but comes at the cost of a more complicated type system encoding of our desired result.
Like so many places in engineering, this provides an interesting example of multiple tradeoffs that have to be balanced. Normally I lean towards “clarity at most costs”, sacrificing performance to make programmers’ jobs more manageable. However, occasionally understandability has to be traded away for improvements in how our programs actually run, and that tradeoff becomes more common as systems become more heavily relied upon by others.