
A Beginner’s Guide to the Ways Haskell Helps Us Avoid Errors

Everyone makes mistakes. One of the reasons I tend to like languages like Haskell is that they help me catch more of my mistakes up front, at compile time. This can be frustrating, but it also gives me a somewhat greater sense of security when my code compiles correctly. Though it’s not a total guarantee, it’s a decent indicator that I haven’t made any mistakes from at least a few restricted classes of possible ones.

A friend who’s currently learning Haskell asked me about the ways using it might help us avoid errors that we see in other languages. This post is an attempt to clarify some of the main ways we can use the features available to us (especially the type system) to cut back on bugs.

This post is aimed at people somewhat new to learning Haskell. They probably know about pattern matching, pure functions, strong static types, and what laziness is. They might kinda-sorta know about functors, applicatives, and monads but don’t have much experience with them in practice (and I won’t be using any of those words again in this post).

One note about the examples given: it’s hard to present non-trivial examples in a blog post. It’s also hard to really see the value of certain kinds of error reduction in a trivial example. I’ve tried to make sure these examples generalize fairly naturally, but they’re also all relatively short, so may fall prey to either of these issues.

Obvious and Common

There are some obvious ways Haskell reduces certain classes of errors, but they’re so common in most programmers’ experience that I’ll only mention them in passing here. This includes things like garbage collection, which helps us reduce memory-management-related errors. Unless you’re commonly working in C, C++, or similar, you’re probably used to this already. I’ve tried to make the majority of this post about what Haskell does differently from at least a sizeable chunk of mainstream languages.

Type System Basics

Of course, although we might dream about it, we can’t eliminate bugs entirely. Nothing in the language itself stops you from writing this kind of code:

isThree :: Int -> Bool
isThree 4 = True
isThree _ = False

This would best be caught by tests or careful code review. The type system does stop us from making the following kind of mistake, though:

isThree :: Int -> Bool
isThree "three" = True
isThree _ = False

Even without the type signature, the way we use a function elsewhere can cause the compiler to reject bad code, thanks to type inference:

goodOrBad :: Int -> String
goodOrBad n = if isThree (sum [1..n]) then "good" else "bad"

isThree "three" = True
isThree _ = False

We can eliminate another set of possible typos (and therefore possible lurking bugs) by moving from String to a specific type. We’ve gotten rid of the possibility that "Bad" and "bad" were used interchangeably because the programmer wasn’t sure about the convention. We’ve also changed the type of goodOrBad to better let other functions know about what’s going on in it.

data Quality = Good | Bad

goodOrBad :: Int -> Quality
goodOrBad n = if isThree (sum [1..n]) then Good else Bad

Note that Quality is isomorphic to Bool. That is, it has the same “shape”, just different names.

data Quality = Good | Bad
data Bool = True | False

In many other languages, because creating new types is somewhat difficult, there would be a strong temptation here to use Bool instead of something more accurate like Quality. However, if we do this we’re losing some information (at the level of the programmer, not at the level of the type system). If a function we have takes a Quality, we know something more about it than if it takes a Bool. For this purpose, using newtype is often the way to go. It lets us add information to an underlying type, with zero runtime cost.

newtype Quality = Quality Bool

goodOrBad :: Int -> Quality
goodOrBad n =
  if isThree (sum [1..n]) then Quality True else Quality False

Things we encode in the type system are often safer (meaning more error-free) than conventions, or things programmers are supposed to remember. At least there’s some process checking for mistakes (typechecking during compilation). We want to offload as much boring bookkeeping to the computer as possible, so we can think about the harder parts of the problem undisturbed by having to remember whether it’s spelled "Good" or "good".

Purity and Lack of Side Effects

The above reduces certain kinds of errors we see in dynamically-typed languages (JavaScript’s {} + [] === 0 comes to mind), but there are definitely many mainstream languages with static types that avoid these pitfalls. Where does Haskell differ from those languages?

One answer is that Haskell eliminates side effects. If you have a function f :: Int -> Int, there is no way for it to print to the screen, or write to a database, or start a global thermonuclear war. All it can do is transform an Int into another Int. This is a guarantee not often seen, even in many statically typed languages.

This is also where the idea that Haskell is useless comes from. The worst version of this is when somebody says that you can’t do anything at all with Haskell, because it has no side effects! You might think this doesn’t really happen, but when I was in school the professor of my “Advanced Object-Oriented Programming” class was asked what he thought about pure functional programming, and he said something like “I don’t see how it could be useful, since the end goal of any program is to have an effect on the world”.

Clearly, Haskell is able to have effects in the real world (or it would indeed be useless). For example, this blog post was served to you by a Haskell program. The HTML in the post was generated from a different format by using another Haskell program.

The key here is that although Haskell does have effects, those effects are limited. Just as types limit the possible values that a term in our program can take on, purity limits the kinds of effects that a function in our program can have.

Every function starts with no effects. The way to introduce effects into a program that you’ve probably heard the most about is using IO. The main function has type IO (), which indicates to us that main can have input and output effects.
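
As a minimal example, the classic two-line program below wears its input/output effects right in its type:

main :: IO ()
main = putStrLn "hello, world"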

There are other kinds of effects we can add to our functions though. For example, State is a way to hold onto some stateful value, and transform that state in various ways. If we add the effects of Writer, we can do things like maintain a persistent log that we write to throughout the function. Adding Reader to the mix lets us have an environment that we can read from. In fact, these three are so commonly used together that there’s something called RWS which combines the effects of reading, writing, and state.
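
Here’s a small sketch of State on its own (assuming the common mtl library, where these effects live in modules like Control.Monad.State; nextLabel is just an illustrative name):

import Control.Monad.State

-- a counter we can read and update; State Int is the only effect allowed
nextLabel :: State Int String
nextLabel = do
  n <- get             -- read the current state
  put (n + 1)          -- replace it with an incremented value
  pure ("item" ++ show n)

Running runState nextLabel 0 gives back ("item0", 1): the result, along with the final state.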

Because we might want more than one type of effect per function, we want some way to mix and match the effects that we need in each function. What if we don’t need to read anything, but we want both writing and state? We can combine Writer and State into a new thing (call it WS) and use them together. In this way we can select exactly the effects that a function is able to have, and no more. If we see a function f :: Int -> WS Int, we know that f transforms an Int, and that it can write or use state.
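
One way to realize something like WS in practice (again assuming mtl; the names WS and f are purely illustrative) is to reuse RWS with a throwaway () environment, so there’s nothing to read:

import Control.Monad.RWS

-- WS combines a log of strings (writing) with an Int counter (state)
type WS = RWS () [String] Int

f :: Int -> WS Int
f n = do
  tell ["doubling " ++ show n]  -- write an entry to the log
  modify (+ 1)                  -- update the state
  pure (n * 2)

Evaluating runRWS (f 21) () 0 yields (42, 1, ["doubling 21"]): the result, the final state, and the accumulated log.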

What’s more valuable for error reduction is what we know f can’t do. We know it can’t read. We know it can’t perform arbitrary IO. It can only perform the effects that it has in its type signature, in stark contrast to most languages where you can perform any effect at any time!

The Virtue of Laziness

Laziness is a surprising feature to include on a list of “things that reduce bug rates”, but it does have a few properties that help us do so. Imagine something like this:

complicatedFunc :: Int -> Int
complicatedFunc x =
  error "the complicated function ran into an error"

f :: Int -> Int
f x = const 0 (complicatedFunc x)

Because of laziness, f never even needs complicatedFunc to run before it returns the result 0. If there’s some complex logic that can produce bad results like this, laziness can skip right past it by simply not doing any more work than it has to. Laziness ensures that we only do work that’s needed later on, which helps us not only with errors but with performance. (Imagine complicatedFunc just took a really long time rather than erroring out.) It’s a small gain, but it’s helpful nonetheless.
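
In a GHCi session, for example, asking for f 5 prints 0 right away; the error inside complicatedFunc never fires, because its result is never demanded:

f 5
0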

Correct by Construction

Let’s examine this type for a binary tree:

data Tree a = Leaf | Node a (Tree a) (Tree a)

This is “correct by construction”. There is no way to create a value of type Tree a that is an invalid binary tree. Contrast this with creating trees via pointer manipulation, or with creating trees in a dynamic environment where we can set one node to 7 and another node to ["hello", "goodbye"], and another node to point to itself.
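
Every value we can possibly build out of Leaf and Node is a well-formed tree (the name exampleTree below is just for illustration):

exampleTree :: Tree Int
exampleTree = Node 1 (Node 2 Leaf Leaf) Leaf

There is simply no constructor that lets us build a cycle, a dangling reference, or a node holding the wrong type of value.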

There are many types that provide similar kinds of properties. For example, NonEmpty is a list that is guaranteed to always have at least one element.
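
Here’s a small sketch of how that guarantee pays off; NonEmpty lives in Data.List.NonEmpty in the base library, and firstItem is just an illustrative name:

import Data.List.NonEmpty (NonEmpty(..))
import qualified Data.List.NonEmpty as NE

-- NE.head is total: a NonEmpty value always has a first element,
-- so there is no empty case to crash on
firstItem :: NonEmpty a -> a
firstItem = NE.head

demo :: Int
demo = firstItem (1 :| [2, 3])  -- 1; the :| constructor demands a first element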

This leads me to a small point about library quality. Any language with high-quality libraries can easily provide correct trees, or correct nonempty lists. Haskell’s advantage is when you want to create your own types, and you want to reason about them. Because of various features of how the language works, it’s a lot easier to be safe not just with libraries that have been looked over by hundreds of people, but also with code that maybe only a couple people will ever see.

Maybe Just Nothing

The final point I want to make in this post is about types like Maybe and Either. If we see the type Maybe, we know we either have a Just value, or we have the special indicator Nothing. In this way, Maybe is sort of like having nullable values, but it doesn’t infect every object like null pointers do! While it’s true that Nothing is like a better replacement for null, the real power of Maybe comes from its ability to fade into the background.

Consider the following few “safe” functions:

import Prelude hiding (lookup)  -- we’re defining our own lookup below

maybeHead :: [a] -> Maybe a
maybeHead [] = Nothing
maybeHead (x:_) = Just x

maybeDiv :: Int -> Int -> Maybe Int
maybeDiv _ 0 = Nothing
maybeDiv a b = Just (a `div` b)

lookup :: Eq k => k -> [(k, v)] -> Maybe v
lookup _ [] = Nothing
lookup key ((k, v):xs) =
  if k == key then Just v else lookup key xs

When used individually, they’re nice because they give us a typesafe result every time. Imagine if lookup threw an exception every time it failed to find the thing you were looking for! Instead, we simply get back a value encoding the fact that nothing was found.

lookup 3 [(x, x*2) | x <- [1..5]]
Just 6

lookup 9 [(x, x*2) | x <- [1..5]]
Nothing

However, when used together in a block of do notation, these functions take on an even nicer character. You know the callback hell you run into when you have to do one thing, then another, then another in JavaScript (later fixed by async/await)? Do notation is sort of the fix for callback hell in Haskell, except it’s more general. It correctly handles what happens when we get a Nothing result from any piece of our computation.

testMaybe :: Int -> Int -> [Int] -> Maybe Int
testMaybe a b c = do
  found <- lookup a [(x, x * 2) | x <- [1..b]]
  divided <- maybeDiv found 2
  h <- maybeHead c
  return (divided + h)

If any piece of testMaybe fails, the whole thing returns a Nothing. It doesn’t try to compute using a bad value, it essentially just skips over remaining computations. If the whole thing succeeds, it returns a Just x where x is the result of all the computation.
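
Trying it out with the lookup table above:

testMaybe 3 5 [10, 20]
Just 13

testMaybe 9 5 [10, 20]
Nothing

In the first call every step succeeds (the found value 6, divided by 2, plus the head 10). In the second, lookup fails immediately, so the whole computation is Nothing.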

The reason this can be seen as an error reduction strategy is that it massively reduces the boilerplate a programmer might otherwise have to write. If you make one mistake in handling any of the individual Nothing values, then the whole thing could be operating under erroneous assumptions. Because of the way Maybe works, do notation is able to handle all that logic for us “under the hood”. It’s like a fairy godmother came down from the sky and, at the end of each line, did a little work for us to make sure that our function would operate properly.

Conclusion

A general takeaway here is that automatic processes and restrictions help us eliminate errors. The type system checks for us, automatically, that we haven’t done something like add a string to a number. Because the automatic default is “this function has no side effects”, any effects a function does have were put there on purpose. Laziness automatically keeps us from doing unnecessary work. If our types are correct by construction, we are automatically kept from creating bad values of those types. Maybe’s interaction with do notation automatically takes care of Nothings, keeping us from accidentally doing the wrong thing with them.

I think all of this is just fantastic. The more that can be automatically handled by the computer, and the less that I have to manually handle (and make mistakes with), the better! The ideal programming language would be one where I state what I want in a few English sentences, and the computer automatically interprets it the way I meant, without any errors. Until that day comes, I’ll prefer languages that do more of the error-checking and bookkeeping work for me, so I can focus on what’s interesting.