Using Persistent with Servant

In my last post about writing a database-backed Servant API, I mentioned at the end how using a library like Persistent might be nicer than having strings of raw SQL strewn throughout our code. Here, we’re going to do just that. I’ll be referencing that previous post a bit, so it might be helpful to have read it.

Several of the techniques here come straight out of servant-persistent. However, I’m going to focus on explaining how to put together a truly minimal working app, with as little code as possible, whereas that example does a whole lot more (like proper logging, environment management, and handling exceptions correctly). I definitely recommend reading through that repo for furthering your understanding.1

Models

I’ll breeze past explaining how Persistent works a bit, so we can focus on using it in our API.2

Last time, in the service of data management, we built a record representing a row in our database table of piano exercises.

data Exercise = Exercise
  { exerciseBpm :: Int
  , exerciseName :: Text
  , exerciseAll12Keys :: Bool
  }
  deriving (Eq, Show)
$(deriveJSON (dropPrefix "exercise") ''Exercise)

With Persistent, we do something a little different and define models with a syntax like the below. We’ve also made a few changes to our constraints on the data, as well as added the new Hands type, which I’ll explain in a second.

share [mkPersist sqlSettings, mkMigrate "migrate"] [persistLowerCase|
Exercise
  name Text
  UniqueName name
  bpm Int
  hands Hands
  allKeys Bool
  deriving Eq Show
|]
$(deriveJSON (dropPrefix "exercise") ''Exercise)

Whereas with sqlite-simple we would have had to write raw SQL to make sure that e.g. every exercise in the database had a unique name, here we can just add the UniqueName name constraint, which ensures we can't accidentally insert duplicate names.
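The quasiquoter also generates a UniqueName constructor that works with Persistent's uniqueness helpers. For example (a sketch; tryAdd and findByName are just illustrative helpers):

-- insertUnique returns Nothing instead of failing when the name is taken
tryAdd :: Exercise -> SqlPersistT IO (Maybe (Key Exercise))
tryAdd = insertUnique

-- getBy looks up a row via its unique constraint
findByName :: Text -> SqlPersistT IO (Maybe (Entity Exercise))
findByName name = getBy (UniqueName name)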

Our JSON deriving is exactly the same as before, because share builds a record type with the exercise prefix, just like the one we wrote by hand last time. The Eq and Show deriving is also pretty simple.
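Concretely, the splice above generates a record roughly like the following (a simplified sketch; the real output also includes a key type, a PersistEntity instance, and more):

-- Approximately what mkPersist generates for the Exercise entity
data Exercise = Exercise
  { exerciseName :: Text
  , exerciseBpm :: Int
  , exerciseHands :: Hands
  , exerciseAllKeys :: Bool
  }
  deriving (Eq, Show)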

Note that we don’t have to write instances akin to FromRow and ToRow, as existing Persistent instances for Text, Int, etc. take care of most of that marshalling for us already. However, we have added a new column: Hands.

Marshalling Hands Data

Hands is a simple enum with three possible values

-- left, right, or hands together
data Hands = LH | RH | HT
  deriving (Eq, Show)
$(deriveJSON defaultOptions ''Hands)

However, because this is a custom type, we need to do a bit of work to get it flowing to and from our database properly. First, we need an instance of PersistFieldSql that notes what type Hands should have in SQL-land:

instance PersistFieldSql Hands where
  sqlType _ = SqlOther "hands"

We also need to describe how Hands should be encoded. SQLite doesn't support enums directly, so instead we store the value as text, and provide error messages for invalid encodings.

instance PersistField Hands where
  toPersistValue = PersistText . T.pack . show
  fromPersistValue (PersistText hands) = case hands of
    "LH" -> Right LH
    "RH" -> Right RH
    "HT" -> Right HT
    _ -> Left $
      "Invalid hands encoding in database: " <> hands
  fromPersistValue v = Left $
    "Invalid hands format in database: " <> T.pack (show v)

This might seem as bad as ToRow and FromRow, but remember that we got all of the columns we used last time for free, from the model definition. It’s only because we’ve added this new sum type (which we can use in Haskell-land with abandon) that we’re running into having to write these instances here.

The False Path of MonadUnliftIO

There is a function runSqlite :: MonadUnliftIO m => Text -> ReaderT SqlBackend (NoLoggingT (ResourceT m)) a -> m a we can use to essentially replace our old withConn, but with better types backing it up. We no longer have to write a form of connection management ourselves, because the ReaderT hands us the SqlBackend to run against.
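Used on its own in IO, it looks something like this (a sketch, reusing the model and migration defined above):

-- Open pianoreps.db, run migrations, then list every stored exercise
listExercises :: IO [Exercise]
listExercises = runSqlite "pianoreps.db" $ do
  runMigration migrate
  map entityVal <$> selectList [] []

The trouble starts when we want to call it from inside a Servant handler.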

To use runSqlite directly with a Servant Handler, we would need to put together a MonadUnliftIO instance. Handler’s newtype looks like this:

newtype Handler a = Handler
  { runHandler' :: ExceptT ServerError IO a }

We can write an instance for Handler fairly straightforwardly, though withRunInIO's type of MonadUnliftIO m => ((forall a. m a -> IO a) -> IO b) -> m b isn't exactly something to sneeze at.

instance MonadUnliftIO Handler where
  withRunInIO run =
    Handler $
    withRunInIO $ \r ->
    run $ r . runHandler'

We now need to implement withRunInIO for ExceptT ServerError IO, because that’s what our “inner” withRunInIO is working with here. However, doing the same thing as we did above for ExceptT doesn’t really make sense. As a rule of thumb, trying to use MonadUnliftIO with “stateful” monads (StateT, ExceptT, etc.) isn’t going to work.3
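To see why, consider what the required "unlift" function would look like for ExceptT. The only total version has to smuggle the Left case out as a runtime exception (possible here only because ServerError happens to have an Exception instance), which defeats the point of ExceptT. A sketch of that dead end:

-- The unlift function has nowhere to report the Left case,
-- so the best it can do is rethrow the error as an IO exception
unliftExceptT :: ExceptT ServerError IO a -> IO a
unliftExceptT (ExceptT io) = io >>= either throwIO pure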

So, instead of just adding instances to Servant’s existing Handler, we’ll need to come up with something a bit more workable.

Configuring an App

Instead of hardcoding values like last time, let's keep a configuration record around. Here, we add a ConnectionPool, as well as the Port and DatabaseName we had last time (as hardcoded values).

data Config = Config
  { configPool :: ConnectionPool
  , configPort :: Port
  , configDatabaseName :: Text
  }

Instead of using Handler directly, we'll need some other type. We can wrap our Config in a ReaderT over ExceptT ServerError, getting both convenient access to the ConnectionPool (through Reader's ask) and the same ExceptT ServerError behavior Handler has on the inside.

newtype AppT m a =
  AppT
    { runApp ::
      ReaderT Config (ExceptT ServerError m) a
    }
  deriving 
    ( Functor, Applicative, Monad, MonadIO
    , MonadReader Config, MonadError ServerError
    )
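Because AppT derives MonadError ServerError, handlers written in it can still abort with Servant errors just like a plain Handler could; for example (a hypothetical helper, purely to illustrate):

-- Abort the request with a 404, exactly as we would in Handler
notFound :: AppT IO a
notFound = throwError err404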

Since we’ll mainly be using this AppT as a transformer on IO, it’s convenient to have a type synonym

type App = AppT IO

Finally, we need a way to take some App and transform it into Handler, in the context of some Config. This is surprisingly straightforward if we remember that runReaderT and runApp do what we want.

convertApp :: Config -> App a -> Handler a
convertApp config app =
  Handler $ runReaderT (runApp app) config

Building an Application

When I said we were done with withConn, that was slightly misleading. We still want a similar function around, as a way to grab connections to the database in context. However, instead of being at the per-query level, like withConn, now we’re at the per-app level, managing an entire pool of connections, and providing access to that pool via Reader.

withConfig :: (Config -> IO a) -> IO a
withConfig action = do
  let dbname = "pianoreps.db"
  !pool <- createSqlitePool dbname 1
  action Config
    { configPool = pool
    , configPort = 1810
    , configDatabaseName = dbname
    }

Again looking for replacements of functionality we had last time, we need some way to migrate. Glancing at our Persistent models definition, we see that we’re using mkMigrate to create a way to do migration…called migrate.

share [mkPersist sqlSettings, mkMigrate "migrate"]

To use this, we’ll need to runMigration in the context of some pool that we can grab from a Config.

initialize :: Config -> IO Application
initialize Config{..} = do
  runSqlPool (runMigration migrate) configPool
  pure $ app Config{..}

initialize calls through to app, which does the same work as before, but with App instead of Handler. This isn’t the cleanest way to handle this in the long term, but it’s enough for now when we only have a few endpoints.

app :: Config -> Application
app config = serve (Proxy :: Proxy API) $
  convertApp config exercises :<|>
  (\ex -> convertApp config (addExercise ex)) :<|>
  convertApp config populate
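Once there are more endpoints, the usual cleanup is Servant's hoistServer, which applies convertApp to the whole server at once instead of per-handler. A sketch of that alternative:

-- Group the handlers once at type ServerT API App...
server :: ServerT API App
server = exercises :<|> addExercise :<|> populate

-- ...then convert the entire server to Handler in one place
app' :: Config -> Application
app' config =
  serve api $ hoistServer api (convertApp config) server
  where
    api = Proxy :: Proxy API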

Running the actual server on some port is a process of putting pieces together. Grab a Config with withConfig, initialize an Application, and start running it.

main :: IO ()
main = do
  withConfig $ \Config{..} -> do
    application <- initialize Config{..}
    putStrLn $ "running on port " <> show configPort
    run configPort application

Running Database Queries

Because we’re now handling the database connection bit of withConn at a higher level, with a pool, we’ll need some way to get access to that pool to run database actions on it. Reader works great for this sort of thing, allowing us to grab the pool and do DB stuff from anywhere in an App context.

runDB ::
  (MonadReader Config m, MonadIO m) =>
  SqlPersistT IO b -> m b
runDB query = do
  pool <- asks configPool
  liftIO $ runSqlPool query pool

A prime example of how Persistent helps clean up is addExercise. Here's the old, sqlite-simple-style version, with its raw SQL string (don't miss a ?)!

addExercise :: Exercise -> Handler ()
addExercise exercise = liftIO $ withConn $ \conn -> do
  execute conn
    "INSERT INTO exercises (bpm, name, all12keys) \
    \VALUES (?,?,?)" exercise

And here it is with Persistent working hard behind the scenes!

addExercise :: Exercise -> App ()
addExercise exercise =
  runDB $ insert_ exercise

exercises is similarly nice-and-typesafe now.

exercises :: App [Exercise]
exercises = runDB $ map entityVal <$> selectList [] []
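The model definition also generates typed EntityField constructors, so filters and sort orders are checked by the compiler too. For instance, a handler restricted to all-keys exercises might look like this (a sketch; ExerciseAllKeys and ExerciseBpm are the constructors mkPersist derives from the model above):

-- Only exercises marked allKeys, slowest first
allKeysExercises :: App [Exercise]
allKeysExercises = runDB $
  map entityVal <$>
    selectList [ExerciseAllKeys ==. True] [Asc ExerciseBpm]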

Random Populations

Now, let’s quickly demonstrate adding another endpoint. This endpoint will generate a random Exercise, insert it into the DB, and return it to us. We’ll call it populate.

type API =
  "list" :> Get '[JSON] [Exercise] :<|>
  "add" :> ReqBody '[JSON] Exercise :> Post '[JSON] () :<|>
  "populate" :> Get '[JSON] Exercise

populate is easy enough to implement since we have direct access to randomness via liftIO

populate :: App Exercise
populate = do
  rand :: Int <- liftIO $ randomRIO (1, 1000)
  bpm :: Int <- liftIO $ randomRIO (60, 180)
  hands <- liftIO $ pickOne [LH, RH, HT]
  all12 <- liftIO $ pickOne [True, False]
  let exercise = Exercise
        ("Exercise " <> T.pack (show rand)) bpm hands all12
  runDB $ insert_ exercise
  pure exercise
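pickOne isn't defined in this post; it's just a small helper for choosing a random element. A minimal version might look like this (assuming the list is non-empty):

-- Choose a uniformly random element from a non-empty list
pickOne :: [a] -> IO a
pickOne xs = do
  i <- randomRIO (0, length xs - 1)
  pure (xs !! i)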

Quick Side Bonus (and an Infinite Failure)

As a nice bonus, we can very easily log exactly which raw SQL queries are being executed.

However, my initial implementation was flawed, and it stumped me for a while: I didn't realize why my web server seemed to be hanging.

instance MonadLogger IO where
  monadLoggerLog loc src lvl msg = do
    liftIO $ monadLoggerLog loc src lvl msg

That is, of course, an infinite loop. Don't just copy and paste a function and all its arguments!

Something like the following provides a saner, finitely terminating situation. (You can get much deeper into piping logs around, though.)4

instance MonadLogger IO where
  monadLoggerLog loc src lvl msg = do
    say $ T.pack (show lvl) <> ": " <>
      (decodeUtf8 . fromLogStr $ toLogStr msg)

That will produce logs that look like this

LevelDebug: SELECT "id", "name", "bpm", "hands", "all_keys" FROM "exercise"; []
LevelDebug: INSERT INTO "exercise"("name","bpm","hands","all_keys") VALUES(?,?,?,?); [PersistText "Exercise 907",PersistInt64 172,PersistText "LH",PersistBool True]
SQLite3 returned ErrorConstraint while attempting to perform step: UNIQUE constraint failed: exercise.name
LevelDebug: INSERT INTO "exercise"("name","bpm","hands","all_keys") VALUES(?,?,?,?); [PersistText "Exercise 469",PersistInt64 64,PersistText "LH",PersistBool False]
LevelDebug: SELECT "id" FROM "exercise" WHERE _ROWID_=last_insert_rowid(); []
LevelDebug: SELECT "id", "name", "bpm", "hands", "all_keys" FROM "exercise"; []

Conclusion

As expected, Persistent provides a nicer interface than littering raw SQL strings around, although at the outset I wasn't expecting the extra ReaderT/AppT work to be necessary. Looking back, though, it's not so bad, and this is a codebase I'd much rather come back and add to than one without autogenerated models or a nice SQL DSL.

As ever, you can see all the code on github.