
The refactoring of a Haskell codebase

6 February 2025 — by Facundo Domínguez

Common engineering scenario: There is a large legacy codebase out there which is known to have a few pervasive problems that everyone wants to get rid of. But nobody understands all the details of the codebase, and few are willing to risk breaking the artifact in a long and costly surgery. This post is an experience report on one such refactoring of Liquid Haskell (LH), a tool to verify Haskell programs.

LH has grown mostly from academic contributions that demonstrate the feasibility of some proof technique or another. Since the focus of a demonstration is not always placed on generality, a new user can find unresolved problems, sometimes blockers that make adoption difficult. Let us look at one such example.

The problem: Name resolution

LH requires the user to write specifications for the various parts of the program she wants to verify. Suppose we have a module with a type to describe the verbosity of a program.

module Verbosity where
data Verbosity = Quiet | Verbose

And suppose that we also have a module where we declare the configuration of the program.

module Config where
import Verbosity
data Config = Config Verbosity

For the sake of brevity, this program can only configure the verbosity. Now let us add some more definitions in the Config module to construct a configuration and to give it a specification.

{-@ measure isVerboseConfig @-}
isVerboseConfig :: Config -> Bool
isVerboseConfig (Config Verbose) = True
isVerboseConfig _ = False

{-@ verboseConfig :: {v:Config | isVerboseConfig v } @-}
verboseConfig :: Config
verboseConfig = Config Verbose

The annotation {-@ measure isVerboseConfig @-} indicates to LH that we want to use isVerboseConfig in specifications, as we do in the specification of verboseConfig:

{-@ verboseConfig :: {v:Config | isVerboseConfig v } @-}

This specification says that we expect verboseConfig to have verbosity Verbose, and LH will verify that this is the case, but first it has to find out what names like Verbosity and Verbose refer to. To do so, LH inspects the imports of the module and learns that these names come from the Verbosity module.

Now, when we import the module elsewhere

module Main where
import Config
...

the specification of verboseConfig should be available to verify the new module Main. This time we would hope not to have to resolve the names of the imported specs again. Alas, when changing modules, LH discards the name resolution of imported modules and needs to resolve names a second time. But module Main is missing the import of module Verbosity, which provides the names that LH needs to resolve.

Easily enough, we can import Verbosity in module Main and declare the problem solved. Unfortunately, this solution means that in large programs we need to explicitly import the transitive dependencies of the modules we want to verify, which is too much to ask of our kind users.
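
Concretely, the workaround amounts to adding an import that GHC itself does not need:

module Main where
import Config
import Verbosity -- redundant for GHC, but it lets LH resolve the names
                 -- used in the imported specification of verboseConfig
...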

We must consider, then, why LH is discarding the name resolution of imported modules. It turns out that the reason is structural: all names in Liquid Haskell are represented as strings. While in places there is an effort to make the strings unambiguous, this representation makes resolved and unresolved names hard to distinguish without doing some parsing, and there are just too many opportunities to mistake one for the other. Matters are worsened by the fact that LH does not keep the keys (GHC Names) that allow it to retrieve the type information used for verification, and finding these keys in the environments of different modules is not trivial either.
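
To make the issue concrete, here is a purely hypothetical sketch (not LH’s actual code) of what the old scheme amounts to: nothing in the types says whether a given string has been resolved already.

type LogicName = String -- hypothetical alias standing in for the old scheme

-- The name as the user wrote it in a specification:
userWritten :: LogicName
userWritten = "Verbose"

-- The same name as it might look after resolution, if the convention held:
qualified :: LogicName
qualified = "Verbosity.Verbose"

-- Any function receiving a LogicName can only guess which form it got.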

The refactoring process

LH is a tool of 28,000 lines of code; changing its representation of names is not an easy refactoring, especially when much of the knowledge of the implementation details still needs to be acquired, as it did in my case. Another alternative would be to rewrite LH from scratch, this time getting it right. But to accomplish this I would also need complete awareness of all the quirks of the old implementation. Nor could we argue that we have a method or a technology that promises a better outcome, so a refactoring it had to be.

Ideally, we would replace all strings with a more structured type to represent resolved names, which should weed out the accidental mistakes and omissions. The change was so massive, though, that landing it as a single contribution was impractical:

  1. It would have been difficult for anyone to review such a large contribution.
  2. If tests in the testsuite didn’t pass, it would be difficult to identify which part of the changes affected the test outcome.
  3. If tests uncovered issues with the design, we would have invested much effort into an implementation built on the wrong assumptions.
  4. It would have been difficult to estimate the overall effort.

For these reasons, it is essential that whatever plan is chosen, the refactoring is carried out in sufficiently small and incremental steps. Fortunately, name resolution admits breaking the task down, as the many language constructs used in specifications can be resolved separately. I started by resolving the names of Haskell types used in specifications; then I could resolve names in measure annotations, later names in assumptions, later names of data constructors in specifications for algebraic data types, and so on.

There were also choices to make about introducing a new representation. For instance, I knew from the start that I wanted to use GHC Names for all names pointing to Haskell entities (type constructors, data constructors, functions). This makes the name representation as precise as it can ever be. But should I arrange data structures to be parametric on the name representation?

data Spec name = ...

The parser would then produce specifications that use strings for the unresolved names, and a later pass would convert them to a type of specification with resolved names.

parse :: String -> Either [Error] (Spec String)
resolveNames :: Spec String -> Spec GHC.Name

This is close to how the GHC compiler manages different representations of names in the various phases of the compilation pipeline.

Making the abstract syntax tree (AST) parametric, and then implementing the traversals and updating function type signatures, was going to be some work, and it didn’t look like the parametricity would help me catch many mistakes. The alternative I adopted was to replace strings with a sum type called LHName that can hold both resolved and unresolved names.

data LHName = LHNResolved ... | LHNUnresolved String

The parser produces LHNUnresolved values, and a generic syb-style traversal takes care of changing those to LHNResolved during name resolution. The intent is that all names are resolved after name resolution, though this knowledge is not explicit in the types of the AST. This would be a problem if some odd function after name resolution expected unresolved names, or if name resolution accidentally left names unresolved. But I don’t regard the runtime errors arising in those cases as likely to escape testing. As I modified the AST one string occurrence at a time, the type checker dutifully flagged every use of string names that needed updating.
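
As a rough, self-contained sketch of what such a pass looks like with syb (the real LH types carry GHC Names and much richer environments; LHName, Env and resolveAll below are simplifications invented for this example):

{-# LANGUAGE DeriveDataTypeable #-}
module NameResolutionSketch where

import Data.Generics (Data, everywhereM, mkM)
import qualified Data.Map as Map

data LHName
  = LHNResolved String   -- stand-in for a resolved (GHC) name
  | LHNUnresolved String -- the name as written by the user
  deriving (Show, Data)

-- A toy environment mapping surface names to qualified names.
type Env = Map.Map String String

-- Resolve a single name, failing when the environment knows nothing about it.
resolveOne :: Env -> LHName -> Either [String] LHName
resolveOne env (LHNUnresolved s) =
  case Map.lookup s env of
    Just q  -> Right (LHNResolved q)
    Nothing -> Left ["unresolved name: " ++ s]
resolveOne _ n = Right n

-- Walk any Data-generic AST and resolve every LHName occurring in it.
resolveAll :: Data a => Env -> a -> Either [String] a
resolveAll env = everywhereM (mkM (resolveOne env))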

When working in a strongly typed language, it feels sinful to defer to runtime the checks that could be performed by the type system, such as those that parameterizing the AST would give us. The implementation of the GHC compiler is a notable example where the common ASTs are parametric in the variable representation. Parameterizing the specifications may still be considered in LH, though it wasn’t an absolute prerequisite to start.

One advantage of LHName over strings is that an LHName can hold a GHC Name. Another advantage is that passing an unresolved LHName where a resolved name is expected produces a runtime error, whereas with strings we got undefined behavior. This is most helpful when serializing specifications to import them later: if any unresolved name is found at that time, an error is produced. Moreover, propagating the switch to LHName through the code helped find the places that were mistakenly producing unresolved names.
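
Continuing the sketch above, the accessor used after name resolution could look like this (the name and shape are hypothetical):

-- Asking for the resolved form of an LHNUnresolved value fails loudly,
-- instead of flowing on silently as a bare string would.
getResolvedName :: LHName -> String
getResolvedName (LHNResolved n) = n
getResolvedName (LHNUnresolved s) =
  error ("getResolvedName: unexpected unresolved name " ++ s)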

Another interesting choice came when deciding the representation for logic names. These are names that refer to entities in the logic, usually unknown to the Haskell compiler. Logic names can refer to functions or type aliases that can be used in specifications. An example of a type alias is the one for non-negative integers.

{-@ type Nat = {v:Int | v >= 0} @-}

For the refactoring, the major difference between Haskell and logic names was that logic names need to be fed to liquid-fixpoint, the theorem prover that LH uses to discharge proof obligations, and liquid-fixpoint does expect strings as a representation for names. Because liquid-fixpoint is used as a library, data structures in LH and liquid-fixpoint share the name representation, which made using a sum type like LHName harder in this case.

One option would have been to generalize liquid-fixpoint to deal with other representations for names. But this was going to be another project on its own. It seemed more practical to keep the interface to liquid-fixpoint unaffected, so the plan was to parse names as strings, resolve them to LHName, serialize the specs, and then convert the LHName back to strings before interacting with liquid-fixpoint. In this way, I could still reuse the output of name resolution when importing specs.

If I didn’t want to have two versions of the AST, one with strings and one with LHNames, then, oh surprise, I had to parameterize specifications over the representation of logic names. Ignoring environments and other details, I ended up with a schematic interface like

data Spec logicName = ...
parse :: String -> Either [Error] (Spec String)
resolveNames :: Spec String -> Spec LHName
serializeSpec :: Spec LHName -> ByteString
convertToLiquidFixpoint :: Spec LHName -> Spec String
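
A hypothetical driver shows how these pieces are meant to compose; checkWithFixpoint and the error handling below are inventions of this sketch, not LH’s actual API.

import qualified Data.ByteString as ByteString

verifyModule :: FilePath -> String -> IO ()
verifyModule specFile source =
  case parse source of
    Left errors -> mapM_ print errors
    Right spec  -> do
      let resolved = resolveNames spec
      -- Persist the resolved spec so that importers can reuse name resolution.
      ByteString.writeFile specFile (serializeSpec resolved)
      -- Hand a string-named spec over to liquid-fixpoint for solving.
      checkWithFixpoint (convertToLiquidFixpoint resolved)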

Now the syb traversals were no longer suitable for implementing name resolution, as the transformation changes the type of the AST, so I had to implement it with a mix of stock Traversable instances and manual traversals. And I find it a bit amusing that I chose a parametric representation for the sake of reusing data structures, and I’m still not using it to have more precise types.
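
As a rough illustration of the Traversable part (the fields below are invented; the real Spec is much larger):

{-# LANGUAGE DeriveTraversable #-}

-- Hypothetical shape of the parametric specification type.
data Spec logicName = Spec
  { specMeasures    :: [logicName] -- measures mentioned in specifications
  , specTypeAliases :: [logicName] -- logic-level type aliases such as Nat
  } deriving (Functor, Foldable, Traversable)

-- Resolving logic names is then just an effectful traversal.
resolveLogicNames :: Applicative f => (String -> f name) -> Spec String -> f (Spec name)
resolveLogicNames = traverse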

The current state of the refactoring

At the time of writing, the resolution of all Haskell names in LH annotations is persisted and reused when importing specifications. Some of the logic names are handled in the same fashion, but a few cases are still needed to complete the refactoring. The state of the refactoring and all of the related contributions can be checked in the corresponding GitHub issue.

There were quite a few side quests derived from the name resolution refactoring. I found it challenging to stay focused on name resolution and not try to fix everything I discovered broken along the way. These were details like type parameters that could be removed since they were always instantiated to the same type, or fields in record data types that were never read, or functions that were almost dead code if I could remove just that one use site that should be doing something different. I ended up fixing a bunch of secondary problems when they were easy enough to resolve. But I had to give up more than once on issues that turned out to be deeper than anticipated; a humbling exercise, if you will, in which I had to admit that my goals for the day were too ambitious, for the sake of progressing on the main refactoring.

I’m excited at the prospect of leaving behind the kind of user-facing errors that the old implementation induced. Much of the success rests on having formulated a plan that allowed the task to be performed incrementally, always keeping the test suite passing. The disarray of name resolution was identified as problematic by both contributors and users, and for much of the specification language it is already a thing of the past.

About the author

Facundo Domínguez

Facundo is a software engineer supporting development and research projects at Tweag. Prior to joining Tweag, he worked in academia and in industry, on a varied assortment of domains, with an overarching interest in programming languages.

If you enjoyed this article, you might be interested in joining the Tweag team.

This article is licensed under a Creative Commons Attribution 4.0 International license.
