Lab note #031: Proving out with Isabelle
In Lab note #029, I was thinking about a different way to construct a unicit tree. Can we create both a description of the splits in a tree and a comparator that makes those descriptions a partially ordered set? With that partially ordered set and an algebraic merge function, can we construct a join-semilattice? And from the join-semilattice, do we get a unicit tree under the insert and delete operations? Since this math was unfamiliar to me, I leveraged GPT to help me play it out last time, asking it for what I needed and reasoning through what it spat back out.
And though I got further than I would have otherwise, I wasn't completely convinced that the result had the properties I wanted, especially the merge function at the end. After a bit of internal debate, I decided to try proving it out with Isabelle.
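To pin down what I was actually asking for, the target structure can be written as an Isabelle locale. This is a minimal sketch under my own naming, not the actual development: `le` is the comparator, `merge` is the merge function, and `'a` stands in for whatever describes the splits of a tree.

```isabelle
theory Unicit_Sketch
  imports Main
begin

(* A join-semilattice: a partial order (the comparator) plus a merge
   that behaves as a least upper bound. Hypothetical names throughout. *)
locale join_semilattice =
  fixes le :: "'a ⇒ 'a ⇒ bool"  (infix "⊑" 50)
    and merge :: "'a ⇒ 'a ⇒ 'a"
  assumes refl:    "x ⊑ x"
      and antisym: "x ⊑ y ⟹ y ⊑ x ⟹ x = y"
      and trans:   "x ⊑ y ⟹ y ⊑ z ⟹ x ⊑ z"
      and ub1:     "x ⊑ merge x y"   (* merge is an upper bound... *)
      and ub2:     "y ⊑ merge x y"
      and least:   "x ⊑ z ⟹ y ⊑ z ⟹ merge x y ⊑ z"  (* ...and the least one *)

end
```

Isabelle's library also ships a `semilattice_sup` type class that packages the same laws; the locale above just spells the obligations out explicitly.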
Isabelle is a proof assistant: in addition to writing functional programs, you can prove properties of the programs you write. It's another level of validation beyond unit tests and fuzzers. Of course, not all programs need this level of correctness, but given that I wanted this construction as a basis for syncing, I hoped it might be worth it in the long run.
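To make that concrete, here's the standard list-append exercise rather than anything from my construction: define a recursive function, then state a property and let Isabelle check the proof.

```isabelle
theory Append_Example
  imports Main
begin

(* An ordinary recursive function, much as you'd write it in Haskell. *)
fun app :: "'a list ⇒ 'a list ⇒ 'a list" where
  "app [] ys = ys"
| "app (x # xs) ys = x # app xs ys"

(* A property of the program, machine-checked rather than spot-checked
   by unit tests: app is associative. *)
lemma app_assoc: "app (app xs ys) zs = app xs (app ys zs)"
  by (induction xs) auto

end
```

The line `by (induction xs) auto` is the entire proof: pick the induction variable, and the automation discharges both cases.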
It seemed intimidating, but so far it's gone more smoothly than I expected. I think two things contributed. First, I could already read and write Haskell, and Isabelle is not too far off. Second, being armed with GPT significantly lowered the learning curve.
Now that I knew the construction I wanted, I just needed to ask GPT to generate the Isabelle functions and proofs. Given that Isabelle is a pretty obscure language, I wasn't sure GPT had enough training data to give robust answers. I'm not finished, but so far it's been a good experience.
It can write out the data types and functions to build up the construction step by step. Looking back, I should have started this way: it's a great way to learn the language, because I can readily see how the language works on a problem I care about. That's far better than scanning through pages of PDFs and tutorials and trying to adapt them to my own problem. I think I got up to speed on the basics in about a day, whereas without GPT it would have taken me at least a week.
But as with generating any other kind of code, GPT doesn't have enough intelligence to give you a good design. It will happily go along with whatever you ask, including running down ill-suited paths toward the solution you asked for.
And while Isabelle has a lot of similarities with Haskell, it has proof mechanisms I've never seen in other languages. The automatic proofs work like magic when the computer figures them out. When it doesn't, I have to fall back to asking GPT what the prover wants from me. It feels almost like a complex puzzle game.
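Here's the flavor of it, continuing the locale sketch from earlier (still hypothetical names, not my actual proofs). Sometimes a one-line automated call closes the goal outright; when it doesn't, you drop into a structured Isar proof and spell out the steps yourself:

```isabelle
(* These lemmas would live in the same theory as the locale above. *)
context join_semilattice
begin

(* Sometimes the automation just finds it: this is the kind of
   one-liner sledgehammer typically suggests. *)
lemma merge_idem: "merge x x = x"
  by (metis antisym least refl ub1)

(* And when it doesn't, you fall back to spelling out each step. *)
lemma merge_comm: "merge x y = merge y x"
proof (rule antisym)
  show "merge x y ⊑ merge y x" by (rule least, rule ub2, rule ub1)
  show "merge y x ⊑ merge x y" by (rule least, rule ub2, rule ub1)
qed

end
```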
It's apparent to me from this experience with GPT and Isabelle that LLMs work best within the context of the work you're doing. Shuttling back and forth between a chat window and the IDE is a pain. It makes far more sense to integrate the LLM into a custom environment for the work to be done, rather than the other way around. It also tells me that OpenAI won't be able to conquer the many niches where LLMs are applicable with their GPT store, especially the ones that fall further away from a text document.
The further I've gotten into the construction, the more headwinds I've met, but nothing insurmountable so far. I'm still keeping an eye on Prolly Trees in case this turns out to be a dead end.