# Vulcan: A JavaScript Automated Proof System

Mathematicians have been trying to figure out how to get computers to write proofs for a long time. One of the earliest attempts, dating back to the 1960s, was based on a logical rule called resolution.

I created Vulcan, an NPM package that implements a resolution-based automated proof system. Below is an in-browser demo I created with AngularJS and Browserify. You can use the symbols A-Z as variables that can be either true or false, along with any of the following operators: -> (implication), <-> (equivalence), ! (negation), & (conjunction), and | (disjunction).

Enter some information into the knowledge base, enter a query, and click “prove” to see a proof (if one exists).

This certainly isn’t the first JavaScript implementation of a theorem prover, nor is it even the most powerful. It does, however, demonstrate the ability of a very simple system to come up with (in exponential time) a proof of any provable statement in propositional calculus.

But how does it work? Once again, you can find an excellent introduction in Stuart Russell and Peter Norvig’s book. But understanding the system really only requires two fundamental ideas about symbolic logic:

1. That any expression in propositional logic can be written in conjunctive normal form
2. That logical resolution is sound and complete.

### Conjunctive normal form

The first point is the simplest. We say that a sentence of propositional logic is in conjunctive normal form if it is a series of other sentences combined via conjunction, such that each of the combined sentences is a disjunction of sentence symbols or negations of sentence symbols. In other words, a sentence is in conjunctive normal form if it is an AND of ORs. For example,

$$(A \lor \lnot B) \land (B \lor C)$$

… is in conjunctive normal form, as is:

$$\lnot A \lor B \lor C$$

However, this sentence is not in conjunctive normal form:

$$\lnot (A \lor B)$$

Nor is this one:

$$(A \to B) \land C$$
The fact that $(A \to B) \land C$ is not in CNF (conjunctive normal form) seems problematic. Surely, any sufficiently powerful proof system should be able to handle implications. However, we can transform this expression into one that is in CNF using a simple trick: we replace the implication $A \to B$ with $\lnot A \lor B$.

Our new expression, $(\lnot A \lor B) \land C$, is in CNF. In fact, there is a sequence of logical rules that can be applied to any expression to convert it into CNF. Russell and Norvig give an excellent description (versions of which can be found elsewhere on the Internet), but I’ll give my own explanation here.

1. Remove biconditionals: $A \leftrightarrow B \Rightarrow (A \to B) \land (B \to A)$.
2. Replace any implications: $A \to B \Rightarrow \lnot A \lor B$.
3. Move nots inwards via De Morgan’s laws:
• Not over and: $\lnot (A \land B) \Rightarrow (\lnot A \lor \lnot B)$
• Not over or: $\lnot (A \lor B) \Rightarrow (\lnot A \land \lnot B)$
4. Eliminate double negation: $\lnot \lnot A \Rightarrow A$.
5. Distribute ors over ands: $(A \land B) \lor C \Rightarrow (C \lor A) \land (C \lor B)$.
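As a rough illustration (a sketch over my own simplified expression tree, not Vulcan's actual implementation), the five rules above can be applied recursively in JavaScript:

```javascript
// A tiny expression AST: strings are symbols; otherwise
// { op: 'not'|'and'|'or'|'imp'|'iff', a, b } (unary 'not' uses only `a`).
const not = a => ({ op: 'not', a });
const and = (a, b) => ({ op: 'and', a, b });
const or  = (a, b) => ({ op: 'or', a, b });
const imp = (a, b) => ({ op: 'imp', a, b });
const iff = (a, b) => ({ op: 'iff', a, b });

// Steps 1 & 2: remove biconditionals and implications.
function elimArrows(e) {
  if (typeof e === 'string') return e;
  if (e.op === 'not') return not(elimArrows(e.a));
  const a = elimArrows(e.a), b = elimArrows(e.b);
  if (e.op === 'iff') return and(or(not(a), b), or(not(b), a));
  if (e.op === 'imp') return or(not(a), b);
  return { op: e.op, a, b };
}

// Steps 3 & 4: push negations inward (De Morgan) and drop double negations.
function pushNot(e, negate = false) {
  if (typeof e === 'string') return negate ? not(e) : e;
  if (e.op === 'not') return pushNot(e.a, !negate);
  const op = negate ? (e.op === 'and' ? 'or' : 'and') : e.op;
  return { op, a: pushNot(e.a, negate), b: pushNot(e.b, negate) };
}

// Step 5: distribute ORs over ANDs until none remain nested.
function distribute(e) {
  if (typeof e === 'string' || e.op === 'not') return e;
  const a = distribute(e.a), b = distribute(e.b);
  if (e.op === 'or') {
    if (typeof a !== 'string' && a.op === 'and')
      return and(distribute(or(a.a, b)), distribute(or(a.b, b)));
    if (typeof b !== 'string' && b.op === 'and')
      return and(distribute(or(a, b.a)), distribute(or(a, b.b)));
  }
  return { op: e.op, a, b };
}

const toCNF = e => distribute(pushNot(elimArrows(e)));

// Render an expression for inspection.
function show(e) {
  if (typeof e === 'string') return e;
  if (e.op === 'not') return '!' + show(e.a);
  return '(' + show(e.a) + { and: ' & ', or: ' | ' }[e.op] + show(e.b) + ')';
}

console.log(show(toCNF(and(imp('A', 'B'), 'C')))); // ((!A | B) & C)
```

Running this on $(A \to B) \land C$ reproduces the $(\lnot A \lor B) \land C$ transformation from above.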

If you don’t believe that this simple algorithm will convert any sentence into CNF, try out a few examples. But this algorithm is not the easiest way to understand why any sentence in propositional logic can be converted into CNF. It’s helpful to remember that two sentences are equivalent if and only if they agree in every model. In other words, imagine you have two sentences, $\alpha$ and $\beta$, composed of the symbols A, B, and C. You can say that $\alpha = \beta$ if you can plug in any value for A, B, and C and get the same final result for both $\alpha$ and $\beta$.
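The "agree in every model" test is easy to automate. Here's a minimal sketch in JavaScript that brute-forces every assignment; `alpha` and `beta` encode the two sentences $A \to (B \lor C)$ and $\lnot(A \land \lnot B \land \lnot C)$ used in the example below:

```javascript
// Check that two propositional functions agree in every model
// by brute-forcing all 2^n truth assignments.
function equivalent(symbols, f, g) {
  for (let mask = 0; mask < (1 << symbols.length); mask++) {
    const m = {};                                   // one model: symbol -> boolean
    symbols.forEach((s, i) => { m[s] = Boolean(mask & (1 << i)); });
    if (f(m) !== g(m)) return false;                // they disagree in this model
  }
  return true;
}

// alpha = A -> (B | C), written as !A | (B | C)
const alpha = m => !m.A || (m.B || m.C);
// beta = !(A & !B & !C)
const beta  = m => !(m.A && !m.B && !m.C);

console.log(equivalent(['A', 'B', 'C'], alpha, beta)); // true
```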

Let’s take $\alpha = (A \to (B \lor C))$, and $\beta = \lnot (A \land \lnot B \land \lnot C)$ and test to see if they are equivalent by building a truth table.

| A | B | C | $\alpha$ | $\beta$ |
| --- | --- | --- | --- | --- |
| T | T | T | T | T |
| T | T | F | T | T |
| T | F | T | T | T |
| T | F | F | F | F |
| F | T | T | T | T |
| F | T | F | T | T |
| F | F | T | T | T |
| F | F | F | T | T |

Quite an exhausting process, but it works. We can see that $\alpha = \beta$. The important consequence here is this: if we can construct a sentence with the same truth table as $\alpha$, we can construct a sentence that is equivalent to $\alpha$.

So let’s think about how to construct a new sentence $\gamma$ that will be equivalent to $\alpha$ but also in CNF. Think of $\gamma$ as a bunch of clauses linked together by conjunctions. So, whenever $\alpha$ is false, we need to make sure that at least one of the clauses in $\gamma$ is false – that’ll make sure that all of $\gamma$ is false.

For every row in the truth table that ends in false, add a clause to $\gamma$ that is a disjunction of each sentence symbol, but negate the sentence symbol if that symbol is “true” in the table. For $\alpha$, we have only one row in the truth table where the result is false (A true, B false, C false). So we’ll have only one clause in $\gamma$. That clause will be a disjunction of:

• the negation of A, because A is true in that row of the truth table
• B, because B is false in that row
• C, because C is false in that row.

So finally, we get $\gamma = \lnot A \lor B \lor C$, which is equivalent to $\alpha$ and in CNF. If you don’t believe me, try a truth table or try WolframAlpha. Now we have an algorithm for taking a truth table to a new sentence that will be in CNF.
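Here's a sketch of that truth-table-to-CNF algorithm in JavaScript (illustrative only, with my own string-based clause representation), applied to $\alpha = A \to (B \lor C)$:

```javascript
// Build a CNF sentence from a truth table: one clause per "false" row.
// `fn` maps a model (an array of booleans, one per symbol) to true/false.
function truthTableToCNF(symbols, fn) {
  const clauses = [];
  for (let mask = 0; mask < (1 << symbols.length); mask++) {
    const row = symbols.map((_, i) => Boolean(mask & (1 << i)));
    if (!fn(row)) {
      // Negate each symbol that is true in this row, so the resulting
      // clause is false on exactly this row.
      clauses.push(symbols.map((s, i) => (row[i] ? '!' + s : s)).join(' | '));
    }
  }
  return clauses.map(c => '(' + c + ')').join(' & ');
}

// alpha = A -> (B | C): false only when A is true and B, C are false.
console.log(truthTableToCNF(['A', 'B', 'C'], ([a, b, c]) => !a || b || c));
// (!A | B | C)
```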

Let’s try another example. Let $\sigma = A \leftrightarrow B$. We’ll write out a truth table for $\sigma$ and then convert it to CNF.

| A | B | $\sigma$ |
| --- | --- | --- |
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | T |

Since $\sigma$ is false in two rows, we’ll have to build clauses from both of them. From the second row, we get the clause $\lnot A \lor B$. From the third row, we get $A \lor \lnot B$. Putting the clauses together gives us $(A \lor \lnot B) \land (\lnot A \lor B)$. That’s equivalent to $\sigma$, which you can verify with a truth table or with WolframAlpha.

Hopefully you’re now reasonably convinced that, given any sentence in propositional logic, there’s an equivalent sentence in CNF. The next critical component is logical resolution.

### Resolution

Resolution is an interesting trick with some very useful properties. It can be stated as follows:

$$\frac{A \lor B \qquad \lnot B \lor C}{A \lor C}$$

If you’ve never seen this notation before, it just means that if you are given the two sentences on top as true, then you can deduce that the sentence on the bottom is true as well. For resolution specifically, if you have two disjunctions with a complementary symbol (a symbol that is negated in one disjunction but not in the other), you can remove that symbol from both sentences and combine the two sentences with another disjunction.
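In code, a single resolution step is straightforward. This is an illustrative sketch (not Vulcan's internals), with a clause represented as an array of literal strings like `'A'` and `'!A'`:

```javascript
// A literal is a symbol ('A') or its negation ('!A').
const negate = lit => (lit.startsWith('!') ? lit.slice(1) : '!' + lit);

// One resolution step: returns the resolvent if the clauses contain a
// complementary pair of literals, or null if they can't be resolved.
function resolve(c1, c2) {
  for (const lit of c1) {
    if (c2.includes(negate(lit))) {
      const rest1 = c1.filter(l => l !== lit);
      const rest2 = c2.filter(l => l !== negate(lit));
      return [...new Set([...rest1, ...rest2])];  // dedupe repeated literals
    }
  }
  return null;
}

console.log(resolve(['A', 'B'], ['!B', 'C'])); // [ 'A', 'C' ]
```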

It is possible to prove that resolution is both sound (meaning that a deduction made by resolution will always be correct) and complete (meaning that any sentence that can be deduced can be deduced by resolution; strictly speaking, resolution is refutation-complete: if a set of clauses is unsatisfiable, resolution will derive the empty clause from it). The second property – completeness – is rather amazing. You might find it interesting to read through a proof that explains why.

So how do we take advantage of resolution to create a proof system? Notice that the inputs and the output of resolution are disjunctions – meaning a string of (possibly negated) symbols linked together by ORs. Since we can convert every statement in our knowledge base to CNF, we can separate out each of the disjunctions into different statements and combine them in different ways with resolution. Since resolution is sound, we know that any resolvent of two clauses in our knowledge base will be entailed by the knowledge base. Since resolution is complete, we know that any fact that can be inferred from our knowledge base can be inferred via resolution.

Now, given a knowledge base $KB$ and a query $Q$, how do we find out if $Q$ is true, given $KB$? To be more specific, we want to know if $Q$ is semantically entailed by $KB$. Written formally: $KB \vDash Q$. By the completeness theorem, we know that $KB \vDash Q$ if and only if $Q$ can be syntactically generated from $KB$ using some sound method $m$: $KB \vdash_m Q$.

Let’s call our resolution method $r$. Since resolution is complete, we know that any semantically true sentence entailed by $KB$ can be syntactically derived via $r$. In other words, we know that:

$$KB \vDash Q \implies KB \vdash_r Q$$

And, since $r$ is sound, we know that any sentence derived via $r$ from $KB$ will also be entailed by $KB$. In other words, we have:

$$KB \vdash_r Q \implies KB \vDash Q$$

Combining these two gives us:

$$KB \vDash Q \iff KB \vdash_r Q$$

At this point, you might think an acceptable algorithm would be to take your knowledge base and apply resolution over and over again until you either find all possible sentences or you find $Q$. The problem here is that any sentence of propositional logic can be stated in infinitely many finite ways, so there is no finite set of “all possible sentences” to exhaust. You might think you could solve this problem by simply converting each step into CNF. The problem with that is that CNF representations are not unique. For example, $A$ and $A \land (A \lor B)$ are both in CNF and logically equivalent.

Even if you were to generate unique CNF statements by deriving the CNF from the truth table at each step, such an approach would require the proof system to build larger and larger clauses (until reaching $Q$). Ideally, we want to make things smaller and smaller. So instead of searching for $Q$, we’ll add $\lnot Q$ to the knowledge base and then search for false. If you think about it, this is equivalent to reductio ad absurdum, or proof by contradiction. If by assuming that our query is false we can produce a contradiction, then it must be the case that our query is true.

Let’s formalize that a little bit. Essentially, our statement is:

$$KB \vDash Q \iff KB \cup \lnot Q \vDash \text{false}$$

This follows from the deduction theorem. A nice way to think of this is:

A statement can be proven from a knowledge base if and only if the negation of that statement combined with the knowledge base produces a contradiction. In other words, a statement is provable from a knowledge base if and only if the union of the knowledge base and the negation of the statement is unsatisfiable.

So, if we show that $KB \cup \lnot Q$ is unsatisfiable, we’ve shown that $KB \vDash Q$. If you aren’t convinced, here’s a proof. This gives us the following algorithm:

1. Convert $KB$ into CNF, and split up each sentence into clauses
2. Assume $\lnot Q$
3. Apply resolution to every pair of clauses until either…
• (a) no more clauses can be derived, meaning that there is no proof of $Q$ from $KB$. If there were a proof by some sound method $m$ (that is, $KB \vdash_m Q$), then $KB \vDash Q$, and the completeness of resolution would guarantee that we eventually derive the empty clause. Since we never did, no such sound $m$ can exist.
• (b) we derive an “empty clause”, or false. In other words, we find a contradiction. The existence of a contradiction is enough to prove that $KB \cup \lnot Q$ is unsatisfiable, since it proves that you’ll always get false no matter what model you use. You’ve proven $Q$ by contradiction.
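The whole algorithm above can be sketched in a few lines of JavaScript. This is a naive illustration (no CNF conversion and no proof recording), not Vulcan's actual implementation; clauses are arrays of literals like `'A'` and `'!A'`:

```javascript
// A minimal resolution-refutation loop over sets of clauses.
const negate = lit => (lit.startsWith('!') ? lit.slice(1) : '!' + lit);
const key = clause => [...clause].sort().join(',');  // canonical clause id

// All resolvents of a pair of clauses.
function resolvePair(c1, c2) {
  const out = [];
  for (const lit of c1) {
    if (c2.includes(negate(lit))) {
      out.push([...new Set([
        ...c1.filter(l => l !== lit),
        ...c2.filter(l => l !== negate(lit)),
      ])]);
    }
  }
  return out;
}

// Returns true if the query follows from the clause set `kb`:
// add the negated query, then resolve until we find the empty clause
// (a contradiction) or reach a fixed point (no proof exists).
function prove(kb, negatedQuery) {
  const clauses = [...kb, negatedQuery];
  const seen = new Set(clauses.map(key));
  while (true) {
    const fresh = [];
    for (let i = 0; i < clauses.length; i++) {
      for (let j = i + 1; j < clauses.length; j++) {
        for (const r of resolvePair(clauses[i], clauses[j])) {
          if (r.length === 0) return true;       // empty clause: contradiction
          if (!seen.has(key(r))) { seen.add(key(r)); fresh.push(r); }
        }
      }
    }
    if (fresh.length === 0) return false;        // fixed point: no proof
    clauses.push(...fresh);
  }
}

// KB = { A -> B, A } as clauses { !A | B, A }; query B, so we add !B.
console.log(prove([['!A', 'B'], ['A']], ['!B'])); // true
```

The loop always terminates: with finitely many symbols there are only finitely many distinct clauses, and the `seen` set guarantees each is generated at most once.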

The correctness of this algorithm has some interesting consequences. For example, try a knowledge base of $P$ and $\lnot P$. That’s a contradiction. Then, ask the system to prove $A$, a symbol we know absolutely nothing about. The system will resolve $P$ and $\lnot P$ to false, and suggest that a contradiction has been reached from the knowledge base $P \land \lnot P \land \lnot A$. So have we proven $A$ from $P \land \lnot P$?

Well, it turns out that we have! In fact, any conclusion follows from a contradiction. This is called the principle of explosion, which can be stated as:

$$P \land \lnot P \vDash Q \quad \text{for any sentence } Q$$

Think of it this way. Consider a true statement like “the sky is blue.” We’ll call that $B$. Consider another statement, “the Easter Bunny is real.” We’ll call that $E$. We know that the statement $B \lor E$ is true because $B$ is true. However, let’s say for some reason we knew that the sky was blue and not blue, in other words, we know that $B \land \lnot B$ was somehow true. Since we know $B \lor E$ is true, and we know $\lnot B$ is true, we can use resolution to deduce $E$:

$$\frac{B \lor E \qquad \lnot B}{E}$$

So we’ve shown that $\{ B \lor E, B \land \lnot B \} \vdash_r E$. Since resolution ($r$) is sound, we know that $\{ B \lor E, B \land \lnot B \} \vDash E$. This isn’t so ridiculous when we say it out loud:

If the sky isn’t blue, the Easter Bunny is real.

So it’s a good thing that our system finds that $A$ is entailed by $P \land \lnot P$. If it didn’t, it wouldn’t be complete!

You might’ve noticed that the line numbers of each statement in the generated proofs aren’t sequential. That’s because the proof is generated via resolution, and only the relevant clauses are displayed at the end. Since what we’re trying to prove is that $KB \cup \lnot Q$ is unsatisfiable, we’re essentially solving a limited case of the boolean satisfiability problem, which is NP-complete. That means there could be quite a few steps! If you put a lot of data into the knowledge base and ask a tough question, it might take your browser quite a while to come up with an answer!

One more interesting tidbit: the initial state of the demo shows how modus ponens is really just a special case of resolution. And since resolution is sound and complete, anything that any sound inference rule can derive, resolution can derive too, and anything resolution derives can be reached by any other sound and complete method. It’s reassuring to know that truth works the same way no matter what.