Automated Theorem Proving

Proving a theorem is essentially like playing a game. There is a desired outcome, but many possible moves at every step. The number of possible logical paths is huge.

It takes intelligence to find the right one. We've seen computers do this with chess and Go, but it might take longer for theorems: the search space is so much more complex.
I’ve used it a few times, on both my master’s and PhD theses.

For my master’s, I used a prover (prov3r or provr? It’s been a while) to check whether combinatorial configurations satisfied the elliptic curve group law: calculations that are pretty rote, but tedious.
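A minimal sketch (not the original prover, and an illustrative curve rather than one from that work) of the kind of rote check being described: brute-force verification that point addition on a small elliptic curve over F_p really does form a group.

```python
# Toy check of the elliptic curve group law over a small prime field.
# Curve and prime are illustrative choices, not from the original thesis.

p = 13                     # small prime field F_p
a, b = 2, 3                # curve y^2 = x^3 + 2x + 3 over F_13 (nonsingular)
O = None                   # point at infinity (group identity)

def inv(x):
    return pow(x, p - 2, p)   # modular inverse via Fermat's little theorem

def add(P, Q):
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                  # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * inv(2 * y1) % p   # tangent slope
    else:
        s = (y2 - y1) * inv(x2 - x1) % p          # chord slope
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

# enumerate all affine points on the curve, plus the identity
points = [O] + [(x, y) for x in range(p) for y in range(p)
                if (y * y - x**3 - a * x - b) % p == 0]

# the tedious part: check associativity on every triple of points
assert all(add(add(P, Q), R) == add(P, add(Q, R))
           for P in points for Q in points for R in points)
print(f"group law holds on all {len(points)} points")
```

Exactly the sort of thing you'd never want to do by hand, and exactly the sort of thing a computer does without complaint.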

For my PhD thesis, I used the Magma language to verify whether reflection groups satisfied specific properties I was after. Once again, tedious, rote work I didn’t want to waste time doing.
>It is amazing to me that this technology did not surpass humans yet. It's weird that humans can do something that is essentially symbolic manipulation better than computers.

Remember that the size of the space to be explored is exponentially large, ridiculously so. Following intuition is much better than scrambling around in the dark. Great strides have been made with automated theorem proving, and I'm sure much more will be done in the future. But I think it's amazing that automated theorem provers can sometimes match humans, not the other way around.
>It's weird that humans can do something that is essentially symbolic manipulation better than computers.

I would argue that (most) humans aren't actually doing symbolic manipulation when figuring out a proof, we are just doing that when we write it down.
> Do you have some papers to recommend that will help me learn techniques that are often used?

John Harrison's _Handbook of Practical Logic and Automated Reasoning_ basically is a "review book" on this very subject. Chapters 2 and 3 seem most relevant for your purposes, but the entire book is worth consuming.

I think you may also be interested in chapter 6, on LCF-style provers. Harrison implements one for a simple Hilbert system for first-order logic (if I recall correctly). Then he implements a declarative input language atop it, so the proofs more closely resemble "ordinary proofs" (compared to tactics-based proofs).

> I also wanted to experiment with proving the theorems from axioms, that is going forwards.

Ah, this is slightly different.

The analogy that springs to mind: writing a proof is like programming a method or a procedure, but the organization of theorems and axioms is more analogous to writing a "standard library" for a programming language.

I don't believe this is well documented, or even well explored.
Theorem proving is subject to combinatorial explosion, just like chess and Go. And just as chess and Go required some extreme engineering and man-years of research to surpass humans, the same would be needed for theorem proving.

It is not a given that it would surpass human competence.
>It is amazing to me that this technology did not surpass humans yet. It's weird that humans can do something that is essentially symbolic manipulation better than computers.

Computers are very good at doing things quickly but in most contexts we're actually better than the machine. Computers are very good at executing a task that they've been given in a very explicit manner but comprehending a vague task or defining a task for themselves is exceptionally difficult.
This must be a fun project. Just to throw a few ideas out there..

I've never done a theorem prover, but a while back, I was thinking about *where* a proof generated by such a program ought to get published. One idea was to publish the result on a purpose built *blockchain*.

Now given the current crypto market rout, "blockchain" is not the sexy word it once was, but please hear me out..

Computer-assisted proofs are often exhaustive calculations over a huge combinatorial search space. Now while the programs themselves are small, their outputs (the formal proofs) can run into terabytes or petabytes. So we might *not want* to record the actual output anywhere, and instead record that output in its *compressed*, "Kolmogorov" form: i.e. the program itself.

The problem, of course, is that in order to verify the result of the program (the prover), you'd have to run it to completion. What if the prover takes days to finish (halt)? That wouldn't be ideal.

So the blockchain I'm fantasizing about records both the prover program and various checkpoints (analogous to a program's state when suspended at a breakpoint in a debugger) along its execution path. That way, you'd be able to load the prover's state at, say, its trillionth operation. Kinda random access.
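The checkpointing part on its own is easy to sketch (names and the toy "prover" here are made up for illustration): snapshot the full state every `interval` steps of a deterministic run, so a verifier can re-execute any single segment instead of the whole computation.

```python
# Checkpointed deterministic computation: record state snapshots so any
# segment of the run can be independently re-verified.

def run_with_checkpoints(step, state, n_steps, interval):
    """Run `step` n_steps times, snapshotting state every `interval` steps."""
    checkpoints = {0: state}
    for i in range(1, n_steps + 1):
        state = step(state)
        if i % interval == 0:
            checkpoints[i] = state
    return state, checkpoints

def verify_segment(step, checkpoints, start, end):
    """Re-run only [start, end] and compare against the recorded checkpoint."""
    state = checkpoints[start]
    for _ in range(end - start):
        state = step(state)
    return state == checkpoints[end]

# toy stand-in for a long prover run: iterate a simple deterministic map
f = lambda s: (s * 3 + 1) % 10**9
final, cps = run_with_checkpoints(f, 7, 1000, 100)

# a verifier (or challenger) checks each 100-step segment independently
print(all(verify_segment(f, cps, k, k + 100) for k in range(0, 1000, 100)))
```

The hard parts are everything else: serializing real prover state compactly, and the incentive design around challenges.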

I was thinking the chain's currency (its native token / unit of worth) could be made to correlate with the flop rate of the network (analogous to the hash rate / hash power of a PoW chain). There'd be an on-chain bid-and-ask system both to offer new prover problems and to submit solutions, together with an incentive system for any user to check/challenge claims that a program (prover) has been run to generate said checkpoints. Supposing we call its currency/token *fuzz*, a fuzz would represent X units of flops on said network.

I have developed this idea further in a Google doc, if anyone is interested.
A human can quickly tell that if you stuff n+1 pigeons into n drawers, some drawer will contain at least 2 pigeons. That's intuition. Throw it at a computer, and it could potentially try exponentially many ways of putting n+1 pigeons into n drawers before realizing that 1 pigeon per drawer is impossible.

(It's not just an analogy; there is research on this, but I don't remember where I read it.)
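The blow-up is easy to see concretely. A sketch of the brute-force approach: enumerate every assignment of n+1 pigeons to n drawers (n^(n+1) of them) just to confirm what the counting argument gives instantly.

```python
# Exhaustive pigeonhole check: confirm no injective map from n+1 pigeons
# into n drawers exists, counting how much work that takes.

from itertools import product

def brute_force_pigeonhole(n):
    """Check every assignment of n+1 pigeons to n drawers doubles up."""
    checked = 0
    for assignment in product(range(n), repeat=n + 1):
        checked += 1
        if len(set(assignment)) == n + 1:   # would mean an injective map
            return False, checked
    return True, checked

ok, work = brute_force_pigeonhole(4)
print(ok, work)   # True, after 4**5 = 1024 assignments for n = 4
```

At n = 4 that's already 1024 cases for something a human dismisses in a second; the count grows as n^(n+1).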
The problem with using LK backwards is that the quantifier rules, the left universal and the right existential, include an entire term t that drops out of the proof. This means that one needs to pick a possibly complicated t out of thin air.
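A tiny illustration of the point (not a real LK implementation): proving an existential goal backwards means supplying the vanished witness t, and the naive fix is to enumerate candidate terms until one works. Real provers instead defer the choice with metavariables and unification. The term universe here is an assumption for the example.

```python
# Naive backward ∃-right: conjure the witness t by blind enumeration.

def prove_exists(pred, term_universe):
    """Try each candidate t until pred(t) holds; pred stands in for
    recursively proving the instantiated subgoal pred[t/x]."""
    for t in term_universe:
        if pred(t):
            return t          # the witness we had to pick out of thin air
    return None               # search exhausted: no proof found this way

# goal: ∃x. x * x = 36, searched over an assumed term universe 0..99
witness = prove_exists(lambda t: t * t == 36, range(100))
print(witness)   # 6
```

The enumeration works only because this universe is finite and tiny; with real first-order terms the candidate space is infinite, which is exactly why picking t is the hard part.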
