Random thought - how come AI hasn't learned complex math yet?

It's already hard enough to teach computers to prove theorems we know very well!
Because there’s no money in selling ads for that.
Why do you think computers should be capable of proving theorems? Proving a hard conjecture is not a game of pattern matching that you can get good at by being fed examples, but one of creating totally new ideas.

As for proving theorems leading to advances in technology: which *proofs* of theorems have actually done that in the past? Don't confuse a rigorously proved theorem with a plausible conjecture that has no proof yet. To this day, when big primes are used in cryptography, nobody cares about a rigorous proof that the numbers are prime: you just run the Miller-Rabin test on the number 50 or 100 times, and if the number passes every round you work with it as if it were prime. This has never failed in practice.
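The Miller-Rabin test mentioned above is short enough to sketch. Here's a minimal Python version (function name and structure are my own, not from any particular library):

```python
import random

def is_probable_prime(n, rounds=50):
    """Miller-Rabin: False means n is certainly composite;
    True means n is prime with overwhelming probability."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

Each round that doesn't expose a witness cuts the chance of a composite slipping through by at least a factor of 4, so 50 rounds leave an error probability below 4^-50.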
This has already been done, for years. In 1996 the Robbins conjecture was proved using an automated theorem prover (EQP). Nobody understood the proof at first, since it was 12 lines of meaningless symbol manipulation, and people had to rework it into a form that made sense.
Because current AI is very basic.
Other random thought - what if some of those crypto-mining algorithms were brute-forcing useful problems? Surely there are plenty of problems, or at least number sequences, where it's easy to verify whether an answer or term is valid but not so easy to find said answer or term.
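The asymmetry you're describing - cheap to check, expensive to find - is exactly what integer factoring gives you. A toy Python illustration (function names are mine):

```python
def verify_factorization(n, p, q):
    # Checking a proposed answer: a single multiplication.
    return 1 < p < n and 1 < q < n and p * q == n

def find_factorization(n):
    # Finding the answer: trial division, up to sqrt(n) steps.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime (or < 4): no nontrivial factorization
```

For a 2048-bit RSA modulus the verification is still one multiplication, while no known algorithm finds the factors in reasonable time - which is why schemes like proof-of-work can demand searches whose answers anyone can check instantly.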
There are a few reasons.

1) Currently there isn't a huge set of advanced theorems proved in a unified format or language that computers can easily parse. People are working on it, though, and the databases get bigger every year.

2) The search space for mathematical proofs is unbounded. What's more, the 'state' (the set of statements you've already proved) at each node in a search tree is also unbounded. This isn't necessarily an unsolvable problem but it is very hard to deal with.

3) It is difficult to formalize the notion of a "partial proof" in mathematics, and it is easy to create statements which are grammatically almost identical but have wildly different proofs (e.g. "every real polynomial of odd degree has a real root", which is true by the intermediate value theorem, and "every real polynomial of even degree has a real root", which is false - consider x^2 + 1). This is in contrast to, say, Go, where there are several well-developed algorithms for estimating the relative advantage of a given board state. An AI doesn't need to learn how to recognize these - they can simply be programmed in, and the AI can devote its training time to learning higher-level play. In mathematics the AI would have to learn the instincts of a mathematician before it could do anything high-level.

4) When actually proving things, mathematicians usually try to conceptualize. We draw pictures, rephrase the problem in new ways, etc. The harder the theorem, the more we do this, and thus the harder it is for a computer to emulate the kind of logical leaps that occur.
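To make point 2 concrete, here is a toy forward-chaining proof search in Python. The "state" is the set of facts derived so far; in this toy it saturates quickly, but in real mathematics it grows without bound (the facts and rules here are made up purely for illustration):

```python
# Toy deductive system: facts are strings, rules are (premise, conclusion)
# pairs, and the only inference step is "premise known => add conclusion".
AXIOMS = {"A"}
RULES = [("A", "B"), ("B", "C"), ("C", "D")]

def forward_chain(goal, max_steps=10):
    """Return how many rounds of rule application reach `goal`,
    or None if the search saturates (or times out) first."""
    known = set(AXIOMS)
    if goal in known:
        return 0
    for step in range(1, max_steps + 1):
        new = {c for p, c in RULES if p in known and c not in known}
        if not new:
            return None  # saturated: nothing left to derive
        known |= new     # the search state grows monotonically
        if goal in known:
            return step
    return None
```

With real mathematics in place of these three rules, `known` never saturates, and the branching at each step is enormous - which is what makes naive proof search hopeless.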

People have tried to apply current machine learning techniques to generate proofs of simple, known theorems, and the results aren't very encouraging.

Where AI has a place in mathematics is as an assistant. Certain classes of proofs - those that rely on algebraic manipulation, epsilon-delta arguments, routine induction, or finding counterexamples - are tedious but straightforward, which makes them easy to automate. They also often show up as sub-proofs within a bigger proof, so an AI that can handle them would be very useful to working mathematicians.
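As a small example of the "finding counterexamples" kind of automation: Euler's polynomial n^2 + n + 41 produces primes for n = 0 through 39, and a few lines of brute force locate exactly where the pattern breaks (helper names are mine):

```python
def is_prime(n):
    # Simple trial division; fine for small n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_counterexample():
    # Search for the smallest n where n^2 + n + 41 is composite.
    n = 0
    while is_prime(n * n + n + 41):
        n += 1
    return n
```

The search stops at n = 40, where the value is 40^2 + 40 + 41 = 1681 = 41^2. Tedious by hand, instant by machine - which is the point.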
