All of MattMahoney's Comments + Replies

Maybe I am missing something, but hasn't a seed AI already been planted? Intelligence (whether that means the ability to achieve goals in general, or the ability to do what humans can do) depends on both knowledge and computing power. Currently the largest collection of knowledge and computing power on the planet is the internet. By the internet, I mean both the billions of computers connected to it and the two billion brains of its human users. Both knowledge and computing power are growing exponentially, doubling every 1 to 2 years, in part by add…

Because FAI is a hard problem. If it were easy then we would not still be paying people $70 trillion per year worldwide to do work that machines aren't smart enough to do yet.

1 JoshuaZ
Almost all of these are hard problems. That seems insufficient.

If we were smart enough to understand its policy, then it would not be smart enough to be dangerous.

3 wedrifid
That doesn't seem true. Simple policies can be dangerous and more powerful than I am.

There will never be a singularity. A singularity is infinitely far in the future in "perceptual time" measured in bits learned by intelligent agents. But evolution is a chaotic process whose only attractor is a dead planet. Therefore there is a 100% chance that the extinction of all life (created by us or not) will happen first. (95%).

6 wedrifid
How do the votes work in this game again? "Upvote for insane", right?

It's a good idea, but upvoted because evolution will thwart your plans.

I disagree because a simulation could program you to believe the world was real and that it was more complex than it actually was. Upvoted for underconfidence.

It is not possible for an agent to make a rational choice between 1 or 2 boxes if the agent and Omega can both be simulated by Turing machines. Proof: Omega predicts the agent's decision by simulating it. This requires Omega to have greater algorithmic complexity than the agent (including the nonzero complexity of the compiler or interpreter). But a rational choice by the agent requires that it simulate Omega, which requires that the agent have greater algorithmic complexity instead.

In other words, the agent X, with complexity K(X), must model Omega whi…
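A minimal Python sketch of the mutual-simulation regress this comment describes (the function names and the one-box/two-box rule are illustrative only, not anything specified in the comment): each side tries to decide by simulating the other, so the simulation never bottoms out.

```python
def omega_predict(agent_fn):
    # Omega "predicts" the agent by running it, handing the agent a copy of Omega to reason about.
    return agent_fn(omega_predict)

def agent(omega_fn):
    # The agent tries to choose "rationally" by first simulating Omega's prediction of itself.
    prediction = omega_fn(agent)
    return 1 if prediction == 1 else 2   # one-box iff Omega is predicted to predict one-boxing

try:
    agent(omega_predict)
except RecursionError:
    print("Mutual simulation never bottoms out: neither run halts.")
```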

2 ArisKatsaris
Not so. I don't need to simulate a hungry tiger in order to stay safely (and rationally) away from it, even though I don't know the exact methods by which its brain will identify me as a tasty treat. If you think that one can't "rationally" stay away from hungry tigers, then we're using the word "rationally" vastly differently.
4 skepsci
Um, AIXI is not computable. Relatedly, K(AIXI) is undefined, as AIXI is not a finite object. Also, A can simulate B, even when K(B)>K(A). For example, one could easily define a computer program which, given sufficient computing resources, simulates all Turing machines on all inputs. This must obviously include those with much higher Kolmogorov complexity. Yes, you run into issues of two Turing machines/agents/whatever simulating each other. (You could also get this from the recursion theorem.) What happens then? Simple: neither simulation ever halts.
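To make skepsci's universality point concrete, here is a small sketch assuming a toy Brainfuck-style instruction set as the machine model (the encoding, names, and demo parameters are mine, not from the thread). The interpreter is one short, fixed program, yet by dovetailing over program lengths and step budgets it eventually runs every program in the language, including programs of far higher Kolmogorov complexity than the interpreter itself.

```python
from itertools import product

OPS = '+-><[]'

def balanced(prog):
    # Skip programs with mismatched brackets instead of simulating them.
    depth = 0
    for c in prog:
        depth += {'[': 1, ']': -1}.get(c, 0)
        if depth < 0:
            return False
    return depth == 0

def run(prog, budget):
    # Interpret a toy Brainfuck-style program for at most `budget` steps.
    # Returns True iff the program halted within the budget.
    tape, ptr, pc, steps = {}, 0, 0, 0
    while pc < len(prog) and steps < budget:
        op = prog[pc]
        if op == '+':
            tape[ptr] = (tape.get(ptr, 0) + 1) % 256
        elif op == '-':
            tape[ptr] = (tape.get(ptr, 0) - 1) % 256
        elif op == '>':
            ptr += 1
        elif op == '<':
            ptr -= 1
        elif op == '[' and tape.get(ptr, 0) == 0:    # jump forward past the matching ']'
            depth = 1
            while depth:
                pc += 1
                depth += {'[': 1, ']': -1}.get(prog[pc], 0)
        elif op == ']' and tape.get(ptr, 0) != 0:    # jump back to just after the matching '['
            depth = 1
            while depth:
                pc -= 1
                depth += {']': 1, '[': -1}.get(prog[pc], 0)
        pc += 1
        steps += 1
    return steps < budget

def dovetail(stages):
    # Stage n runs every balanced program of length <= n for n steps, so in the limit
    # every program is paired with every step budget by this single fixed simulator.
    for n in range(1, stages + 1):
        for length in range(1, n + 1):
            for prog in map(''.join, product(OPS, repeat=length)):
                if balanced(prog):
                    run(prog, n)

dovetail(4)   # tiny demo stage; the simulator's own complexity never grows
```

Nothing about `run` changes when the program handed to it is more complex than `run` itself, which is the sense in which K(B) > K(A) is no obstacle to A simulating B.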