jsteinhardt comments on How does MIRI Know it Has a Medium Probability of Success? - Less Wrong

19 Post author: peter_hurford 01 August 2013 11:42AM




Comment author: roystgnr 03 August 2013 03:40:10PM 8 points [-]

I'm concerned about the probability of some technical people getting together and solving some incredibly deep research problems before some perhaps-slightly-less-technical people plough ahead and get practical results without the benefit of that research. I'm skeptical that we'll see FAI before UFAI for the same reason I'm skeptical that we'll see a Navier-Stokes existence proof before a macroscale DNS solution; I'm skeptical that we'll prove P!=NP or even find a provably secure encryption scheme before making the world's economy dependent on unproven schemes, etc.

Even some of the important subgoals of FAI, being worked on with far more resources than MIRI has yet, are barely showing up on the radar. IIRC someone only recently produced a provably correct C compiler (and in the process exposed a bunch of bugs in the industry-standard compilers); wouldn't we feel foolish if provably Friendly, human-readable code turned UnFriendly simply because a bug was introduced during compilation? Or if a cosmic ray or a slightly-out-of-tolerance manufacturing defect affected one of the processors? Fault-tolerant MPI is still leading-edge research: we've never needed it before, but at exascale and above the predicted mean time between hardware failures on some node drops to hours.

One of the reasons UFAI could be such an instant danger is the current ubiquity of exploitable bugs on networked computers... yet "how do we write even simple high-performance software without exploitable bugs" seems to be both a much more popular research problem than, and a prerequisite to, "how do we write a FAI", and it's not yet solved.

Comment author: jsteinhardt 03 August 2013 08:54:55PM 1 point [-]

I'm skeptical that we'll prove P!=NP or even find a provably secure encryption scheme before making the world's economy dependent on unproven schemes, etc.

Nitpick, but finding a provably secure encryption scheme is harder than proving P!=NP, since if P=NP then no secure encryption scheme can exist.

Comment author: Kawoomba 03 August 2013 09:20:34PM 0 points [-]

if P=NP then no [provably] secure encryption scheme can exist.

What? Why? Just because RSA would be broken? Shor's algorithm would also do so, even in a proven P!=NP world. There may be other substitutes for RSA, using different complexity classes. There are other approaches altogether. Not to mention one-time pads.
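(A one-time pad survives even P=NP because its security is information-theoretic rather than computational: XOR the message with a truly random key of the same length, used exactly once, and every plaintext is equally consistent with the ciphertext. A minimal sketch, not production crypto:)

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The key must be truly random, exactly as long as the message,
    # and never reused; those conditions, not computational hardness,
    # provide the security guarantee.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so decryption repeats the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

msg = b"attack at dawn"
ct, key = otp_encrypt(msg)
assert otp_decrypt(ct, key) == msg
```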

Comment author: gwern 03 August 2013 10:32:09PM 3 points [-]

As I understand it, if P=NP in a practical sense, then almost all cryptography is destroyed as P=NP destroys one-way functions & secure hashes in general. So RSA goes down, many quantum-proof systems go down, and so on and so forth, and you're left with basically just http://en.wikipedia.org/wiki/Information-theoretic_security

http://www.karlin.mff.cuni.cz/~krajicek/ri5svetu.pdf discusses some of this.
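The reason one-way functions die is that inverting any polynomial-time f is itself an NP search problem: "does some input with this prefix map to y?" is decidable in NP, so a polynomial-time algorithm for NP recovers a preimage bit by bit. A minimal sketch, where the brute-force np_oracle and the toy function are hypothetical stand-ins for the assumed P=NP decision procedure and a real hash:

```python
from itertools import product

def np_oracle(f, y, prefix, n):
    # Stand-in for a polynomial-time NP oracle (implemented here by
    # brute force): does some n-bit string extending `prefix` map to y?
    return any(f(prefix + "".join(rest)) == y
               for rest in product("01", repeat=n - len(prefix)))

def invert(f, y, n):
    # Recover a preimage bit by bit with n oracle calls. If P = NP,
    # each call runs in polynomial time, so any polynomial-time f can
    # be inverted efficiently -- i.e. one-way functions cannot exist.
    # (Assumes a preimage exists; this is a sketch, not a real attack.)
    x = ""
    for _ in range(n):
        x = x + "0" if np_oracle(f, y, x + "0", n) else x + "1"
    return x

# A toy "hash" on 8-bit strings (hypothetical example function):
toy = lambda bits: sum(int(b) << i for i, b in enumerate(bits)) * 37 % 256
y = toy("10110010")
assert toy(invert(toy, y, 8)) == y
```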

Not to mention one-time pads.

And life was so happy with just one-time pads?

Comment author: Kawoomba 03 August 2013 11:35:42PM 1 point [-]

Really, if P=NP, then encrypting your messages would be quite low on the priority list... however, we're not debating the practical impact here, but the claim that "finding a provably secure encryption scheme is harder than proving P!=NP", which was raised as a nitpick and is clearly not the case.

Happiness or unhappiness of life with one-time pads notwithstanding.