Wei_Dai comments on Against easy superintelligence: the unforeseen friction argument - Less Wrong

Post author: Stuart_Armstrong 10 July 2013 01:47PM


Comment author: Wei_Dai 10 July 2013 09:05:52PM 10 points [-]

Two other examples that I'm familiar with:

In the mid-1990s, cryptographers at Microsoft were saying (at least privately, to each other) that DRM technology was hopeless, which has turned out to be the case: every copy-protection scheme for mass-market products (e.g., DVDs, Blu-rays, video games, productivity software) has been quickly broken.

A bit more than 10 years ago I saw that the economics of computer security greatly favored the offense (i.e., the cyberweapon will always get through) and shifted my attention away from that field as a result. This still seems to be the case today, maybe to an even greater extent.

Comment author: elharo 11 July 2013 09:43:01AM *  5 points [-]

Maybe not. DRM does not prevent copying. It does, however, enable control over who is allowed to produce which devices. E.g., DRM makes it much harder to market a DVR, DVD player, cable box, or software that can connect to the iTunes Music Store. Circumventing it is not a significant technical challenge, but it is a legal one. HTML5 editor Ian Hickson has made this really clear.

Comment author: timtyler 11 July 2013 01:21:58AM 0 points [-]

DRM technology is quite widely deployed. It also stops lots of copying. So: your comments about it being "hopeless" seem a bit strange to me.

Comment author: Wei_Dai 11 July 2013 04:32:57AM 8 points [-]

Well, hopeless relative to the hopes that some people had at that time. For example, from Wikipedia:

BD+ played a pivotal role in the format war of Blu-ray and HD DVD. Several studios cited Blu-ray Disc's adoption of the BD+ anti-copying system as the reason they supported Blu-ray Disc over HD DVD. The copy protection scheme was to take "10 years" to crack, according to Richard Doherty, an analyst with Envisioneering Group.

and

The first titles using BD+ were released in October 2007. Since November 2007, versions of BD+ protection have been circumvented by various versions of the AnyDVD HD program.

Comment author: David_Gerard 11 July 2013 06:49:06AM *  7 points [-]

It also stops lots of copying.

(a) Numbers?

(b) What's your evidence that it makes a damn bit of difference? What people want to copy, they do copy.

DRM is sold as security from copying. It has failed utterly, because such security is impossible in theory, and has turned out impossible in practice.

Comment author: ThisSpaceAvailable 16 July 2013 04:16:43AM -1 points [-]

"In theory" is a bit of a slippery term, since all encryption can be cracked in theory. Apart from that, DRM is possible in practice, if you can completely control the hardware. Once you're allowed to hook any TV you want into your DVD player, uncrackable DRM goes out the window, because the player has to supply the TV with unencrypted video. The other way DRM can work is if users aren't viewing all of the content, and there's a way to require external credentials. For instance, people can be forced to buy separate copies of Diablo III if they want to play on BattleNet.

Comment author: Stuart_Armstrong 16 July 2013 10:23:35PM 2 points [-]

all encryption can be cracked in theory

Is it too pedantic to mention one-time pads?

Comment author: [deleted] 16 July 2013 10:24:55PM 1 point [-]

A one-time pad has to be transmitted, too. MITM will crack it.

Comment author: wedrifid 17 July 2013 08:36:11AM 1 point [-]

A one-time pad has to be transmitted, too. MITM will crack it.

A one-time pad that needs to be transmitted can be defeated by MITM. But if the relevant private mutual information is already shared, or is shared directly without encryption, then the encryption the parties use to communicate is not (even in theory) required to be crackable. Since the claim was that "all encryption can be cracked in theory", it is not enough for some cases to be crackable; all must be.

Comment author: Stuart_Armstrong 17 July 2013 07:44:38AM *  0 points [-]

Fair enough - I was out-pedanted!

Comment author: wedrifid 17 July 2013 08:25:12AM *  1 point [-]

Is it too pedantic to mention one-time pads?

No, that's an entirely valid point, and I even suggest you were in error when you conceded. If two individuals share enough private mutual information, theory allows them encryption that cannot be cracked.
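As a concrete illustration (a minimal Python sketch, not production crypto): a one-time pad is XOR with a truly random, pre-shared key as long as the message, used exactly once. Given any ciphertext, every plaintext of the same length is consistent with some key, so no amount of computation recovers the message.

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a key of equal length (the same operation encrypts and decrypts)."""
    assert len(key) == len(data), "key must be exactly as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # pre-shared in advance, never reused
ciphertext = otp_xor(message, key)
assert otp_xor(ciphertext, key) == message  # XOR with the same key decrypts
```

The guarantee depends entirely on the key being truly random, as long as the message, kept secret, and never reused; encrypt two messages under one key and XORing the ciphertexts leaks the XOR of the plaintexts.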

Comment author: wedrifid 17 July 2013 08:49:01AM 1 point [-]

"In theory" is a bit of a slippery term, since all encryption can be cracked in theory.

This is what we call The Fallacy of Gray. There is a rather clear difference between the possibility of brute-forcing 1024-bit encryption and the utter absurdity of considering a DRMed multimedia file 'secure' when I could violate it using a smartphone with a video camera (and lossless proof-of-concept violations are as simple as realising that vmware exists).

Comment author: Pentashagon 11 July 2013 07:33:29PM 0 points [-]

A bit more than 10 years ago I saw that the economics of computer security greatly favored the offense (i.e., the cyberweapon will always get through) and shifted my attention away from that field as a result. This still seems to be the case today, maybe to an even greater extent.

When do you foresee that changing to an advantage for the defense? Presumably sometime before FAI needs to be invulnerable to remote exploits. All of the technological pieces are in place (proof-carrying code, proof-generating compilers), but they simply aren't used by much of the industry and, importantly, not by any operating system I'm aware of.

Comment author: Wei_Dai 17 July 2013 10:06:30AM *  2 points [-]

When do you foresee that changing to an advantage for the defense? Presumably sometime before FAI needs to be invulnerable to remote exploits.

I don't currently foresee the economics of computer security changing to an advantage for the defense. The FAI, as well as the FAI team while it's working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason why I'm against trying to build an FAI in a basement.

All of the technological pieces are in place (proof carrying code, proof-generating compilers) but simply aren't used by many in the industry and importantly by none of the operating systems I'm aware of.

I'm not an expert in this area, but the lack of large scale deployments makes me suspect that the technology isn't truly ready. Maybe proof carrying code is too slow or otherwise too resource intensive, or it's too hard to formalize the security requirements correctly? Can you explain what convinced you that "all of the technological pieces are in place"?

Comment author: asr 17 July 2013 01:59:55PM *  1 point [-]

Speaking as somebody who works in computer systems research:

I agree with Pentashagon's impression: we could engineer a compiler and operating system with proof-carrying code tomorrow, without needing any major research breakthroughs. Things very similar to proof-carrying code are in routine deployment. (In particular, Java bytecode comes with proofs of type safety that are checked at load time, and researchers have built statically verified kernels and compilers.)

I believe the real barrier at this point is that any sort of verification effort has to go bottom-up, and that means building new libraries, operating systems, etc. ad nauseam before anything else runs. And that's just a huge expense and means losing a lot of legacy code.

My impression is that it's not a performance problem. In the schemes I've seen, PCC is checked at load or link time, not at run-time, so I wouldn't expect a big performance hit.
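To make the load-time-check point concrete, here is a toy sketch of the proof-carrying-code shape (illustrative Python; the function names and the "certificate" format are invented for this example, and a real certificate would be a machine-checkable safety proof rather than a list of indices):

```python
# Toy sketch of the proof-carrying-code shape: untrusted code ships with a
# certificate; the consumer checks the certificate once at load time, then
# runs the code at full speed with no run-time checks.

def untrusted_producer():
    data_len = 4
    code = [("read", 0), ("read", 3), ("read", 1)]    # the "program"
    certificate = [idx for (_, idx) in code]          # claimed memory accesses
    return code, certificate, data_len

def load_time_check(code, certificate, data_len):
    """Consumer-side check: the certificate covers every access the code
    makes, and every claimed access is in bounds. Cheap to verify."""
    claimed = set(certificate)
    return (all(idx in claimed for (_, idx) in code)
            and all(0 <= idx < data_len for idx in claimed))

def run(code, data):
    return [data[idx] for (op, idx) in code if op == "read"]

code, cert, n = untrusted_producer()
data = [10, 20, 30, 40]
if load_time_check(code, cert, n):
    print(run(code, data))  # prints [10, 40, 20]; no checks during the run
else:
    raise RuntimeError("certificate rejected; the code never executes")
```

The asymmetry that makes the idea attractive is visible even in the toy: producing the certificate may be expensive, but checking it is cheap and happens once, before execution.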

Separately, I'm not sure PCC gets you quite as much security as you might need. Users make mistakes -- grant too many permissions, put their password where they shouldn't, etc. That's not a problem you can solve with PCC.

Comment author: Pentashagon 18 July 2013 12:11:29AM 0 points [-]

I don't currently foresee the economics of computer security changing to an advantage for the defense. The FAI, as well as the FAI team while it's working on the FAI, will probably have to achieve security by having more resources than the offense, which is another reason why I'm against trying to build an FAI in a basement.

If that's true then I'm worried about the ability of the FAI developers to protect the hardware from the FAI as it learns. What safeguards the FAI from accidentally triggering a bug that turns it into UFAI as it explores and tests its environment? The period between when the initial self-improving FAI is turned on and the point at which it is confident in the correctness of the system it runs on seems unnecessarily risky. I'd prefer that the FAI, along with its operating system and libraries, be formally proven type-safe at a minimum.

Hardware is potentially even harder. How does the FAI ensure that a bit flip or hardware bug hasn't turned it into UFAI? Presumably running multiple instances in voting lock-step with as much error correction as possible on as many different architectures as possible would help, but I think an even more reliable hardware design process will probably be necessary.
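The voting idea can be sketched in a few lines (illustrative Python; a real design would diversify hardware, add error correction, and compare state at a much finer granularity than whole outputs):

```python
from collections import Counter

def vote(outputs):
    """Majority-vote over replica outputs; report which replicas disagreed.
    Halt if no strict majority exists, rather than propagate a bad value."""
    (winner, count), = Counter(outputs).most_common(1)
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority among replicas; halting")
    disagreed = [i for i, out in enumerate(outputs) if out != winner]
    return winner, disagreed

# Three replicas compute the same step; replica 1 suffers a single bit flip.
outputs = [42, 42 ^ (1 << 3), 42]
value, faulty = vote(outputs)
print(value, faulty)  # prints: 42 [1]
```

With three replicas this masks any single fault; flagging the dissenting replica also gives the system a signal to take that instance offline for checking.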

I'm not an expert in this area, but the lack of large scale deployments makes me suspect that the technology isn't truly ready. Maybe proof carrying code is too slow or otherwise too resource intensive, or it's too hard to formalize the security requirements correctly? Can you explain what convinced you that "all of the technological pieces are in place"?

As asr points out, economics is probably the biggest reason. It's cost-prohibitive to formally prove the correctness of every component of a computer system, and there's a break-even point for the overall system where hardware reliability drops below software reliability. The security model will be the most difficult piece to get right in complex software that has to interact with humans, but type safety and memory safety are probably within our grasp now. To the best of my knowledge, the bugs in Java are not type errors in the bytecode but in the JVM and native library implementations, which are not proven to be type-safe. Again, it comes down to the economic cost of type-safe bytecode versus fast C/C++ routines.