All of JackV's Comments + Replies

And a couple of years later, I've not adopted this full-time, but I keep coming back to it and making incremental improvements.

This resonated with me instantly, thank you!

I now remember, I used to do something similar if I needed to make decisions, even minor decisions, when drunk. I'd ask, "What would I think of this decision sober?" If the answer was "it was silly" or "I'd want to do it but be embarrassed" I'd go ahead and do it. But if the answer was "Eek, obviously unsafe", I'd assume my sober self was right and I was currently overconfident.

For what it's worth, I quibbled with this at the time, but now I find it an incredibly useful concept. I still wish it had a more transparent name -- we always call it "the worst argument in the world", and can't remember "noncentral fallacy", but it's been really useful to have a name for it at all.

I think this is a useful idea, although I'm not sure how useful this particular example is. FWIW, I definitely remember this from revising maths proofs -- each proof had some number of non-obvious steps, and you needed to remember those. Sometimes there was just one, and once you had the first line right, even if there was a lot of work to do afterwards, it was always "simplifying in the obvious way", so the rest of the proof was basically "work it out, don't memorise it". Other proofs had LOTS of non-obvious ideas and were a lot harder to remember even if they were short.

FWIW I think of activities that cost time like activities that cost money: I decide how much money/time I want to spend on leisure, and then insist I spend that much, hopefully choosing the best way possible. But I don't know if that would help other people.

I guess "unknown knowns" are the counterpoint to "unknown unknowns" -- things it never occurred to you to consider, and which turned out not to matter. Eg. "We completely failed to consider the possibility that the economy would mutate into a continent-sized piano-devouring shrimp, and it turned out we were right to ignore that."

We completely failed to consider the possibility that the economy would mutate into a continent-sized piano-devouring shrimp, and it turned out we were right to ignore that.

That's survivorship bias.

FWIW, I always struggle to embrace it when I change my mind ("Yay, I'm less wrong!")

But I admit I find it hard: "advocating a new point of view" is a lot easier than "admitting I was wrong about a previous point of view", so maybe striving to do the first whether or not I've done the second would help me change my mind in response to new information a lot quicker?

http://phenomena.nationalgeographic.com/2013/11/26/welcome-to-the-era-of-big-replication/

When he studied which psychological studies were replicable, and had to choose whether to disbelieve some he'd previously based a lot of work on, Brian Nosek said:

I choose the red pill. That's what doing science is.

(via ciphergoth on twitter)

6mwengler
Which one is the red pill again?

I don't like a lot of things he did, but that's the second piece of very good advice I've heard from Rumsfeld. Maybe I need to start respecting his competence more.

8FiftyTwo
It's much easier to generate good advice than to follow it.
4NancyLebovitz
I'm also fond of Rumsfeld quotes. He's oversimplifying -- was it necessary to go to war then? -- but it's still worth thinking about whether a criticism is based on what's actually possible.
1roland
What's the first one?

The "known knowns" quote got made fun of a lot, but I think it's really good out of context:

"There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns – there are things we do not know we don't know."

Also, every time I think of that I try to picture the elusive category of "unknown knowns" but I can't ever think of an example.

Do we make suggestions here or wait for another post?

A few friends are Anglo-Catholic (i.e. members of the Church of England or equivalent -- not Roman Catholic, but catholic; I believe similar to Episcopalian in the USA?), and they weren't sure if they counted as "Catholic", "Protestant" or "Other". It might be good to tweak the names slightly to cover that case. (I can ask for preferred options if it helps.)

https://fbcdn-sphotos-f-a.akamaihd.net/hphotos-ak-prn2/1453250_492554064192905_1417321927_n.jpg http://en.wikipedia.org/wiki/Anglo-Cath... (read more)

I took the survey.

I think most of my answers were the same as last year, although I think my estimates have improved a little, and my hours of internet have gone down, both of which I like.

Many of the questions are considerably cleaned up -- much thanks to Yvain and everyone else who helped. It's very good it has sensible responses for gender. And IIRC, the "family's religious background" was tidied up a bit. I wonder if anyone can answer "atheist" as religious background? I hesitated over the response, since the last religious observan... (read more)

2VAuroch
I could and did answer atheist as background. My parents are both inspoken* nonbelievers, though they attended a Unitarian Universalist church for two years when their kids (me included) were young, for the express purpose (explained well after the fact) of exposing us to religion and allowing us to make our own choices. *The opposite of outspoken.

I emphatically agree with that, and I apologise for choosing a less-than-perfect example.

But when I'm thinking of "ways in which an obviously true statement can be wrong", I think one of the prominent ways is "having a different definition than the person you're talking to, but both assuming your definition is universal". That doesn't matter if you're always careful to delineate between "this statement is true according to my internal definition" and "this statement is true according to commonly accepted definitions", but if you're 99.99% sure your definition is the universal one, it's easy NOT to specify (eg. in the first sentence of the post).

Yeah, that's interesting.

I agree with Eliezer's post, but I think that's a good nitpick. Even if I can't be that certain about 10,000 statements consecutively because I get tired, I think it's plausible that there are 10,000 simple arithmetic statements which, if I understand them, check them against my own knowledge, and remember seeing them in a list on Wikipedia (which is what I did for 53), I've only ever been wrong about once. I find it hard to judge the exact amount, but I definitely remember thinking "I thought that was prime but I didn't really check ... (read more)

Of course, it's hard to be much more certain. I don't know what the chance is that (eg) mathematicians change the definition of prime -- that's pretty unlikely, but similar things have happened before that I thought I was certain of. But rarely.

If mathematicians changed the definition of "prime," I wouldn't consider previous beliefs about prime numbers to be wrong, it's just a change in convention. Mathematicians have disagreed about whether 1 was prime in the past, but that wasn't settled through proving a theorem about 1's primality, the way... (read more)

I think the problem may be what counts as correlated. If I toss two coins and both get heads, that's probably coincidence. If I toss two coins N times and get HH TT HH HH HH TT HH HH HH HH TT HH HH HH HH HH TT HH TT TT HH then there's probably a common cause of some sort.

But real life is littered with things that look sort of correlated, like price of X and price of Y both (a) go up over time and (b) shoot up temporarily when the roads are closed, but are not otherwise correlated, and it's not clear when this should apply (even though I agree it's a good principle).
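The intuition in the coin example can be made quantitative. As a sketch (my own illustration, not from the original comment): under independence, each pair of simultaneous tosses matches with probability 1/2, so a long run of all-matching pairs becomes exponentially improbable, which is what justifies suspecting a common cause.

```python
from math import comb

def p_at_least_k_matches(n, k):
    """P(at least k matching pairs in n tosses of two fair coins),
    where each pair matches independently with probability 1/2."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# The sequence in the comment has 21 pairs, every one a match (HH or TT).
print(p_at_least_k_matches(21, 21))  # ≈ 4.8e-07: strong evidence of a common cause
```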

Note: this isn't always right. Anyone giving advice is going to SAY it's true and non-obvious even if it isn't. "Don't fall into temptation" etc etc. But that essay was talking about mistakes which he'd personally often empirically observed and proposed counter-actions to, and he obviously could describe it in much more detail if necessary.

And the fourth largest country of any sort :)

That's an interesting thought, but it makes me rather uncomfortable.

I think partly, I find it hard to believe the numbers (if I have time I will read the methodology in more detail and possibly be convinced).

And partly, I think there's a difference between offsetting good things, and offsetting bad things. I think it's plausible to say "I give this much to charity, or maybe this other charity, or maybe donate more time, or...". But even though it sort of makes sense from a utilitarian perspective, I think it's wrong (and most people would agree i... (read more)

Hm. My answers were:

Anti-procrastination: "This fit with things I'd tried to do before in a small way, but went a lot farther, and I've repeatedly come back to it and feel I've made intermittent improvements by applying one or another part of it, but not really in a systematic way, so can't be sure that that's due to the technique rather than just ascribing any good results I happen to get to this because it sounded good."

Pomodoro: "I've tried something similar before with intermittently good results and would like to do so more than I do. I... (read more)

I agree that the answers to these questions depend on definitions

I think he meant that those questions depend ONLY on definitions.

As in, there's a lot of interesting real-world knowledge that goes into getting a submarine to propel itself, but now that we know that, people asking "can a submarine swim" is only interesting in deciding "should the English word 'swim' apply to the motion of a submarine, which is somewhat like the motion of swimming, but not entirely". That example sounds stupid, but people waste a lot of time on the similar case of "think" instead of "swim".

0Bugmaster
Ok, that's a good point; inserting the word "only" in there does make a huge difference. I also agree with BerryPick6 on this sub-thread.

Anna Salamon and I usually apply the Tarski Method by visualizing a world that is not-how-we'd-like or not-how-we-previously-believed, and ourselves as believing the contrary, and the disaster that would then follow.

I find just that description really, really useful. I knew about the Litany of Tarski (or Diax's Rake, or believing something just because you wanted it to be true) and have the habit of trying to preemptively prevent it. But that description makes it a lot easier to grok it at a gut level.

And yet it seems to me - and I hope to you as well - that the statement "The photon suddenly blinks out of existence as soon as we can't see it, violating Conservation of Energy and behaving unlike all photons we can actually see" is false, while the statement "The photon continues to exist, heading off to nowhere" is true.

I remember when you drew this analogy to different interpretations of QM and was thinking it over.

The way I put it to myself was that the difference between "laws of physics apply" and "everything ac... (read more)

2[anonymous]
If I understand it correctly, (and I am not sure, feel free to correct me.) it occurs to me that belief may have a very unusual consequence indeed, which seems to be believing that "The photon continues to exist, heading off to nowhere." is true implies that you should also believe the Probability of world P1 is greater than Probability of world P2 below. P1: "You are being simulated on a supercomputer which does not delete anything past your cosmological horizon." P2: "You are being simulated on a supercomputer which deletes anything past your cosmological horizon." Which sounds like a very odd consequence of believing "The photon continues to exist, heading off to nowhere." is true, but as far as I can tell, it appears to be the case.

FWIW, this is one of my favourite articles. I can't say how much it would help everyone -- I think I read it when I was just at the right point to think about procrastination seriously. But I found the analytical breakdown into components an incredibly helpful way to think about it (and I love the sniper rifle joke).

2JackV
And a couple of years later, I've not adopted this full-time, but I keep coming back to it and making incremental improvements.

Tone arguments are not necessarily logical errors

I think people's objections to tone arguments have often been misinterpreted because (ironically) the objections are often explained more emotively and less dispassionately.

As I understand it, the problem with "tone arguments" is NOT that they're inherently fallacious, but rather that they're USUALLY (although not necessarily) rude and inflammatory.

I think a stereotypical exchange might be:

A says something inadvertently offensive to subgroup Beta
B says "How dare you? Blah blah blah"
... (read more)

I don't know if the idea works in general, but if it works as described I think it would still be useful even if it doesn't meet this objection. I don't foresee any authentication system which can distinguish between "user wants money" and "user has been blackmailed to say they want money as convincingly as possible and not to trigger any hidden panic buttons", but even if it doesn't, a password you can't tell someone would still be more secure because:

  • you're not vulnerable to people ringing you up and asking what your password is
... (read more)
0printing-spoon
Easier to avoid with basic instruction. If the enemy knows the system, they can copy the login system in your cell.
0billswift
Indeed, for a recent, real world example, the improvement in systems to make cars harder to steal led directly to the rise of carjacking in the 1990s.

The impression I've formed is that physicists have a pretty good idea what's pretty reliable (the standard model) and what's still completely speculative (string theory) but at some point the popular science pipeline communicating the difference to intelligent scientifically literate non-physicists broke down, and so I became broadly cynical about non-experimentally-verified physics in general, when if I'd had more information, I'd have been able to make much more accurate predictions about which were very likely, and which were basically just guesses.

I'd not seen Eliezer's post on "0 and 1 are not probabilities" before. It was a very interesting point. The link at the end was very amusing.

However, it seems he meant "it would be more useful to define probabilities excluding 0 and 1" (which may well be true), but phrased it as if it were a statement of fact. I think this is dangerous and almost always counterproductive -- if you mean "I think you are using these words wrong" you should say that, not give the impression you mean "that statement you made is false according to your interpretation of those words".

I once skimmed "How to win friends and influence people". I didn't read enough to form a good opinion of the advice (I suspect djcb's description is right: it's fairly good advice as long as the author's experience generalises well, which HTWFAIP probably does better than many books, but not perfectly).

However, what had a profound influence on me was that though there's an unfortunate stereotype of people who've read too much Carnegie seeming slimy and fake, the author seemed to genuinely want to like people and be nice to them, which I thought was lovely.

It seems to me that Eliezer's post was a list of things that typically seem, in the real world, to be components of people's happiness, but are commonly missed out when people propose putative (fictional or futuristic) utopias.

It seemed to me that Eliezer was saying "If you propose a utopia without any challenge, humans will not find it satisfying", not "It's possible to artificially provide challenge in a utopia".

2TheOtherDave
Sure, at that level of abstraction, we're all in agreement: challenge is better than the absence of challenge. The question is whether this particular form of challenge is better than the absence of this particular form of challenge. Just to make the difference between those two levels of abstraction clear: were I to argue, from the general claim that challenge is good, that creating a world where people experience suffering and death so that we can all have the challenge of defeating suffering and death is therefore good, I would fully expect that the vast majority of LW would immediately reject that argument. They would point out, rightly, that just because a general category is good, does not mean that every instance of that category is good, and they would, rightly, refocus the conversation on the pros and cons, not of challenge in general, but of suffering and death in particular. Similarly, the discussion in this comment thread is not about the pros and cons of challenge in general, but of ignorance in particular.

Hm. Now you say it, I think I've definitely read some excellent non-Eliezer articles on Less Wrong. But not as systematically. Are they collated together ("The further sequences") anywhere? I mean, in some sense, "all promoted articles" is supposed to serve that function, but I'm not sure that's the best way to start reading. And there are some good "collections of best articles". But they don't seem as promoted as the sequences.

If there's not already, maybe there should be a bit of work in collecting the best articles by them... (read more)

0David_Gerard
People have indeed started on this, but we could probably do with more. Go for it :-)

Awesome avoidance of potential disagreement in favour of cooperation for a positive-sum result :)

I agree (as a comparative outsider) that the polite response to Holden is excellent. Many (most?) communities -- both online communities and real-world organisations, especially long-standing ones -- are not good at it for lots of reasons, and I think the measured response of evaluating and promoting Holden's post is exactly what LessWrong members would hope LessWrong could do, and they showed it succeeded.

I agree that this is good evidence that LessWrong isn't just an Eliezer-cult. (The true test would be if Eliezer and another long-standing poster were d... (read more)

Yes, I'd agree. (I meant to include that in (B).) I mean, in fact, I'd say that "there are no biological differences between races other than appearance" is basically accurate, apart from a few medical things, without any need for tiptoeing around human biases. Even if the differences were a bit larger (as with gender, or even larger than that), I agree with your last parenthesis that it would probably still be a good idea to (usually) act as if there weren't any.

From context, it seems "race realism" refers to the idea that there are legitimate differences between races, is that correct? However, I'm not sure if it's supposed to refer to biological differences specifically, or any cultural differences? And it seems to be heavily loaded with connotations which I'm unaware of, that I would be hesitant to say it was "true" or "not true" even if I knew the answer to the questions in the two first sentences.

Let me try to summarise the obvious parts of the situation as I understand it. I contend that:

(A) There are some measurable differences between ethnicities that are most plausibly attributed to biological differences. (There are some famous examples, such as greater susceptibility of some people to skin cancer, or sickle cell anemia. I assume there are smaller differences elsewhere. If anyone seriously disagrees, say so.)

(B) These are massively dwarfed by the correlation of ethnicity with cultural differences in almost all cases.

(C) There is a social tabo... (read more)

4Eugine_Nier
Given the existence of taboo (C) how can you possibly have enough evidence to be as sure of (B) as you are?
1Multiheaded
Yep, I also think that the mainstream position on this is largely better than the more naive approach, whether you call it "race realism" or something else. It relies on denial, doublethink and hypocrisy, but none of these are really horrible in themselves - not compared to the things once done under the banner of "racial realism" (slavery, genocide, mistreatment, etc). Now, I understand the HBD advocates' frustration; it might indeed be possible to build a better-working and more honest system - but I fear that most of them don't even understand how much caution they need to exercise! However, I upvoted the transcript of Aurini's talk the moment I read it, as this is unusually good for that contrarian crowd; he displays some much-needed sympathy, courtesy and sorrow at the whole human tragedy.

I think your summary is fine, but I'd add this: almost everyone who thinks in terms of "differences between races" massively overestimates the effect of race alone (social class does matter a lot), to the point that pretending there is no difference is probably a better idea. (Similar to how it's better not to designate a 'current best candidate', if you're human.)

4thomblake
Seems like a pretty good summary. With more detail and possibly some experimental results there might be a good Less Wrong post in there - about the dangers of thinking about race if you're a human.
8JackV
From context, it seems "race realism" refers to the idea that there are legitimate differences between races, is that correct? However, I'm not sure if it's supposed to refer to biological differences specifically, or any cultural differences? And it seems to be heavily loaded with connotations which I'm unaware of, that I would be hesitant to say it was "true" or "not true" even if I knew the answer to the questions in the two first sentences.

I would say add [Video]: [Link] would perpetuate the misunderstanding that there may be no immediate content, [Video] correctly warns people who (for whatever reason) can't easily view arguments in video format.

2[anonymous]
Good idea.

I think this is directly relevant to the idea of embracing contrarian comments.

The idea of having extra categories of voting is problematic, because it's always easy to suggest, but only worthwhile if people will often want to distinguish them, and distinguishing them will be useful. So I think normally it's a well-meaning but doomed suggestion, and better to stick to just one.

However, whether or not it would be a good idea to actually implement, I think separating "interested" and "agree" is a good way of expressing what happens to con... (read more)

That's an awesome comment. I'm interested which specific cues came up that you realised each other didn't get :)

Heh. It was twenty years ago, I'm probably confabulating more than I'm recalling.

To pick an example... I remember observing that both my family and hers had highly specific ways of communicating the difference between a demand, a request, and a question, but the mechanisms had almost nothing in common. In my family, if it was phrased as an interrogative it was either a question or a demand, but never a request, and I was expected to recognize demands by context. In her family, it seemed everything was an interrogative; whatever the cue was, I never really figured it out.

Perhaps the right level of warning is to say "Cambridge UK" in the title and first line, but not take a position on whether other people are likely to be interested or not..?

I've been reading the answers and trying to put words into what I want to say. Ideally people will experience not just being more specific, but experience that when they're more specific, they immediately communicate more effectively.

For instance, think of three or four topics people probably have an opinion on, starting with innocuous (do you like movie X) and going on to controversial (what do you think of abortion). Either have a list in advance, or ask people for examples. Perhaps have a shortlist and let people choose, or suggest something else if the... (read more)

0handoflixue
I like this, because it forces the audience to come up with specific statements, but it doesn't seem to teach them to recognize WHEN they need to be more specific. I'd say it's a very good precursor, to help them see what a specific statement is, and why it's useful. It's actually my favorite from this whole thread for that, so I do think it's a really cool idea! :) (I'm finding it neat how often this thread is identifying, for me, things that ought to be taught BEFORE you even get in to the core 5-second-skill of "recognizing when to be more specific". It reminds me of Eliezer's comments on the Sequences growing exponentially as he realized he needed to establish X before going on to Y, and then realizing he'd also need Q and K)

For that matter, I couldn't stop my mind throwing up objections like "Frodo buys off-the-rack clothes? From where exactly? Surely he'd have tailor made? Wouldn't he be translated into British English as saying 'trousers'? Hobbit feet are big and hairy for Hobbits, but how big are they compared to human feet -- are their feet and inches 2/3 the size?"

It didn't occur to me until I'd read past the first two paragraphs that we were even theoretically supposed to ACTUALLY guess what size Frodo would wear. And I'm still unsure if the badness of the Fro... (read more)

The explanation of the current system, and how to view it in a rationalist manner was really interesting.

The problem as you state it seems to be that the court (and people in general) have a tendency to evaluate each link in a chain separately. For instance, if there was one link with an 80% chance of being valid, both a court and a Bayesian would say "ok, let's accept it provisionally for now", but if there's three or four links, a court might say "each individual link seems ok, so the whole chain is ok" but a Bayesian would say "t... (read more)
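As a quick illustration of the arithmetic being described (my own sketch, assuming the links are independent): links that each look individually acceptable multiply into a chain that is much weaker than any single link.

```python
# Each link evaluated alone looks fine, but the chain's joint
# reliability is the product of the links (assuming independence):
links = [0.8, 0.8, 0.8, 0.8]
chain = 1.0
for p in links:
    chain *= p
print(round(chain, 4))  # 0.4096: "each link seems ok", yet the chain is worse than a coin flip
```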

3TimS
I suspect the low hanging fruit in improving the legal system is related to evaluating witness credibility and eyewitness accuracy. Judges talk all the time about how the factfinder was present when a witness testified and was in a unique position to evaluate the credibility of a witness' testimony. But the evidence is pretty strong that people are terrible at those kinds of judgments and don't realize how bad they are at them.

I would be interested in going along to a meet-up at some point, but am not normally free with less than a week's notice :)

2rebellionkid
Hopefully future meetups will be on a regular basis so they can be advertised in good time.

[Late night paraphrasing deleted as more misunderstanding/derailing than helpful. Edit left for honesty purposes. Hopeful more useful comment later.]

That sounds reasonable. I agree a complete discussion is probably too complicated, but it certainly seems a few simple examples of the sort I eventually gave would probably help most people understand -- it certainly helped me, and I think many other people were puzzled, whereas with the simple examples I have now, I think (although I can't be sure) I have a simplistic but essentially accurate idea of the possibilities.

I'm sorry if I sounded overly negative before: I definitely had problems with the post, but didn't mean to be negative about it.

If I were brea... (read more)

Thank you.

But if you're running something vaguely related to a normal program, if the program wants to access memory location X, but you're not supposed to know which memory location is accessed, doesn't that mean you have to evaluate the memory write instruction in combination with every memory location for every instruction (whether or not that instruction is a memory write)?

So if your memory is -- say -- 500 MB, that means the evaluation is at least 500,000,000 times slower? I agree there are probably some optimisations, but I'm leery of being able to r... (read more)
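A minimal sketch of why hiding the accessed address forces this overhead (my own illustration, not real homomorphic encryption): an "oblivious" read must touch every memory cell and combine them so that only the secret address's cell survives, making each access cost proportional to the whole memory.

```python
def oblivious_read(memory, secret_addr):
    """Read memory[secret_addr] while touching every cell, so an observer
    watching the access pattern learns nothing about secret_addr.
    (Under FHE the comparison and multiply would be done on ciphertexts.)"""
    result = 0
    for i, cell in enumerate(memory):
        select = 1 if i == secret_addr else 0  # encrypted comparison in the real scheme
        result += select * cell                # exactly one term survives
    return result

mem = [10, 20, 30, 40]
print(oblivious_read(mem, 2))  # 30 -- but the cost is O(len(memory)) per access
```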

1paulfchristiano
Running a C program with 500MB of memory would undoubtedly be very slow. Current technologies would be even slower than you suspect for other reasons. However, there are other languages than C, and for some this transformation is much, much more efficient. The game of life is at least parallel, but is also extremely inefficient. You could instead use some fundamentally parallel language which has all of the benefits of the game of life while actually being structured to do computation. Coincidentally, many AI researchers already do work in functional languages like Lisp, which are very well-suited to this transformation. Running a huge Lisp program homomorphically might be practical if you had constant rate homomorphic encryption, which I suspect we will eventually. The slowdown could be something like 10x as slow, running on 20x as many processors. When I say that I think the problem is theoretically possible, I do also mean that I think significant speedups are theoretically possible. Homomorphic encryption is an unnecessarily huge hammer for this problem. It's just that no one has thought about the problem before (because cryptographers don't care about unfriendly AI and people who care about friendly AI don't think this approach is viable) so we have to co-opt a solution to a normal human problem. I could include a complete discussion of these issues, and you are probably right that it would have made a more interesting article. Instead I initially assumed that anyone who didn't know a lot about cryptography had little chance of learning it from a blog post (and people who did know wouldn't appreciate the exposition), and I should post a convincing enough argument that I could resolve the usual concerns about AI's in boxes. I think as it stands I am going to write a post about theoretically safe things to do with an unfriendly AI in a box, and then conditioned on that going well I may revisit the question of building a box in a more well thought out way. Unti

Isn't "encrypt random things with the public key, until it finds something that produces [some specific] ciphertext" exactly what encryption is supposed to prevent?? :)

(Not all encryption, but commonly)
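The point being gestured at: with a deterministic scheme like textbook RSA, anyone holding the public key can confirm a guessed plaintext by simply re-encrypting the guess; randomised padding (semantic security) is what blocks this. A toy illustration with tiny numbers (my own sketch, insecure by construction):

```python
# Toy textbook RSA (tiny primes, illustration only): encryption is
# deterministic, so anyone with the public key can confirm a guess
# by re-encrypting it -- the property the comment is pointing at.
p, q = 61, 53
n, e = p * q, 17  # public key (n = 3233)

def encrypt(m):
    return pow(m, e, n)

ciphertext = encrypt(42)             # intercepted
guess = 42
print(encrypt(guess) == ciphertext)  # True: the guess is confirmed without the private key
```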

1paulfchristiano
In RSA, it is easy to find a message whose encryption begins with "001001," for example.

"You don't quite understand homomorphic encryption."

I definitely DON'T understand it. However, I'm not sure you're making it clear -- is this something you work with, or just have read about? It's obviously complicated to explain in a blog (though Schneier http://www.schneier.com/blog/archives/2009/07/homomorphic_enc.html makes a good start) but it surely ought to be possible to make a start on explaining what's possible and what's not!

It's clearly possible for Alice to have some data, Eve to have a large computer, and Alice to want to run a well-k... (read more)

4jsteinhardt
To answer your question, since Paul apparently failed to --- Paul reads crypto papers and spends non-trivial amounts of time thinking about how to design cryptographic protocols to attain various guarantees. I'm not sure if that constitutes 'working with' or not, but it presumably answers your question either way.
0paulfchristiano
I'm sure I'm not making it clear. I may make a discussion post describing in much more detail exactly what is and is not possible in theoretical computational cryptography, because it is a really beautiful subject that might be independently interesting to many readers. What you say in the last post seems precisely correct. You would use something much more efficient than the game of life. This makes any program run much, much slower. I have limited experience with computer architecture, but I would guess that a single operation on a modern processor would require about 300 homomorphic operations, many of which could be done in parallel. So optimistically you could hope that a program would run only a couple billion times slower using modern cryptography with dedicated hardware (it does parallelize really well though, for what it's worth). I think if it was concluded that quarantining code was useful, smart people could solve this problem with much, much better factors. I would not be surprised if you could make the code run say 100x slower but be provably secure in this sense. The gimped key I mentioned is not a part of the cryptosystem. Again, I have high confidence that it could be worked out if smart people decided this problem was interesting and worked on it.

I definitely have the impression that even if the hard problem a cryptosystem is based on actually is hard (which is yet to be proved, but I agree is almost certainly true), most of the time the algorithm used to actually encrypt stuff is not completely without flaws, which are successively patched and exploited. I thought this was obvious, just how everyone assumed it worked! Obviously an algorithm which (a) uses a long key length and (b) is optimised for simplicity rather than speed is more likely to be secure, but is it really the consensus that some cr... (read more)

0timtyler
This isn't really right. With the cases where algorithms can be proved as secure as some math problem, you can attack the protocol they are used in - or the RNG that seeds them - but not really the algorithm itself. Of course, not all cryptography is like that, though. http://en.wikipedia.org/wiki/Provable_security
5paulfchristiano
A provably correct implementation of any reasonable encryption scheme requires automatic verification tools which are way beyond our current ability. I strongly suspect that we will solve this problem long before AGI though, unless we happen to stumble upon AGI rather by accident. I think that you could implement the scheme I described with reasonable confidence using modern practices. Implementing encryption alone is much easier than building an entire secure system. Most flaws in practice come from implementation issues in the system surrounding the cryptography, not the cryptography itself. Here the surrounding system is extremely simple. You also have a large advantage because the design of the system already protects you from almost all side channel attacks, which represent the overwhelming majority of legitimate breaks in reasonable cryptosystems.

LOL. Good point. Although it's a two way street: I think people did genuinely want to talk about the AI issues raised here, even though they were presented as hypothetical premises for a different problem, rather than as talking points.

Perhaps the orthonormal law of less wrong should be, "if your post is meaningful without fAI, but may be relevant to fAI, make the point in the least distracting example possible, and then go on to say how, if it holds, it may be relevant to fAI". Although that's not as snappy as Godwin's :)
