Comment author: Sniffnoy 29 July 2012 01:47:36PM 2 points

Note that this makes the struck-through text very difficult to search.

Comment author: robertskmiles 30 July 2012 03:55:44PM *  1 point

If LW's markdown is like reddit's, double tilde before and after will strike through text. ~~Let's see if that works~~

Edit: It doesn't. Does anyone know how I would go about fixing this?

Edit2: The issue tracker suggests it's been fixed, ~~but it doesn't seem to be~~.
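For the curious, here is a minimal sketch of what a reddit-style renderer does with double tildes. This is purely illustrative Python, not LW's or reddit's actual implementation:

```python
import re

def render_strikethrough(text: str) -> str:
    """Replace reddit-style ~~spans~~ with HTML <del> tags."""
    # Non-greedy match, so "~~a~~ and ~~b~~" produces two separate spans.
    return re.sub(r"~~(.+?)~~", r"<del>\1</del>", text)

print(render_strikethrough("~~Let's see if that works~~"))
# -> <del>Let's see if that works</del>
```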

Comment author: skeptical_lurker 29 July 2012 08:21:02PM 0 points

But remember that it's not just your own rationality that benefits you.

Are you saying that improving epistemic rationality is important because it benefits others as well as myself? This is true, but there are many other forms of self-improvement that would also have knock-on effects that benefit others.

I have actually read most of the relevant sequences; epistemic rationality really isn't low-hanging fruit for me anymore, although I wish I had known about cognitive biases years ago.

Comment author: robertskmiles 30 July 2012 11:18:04AM *  1 point

Are you saying that improving epistemic rationality is important because it benefits others as well as myself?

No, I'm saying that improving the epistemic rationality of others benefits everyone, including yourself. It's not just about improving our own rationality as individuals; it's about trying to improve the rationality of people in general: 'raising the sanity waterline'.

Comment author: skeptical_lurker 28 July 2012 06:51:25PM *  9 points

Hello everyone. Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that FAI implementing CEV can actually be designed before the singularity (I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it would be presumptuous to criticize a theory put forward by very smart people when I only have 1 karma...).

Oddly enough, I am not interested in improving epistemic rationality right now, partially because I am already quite good at it. But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I'm sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive (unless they are talking to other rationalists). I think I am borderline Asperger's (again, like many people here), and optimizing social skills probably takes precedence over most other things.

I am currently doing a PhD in "absurdly simplistic computational modeling of the blatantly obvious", which had better damn well have some signaling value. In my spare time, to stop my brain turning to mush, among other things I am writing a story which is sort of rationalist, in that some of the characters keep using science effectively even when the world is going crazy and the laws of physics seem to change depending on whether you believe in them. On the other hand, some of the characters are (a) heroes/heroines, (b) awesomely successful, and (c) hippies on acid who do not believe in objective reality (not that I am implying that all hippies, or all people who use LSD, are irrational). Maybe the point of the story is that you need more than just rationality? Or that some people are powerful because of rationality, while others have imagination, and that friendship combines their powers in a My Little Pony-like fashion? Or maybe it's all just an excuse for pretentious philosophy and psychic battles?

Comment author: robertskmiles 28 July 2012 08:23:15PM 6 points

I am not interested in improving epistemic rationality right now, partially because I am already quite good at it.

But remember that it's not just your own rationality that benefits you.

it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma

Presume away. Karma doesn't win arguments; arguments win karma.

Comment author: evand 28 July 2012 05:30:29PM 2 points

I must conclude one (or more) of a few things from this post, none of them terribly flattering.

  1. You do not actually believe this argument.
  2. You have not thought through its logical conclusions.
  3. You do not actually believe that AI risk is a real thing.
  4. You value the plus-votes (or other social status) you get from writing this post more highly than you value marginal improvements in the likelihood of the survival of humanity.

I find it rather odd to be advocating self-censorship, as it's not something I normally do. However, I think in this case it is the only ethical action that is consistent with your statement that the argument "might work", if I interpret "might work" as "might work with you as the gatekeeper". I also think that the problems here are clear enough that, for arguments along these lines, you should not settle for "might" before publicly posting the argument. That is, you should stop and think through its implications.

Comment author: robertskmiles 28 July 2012 07:19:23PM *  0 points

I'm not certain that I have properly understood your post. I'm assuming that your argument is: "The argument you present is one that advocates self-censorship. However, the posting of that argument itself violates the self-censorship that the argument proposes. This is bad."

So first I'll clarify my position with regards to the things listed. I believe the argument. I expect it would work on me if I were the gatekeeper. I don't believe that my argument is the one that Eliezer actually used, because of the "no real-world material stakes" rule; I don't believe he would break the spirit of a rule he imposed on himself. At the time of posting I had not given a great deal of thought to the argument's ramifications. I believe that AI risk is very much a real thing. When I have a clever idea, I want to share it. Neither votes nor the future of humanity weighed very heavily on my decision to post.

To address your argument as I see it: I think you have a flawed implicit assumption, i.e. that posting my argument has a comparable effect on AI risk to that of keeping Eliezer in the box. My situation in posting the argument is not like the situation of the gatekeeper in the experiment, with regard to the impact of their choice on the future of humanity. The gatekeeper is taking part in a widely publicised 'test of the boxability of AI', and has agreed to keep the chat contents secret. The test can only pass or fail; those are the gatekeeper's options. But publishing "Here is an argument that some gatekeepers may be convinced by" is quite different from allowing a public boxability test to show AIs as boxable. In fact, I think the effect on AI risk of publishing my argument is negligible or even positive, because I don't think reading my argument will persuade anyone that AIs are boxable.

People generally assess an argument's plausibility based on their own judgement. And my argument takes as a premise (or intermediate conclusion) that AIs are unboxable (see 1.3). Believing that you could reliably be persuaded that AIs are unboxable, or believing that a smart, rational, highly-motivated-to-scepticism person could reliably be persuaded that AIs are unboxable, is very, very close to personally believing that AIs are unboxable. In other words, the only people who would find my argument persuasive (as presented in overview) are those who already believe that AIs are unboxable. The fact that Eliezer could have used my argument to cause a test to 'unfairly' show AIs as unboxable is actually evidence that AIs are not boxable, because that fact is more likely in a world in which AIs are unboxable than in one in which they are boxable.
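One way to formalize that last step (my restatement, in standard odds form rather than anything from the original experiment): write E for "such a persuasive argument exists". By the odds form of Bayes' theorem,

$$\frac{P(\text{unboxable}\mid E)}{P(\text{boxable}\mid E)} = \frac{P(\text{unboxable})}{P(\text{boxable})} \cdot \frac{P(E\mid\text{unboxable})}{P(E\mid\text{boxable})},$$

so if E is more likely in a world where AIs are unboxable, observing E shifts the odds toward "unboxable".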

P.S. I love how meta this has become.

Comment author: robertskmiles 25 July 2012 10:34:38AM *  2 points

But it turns out that tasks that seem easy to us can in fact require such a specialized region.

In a way, this really shouldn't be surprising at all. Any common mental task which has its own specialised region will of course seem easy to us, because it doesn't make use of the parts of the brain we are consciously aware of.

Comment author: robertskmiles 17 July 2012 07:41:47PM 3 points

I think the "just make sure everyone agrees on everything" idea is a good one, but quite difficult in practice.

Comment author: robertskmiles 12 July 2012 04:56:33PM *  7 points

With this line of argument, there's literally no point at which one can sit back and say, "I've fulfilled my duty to charity - there's nothing more to do".

That reminds me very strongly of something I read in a Jewish prayerbook, or possibly the Talmud, a long time ago. I can't find it with Google (translation being what it is), but here's my best recollection:

"It is a command we are given repeatedly in Torah. But what does it really mean to 'love your neighbour as yourself'? ... Never would a man say 'I have fulfilled my obligation to myself'. In the same way, you have never fulfilled your obligation to your neighbour."

Taking the comparison back the other way raises what I think is an interesting question. People have no issues with the idea that your obligations to yourself are unbounded, so why does having unbounded obligations to others pose a problem?

There's literally no point at which one can sit back and say, "I've fulfilled my duty to myself - there's nothing more to do".

Comment author: Multiheaded 18 June 2012 12:32:24PM -1 points

Oh jeez, just screw it. Seems that I can't say even a slightly, tangentially ideological thing without fucking up.

Comment author: robertskmiles 18 June 2012 12:53:31PM 0 points

I don't think you fucked up. Down-votes aren't from me.

Anyway, yeah, I agree: Stephenson's own position is very different from the Vickys'. I still think they're the "good guys" in the story, even though their opinions aren't held by the author.

Comment author: Multiheaded 18 June 2012 11:48:49AM *  -1 points

(Note for those who haven't read it: the Vickys aren't depicted as "the good guys" all in all, although they have badass moments. Actually, it might be a good "scary eutopia" by Eliezer's standards, but the major factions are optimized to look simply scary to the intended audience of geeks, with the exception of (surprise!) the ones based on "hacker values", such as CryptNet or the Distributed Republic.)

Comment author: robertskmiles 18 June 2012 11:59:26AM 1 point

I'm not sure I agree with you on that. CryptNet and the Distributed Republic have quite minor roles in the story, and pretty much every single major character (with the exception of the Confucians) is a Neo-Victorian. It's too good a book to have Righteous Kind And Noble Heroes Beyond All Reproach, and the Vickys have their problems, but I'd say they are basically "the good guys" of the story, and if not that, then certainly its protagonists.

Comment author: robertskmiles 18 June 2012 11:38:08AM *  14 points

I think the criticism of 6 is a misunderstanding. It doesn't say "the world resembles the ancestral savanna"; it says "the world resembles the ancestral savanna more than, say, a windowless office". The best environment is unlikely to be anything like the ancestral savanna, but it's likely to be closer to that than to a windowless office, in terms of sensory experience. The point, I think, is not the specifics of the environment, but that it engages with our bodies and senses in a way that we, as evolved creatures, find satisfying, and in a way that the purely mental stimulation available in the office does not.

That's what I took away from the linked post.
