Immortal humans can go horribly wrong, unless "number of dying humans" is really what you want to minimize.
"Increase my utility as much as you can"?
"Increase my utility as much as you can"?
That would just cause them to pump chemicals into your head, I think. But it's definitely thinking in the right direction.
"number of dying humans" is really what you want to minimize.
Even with pseudo immortality, accidents happen, which means that the best way to minimize the number of dying humans is either to sterilize the entire species or to kill everyone. The goal shouldn't be to minimize death but to maximize life.
Overwrite my current utility function upon your previous motivational networks, leaving no motivational trace of their remains.
That actually seems like it'd work.
I want my old model of Eliezer Yudkowsky back!
Eliezer Yudkowsky is the supreme being to whom it is up to all of us to become superior!
I think chaosmosis would prefer to perceive this as occurring through a change in chaosmosis than a change in chaosmosis's evidence about Eliezer.
No preference.
I don't understand how your comment is responsive to atorm's, though, so I might be missing something here.
And lo, people began tweeting:
Which is false. This pushes as far in the opposite wrong direction as the viewpoint it means to criticize.
Evolutionary biology, the non-epistemological part of the exposition of quantum mechanics, and of course heuristics and biases are all not original. They don't look deceptively original either; they cite, or quote with attribution, the sources they're taken from. I have yet to encounter anyone who thinks the Sequences are more original than they are.
When it comes to the part that isn't reporting on standard science, the parts that are mostly dealt with by modern "philosophers" rather than experimental scientists of one kind or another, the OP vastly overstates how much of the Sequences is similar to the standard stuff out there. There is such a vast variety of philosophy that you can often find a conclusion similar to anything, to around the same degree that Leibniz's monadology anticipated timeless quantum mechanics, i.e., not very much. The motivations, the arguments by which things are pinned down, the exact form of the conclusions, and what is done with those conclusions are most of the substance - finding a conclusion that happens to look vaguely similar does not mean that I was reporting someone else's academic work and failing to cite it, or reinventing work that had already been done. Nor is it understating any sort of "close agreement" with even those particular concluders, let alone the field as a whole within which those are small isolated voices.

Hofstadter's superrationality is an acknowledged informal forerunner of TDT. But finding other people who think you ought to cooperate in the PD, but can't quite formalize why, is not the same as TDT being preinvented. (Also, TDT doesn't artificially sever decision nodes from anything upstream; the idea is that observing your algorithm, but not its output, is supposed to screen off things upstream. This is "similar" to some attempts to rescue evidential decision theory, e.g. by Eells, but not quite the same thing when it comes to important details like not two-boxing on Newcomb's Problem.) And claiming that in principle philosophical intuitions arise within the brain is not the same as performing any particular dissolution of a confused question, or even following the general methodology of dissolution as practiced and described by Yudkowsky or Drescher (who actually does agree, and demonstrates the method in detail within "Good and Real").
I'm also still not sure that Luke quite understands what the metaethics sequence is trying to say, but then I consider that sequence to have basically failed at exposition anyway. Unfortunately, there's nothing I can point Luke or anyone else at which says the same thing in more academic language.
Several of these citations are from after the originals were written! Why not (falsely) claim that academia is just agreeing with the Sequences, instead?
I don't understand what the purpose of this post was supposed to be - what positive consequence it was supposed to have. Lots of the Sequences are better exposition of existing ideas about evolutionary biology or cognitive biases or probability theory or whatever, which are appropriately quoted or cited within them? Yes, they are. People introducing Less Wrong should try to refer to those sources as much as possible when it comes to things like heuristics and biases, rather than talking like Eliezer Yudkowsky somehow invented the idea of scope insensitivity, so that they don't sound like phyg victims? Double yes. But writing something that predictably causes some readers to get the impression that ideas presented within the Sequences are just redoing the work of other academics, so that they predictably tweet,
...I do not think the creation of this misunderstanding benefits anyone. It is also a grave sin to make it sound like you're speaking for a standard academic position when you're not!
And I think Luke is being extremely charitable in his construal of what's "already" been done in academia. If some future anti-Luke is this charitable in construing how much of future work in epistemology and decision theory was "really" all done within the Sequences back in 2008, they will claim that everything was just invented by Eliezer Yudkowsky way back then - and they will be wrong - and I hope somebody argues with that anti-Luke too, and doesn't let any good feeling for ol' E. Y. stand in their way, just like we shouldn't be prejudiced here by wanting to affiliate with academia or something.
I get what this is trying to do. There's a spirit in LW which really is a spirit that exists in many other places; you can get it from Feynman, Hofstadter, the better class of science fiction, Tooby and Cosmides, many beautiful papers that were truly written to explain things as simply as possible - the same place I got it. (Interesting side note: John Tooby is apparently an SF fan who grew up reading van Vogt and Null-A, so he got some of his spirit from the same sources I did! There really is an ancient and honorable tradition out there.) If someone encounters that spirit in LW for the first time, they'll think I invented it. Which I most certainly did not. If LW is your first introduction to these things, then you really aren't going to know how much of the spirit I learned from the ancient masters... because just reading a citation, or even a paragraph-long quote, isn't going to convey that at all. The only real way for people to learn better is to go out and read Language in Thought and Action or The Psychological Foundations of Culture. Doing this, I would guess, gave Luke an epiphany he's trying to share - there's a whole world out there, not just LW the way I first thought. But the OP doesn't do that. It doesn't get people to read the literature. Why should they? From what they can see, it's already been presented to them on LW, after all. So they won't actually read the literature and find out for themselves that it's not what they've already read.
There's literature out there which is written in the same spirit as LW, but with different content. Now that's an exciting message. It might even get people to read things.
With both your comment here and your comments on the troll-fee issue I've found you coming across as arrogant. This perception seems to roughly match the response that other people have had to those comments as well, since most people disagreed with you in both areas (judging by number of upvotes). I hadn't perceived you that way before now, so I'm wondering if something happened to you recently that's altered the way you post or the way you think. This change is for the worse; I want my old model of Eliezer Yudkowsky back!
Frankly, I have found the sequences to be primarily useful for condensing concepts that I already had inside my head. The ideas expressed in almost all of the sequences are blatantly obvious, but they come across as catchy and often are reducible to a quick phrase. Their value lies in the fact that they make it easy to internalize certain ideas so that they're more readily accessible to me. They also helped clarify the boundaries of some concepts, to a certain extent. The sequences have provided me with a useful terminology, but I don't think they've offered me much else.
What ideas do you believe to be original that you've produced?
Is there a reason that defending the originality of the sequences is so important to you?
For strength, you use Dugbogs that were crushed by a strong Re'em.
For heat, you use bronze that was forged in a hot forge.
For immortality, you use a corpse that was burned by an immortal phoenix.
K. I was confused because Bellatrix hadn't died, mostly. Your edit helped.
There are no decision theory experts.
In practice, true.
This may sound a bit crazy right now, but hear me out: rationality is not the territory.
Amusing.
Replace humanity with paperclips.
Clippy.
Have you read My Little Horcrux: Friendship is Torture?
My second favorite one.
Timeless sex is highly correlated with Friendly Lukeprog.
I do not know Luke in any way, and that's not my orientation, but I lol'd.
I haven't seen any links to this on Lesswrong yet, and I just discovered it myself. It's extremely interesting, and has a lot of implications for how people's perceptions and judgments of others are largely determined by environmental context. It's also a fairly good indictment of presumably common psychiatric practices, although it's also presumably outdated by now. Maybe some of you are already familiar with it, but I thought I'd mention it and post a link for those of you who aren't.
There's probably newer research on this, but I don't have time to investigate it at the moment.
http://en.wikipedia.org/wiki/Rosenhan_experiment
how would a possibly insane person determine that insanity X is a possible kind of insanity?
Perhaps they couldn't. I'm not sure what that has to do with anything.
Also, this approach presumes that your understanding of the way probabilities work, and of the existence of probability at all, is accurate. Using the concept of probability to justify your position here is just a very sneaky sort of circular argument.
Sure. If I'm wrong about how probability works, then I might be wrong about whether I can rule out having X-type insanity (and also might be wrong about whether I can rule out being a butterfly).
Perhaps they couldn't. I'm not sure what that has to do with anything.
I didn't think that your argument could function on even a probabilistic level without the assumption that X-insanity is an objectively real type of insanity. On second thought, I think your argument functions just as well as it would have otherwise.
But, if someone doesn't want to admit that logic exists or you just disagree with someone as to what logic is, there's really nothing to be done but to walk away.
That's not necessarily true. If we disagree on what logic is, I can work out the rules of what you consider logic and decide whether, using those rules, I come to a different conclusion than you do (in which case I can try to convince you of that different conclusion using your rules), or I can attempt to convince you that you're wrong via illogical means (like telling you a convincing story, or using question-begging language, or etc.). I can also do the latter if you reject logic altogether.
Truth, thanks.
I saw something for the first time today. I replied to a comment that had been down-voted, and the site asked me,
Replies to downvoted comments are discouraged. Pay 5 Karma points to proceed anyway?
So, if one person dislikes a comment, it shouldn't be responded to? I disagree strongly. This makes the site enforce a tyranny of the majority. It punishes resistance to groupthink.
I don't think Alice should be prohibited from responding to Bob, ever. If two users create drama with back-and-forth responses, they have both chosen to do so.
I missed some of the earlier threads and didn't want to reignite them. I feel more comfortable replying to PhilGoetz's comment since it's only from two days ago.
One problem that I didn't see anyone discuss is that this feature is likely to drive away new users. The policy discourages interaction with new users, because unpopular comments overlap significantly with comments from new users. By discouraging commenters from responding to new users' low-quality posts, we disincentivize picking the low-hanging fruit, which is the opposite of what we should be doing. In addition, because the karma penalty is a fixed amount rather than a fraction of total accumulated karma, new users face much heavier effective fees than regular users, which will also result in increased insularity.
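The asymmetry between a flat fee and a proportional one is easy to see with some hypothetical numbers. A quick sketch (the 5-point fee matches the site's reply toll; the account balances and the proportional rate are invented for illustration):

```python
# Compare a flat karma fee against a hypothetical proportional fee.
# The 5-point fee is the site's actual toll; balances and the rate are made up.

FLAT_FEE = 5

def flat_cost_fraction(karma: int) -> float:
    """Fraction of a user's total karma consumed by one flat fee."""
    return FLAT_FEE / karma

def proportional_fee(karma: int, rate: float = 0.0005) -> float:
    """An alternative fee scaled to the user's total karma."""
    return karma * rate

new_user, veteran = 20, 10_000

print(flat_cost_fraction(new_user))   # 0.25 -> a quarter of the newcomer's karma
print(flat_cost_fraction(veteran))    # 0.0005 -> negligible for the veteran
print(proportional_fee(new_user))     # a proportional fee costs both the same share
print(proportional_fee(veteran))
```

Under the flat scheme, one reply costs the newcomer 500 times the share of karma it costs the veteran; a proportional scheme equalizes the sting.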
I'd say telling an interviewer you have sufficient confidence in your product not to need a backup plan is rational; actually not having one isn't.
This comes across as inauthentic and slightly scared to me. At best, he's not great at PR. At worst, he doesn't have any backup plan. So that would support calling it irrationality.
Well. I was thinking about it, and it seems like not having a backup plan is the kind of thing that would send bad signals to investors and whatnot. It's not clear to me that he's better off doing this than explaining how Microsoft is a fantastically professional company that's innovating and reaching into new frontiers, etc.
I don't know specifically what alternate products would potentially be good ideas for them, though. I agree that backup plans are good in general, but I don't know if they're good for Microsoft specifically, based on the resources they have. Windows is kind of their thing; I don't know if they could execute on anything else.