
The Fundamental Question

Post author: MBlume, 19 April 2010 04:09PM (43 points)

It has been claimed on this site that the fundamental question of rationality is "What do you believe, and why do you believe it?".

A good question it is, but I claim there is another of equal importance. I ask you, Less Wrong...

What are you doing?

And why are you doing it?

Comments (277)

Comment author: sketerpot 20 April 2010 08:05:34AM 11 points

I'm writing some programs to take some numbers from a typical "new renewable energy plant under construction!!" news article and automatically generate summaries of how much it costs compared to other options, what the expected life span will be, and so on. I intend to go on a rampage across the internet, leaving concise summaries in the comment sections of these news articles, so that people reading the comments will be able to see past the press release bullshit.

Why? Because I believe that people will be more rational if the right thing is obvious. A simple table of numbers and a few sentences of commentary can strip away layers of distortions and politics in a matter of seconds, if you do it right.

Essentially, it's a more elaborate and more automated version of what I did in this comment thread on Reddit: give the perspective that lazy journalists don't, and do the simple arithmetic that most journalists can't do.

It's very simple, but maybe it'll be effective. A lot of people respond well to straight talk, if they don't have a strongly-held position already.

Comment author: aausch 20 April 2010 08:42:38PM 1 point

Does anyone know of studies which measure how much of an effect access to reliable information has on decision making?

Comment author: sketerpot 21 April 2010 03:17:05AM 2 points

I'm not sure how you would even define that well enough to measure it. How do you define "access to reliable information"? Does a large, confusing web site with lots of reliable information constitute "access to reliable information"?

What I do know is that the vast majority of "renewable energy" articles are worse than worthless, because they give the average reader the illusion of understanding, while systematically distorting the facts. Case in point: every wind farm announcement I've ever seen has conflated the maximum power output with the average power output. This is off by a factor of 2.5--5, which is similar to saying that I'm 15--30 feet tall. (That's 4.5--9 meters.)
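The arithmetic behind that factor is the wind industry's "capacity factor": onshore wind farms typically deliver around 20--40% of their nameplate rating on average, and 1/0.4 through 1/0.2 is exactly the 2.5--5x range above. Here is a minimal sketch of the conversion such a summarizer would do (the function name and the example figures are illustrative, not sketerpot's actual code):

```python
def wind_summary(nameplate_mw, capacity_factor):
    """Convert a press-release 'peak' (nameplate) figure into the
    expected average output, given a capacity factor -- the fraction
    of nameplate capacity a plant actually delivers on average
    (typically 0.20-0.40 for onshore wind)."""
    avg_mw = nameplate_mw * capacity_factor
    overstatement = nameplate_mw / avg_mw  # how far off the headline is
    return avg_mw, overstatement

# A hypothetical "100 MW wind farm!" announcement, at a 30% capacity factor:
avg, factor = wind_summary(100.0, 0.30)
print(f"Average output: {avg:.0f} MW; the headline overstates it by {factor:.1f}x")
```

Run against any announcement, the same two lines of arithmetic expose the peak-versus-average conflation in seconds.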

Simply pointing this out to people can help a lot, if my experience is anything to go by. This is anecdotal evidence, I know, but it should work and it looks like it does work.

Comment author: Johnicholas 20 April 2010 12:19:49PM 8 points

3 priorities, in no particular order: support myself, become more capable, enhance rationality by publishing "seed exoshell" software.

An exoshell, as I understand it, is the software that you think with, in much the same way that you think with a piece of paper or a whiteboard. Current exoshell-ish software might include emacs ("emacs as operating system") or unix shell scripting ("go away or I will replace you with a small shell script"). Piotr Wozniak clearly uses SuperMemo as an exoshell. Mark Hurst's "Bit Literacy" annoys me more than fingernails on a chalkboard, but I think he's talking about his exoshell and life with an exoshell. Quicksilver, Maple, Mathematica, Matlab might also be candidates.

One of the problems with present exoshell-ish software is the long learning process before you get to "fully hackable". The gurus (e.g. RMS, Wozniak) achieved their close integration by gradually growing from a simpler, fully hackable version.

Comment author: XFrequentist 20 April 2010 06:27:25PM 4 points

To a fledgling computer geek, this sounds absolutely awesome, and I would love some elaboration!

Comment author: Johnicholas 20 April 2010 07:25:23PM 0 points

Well... the idea is that the tiniest exoshell would simply be one that continuously verifies / trains the user to make changes to the exoshell. So I took a standard design for a quine, and modified it so it injects a random error into the source code that it spits out.

I call it "ExoMustard". Source is at: http://www.johnicholas.com/exomustard.c

So the idea is if you took this, and repeatedly fixed it, then ran it, then fixed it, then ran it, et cetera, you would soon be comfortable adding other features. Maybe it acts as a little to-do list maintainer program, as well as its previous features. Maybe it also acts as a compiler or a shell or a virtual machine emulator - or stores recipes for how to download and install the compiler, shell, and virtual machine that you like to use.
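The error-injecting quine Johnicholas describes can be sketched in a few lines of Python (ExoMustard itself is in C; this is only an illustration of the mechanism, not his code):

```python
import random

# A Python quine with ExoMustard's twist: the program reconstructs its
# own (comment-free) source text, then replaces one randomly chosen
# character with '?' before printing (occasionally a no-op, if the
# chosen character was already '?').  To keep the cycle going you must
# find and fix the injected typo -- a tiny forced exercise in editing
# the tool you think with.
core = 'import random\n\ncore = %r\nsrc = core %% core\ni = random.randrange(len(src))\nprint(src[:i] + "?" + src[i + 1:], end="")\n'
src = core % core               # the comment-free source, reproduced exactly
i = random.randrange(len(src))  # position of the injected error
print(src[:i] + "?" + src[i + 1:], end="")
```

The `%r`/`%%` trick is the standard quine construction: the template string is substituted into itself, so `src` contains the very text that produced it.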

I've done about 10 iterations from that starting point, combining it with SQLite and adding a self-test framework and so on, but I haven't gotten to the point of using it routinely for interacting with the world.

Recently, I've started studying tiny self-compilers (Fabrice Bellard's otcc, and Edward Grimley-Evans's bcompiler and cc500). Maybe an ideal seed exoshell would have the functionality of Lennart Augustsson's 1996 IOCCC entry, the readability of Darius Bacon's Halp, and the host-independence of Rob Landley's Firmware Linux.

Comment author: gwillen 20 April 2010 01:34:37PM 2 points

Has there been a 'what is your exoshell' thread on LW yet? Would it be appropriate to have one? In the purest definition, mine is pretty small (a thousand lines of Perl or so), but if you include 'software you communicate with', which I think I do, it grows rather large, to include most of what's running on my server.

Comment author: PeerInfinity 20 April 2010 02:27:23PM 1 point

My exoshell used to be plain text files, then it was MediaWiki, now it's Google Wave.

Not just the raw text itself, but scripts to extract XML tags and other data from the text files, and do stuff with the data.

That was a relatively straightforward process with raw text files.
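For the raw-text-file stage, such a script might look something like this sketch (the `<mood>` tag, the journal layout, and the tallying are invented for illustration; the comment doesn't say which tags were actually used):

```python
import re
from collections import Counter

def extract_tags(text, tag):
    """Pull the contents of every <tag>...</tag> span out of a text file."""
    pattern = re.compile(r"<{0}>(.*?)</{0}>".format(tag), re.DOTALL)
    return pattern.findall(text)

# A made-up journal entry using a hypothetical <mood> tag:
journal = """2010-04-20: worked on the wiki all day. <mood>focused</mood>
2010-04-21: too many meetings. <mood>drained</mood>
2010-04-22: long walk, good ideas. <mood>focused</mood>"""

moods = extract_tags(journal, "mood")
print(Counter(moods))  # tally of moods, ready to feed into a graph
```

A tally like this is exactly the kind of "do stuff with the data" step that then feeds the graphs mentioned below.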

It was a bit more complicated with MediaWiki, but that seemed to work even better.

Google Wave has the advantage of collaborative editing in realtime, and some advanced search features, but it also has serious disadvantages (including there currently being no way to export to XML!). Hopefully this limitation will soon be overcome. For now I'm using the Ferry extension to export to Google Documents, and from there I batch-export to HTML.

Oh, and I recently started some experiments with scripts to extract tags from this data and make some fancy quantified-self graphs. If anyone is interested in hearing more about this, please let me know.

Comment author: aausch 20 April 2010 08:39:33PM 2 points

I think you are the first person I know of who actively uses Google Wave.

Comment author: PeerInfinity 20 April 2010 09:31:06PM 0 points

Adelene Dawner and some of my other friends use Google Wave too, but mostly to read stuff I wrote, and to comment on it, or chat live about it.

Oh, and another friend, MetaFire Horsley, aka MetaHorse, uses Google Wave to write some awesome sci-fi stories, and get live feedback on them. And that's working pretty well.

I might as well ask: does anyone else here use Google Wave? Or does anyone here have a Wave account that they're not using because they don't know anyone else who uses Wave?

oh, and here's some more information about what I'm using Wave for.

Comment author: mattnewport 20 April 2010 10:16:14PM 1 point

I've read a few attempted descriptions of what Google Wave is and have not really been able to make sense of it or understand how it might be useful to me. Several of these descriptions have admitted to difficulty expressing either its function or purpose clearly as well. I haven't been motivated to try to understand it further because I'm not aware of any problem I have which it appears to be a solution to.

Comment author: PeerInfinity 20 April 2010 10:39:54PM 0 points

The main useful feature of Wave is the realtime collaborative editing, and the ability to be instantly alerted to any update to any wave you're monitoring. There's more, but there's probably not much point for me to list all of the other features here. And I'm reluctant to try to convince other people to join, because it can be extremely addictive, and it's still kinda annoyingly glitchy, and is still missing some important features.

If you're not the sort of person who tries new things just for the sake of trying them, or if you didn't get immediately excited about Wave when you first heard about it, or you don't think you have any use for realtime collaborative editing, then you're probably better off waiting until someone you know is using Wave for something specific that you want to join in on.

and yes, it can be used as a persistent, HTML form of IRC, where you can leave or resume a conversation at any time, or return to an old branch of the conversation and visually branch it off, or even have multiple branches running simultaneously, in separate parts of the wave, rather than the different threads constantly overlapping, which always ends up happening in IRC.

Comment author: gwern 20 April 2010 10:39:28PM 0 points

As far as I can tell, it's an HTML form of IRC, but persistent.

Comment author: Vladimir_Golovin 20 April 2010 08:53:31AM 8 points

My way of asking these questions:

What is the single most important thing you should be doing?

Are you doing it?

(I'm writing the damn help file. Why? Because nobody on the team has the domain experience, writing skills, and English knowledge needed for it. Why is the help file important? Because it's the single biggest chunk of work needed for the final release of our software. Very few users read help, but those who do are important. Why release the software? Because a release of a major new version brings in additional revenue and new customers -- and because abandoning a project at 95% completion is (usually) a stupid idea. Why do we need more revenue? To explore a more mainstream, less nerdy business than our current one. Why explore a more mainstream business? I could go on and on and on, but sorry -- time to write the help file.)

Comment author: alasarod 20 April 2010 01:45:52PM 2 points

Why explore a more mainstream business? I could go on and on and on, but sorry -- time to write the help file.)


It's possible to feel meaning without those questions having a final answer. Those whys really can string on indefinitely, yet when I'm involved in a task the meaning can be apparent to me, though not in a way that language captures.

I'm not satisfied with the answer that a hidden, higher-order goal or a secondary reinforcer is at work here. I think the action of carrying out a meaningful task has meaning in itself, not something that terminates in a final "because."

Is this clumsy of me to say? I honestly don't know what value this community would place on a claim that starts with "language is unable to capture it" - sounds pretty fishy, no? Am I just giving too much credit to what is really a preference?

Comment author: PeerInfinity 20 April 2010 04:36:01PM 18 points

What am I doing?: Working at a regular job as a C++ programmer, and donating as much as possible to SIAI. And sometimes doing other useful things in my spare time.

Why am I doing it?: Because I want to make lots of money to pay for Friendly AI and existential risk research, and programming is what I'm good at.

Why do I want this?: Well, to be honest, the original reason, from several years ago, was "Because Eliezer told me to". Since then I've internalized most of Eliezer's reasons for recommending this, but this process still seems kinda backwards.

I guess the next question is "Why did I originally choose to follow Eliezer?": I started following him back when he still believed in the most basic form of utilitarianism: Maximize pleasure and minimize pain, don't bother keeping track of which entity is experiencing the pleasure or pain. Even back then, Eliezer wasn't certain that this was the value system he really wanted, but for me it seemed to perfectly fit my own values. And even after years of thinking about these topics, I still haven't found any other system that more closely matches what I actually believe. Not even Eliezer's current value system. And yes, I am aware that my value system means that an orgasmium shockwave is the best possible scenario for the future. And I still haven't found any logically consistent reason why I should consider that a bad thing, other than "but other people don't want that". I'm still very conflicted about this.

(off-topic: oh, and SPOILER: I found the "True Ending" to Three Worlds Collide severely disturbing. Destroying a whole planet full of people, just to KEEP the human ability to feel pain??? oh, and some other minor human values, which the superhappies made very clear were merely minor aesthetic preferences. That... really shook my "faith" in Eliezer's values...)

Anyway, the reason why I started following Eliezer was that even back then, he seemed like one of the smartest people on the planet, and he had a mission that I strongly believed in, and he was seriously working towards this mission, with more dedication than I had seen in anyone else. And he was seeking followers, though he made it very clear that he wasn't seeking followers in the traditional sense, but was seeking people to help him with his mission who were capable of thinking for themselves. And at the time I desperately wanted a belief system that was better than the only other belief system I knew of at the time, which was christianity. And so I basically, um... converted directly from christianity to Singularitarianism. (yes, that's deliberate noncapitalization. somehow capitalizing the word "christianity" just feels wrong...)

And now the next question: "Why am I still following Eliezer?": Basically, because I still haven't found anyone to follow who I like better than Eliezer. And I don't dare to try to start my own competing branch of Singularitarianism, staying true to Eliezer's original vision, despite his repeated warnings why this would be a bad idea... Though, um... if anyone else is interested in the idea... please contact me... preferably privately.

Another question is "What other options are worth considering?": Even if I do decide that it would be a good idea to stop following Eliezer, I definitely don't plan to stop being a transhumanist, and whatever I become instead will still be close enough to Singularitarianism that I might as well continue calling it Singularitarianism. And reducing existential risks would still be my main priority. So far the only reasons I know of to stop giving most of my income to SIAI are that maybe their mission to create Friendly AI really is hopeless, and maybe there's something else I should be doing instead. Or maybe I should be splitting my donations between SIAI and someplace else. But where? The Oxford Future of Humanity Institute? The Foresight Institute? The Lifeboat Foundation? No, definitely not the Venus Project or the Zeitgeist movement. A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks. Looking at the list of projects they're currently working on, this does sound plausible, but somehow it still feels like a bad idea to give all of the money I can spare exclusively to SIAI.

Actually, there is one other place I plan to donate to, even if SIAI says that I should donate exclusively to SIAI. Armchair Revolutionary is awesome++. Everyone reading this who has any interest at all in having a positive effect on the future, please check out their website right now, and sign up for the beta. I'm having trouble describing it without triggering a reaction of aversion to cliches, or "this sounds too good to be true", but... ok, I won't worry about sounding cliched: They're harnessing the addictive power of social games, where you earn points, and badges, and stuff, to have a significant, positive impact on the future. They have a system that makes it easy, and possibly fun, to earn points by donating small amounts (99 cents) to one or more of several projects, or by helping in other ways: taking quizzes, doing some simple research, writing an email, making a phone call, uploading artwork, and more. And the system of limiting donations to 99 cents, and limiting it to one donation per person per project, provides a way to not feel guilty about not donating more. Personally, I find this extremely helpful. I can easily afford to donate the full amount to all of these projects, and spend some time on the other things I can do to earn points, and still have plenty of money and time left over to donate to SIAI. Oh, and so far it looks like donating small amounts to a wide variety of projects generates more warm fuzzies than donating large amounts to a single project. I like that.

It would be awesome if SIAI or LW or some of the other existential-risk-reducing groups could become partners of ArmRev, and get their own projects added to the list. Someone get on this ASAP. (What's that you say? Don't say "someone should", say "I will"? Ok, fine, I'll add it to my to-do list, with all of that other stuff that's really important but I don't feel at all qualified to do. But please, I would really appreciate if someone else could help with this, or take charge of this. Preferably someone who's actually in charge at SIAI, or LW, or one of the other groups)

Anyway, there's probably lots more I could write on these topics, but I guess I had better stop writing now. This post is already long enough.

Comment author: Jack 21 April 2010 03:00:37AM 16 points

Nothing at all against SIAI but

A couple of times I asked SIAI about the idea of splitting my donations with some other group, and of course they said that donating all of the money to them would still be the most leveraged way for me to reduce existential risks.

If you're in doubt and seeking expert advice you should pick an expert that lacks really obvious institutional incentives to give one answer over others.

Regarding the rest of the comment, I found it kind of weird and something freaked me out about it, though I'm not sure quite what. That doesn't mean you're doing anything wrong; I might just have biases or assumptions that make what you're doing seem weird to me. I think it has something to do with your lack of skepticism or cynicism and the focus on looking for someone to follow that MatthewB mentioned. I guess your comment pattern-matches with things a very religious person would say; I'm just not sure if that means you're doing something wrong or if I'm having an adverse reaction to a reasonable set of behaviors because I have irrationally averse reactions to things that look religious.

Comment author: PeerInfinity 21 April 2010 04:09:24PM 8 points

Yeah, I realized that it was silly for me to ask SIAI what they thought about the idea of giving SIAI less money, but I didn't know who else to ask, and I still didn't have enough confidence in my own sanity to try to make this decision on my own. And I was kinda hoping that the people at SIAI were rational enough to give an accurate and reasonably unbiased answer, despite the institutional incentives. SIAI has a very real and very important mission, and I would have hoped that its members would be able to rationally think about what is best for the mission, rather than what is best for the group. And the possibility remains that they did, in fact, give a rational and mostly unbiased answer.

The answer they gave was that donating exclusively to SIAI was the most leveraged way to reduce existential risks. Yes, there are other groups that are doing important work, but SIAI is more critically underfunded than they are, and the projects that we (yes, I said "we", even though I'm "just" a donor) are working on this year are critical for figuring out what the most optimal strategies would be for humanity/transhumanity to maximize its probability of surviving into a post-Singularity future.

heh, one of these projects they're finally getting around to working on this year is writing a research paper examining how much existential-risk reduction you get for each dollar donated to SIAI. That's something I've really been wanting to know, and had actually been feeling kinda guilty about not making more of an effort to try to figure out on my own, or at least to try to get a vague estimate, to within a few orders of magnitude. And I had also been really annoyed that no one more qualified than me had already done this. But now they're finally working on it. yay :)

Someone from SIAI, please correct me if I'm wrong about any of this.

And yes, my original comment seemed weird to me too, and kinda freaked me out. But I think it would have been a bad idea to deliberately avoid saying it, just because it sounds weird. If what I'm doing is a bad idea, then I need to figure this out, and find what I should be doing instead. And posting comments like this might help with that. Anyway, I realize that my way of thinking sounds weird to most people, and I don't make any claim that this is a healthy way to think, and I'm working on fixing this.

And as I mentioned in another comment, it would just feel wrong to deliberately not say this stuff, just because it sounds weird and might make SIAI look bad. But that kind of thinking belongs to the Dark Arts, and is probably just a bad habit I had left over from christianity, and isn't something that SIAI actually endorses, afaik.

And I do, in fact, have lots of skepticism and cynicism about SIAI, and their mission, and the people involved. This skepticism probably would have caused me to abandon them and their mission long ago... if I had had somewhere better to go instead, or a more important mission. But after years of looking, I haven't found any cause more important than existential risk reduction, and I haven't found any group working towards this cause more effectively than SIAI, except possibly for some of the other groups I mentioned, but a preliminary analysis shows that they're not actually doing any better than SIAI. And starting my own group still looks like a really silly idea.

And yes, I'm aware that I still seem to talk and think like a religious person. I was raised as a christian, and I took christianity very seriously. Seriously enough to realize that it was no good, and that I needed to get out. And so I tried to replace my religious fanaticism with what's supposed to be an entirely non-religious and non-fanatical cause, but I still tend to think and act both religiously and fanatically. I'm working on that.

I also have an averse reaction to things that look religious. This is one of the many things causing me to have trouble with self-hatred. Anyway, I'm working on that.

Oh, and one more comment about cynicism: I currently think that 1% is an optimistic estimate of the probability that humanity/transhumanity will survive into a positive post-Singularity future, but it's been a while since I reviewed why I believe this. Another thing to add to my to-do list.

Comment author: PhilGoetz 23 April 2010 07:26:06PM 6 points

somehow it still feels like a bad idea to give all of the money I can spare exclusively to SIAI.

If you were Bill Gates, that might be a valid concern. (The "exclusively" part, not the "SIAI" part.)

Otherwise, it's most efficient to donate to just one cause. Especially if you itemize deductions.

Comment author: blogospheroid 21 April 2010 06:51:42AM 3 points

The way the world is right now, and the incentives of the people in power under the current structure, do not seem optimal to me.

There are so many obvious things that could be done that are not being done right now, e.g. competition in the space of governments. Proposing solutions to many present problems of the world does not require a superintelligence. Economists do that every day. But untangling the entire mess of incentives, power and leverage so that these formerly simple, but now complicated, solutions could be implemented requires a superintelligence.

This superintelligence needs to be benevolent today and tomorrow. I have not found a better goal structure than CEV which can maintain this benevolence. SingInst has openly written that they are open to better goal systems. If I find something better, I will move my support there.

Comment author: PeerInfinity 21 April 2010 04:52:32PM 2 points

I agree.

I'm aware that there are problems with CEV (mainly: we're probably not going to have enough time to figure out how to actually implement it before the Singularity, and CEV includes only the volition of humanity, which means that there may be a risk of the CEV allowing arbitrary amounts of cruelty to entities that don't qualify as "human").

Anyway, I'm aware that there are problems with CEV, but I still don't know of any better plan.

Because of the extreme difficulty of actually implementing CEV, I am tempted to advocate the backup plan of coding a purely Utilitarian AI, maximizing pleasure and minimizing pain. An orgasmium shockwave is better than a lifeless universe. The idea would be to not release this AI unless it looks like we're running out of time to implement CEV, but if we are running out of time, then we're not likely to get much warning that we're running out of time. And then there's the complication that according to my current belief system (which I'm still very conflicted about) the orgasmium shockwave scenario is actually better than the CEV scenario, since it would result in greater total utility. But I'm nowhere near confident enough about this to actually advocate the plan of deliberately releasing a pure Utilitarian AI. And this plan has its own dangers, like... shudder... what if we get the utility formula wrong?

Oh, and one random idea I had to make CEV easier to actually implement: remove the restriction of the CEV not being allowed to simulate sentient minds. Just try to make sure that these sentient minds have at least a minimum standard of living. Or, if that's too hard, and you somehow need to simulate minds that are actually suffering, you could save a backup copy of them, rather than deleting them, and after the CEV has finished applying its emergency first-aid to the human condition, you can reawaken these simulated minds, and give them full rights as citizens. There should be more than enough resources available in the universe for these minds to live happy, fulfilling lives. They might even be considered heroes, who endured a few moments of discomfort, and existential confusion, in order to help bring about a positive post-Singularity future.

But still it somehow feels wrong for me to suggest a plan that involves the suffering of others. If it makes anyone feel any better about this suggestion, then I, personally, volunteer to experience a playback of a recording of all of the unpleasant experiences that these simulated minds have experienced, while the CEV was busy doing its thing. There, now I'm not heartlessly advocating a plan that involves the suffering of others, but no harm to myself. And I'm expecting that the amount of this suffering would be small enough that the amount of pleasure I could experience in the rest of my life, after I'm finished experiencing this playback, would vastly outweigh the suffering. It would be nice if there would be some way to guarantee this, but that would make the system more complicated, and the whole point of all this was to make the system less complicated.

Comment author: PhilGoetz 23 April 2010 07:28:30PM 2 points

CEV is too vague to call a plan. It bothers me that people are dedicating themselves to pursuing a goal that hasn't yet been defined.

Comment author: Strange7 27 April 2010 02:53:54AM 1 point

That was part of my motivation for proposing an alternative.

Comment author: MatthewB 21 April 2010 02:22:57AM 6 points

It may just be me, but why do you need to find someone to follow?

I have always found forging my own path through the wilderness to be far more enjoyable, and to yield far greater rewards, than following a path, no matter how small or large that path may be.

Comment author: PeerInfinity 21 April 2010 02:36:51PM 10 points

Well, one reason why I feel that I need someone to follow is... severe underconfidence in my ability to make decisions on my own. I'm still working on that. Choosing a person to follow, and then following them, feels a whole lot easier than forging my own path.

I should mention again that I'm not actually "following" Eliezer in the traditional sense. I used his value system to bootstrap my own value system, greatly simplifying the process of recovering from christianity. But now that I've mostly finished with that (or maybe I'm still far from finished?), I am, in fact, starting to think independently. It's taking a long time for me to do this, but I am constantly looking for things that I'm doing or believing just because someone else told me to, and then reconsidering whether these things are a good idea, according to my current values and beliefs. And yes, there are some things I disagree with Eliezer about (the "true ending" to TWC, for example), and things that I disagree with SIAI about ("we're the only place worth donating to", for example). I'll probably start writing more about this, now that I'm starting to get over my irrational fear of posting comments here.

Though part of me is still worried about making SIAI look bad. And I'm still worried that the stuff I've already posted may end up harming SIAI's mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn't actually something I was told by anyone from SIAI, I think it was a bad habit I brought with me from christianity.

Comment author: Nick_Tarleton 21 April 2010 02:58:20PM 4 points

And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts.

If by 'dark arts' you mean 'non-rational methods of persuasion', such things may be ethically questionable (in general; not volunteering information you aren't obligated to provide almost certainly isn't) but are not (categorically) wrong. Rational agents win.

Comment author: khafra 21 April 2010 03:46:58PM 11 points

I like the way steven0461 put it:

...promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever. Specifically, don’t do it to yourself.

Comment author: PeerInfinity 21 April 2010 04:59:52PM 1 point

I think I agree with both khafra and Nick.

I like this quote, and I've used it before in conversations with other people.

Comment author: RobinZ 21 April 2010 02:45:32PM 2 points

I think it's worth distinguishing between "underconfidence" and "lack of confidence" - the former implies the latter (although not absolutely), but under some circumstances you are justified in questioning your competence. Either way, it sounds like you're working on both ends of that balance, which is good.

Though part of me is still worried about making SIAI look bad. And I'm still worried that the stuff I've already posted may end up harming SIAI's mission (and my mission) more than it could possibly have helped. Though of course it would be a bad idea to try to hide problems that need to be examined and dealt with. And the idea of deliberately trying to hide information just feels wrong. It feels like Dark Arts. I should also mention that the idea of deliberately not saying things, in order to avoid making the group look bad, isn't actually something I was told by anyone from SIAI, I think it was a bad habit I brought with me from christianity.

I think this is good thinking.

Comment author: PeerInfinity 21 April 2010 04:57:13PM 2 points [-]

good point about underconfidence versus lack of confidence, thanks

Comment author: MatthewB 22 April 2010 07:17:39AM 0 points [-]

That puts it into an understandable context... I can't quite relate to having to shake off Christian beliefs. I was raised by a tremendously religious mother, but at about the age of 6 I began to question her beliefs, and by 14 I was sure that she was stark raving mad to believe what she did. So I managed to avoid being brainwashed in the first place.

I've seen the results of people who have been brainwashed and who have not managed to break completely free from their old beliefs. Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)... So, what you are doing is probably best for the time being, until you learn the tools needed to step off into the wilderness by yourself.

Comment author: PeerInfinity 22 April 2010 03:18:19PM 11 points [-]

In my case, I knew pretty much from the beginning that something was seriously wrong. But since every single person I had ever met was a christian (with a couple of exceptions I didn't realize until later), I assumed that the problem was with me. The most obvious problem, at least for me, was that none of the so-called christians was able to clearly explain what a christian is, and what it is that I need to do in order to not go to hell. And the people who came closest to being able to give a clear explanation, they were all different from each other, and the answer changed if I asked different questions. So I guess I was... partly brainwashed. I knew that there was something really important I was supposed to do, and that people's souls were at stake (a matter of infinite utility/anti-utility!) but no one was able to clearly explain what it was that I was supposed to do. But they expected me to do it anyway, and made it sound like there was something wrong with me for not instinctively knowing what it was that I was supposed to do. There's lots more I could complain about, but I guess I had better stop now.

So it was pretty obvious that I wasn't going to be able to save anyone's soul by converting them to christianity by talking to them. And I was also similarly unqualified for most of the other things that christians are supposed to do. But there was still one thing I saw that I could do: living as cheaply as possible, and donating as much money as possible to the church so that the people who claim to actually know what they're doing can just get on with doing it. And just being generally helpful when there was some simple everyday thing I could be helpful with.

Anyway, it wasn't until I went to university that I actually met any atheists who openly admitted to being atheists. Before then, I had heard that there was such a thing as an atheist, and that these were the people whose souls we were supposed to save by converting them to christianity, but Pascal's Wager prevented me from seriously considering becoming an atheist myself. Even if you assign a really tiny probability to christianity being true, converting to atheism seemed like an action with an expected utility of negative infinity. But then I overheard a conversation in the Computer Science students' lounge. That-guy-who-isn't-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism. An atheistic belief system, based entirely on a rational, scientific worldview, to which Pascal's Wager could be applied. (there is an unknown probability that this universe can support an infinite amount of computation, therefore there is an unknown probability that actions can have infinite positive or negative utility.) I immediately realized that I wanted to convert to this belief system. But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism. And since then I haven't had any desire at all to switch back to christianity. Though I was afraid that, because of my inability to stand up to authority figures, someone might end up convincing me to convert back to christianity against my will. Even now, years later, there are scary situations, when dealing with an authority figure who is a christian, where part of me still sometimes thinks "OMG maybe I really was wrong about all this!"

Anyway, I'm still noticing bad habits from christianity, and I'm still working on fixing them. Also, I might be oversensitive to noticing things that are similar between christianity and Singularitarianism. For example, the expected utility of "converting" someone to Singularitarianism. Though in this case you're not guaranteeing that one soul is saved, you're slightly increasing the probability that everyone gets "saved", because there is now one more person helping the efforts to help us achieve a positive Singularity.

Oh, and now, after reading LW, I realize what's wrong with Pascal's Wager, and even if I found out for certain that this universe isn't capable of supporting an infinite amount of computation, I still wouldn't be tempted to convert back to christianity.
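One way to make the problem with the naive infinite-stakes argument concrete is a toy expected-value calculation (all the numbers below are made-up, purely for illustration): once even one outcome carries "infinite" utility, the arithmetic needed to compare options stops being well defined.

```python
import math

# Toy version of the naive infinite-expected-utility wager.
# The probability and utilities here are illustrative assumptions, not claims.
p_hell = 0.001                # tiny assumed probability the threatened hell is real
u_hell = float('-inf')        # "infinite" negative utility of that outcome

# Any nonzero probability times negative infinity is still negative infinity:
ev_convert_to_atheism = p_hell * u_hell
print(ev_convert_to_atheism)  # -inf

# But the decision rule breaks as soon as a second infinite wager appears,
# e.g. some rival belief system that threatens infinite punishment for staying.
ev_stay = 0.001 * float('-inf')
difference = ev_convert_to_atheism - ev_stay  # (-inf) - (-inf) is undefined
print(math.isnan(difference))                 # True
```

So "expected utility of negative infinity" cannot actually rank the options once more than one such wager is on the table, which is one standard reading of what goes wrong with Pascal's Wager.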

Random trivia: I sometimes have dreams where a demon, or some entirely natural thing that for some reason is trying to look like a demon, is trying to trick or scare me into converting back to christianity. And then I discover that the "demon" was somehow sent by someone I know, and end up not falling for it. I find this amusingly ironic.

As usual, there's lots more I could write about, but I guess I had better stop writing for now.

Comment author: cousin_it 23 April 2010 08:26:44AM *  19 points [-]

But it took me a few weeks of swinging back and forth before I finally settled on Singularitarianism.

Here's a quote from an old revision of Wikipedia's entry on The True Believer that may be relevant here:

A core principle in the book is Hoffer's insight that mass movements are interchangeable; he notes fanatical Nazis later becoming fanatical Communists, fanatical Communists later becoming fanatical anti-Communists, and Saul, persecutor of Christians, becoming Paul, a fanatical Christian. For the true believer the substance of the mass movement isn't so important as that he or she is part of that movement.

And from the current revision of the same article:

Hoffer quotes extensively from leaders of the Nazi and communist parties in the early part of the 20th Century, to demonstrate, among other things, that they were competing for adherents from the same pool of people predisposed to support mass movements. Despite the two parties' fierce antagonism, they were more likely to gain recruits from their opposing party than from moderates with no affiliation to either.

Can't recommend this book enough, by the way.

Comment author: PeerInfinity 23 April 2010 06:17:26PM *  13 points [-]

Thanks for the link, and the summary. Somehow I don't find that at all surprising... but I still haven't found any other cause that I consider worth converting to.

At the time I converted, Singularitarianism was nowhere near a mass movement. It consisted almost entirely of the few of us in the SL4 mailing list. But maybe the size of the movement doesn't actually matter.

And it's not "being part of a movement" that I value, it's actually accomplishing something important. There is a difference between a general pool of people who want to be fanatical about a cause, just for the emotional high, and the people who are seriously dedicated to the cause itself, even if the emotions they get from their involvement are mostly negative. This second group is capable of seriously examining their own beliefs, and if they realize that they were wrong, they will change their beliefs. Though as you just explained, the first group is also capable of changing their minds, but only if they have another group to switch to, and they do this mostly for social reasons.

Seriously though, the emotions I had towards christianity were mostly negative. I just didn't fit in with the other christians. Or with anyone else, for that matter. And when I converted to Singularitarianism, I didn't exactly get a warm welcome. And when I converted, I earned the disapproval of all the christians I know. Which is pretty much everyone I have ever met in person. I still have not met any Singularitarian, or even any transhumanist, in person. And I've only met a few atheists. I didn't even have much online interaction with other transhumanists or Singularitarians until very recently. I tried to hang out in the SL4 chatroom a few years ago, but they were openly hostile to the way I treated Singularitarianism as another belief system to convert to, another group to be part of, rather than... whatever it is that they thought they were doing instead. And they didn't seem to have a high opinion of social interaction in general. Or maybe I'm misremembering this.

Anyway, I spent my first approximately 7 years as a Singularitarian in almost complete isolation. I was afraid to request social interaction for the sake of social interaction, because somehow I got the idea that every other Singularitarian was so totally focused on the mission that they didn't have any time at all to spare to help me feel less lonely, and so I should either just put up with the loneliness or deal with it on my own, without bothering any of the other Singularitarians for help. The occasional attempt I made to contact some of the other Singularitarians only further confirmed this theory. I chose the option of just putting up with the loneliness. That may have been a bad decision.

And just a few weeks ago, I found out that I'm "a valued donor" to SIAI. Though I'm still not sure what this means. And I found out that other Singularitarians do, in fact, socialize just for the sake of socializing. And I found out that most of them spend several hours a day "goofing off". And that they spend a significant percentage of their budget on luxuries that technically they could do without, without having a significant effect on their productivity. And that most of them live generally happy, productive, and satisfying lives. And that it was silly of me to feel guilty for every second and every penny that I wasted on anything that wasn't optimally useful for the mission. In addition to the usual reasons why feeling guilty is counterproductive.

Anyway, things are finally starting to get better now, and I don't think I'll accomplish anything by complaining more.

Also, most of this was probably my own fault. It turns out that everyone living at the SIAI house was totally unaware of my situation. And this is mostly my fault, because I was deliberately avoiding contacting them, because I was afraid to waste their time. And wasting the time of someone who's trying to save the universe is a big no-no. I was also afraid that if I tried to contact them, then they would ask me to do things that I wasn't actually able to do, but wouldn't know for sure that I wasn't able to do, and would try anyway because I felt like giving up wasn't an option. And it turns out this is exactly what happened. A few months ago I contacted Michael Vassar, and he started giving me things to help with. I made a terrible mess out of trying to arrange the flights for the speakers at the 2009 Singularity Summit. And then I went back to avoiding any contact with SIAI. Until Adelene Dawner talked to them for me, without me asking her to. Thanks Ade :)

Um... one other thing I just realized... well, actually Adelene Dawner just mentioned it in Wave, where I was writing a draft of this post... the reason why I haven't been trying to socialize with people other than Singularitarians is... I was afraid that anyone who isn't a Singularitarian would just write off my fanaticism as general insanity, and therefore any attempt to socialize with non-Singularitarians would just end up making the Singularitarian movement look bad... I already wrote about how this is a bad habit I carried with me from christianity. It's strange that I hadn't actually spent much time thinking about this, I just somehow wrote it off as not an option, to try to socialize with non-Singularitarians, and ended up just not thinking about it after that. I still made a few careful attempts at socializing with non-Singularitarians, but the results of these experiments only confirmed my suspicions.

Oh, and another thing I just realized: Confirmation Bias. These experiments were mostly invalid, because they were set up to detect confirming evidence of my suspicions, but not set up to be able to falsify them. oops. I made the same mistake with my suspicions that normal people wouldn't be able to accept my fanatical Singularitarianism, my suspicions that the other Singularitarians are all so totally focused on the mission that they don't have any time at all for socializing, and also my suspicions that my parents wouldn't be able to accept my atheism. yeah, um, oops. So I guess it would be really silly of me to continue blaming this situation on other people. Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.

There's probably more I could say, but I'll stop writing now.

Comment author: PeerInfinity 23 April 2010 08:10:25PM 8 points [-]

um... after reviewing this comment, I realize that the stuff I wrote here doesn't actually count as evidence that I don't have True Believer Syndrome. Or at least not conclusive evidence.

oh, and did I mention yet that I also seem to have some form of Saviour Complex? Of course I don't actually believe that I'm saving the world through my own actions, but I seem to be assigning at least some probability that my actions may end up making the difference between whether our efforts to achieve a positive Singularity succeed or fail.

but... if I didn't believe this, then I wouldn't bother donating, would I?

Do other people manage to believe that their actions might result in making the difference between whether the world is saved or not, without it becoming a Saviour Complex?

Comment author: cousin_it 24 April 2010 05:17:31AM *  5 points [-]

PeerInfinity, I don't know you personally and can't tell whether you have True Believer Syndrome. I'm very sorry for provoking so many painful thoughts... Still. Hoffer claims that the syndrome stems from lack of self-esteem. Judging from what you wrote, I'd advise you to value yourself more for yourself, not only for the faraway goals that you may someday help fulfill.

Comment author: PeerInfinity 24 April 2010 10:18:06PM *  3 points [-]

no need to apologise, and thanks for pointing out this potential problem.

(random trivia: I misread your comment three times, thinking it said "I know you personally can't tell whether you have True Believer Syndrome")

as for the painful thoughts... It was a relief to finally get them written down, and posted, and sanity-checked. I made a couple attempts before to write this stuff down, but it sounded way too angry, and I didn't dare post it. And it turns out that the problem was mostly my fault after all.

oh, and yeah, I am already well aware that I have dangerously low self-esteem. but if I try to ignore these faraway goals, then I have trouble seeing myself as anything more valuable than "just another person". Actually I often have trouble even recognizing that I qualify as a person...

also, an obvious question: are we sure that True Believer Syndrome is a bad thing? or that a Saviour Complex is a bad thing?

random trivia: now that I've been using the City of Lights technique for so long, I have trouble remembering not to use a plural first-person pronoun when I'm talking about introspective stuff... I caught myself doing that again as I checked over this comment.

Comment author: cupholder 23 April 2010 08:35:18PM 3 points [-]

Maybe instead of imagining your actions as having some probability of 'making the difference,' try thinking of them as slightly boosting the probability of a positive singularity?

At any rate, the survival of someone wheeled in through the doors of a hospital might depend on the EMTs, the nurses, the surgeons, the lab techs, the pharmacists, the janitors and so on and so on. I'd say they're all entitled to take a little credit without being accused of having a savior complex!

Comment author: PeerInfinity 23 April 2010 08:43:41PM 1 point [-]

um... can you please explain what the difference is between "having some probability X of making the difference between success and failure, of achieving a positive Singularity" and "boosting the probability of a positive Singularity, by some amount Y"? To me, these two statements seem logically equivalent. Though I guess they focus on different details...

oh, I just noticed one obvious difference: X is not equal to Y
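The relationship between X and Y can be made concrete with a small Monte Carlo sketch (the probabilities below are made-up, purely for illustration). Whether the two framings coincide depends on how the "with donation" and "without donation" worlds are assumed to be correlated; under the simplest assumption, where both worlds share the same underlying random draw, the chance of being pivotal works out to exactly the probability boost.

```python
import random

random.seed(0)
TRIALS = 100_000

p_without = 0.010  # assumed baseline probability of a positive Singularity
p_with    = 0.012  # assumed probability with one extra donor's contribution

# Framing 1: "boosting the probability" -- Y is just the difference.
boost = p_with - p_without  # Y = 0.002

# Framing 2: "probability of making the difference" -- X. With a shared
# random draw for both worlds, the donation is pivotal exactly when the
# draw falls between the two thresholds, so X should equal Y.
pivotal = 0
for _ in range(TRIALS):
    u = random.random()
    succeeds_with = u < p_with
    succeeds_without = u < p_without
    if succeeds_with and not succeeds_without:
        pivotal += 1

x_estimate = pivotal / TRIALS
print(f"Y (probability boost):        {boost:.4f}")
print(f"X (estimated pivotal chance): {x_estimate:.4f}")
```

Under other correlation assumptions (say, independent draws in the two worlds) X and Y come apart, so both readings of the exchange above are defensible; the sketch only shows the simplest case where they match.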

Comment author: AdeleneDawner 23 April 2010 06:24:38PM 2 points [-]

Yes, it may have been theoretically possible for someone else to notice and fix these problems, but I was deliberately taking actions that ended up preventing them from having a chance to do so.

Nitpick for clarity's sake: I've seen no evidence that this was deliberate in the sense implied, and I would expect to have seen such evidence if it did exist. It may have been deliberate or quasi-deliberate for some other reason, such as social anxiety (which I have seen evidence of).

Comment author: PeerInfinity 23 April 2010 06:28:14PM 2 points [-]

er, yes, that's what I meant. sorry for the confusion. I wasn't deliberately trying to prevent anyone from helping, I was deliberately trying to avoid wasting their time, by having no contact with them, which prevented them from being able to help.

Comment author: NancyLebovitz 23 April 2010 12:45:47PM 4 points [-]

I've heard from an ex-fundamentalist that for some people, conversion is a high in itself (I don't know if this is mostly true for Christians, or applies to movements in general). In any case, he said the high lasts for about two years, and then wears off, so that those people then convert to something else.

Comment author: juliawise 24 September 2011 07:25:20PM 2 points [-]

Huh. I knew this was true of me, but didn't realize it was common. I went from being an extreme Christian at 11 to an extreme utilitarian by about 14 (despite not knowing people who were extreme about either thing).

Comment author: Utilitarian 23 April 2010 06:02:12AM *  5 points [-]

PeerInfinity, I'm rather struck by a number of similarities between us:

  • I, too, am a programmer making money and trying to live frugally in order to donate to high-expected-value projects, currently SIAI.
  • I share your skepticism about the cause and am not uncomfortable with your 1% probability of positive Singularity. I agree SIAI is a good option from an expected-value perspective even if the mainline-probability scenario is that these concerns won't materialize.
  • As you might guess from my user name, I'm also a Utilitronium-supporting hedonistic utilitarian who is somewhat alarmed by Eliezer's change of values but who feels that SIAI's values are sufficiently similar to mine that it would be unwise to attempt an alternative friendly-AI organization.
  • I share the seriousness with which you regard Pascal's wager, although in my case, I was pushed toward religion from atheism rather than the other way around, and I resisted Christian thinking the whole time I tried to subscribe to it. I think we largely agree in our current opinions on the subject. I do sometimes have dreams about going to the Christian hell, though.

I'm not sure if you share my focus on animal suffering (since animals outnumber current humans by orders of magnitude) or my concerns about the implications of CEV for wild-animal suffering. Because of these concerns, I think a serious alternative to SIAI in cost-effectiveness is to donate toward promoting good memes like concern about wild animals (possibly including insects) so that, should positive Singularity occur, our descendants will do the right sorts of things according to our values.

Comment author: PeerInfinity 23 April 2010 04:26:19PM 3 points [-]

Hi Utilitarian!

um... are you the same guy who wrote those essays at utilitarian-essays.com? If you are, we have already talked about these topics before. I'm the same Peer Infinity who wrote that "interesting contribution" on Singularitarianism in that essay about Pascal's Wager, the one that tried to compare the different religions to examine which of them would be the best to Wager on.

And, um... I used to have some really nasty nightmares about going to the christian hell. But then, surprisingly, these nightmares somehow got replaced with nightmares of a hell caused by an Evil AI. And then these nightmares somehow got replaced with nightmares about the other hells that modal realism says must already exist in other universes.

I totally agree with you that the suffering of humans is massively outweighed by the suffering of other animals, and possibly insects, by a few orders of magnitude (I forget how many exactly, but I think it was less than 10). But I also believe that the amount of positive utility that could be achieved through a positive Singularity is... I think it was about 35 orders of magnitude more than all of the positive or negative utility that has been experienced so far in the entire history of Earth. But I don't remember the details of the math. For a few years now I was planning to write about that, but somehow never got around to it. Well, actually, I did make one feeble attempt to do the math, but that post didn't actually make any attempt to estimate how many orders of magnitude were involved.

Oh, and I totally share your concerns about the possible implications of CEV. Specifically, that it might end up generating so much negative utility that it outweighs the positive utility, which would mean that a universe completely empty of life would be preferable.

Oh, and I know one other person who shares your belief that promoting good memes like concern about wild animals would be more cost effective than donating to Friendly AI research. He goes by the name MetaFire Horsley in Second Life, and by the name MetaHorse in Google Wave. I have spent lots of time discussing this exact topic with him. I agree that spreading good memes is totally a good idea, but I remain skeptical about how much leverage we could get out of this plan, and I suspect that donating to Friendly AI research would be a lot more leveraged. But it's still totally a good idea to spread positive memes in your spare time, whenever you're in a situation that gives you an opportunity to do some positive meme spreading. MetaHorse is currently working on some sci-fi stories that he hopes will be useful for spreading these positive memes. He writes these stories in Google Wave, which means that you can see him writing the stories in real-time, and give instant feedback. I really think it would be a good idea for you to get in contact with him. If you don't already have a Google Wave account, please send me your gmail address in a private email, and I'll send you a Wave invite.

Oh, and I'm still really confused about how CEV is supposed to work. It seems like it's supposed to take into account our beliefs that the suffering of animals, or any sentient creatures, is unacceptable, and consider that as a source of decoherence if someone else advocates an action that would result in suffering. And apparently it's not supposed to just average out everyone's preferences, it's supposed to... I don't know what, exactly, but it's supposed to have the same or better results than if we spent lots and lots of time talking with the people who would advocate suffering, and we all learned more, were smarter, and "grew up further together", whatever that means, and other stuff. And that sounds nice in theory, but I'm still waiting for a more detailed specification. It's been a few years since the original CEV document was published, and there haven't been any updates at all. Well, other than Eliezer's posts to LW.

Oh, and I read all of your essays (yes, all of them, though I only skimmed that really huge one that listed lots of numbers for the amount of suffering of animals) a few months ago, and we chatted about them briefly. Though that was long enough ago that it would probably be a good idea for me to review them.

Anyway, um... keep up the good work, I guess, and thanks for the feedback. :)

Comment author: Utilitarian 25 April 2010 11:04:41AM *  5 points [-]

Bostrom's estimate in "Astronomical Waste" is "10^38 human lives [...] lost every century that colonization of our local supercluster is delayed," given various assumptions. Of course, there's reason to be skeptical of such numbers at face value, in view of anthropic considerations, simulation-argument scenarios, etc., but I agree that this consideration probably still matters a lot in the final calculation.

Still, I'm concerned not just with wild-animal suffering on earth but throughout the cosmos. In particular, I fear that post-humans might actually increase the spread of wild-animal suffering through directed panspermia or lab-universe creation or various other means. The point of spreading the meme that wild-animal suffering matters and that "pristine wilderness" is not sacred would largely be to ensure that our post-human descendants place high ethical weight on the suffering that they might create by doing such things. (By comparison, environmental preservationists and physicists today never give a second thought to how many painful experiences are or would be caused by their actions.)

As far as CEV, the set of minds whose volitions are extrapolated clearly does make a difference. The space of ethical positions includes those who care deeply about sorting pebbles into correct heaps, as well as minds whose overriding ethical goal is to create as much suffering as possible. It's not enough to "be smarter" and "more the people we wished we were"; the fundamental beliefs that you start with also matter. Some claim that all human volitions will converge (unlike, say, the volitions of humans and the volitions of suffering-maximizers); I'm curious to see an argument for this.

Comment author: Nick_Tarleton 25 April 2010 10:18:16PM 3 points [-]

Some claim that all human volitions will converge

Who are you thinking of? (Eliezer is frequently accused of this, but has disclaimed it. Note the distinction between total convergence, and sufficient coherence for an FAI to act on.)

Comment author: PeerInfinity 25 April 2010 08:20:17PM *  3 points [-]

(edit: The version of utilitarianism I'm talking about in this comment is total hedonic utilitarianism. Maximize the total amount of pleasure, minimize the total amount of pain, and don't bother keeping track of which entity experiences the pleasure or pain. A utilitronium shockwave scenario based on preference utilitarianism, and without any ethical restrictions, is something that even I would find very disturbing.)

I totally agree!!!

Astronomical waste is bad! (or at least, severely suboptimal)

Wild-animal suffering is bad! (no, there is nothing "sacred" or "beautiful" about it. Well, ok, you could probably find something about it that triggers emotions of sacredness or beauty, but in my opinion the actual suffering massively outweighs any value these emotions could have.)

Panspermia is bad! (or at least, severely suboptimal. Why not skip all the evolution and suffering and just create the end result you wanted? No, "This way is more fun", or "This way would generate a wider variety of possible outcomes" are not acceptable answers, at least not according to utilitarianism.)

Lab-universes have great potential for bad (or good), and must be created with extreme caution, if at all!

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

I also agree with your concerns about CEV.

Though of course we're talking about all this as if there is some objective validity to Utilitarianism, and as Eliezer explained: (warning! the following sentence is almost certainly a misinterpretation!) You can't explain Utilitarianism to a rock, therefore Utilitarianism is not objectively valid.

Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe. Well, indirectly it's a fact about the universe, because these beliefs were generated by a process that involves observing the universe. We observe that pleasure really does feel good, and that pain really does feel bad, and therefore we want to maximize pleasure and minimize pain. But not everyone agrees with us. Eliezer himself doesn't even agree with us anymore, even though some of his previous writing implied that he did before. (I still can't get over the idea that he would consider it a good idea to kill a whole planet just to PREVENT an alien species from removing the human ability to feel pain, and a few other minor aesthetic preferences. Yeah, I'm so totally over any desire to treat Eliezer as an Ultimate Source of Wisdom...)

Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with. I still don't see how this could be possible, but maybe that's just a result of my own ignorance. And then there's the extreme difficulty of actually implementing CEV...

And no, I still don't claim to have a better plan. And I'm not at all comfortable with advocating the creation of a purely Utilitarian AI.

Your plan of trying to spread good memes before the CEV extrapolates everyone's volition really does feel like a good idea, but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation. I suspect that if you can't incorporate this process into CEV somehow, then any other possible strategy must involve cheating somehow.

Oh, I had another conversation recently on the topic of whether it's possible to convince a rational agent to change its core values through rational discussion alone. I may be misinterpreting this, but I think the conversation was inconclusive. The other person believed that... er, wait, I think we actually agreed on the conclusion, but didn't notice at the time. The conclusion was that if an agent's core values are inconsistent, then rational discussion can cause the agent to resolve this inconsistency. But if two agents have different core values, and neither agent has internally inconsistent core values, then neither agent can convince the other, without cheating. There's also the option of trading utilons with the other agent, but that's not the same as changing the other agent's values.

Anyway, I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%. Not because I have any specific evidence about this, but as a result of applying the Pessimistic Prior. (Is that a standard term?)

Anyway, if this is the case, then the CEV algorithm will end up resulting in the outcome that you wanted. Specifically, an end to all suffering, and some form of utilitronium shockwave.

Oh, and I should point out that the utilitronium shockwave doesn't actually require the murder of everyone now living. Surely even us hardcore utilitarians should be able to afford to leave one planet's worth of computronium for the people now living. Or one solar system's worth. Or one galaxy's worth. It's a big universe, after all.

Oh, and if it turns out that some people's value systems would make them terribly unsatisfied to live without the ability to feel pain, or with any of the other brain modifications that a utilitarian might recommend... then maybe we could even afford to leave their brains unmodified. Just so long as they don't force any other minds to experience pain. Though the ethics of who is allowed to create new minds, and what sorts of new minds they're allowed to create... is kinda complicated and controversial.

Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

Oh, and maybe there should also be rules against creating a mind that's forced to be wireheaded. There will be some complex and controversial issues involved in the design of the optimally efficient form of utilitronium that doesn't involve any ethical violations. One strategy that might work is a cross between the utilitronium scenario and the Solipsist Nation scenario. That is, anyone who wants to retreat entirely into solipsism, let them do their own experiments with what experiences generate the most utility. There's no need to fill the whole universe with boring, uniform bricks of utilitronium that contain minds that consist entirely of an extremely simple pleasure center, endlessly repeating the same optimally pleasurable experience. After all, what if you missed something when you originally designed the utilitronium that you were planning to fill the universe with? What if you were wrong about what sorts of experiences generate the most utility? You would need to allocate at least some resources to researching new forms of utilitronium, why not let actual people do the research? And why not let them do the research on their own minds?

I've been thinking about these concepts for a long time now. And this scenario is really fun for a solipsist utilitarian like me to fantasize about. These concepts have even found their way into my dreams. One of these dreams was even long, interesting, and detailed enough to make into a short story. Too bad I'm no good at writing. Actually, that story I just linked to is an example of this scenario going bad...

Anyway, these are just my thoughts on these topics. I have spent lots of time thinking about them, but I'm still not confident enough about this scenario to advocate it too seriously.

Comment author: thomblake 27 April 2010 01:40:52PM 5 points [-]

Your comments are tending to be a bit too long.

Comment author: Utilitarian 27 April 2010 05:49:28AM 3 points [-]

Environmental preservationists... er, no, I won't try to make any fully general accusations about them. But if they succeed in preserving the environment in its current state, that would involve massive amounts of suffering, which would be bad!

Indeed. It may be rare among the LW community, but a number of people actually have a strong intuition that humans ought to preserve nature as it is, without interference, even if that means preserving suffering. As one example, Ned Hettinger wrote the following in his 1994 article, "Bambi Lovers versus Tree Huggers: A Critique of Rolston's Environmental Ethics": "Respecting nature means respecting the ways in which nature trades values, and such respect includes painful killings for the purpose of life support."

Or, more accurately, our belief in utilitarianism is a fact about ourselves, not a fact about the universe.

Indeed. Like many others here, I subscribe to emotivism as well as utilitarianism.

Anyway, CEV is supposed to somehow take all of these details into account, and somehow generate an outcome that everyone will be satisfied with.

Yes, that's the ideal. But the planning fallacy tells us how much harder it is to make things work in practice than to imagine how they should work. Actually implementing CEV requires work, not magic, and that's precisely why we're having this conversation, as well as why SIAI's research is so important. :)

but I still suspect that if it really is such a good idea, then it should somehow be a part of the CEV extrapolation.

I hope so. Of course, it's not as though the only two possibilities are "CEV" or "extinction." There are lots of third possibilities for how the power politics of the future will play out (indeed, CEV seems exceedingly quixotic by comparison with many other political "realist" scenarios I can imagine), and having a broader base of memetic support is an important component of succeeding in those political battles. More wild-animal supporters also means more people with economic and intellectual clout.

I would hope that anyone who disagrees with utilitarianism, only disagrees because of an inconsistency in their value system, and that resolving this inconsistency would leave them with utilitarianism as their value system. But I'm estimating the probability that this is the case at... significantly less than 50%.

If you include paperclippers or suffering-maximizers in your definition of "anyone," then I'd put the probability close to 0%. If "anyone" just includes humans, I'd still put it less than, say, 10^-3.

Just so long as they don't force any other minds to experience pain.

Yeah, although if we take the perspective that individuals are different people over time (a "person" is just an observer-moment, not the entire set of observer-moments of an organism), then any choice at one instant for pain in another instant amounts to "forcing someone" to feel pain....

Comment author: Jack 27 April 2010 06:11:17AM 2 points [-]

When discussing utilitarianism it is important to indicate whether you're talking about preference utilitarianism or hedonistic utilitarianism, especially in this context.

Comment author: Strange7 27 April 2010 02:48:43AM 1 point [-]

Actually, the above paragraph assumed that everyone now living would want to upload their minds into computronium. That assumption was way too optimistic. A significant percentage of the world's population is likely to want to remain in a physical body. This would require us to leave this planet mostly intact. Yes, it would be a terribly inefficient use of matter, from a utilitarian perspective, but it's a big universe. We can afford to leave this planet to the people who want to remain in a physical body. We can even afford to give them a few other planets too, if they really want. It's a big universe, plenty of room for everyone. Just so long as they don't force any other mind to suffer.

You could also almost certainly convert a considerable percentage of the planet's mass to computronium without impacting the planet's ability to support life. A planet isn't a very mass-efficient habitat, and I doubt many people would even notice if most of the core was removed, provided it was replaced with something structurally and electrodynamically equivalent.

Comment author: cupholder 22 April 2010 10:47:43PM 2 points [-]

That-guy-who-isn't-all-that-smart-but-likes-to-sound-smart-by-quoting-really-smart-people was quoting Eliezer Yudkowsky. Almost immediately after that conversation, I googled the things he was talking about. I discovered Singularitarianism.

Guess there's a use for that-guy after all!

Comment author: MatthewB 23 April 2010 09:30:00AM 2 points [-]

A couple of points:

I could not tell from your post if you understood that Pascal's Wager is a flawed argument for believing in ANY belief system. You do understand this don't you (That Pascal's Wager is horribly flawed as an argument for believing in anything)?

Also, as cousin_it seems to be implying (and I would suspect as well), you seem to be exhibiting signs of the True Believer complex.

This is what I alluded to when I discussed friends of mine who would swing back and forth between Born-Again Christian and Satanists. Don't make the same mistake with a belief in the Singularity. One needn't have "Faith" in the Singularity as one would God in a religious setting, as there are clear and predictable signs that a Singularity is possible (highly possible), yet there exists NO SUCH EVIDENCE for any supernatural God figure.

Forming beliefs is about evidence, not about blindly following something due to a feel good that one gets from a belief.

Comment author: byrnema 23 April 2010 06:48:37PM *  0 points [-]

Pascal's wager is not such a horribly flawed argument. In fact, I wager we can't even agree on why it's flawed.

Later edit: I assume I am getting voted down for trolling (that is, disrupting the flow of conversation), and I agree with that. An argument about Pascal's wager is not really relevant in this thread. However, especially in the context of being a 'true believer', it is interesting to me that statements are often made that something is 'obvious', when there are many difficult steps in the argument, or 'horribly flawed', when it's actually just a little bit flawed or even controversially flawed. If anyone wants to comment in a thread dedicated to Pascal's wager, we can move this to the open thread, which I hope ultimately makes this comment less trollish of me.

Comment author: Nick_Tarleton 24 April 2010 03:17:03AM *  3 points [-]

Partially seconded. (I think most people agree that the primary flaw is the symmetry argument, but I don't think that argument does what they think it does, and I do see people holding up other, minority flaws. I do think the classic wager is horribly flawed for other, related but less commonly mentioned, reasons.)

I'll write a top-level post about this today or tomorrow. (In the meantime, see Where Does Pascal's Wager Fail? and Carl Shulman's comments on The Pascal's Wager Fallacy Fallacy.)

Comment author: byrnema 24 April 2010 04:17:14AM *  1 point [-]

Thanks for the link to the Overcoming Bias post. I read that and it clarified some things for me. If I had known about that post, above I would have just linked to it when I wrote that the fallacy behind Pascal's wager is probably actually unclear, minor or controversial.

Comment author: SilasBarta 23 April 2010 07:34:02PM *  1 point [-]

There aren't many difficult steps in refuting Pascal's wager, and I don't think there'd be much disagreement on it here.

The refutation of PW, in short, is this: it infers high utility based on a very complex (and thus highly-penalized) hypothesis, when you can find equally complex (and equally well-supported) hypotheses that imply the opposite (or worse) utility.

(Btw, I was one of those who voted you down.)

Comment author: byrnema 23 April 2010 07:37:16PM *  1 point [-]

Again, is it the argument that is wrong, or Pascal's application of it?

(Can you confirm whether you down-voted me because it's off-topic and inflammatory, or because I'm wrong?)

Comment author: SilasBarta 23 April 2010 07:42:12PM *  0 points [-]

Again, is it the argument that is wrong, or Pascal's application of it?

It is always wrong to give weight to hypotheses beyond that justified by the evidence and the length penalty (and your prior, but Pascal attempts to show what you should do irrespective of prior). Pascal's application is a special case of this error, and his reasoning about possible infinite utility is compounded by the fact that you can construct contradictory advice that is equally well-grounded.

(Can you confirm whether you down-voted me because it's off-topic and inflammatory, or just because I'm wrong?)

I downvoted you not just for being wrong, but for having made such a bold statement about PW without (it seems) having read the material about it on LW. I also think that such over-reaching trivializes the contribution of writers on the topic and so comes off as inflammatory.

Comment author: JGWeissman 23 April 2010 07:12:56PM 1 point [-]

The reason I believe Pascal's wager is flawed is that it is a false dichotomy. It looks at only one high utility impact, low probability scenario, while excluding others that cancel out its effect on expected utility.

Is there anyone who disagrees with this reason, but still believes it is flawed for a different reason?

Comment author: byrnema 23 April 2010 07:22:00PM 1 point [-]

This is an argument for why the argument doesn't work for theism, it doesn't mean the argument itself is flawed. If you would be willing to multiply the utility of each belief times the probability of each belief and proceed in choosing your belief in this way, then that is an acceptance of the general form of the argument.

Comment author: JGWeissman 23 April 2010 07:41:34PM 0 points [-]

If you assume that changing your belief is an available action (which is also questionable), then the idealized form is just expected utility maximization. The criticism is that Pascal incorrectly calculated the expected utility.

Comment author: RobinZ 23 April 2010 07:44:20PM 0 points [-]

Taboo "Pascal's wager", please.

Comment author: byrnema 23 April 2010 08:22:50PM *  0 points [-]


Here's an argument:

Suppose there is a dichotomy of beliefs, X and Y, their probabilities are Px and Py, and the utilities of having each belief are Ux and Uy. Then, the average utility of having belief X is Px*Ux and the average utility of having belief Y is Py*Uy. You "should" choose having the belief (or set of beliefs) that maximizes average utility, because having a belief is an action and you should choose actions that maximize utility.

What is the flaw in this argument?

For me, the flaw that you should identify is that you should choose beliefs that are most likely to be true, rather than those which maximize average utility. But this is a normative argument, rather than a logical flaw in the argument.
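The general form of the argument above can be put in a few lines of code. To be clear, all the probabilities and utilities below are made-up numbers purely for illustration; nothing in the thread specifies them:

```python
# Toy expected-utility calculation for the wager's general form.
# All numbers are invented for illustration.

def expected_utility(p_belief_true, u_if_true, u_if_false):
    """Average utility of holding a belief: weight each outcome by its probability."""
    return p_belief_true * u_if_true + (1 - p_belief_true) * u_if_false

# Belief X: tiny probability of being true, huge payoff if true, small cost if false.
eu_x = expected_utility(0.001, 1_000_000, -10)
# Belief Y: very likely true, but with only a modest payoff.
eu_y = expected_utility(0.9, 100, 0)

print(round(eu_x, 2), round(eu_y, 2))  # → 990.01 90.0
# The wager's form says: adopt the belief with the higher expected utility —
# here X, even though Y is far more probable.
```

This makes the structure of the disagreement concrete: the formula itself is just expected-utility maximization; the dispute is over whether "adopting a belief" is a legitimate action to optimize, and over where the numbers come from.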

Comment author: Vladimir_Nesov 23 April 2010 08:37:40PM 3 points [-]

Normally, you should keep many competing beliefs with associated levels of belief in them. The mindset of choosing the action with estimated best expected utility doesn't apply, as actions are mutually exclusive, while mutually contradictory beliefs can be maintained concurrently. Even when you consider which action to carry out, all promising candidates should be kept in mind until moment of execution.

Comment author: RobinZ 23 April 2010 08:27:49PM 0 points [-]
Comment author: khafra 23 April 2010 02:30:19PM 1 point [-]

In chapter five of Jaynes, "Queer Uses for Probability Theory," he explains that although a claimed telepath tested 25.8 standard deviations away from chance guessing, that isn't the probability we should assign to the hypothesis that she's actually a telepath, because there are many simpler hypotheses that fit the data (for instance, various forms of cheating).

This example is instructive when using Pascal's Wager to minimax expected utility. Pascal's Wager is a losing bet for a Christian, because even though expecting positive infinity utility with infinitesimal probability seems like a good bet, there are many likelier ways of getting negative infinity utility from that choice. Doing what you can to promote a friendly singularity can still be called "Pascal's Wager" because it's betting on a very good outcome with a low probability, but the low probability is so many orders of magnitude better than Christianity's that it's actually a rather good bet.

Obviously, you don't want to let wishful thinking guide your epistemology, but I don't think that's what PI's talking about.
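Jaynes's telepathy example can be sketched numerically. Only the 25.8-sigma figure comes from the example; the priors and likelihoods below are invented for illustration:

```python
import math

# Toy Bayesian version of Jaynes's telepathy example.
# Priors and likelihoods are made-up illustrative numbers.

def log10_normal_tail(z):
    """log10 of the upper-tail probability of a standard normal at z sigma,
    using the asymptotic approximation Q(z) ~ exp(-z^2/2) / (z * sqrt(2*pi)),
    which is accurate for large z (the exact value underflows a float)."""
    ln_tail = -z * z / 2 - math.log(z * math.sqrt(2 * math.pi))
    return ln_tail / math.log(10)

# log10(prior) + log10(likelihood of the observed data) for three hypotheses:
hypotheses = {
    "chance":    math.log10(0.999) + log10_normal_tail(25.8),  # data ~impossible by luck
    "cheating":  math.log10(1e-3)  + math.log10(0.5),          # cheaters produce such data easily
    "telepathy": math.log10(1e-20) + math.log10(0.5),          # so would real telepathy
}

best = max(hypotheses, key=hypotheses.get)
print(best)  # prints "cheating"
# Chance's likelihood is roughly 10^-146, so even a 1-in-1000 prior on cheating
# dominates — 25.8 sigma rules out luck, not the simpler rival explanations.
```

The same structure applies to the wager: an enormous likelihood ratio against one hypothesis doesn't privilege your favorite alternative; it redistributes the probability among all the alternatives, and the mundane ones usually win.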

Comment author: Unknowns 05 August 2010 12:06:40PM 1 point [-]

I haven't yet seen an answer to Pascal's Wager on LW that wasn't just wishful thinking. In order to validly answer the Wager, you would also have to answer Eliezer's Lifespan Dilemma, and no one has done that.

Comment author: PeerInfinity 06 August 2010 04:32:00AM 1 point [-]

Can you please remind me what the question is, that you're looking for an answer to?

And can you please provide a link to an explanation of what Eliezer's Lifespan Dilemma is?

Comment author: Unknowns 06 August 2010 05:38:20AM 1 point [-]


If you read the article and the comments, you will see that no one really gave an answer.

As far as I can see, it absolutely requires either a bounded utility function (which Eliezer would consider scope insensitivity), or it requires accepting an indefinitely small probability of something extremely good (e.g. Pascal's Wager).

Comment author: Blueberry 06 August 2010 09:25:51AM *  3 points [-]

If you believe that there is something with arbitrarily high utility, then by definition, you will accept an indefinitely small probability of it.

Assume my life has a utility of 10 right now. My preferences are such that there is absolutely nothing I would take a 99% chance of dying for. Then, by definition, there's nothing with a utility of 1000 or more. The problem comes from assuming that there is such a thing when there isn't. I don't see how this is scope insensitivity; it's just how my preferences are.

Someone who really had an unbounded utility function would really take as many steps down the Lifespan Dilemma path as Omega allowed. That's really what they'd prefer. Most of us just don't have a utility function like that.

Comment author: Unknowns 06 August 2010 10:26:44AM 0 points [-]

So you wouldn't die to save the world? Or do you mean hypothetically if you had those preferences?

I agree with the basic argument, it is the same thing I said. But Eliezer at least does not, since he has asserted a number of times that his utility function is unbounded, and that it allows for arbitrarily high utilities.

Comment author: Blueberry 06 August 2010 09:12:37AM 1 point [-]

I'm pretty sure Peer meant the original version of Pascal's Wager, the argument for Christianity, which has the obvious answer, "What if the Muslims are right? or "What if God punishes us for believing?"

Comment author: Unknowns 06 August 2010 10:25:26AM 0 points [-]

That's not an answer, because the probabilities of those things are not equal.

"God punishes us for believing" has a much lower probability, because no one believes it, while many people believe in Christianity.

"Muslims are right" could easily be more probable, but then there is a new Wager for becoming Muslim.

The probabilities simply do not balance perfectly. That is basically impossible.

Comment author: Blueberry 06 August 2010 12:34:23PM 2 points [-]

"God punishes us for believing" has a much lower probability, because no one believes it, while many people believe in Christianity.

Why does the probability have anything to do with the number of people who believe it?

"Muslims are right" could easily be more probable, but then there is a new Wager for becoming Muslim.

There's then the problem that the expected value involves adding multiples of positive infinity (if you choose the right religion) to multiples of negative infinity (if you choose the wrong one), which gives you an undefined result.

The probabilities simply do not balance perfectly. That is basically impossible.

The probability of any kind of God existing is extremely low, and it's not clear we have any information on what kind of God would exist conditioned on some God existing.

There's also the problem that if you know the probability that God exists is very small, you can't believe, you can only believe in belief, which may not be enough for the wager.

Comment author: Unknowns 06 August 2010 01:13:05PM *  1 point [-]

The probability has something to do with the number of people who believe it because it is possible that some of those people have a good reason to believe it, which automatically gives it some probability (even if very small.) But for positions that no one believes, this probability is lacking.

That adding positive and negative infinity is undefined may be true mathematically, but you have to decide one way or another. And it is wishful thinking to say that it is just as good to choose the less probable way as the more probable way. For example, there are two doors. One has a 99% chance of giving negative infinite utility, and a 1% chance of positive infinite. The second door has a 1% chance of negative infinite utility, and a 99% chance of positive infinite utility. Defined or not, it is perfectly obvious that you should choose the second door.

We do have information on what kind of God would exist if one existed: it would probably be one of the ones that are claimed to exist. Anyway, as Nick Bostrom points out, even without this kind of evidence, the probabilities still will not balance EXACTLY, since you will have some evidence even from your intuitions and so on.

It may be true that some people couldn't make themselves believe in God, but only in belief, but that would be a problem with them, not with the argument.

Comment deleted 06 August 2010 08:32:28AM [-]
Comment author: Oscar_Cunningham 06 August 2010 08:38:49AM 0 points [-]

All this does is show that the dilemma must have a flaw somewhere, but it doesn't explicitly show that flaw. The same problem occurs with finding the flaws in proposed perpetual motion machines: you know there must be a flaw somewhere, but it's often tricky to find it.

I think the flaw in Pascal's wager is allowing "Heaven" to have infinite utility. Unbounded utilities, fine; infinite utilities, no.

Comment author: Nisan 06 August 2010 12:25:13PM 0 points [-]
Comment author: XiXiDu 06 August 2010 12:30:19PM 2 points [-]
Comment author: Unknowns 06 August 2010 02:50:41PM 0 points [-]

Eliezer in that article:

"The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God). "

This is just wishful thinking, as I said in another reply. The probabilities do not balance.

Comment author: Unknowns 06 August 2010 10:29:23AM 0 points [-]

What about "living forever"? According to Eliezer, this has infinite utility. I agree that if you assign it a finite utility, then the lifespan dilemma fails (at some point), and similarly, if you assign "heaven" a finite utility, then Pascal's Wager will fail, if you make the utility of heaven low enough.

Comment author: Jack 22 April 2010 05:23:58PM 1 point [-]

Your story and perspective are very interesting. You don't need to self-censor.

Comment author: PeerInfinity 22 April 2010 05:25:26PM 1 point [-]

Thanks. Actually, the reason why I said "I guess I had better stop writing now" is because this comment was already getting too long.

Comment author: thomblake 22 April 2010 05:36:21PM 3 points [-]

Just a note - don't take Jack's advice to not self-censor too literally. There is much weirdness in you, and even the borders of this place would groan under its weight.

Not that there's anything wrong with that.

Comment author: AdeleneDawner 22 April 2010 06:46:44PM *  3 points [-]

The above (below? Depends on your settings, I guess) comment, which is now hidden, involves a poll, and would not (I predict) have otherwise become hidden.

Comment author: Blueberry 22 April 2010 08:11:20PM 2 points [-]

It's also hidden depending on your settings: you can change the threshold for hiding comments as well. I don't hide any comments, because seeing a hidden comment makes me so curious I have to click it, and just draws more attention to it for me.

Comment author: TraderJoe 02 November 2012 01:03:43PM *  0 points [-]

[comment deleted]

Comment author: PhilGoetz 23 April 2010 07:29:32PM 3 points [-]

Can you write a post about satanism? I'd love to know whether there are any actual satanists, and what they believe/do.

Comment author: AdeleneDawner 23 April 2010 07:39:44PM 4 points [-]

I used to know one, and have done a bit of reading about it. It struck me as a reversed-stupidity version of Christianity, though there were a few interesting memes in the literature.

Comment author: MatthewB 24 April 2010 02:16:32AM 2 points [-]

Depending upon the type of Satanist, yes, they are often just people looking for a high "Boo-Factor" (a term made up by many of the early followers of a musical genre called "Deathrock"; its more public name is now Goth, although that is like comparing a chain saw to a kitchen paring knife - the "Goths" are the kitchen knife).

Many Satanists, especially those who hadn't really read much of the published Satanic literature, would just make something up themselves, and it was almost always based in Christian motifs and archetypes. The two institutions who have publicly claimed the title of "Satanist" (the Church of Satan and the Temple of Set) both reject any and all Christian theology, motifs, archetypes, symbolism and characters as being disingenuous, twisted versions of older, healthier god archetypes (if you read Jung and Joseph Campbell, it is not uncommon for a rising religious paradigm to hijack an older competing paradigm as its bad guys).

As Phil has suggested, maybe a front page post will come in handy. It should be recognized that some Satanists happen to be very rational people. They are just using the symbolism to manipulate their environment (although most of the more mature ones have found more mature symbols with which to manipulate the environment and their peers and subordinates).

The types to which I was referring in my post were the Christian Satanists (people who are worshiping the Christian version of Satan), which is just as bad as worshiping the Christian God. Both the Christian God and the Christian Satan are required for that mythology to be complete.

Comment author: wedrifid 24 April 2010 08:37:02AM 7 points [-]

which is just as bad as worshipping the Christian God

Wow! We make worshipping the devil sound bad around here by comparing him to God! Excuse me if I take a hint of pleasure at the irony. ;)

Comment author: MatthewB 25 April 2010 05:17:02AM 1 point [-]

Well, they both (according to Christian Myth) are truly bad characters.

It is unfortunate for God that Satan (Lucifer) had such a reasonable request "Gee, Jehovah, It would certainly be nice if you let us try out that chair every once in a while." Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order)

It was this act upon which Modern Satanists seized to create a new mythology for Satanism, where it was reason rebelling against an order that was corrupt and tyrannical.

Comment author: Jack 25 April 2010 07:00:46AM *  5 points [-]

It is unfortunate for God that Satan (Lucifer) had such a reasonable request "Gee, Jehovah, It would certainly be nice if you let us try out that chair every once in a while." Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule, and all else is seen as beneath such things (thus reflecting the Divine Order)

To be fair this stuff isn't Christian mythology in the way that Adam and Eve, or Loaves and Fishes is Christian mythology. It's just religious fiction.


Unless someone has declared John Milton a prophet and possessor of divine revelation. Which would be hilarious.

Comment author: wedrifid 25 April 2010 08:16:55AM 2 points [-]

Well, they both (according to Christian Myth) are truly bad characters.

The Christian Myth includes a quite specific definition of bad so according to the Christian myth only one of them is bad. Is what you mean that according to you the characters as described in the Christian Myth were both truly bad?

Basically, Lucifer's crime was one that is only a crime in a state where the King is seen as having divine authority to rule

That description loses something when the ruler is, in fact, God. One of the bad things about claiming that the king is king because God says so is that it is not the case that any god said any such thing. When the ruler is God then yes, God does say so. The objection that remains is "Who gives a @$@# what God says?" I agree with what (I think) you are saying about the implications of claims of authority but don't like the loaded language. It confuses the issue and well, I would say that technically (that counterfactual) God does have the divine authority to rule. It's just that divine authority doesn't count for squat in my book.

Comment author: Blueberry 24 April 2010 04:30:54AM 4 points [-]

There are Christian Satanists? Correct me if I'm wrong, but I thought Satanism was a religion founded around Rand-like rational selfishness, and explicitly denied any supernatural entities.

Comment author: MatthewB 25 April 2010 05:11:31AM 3 points [-]

Yes, they are "Christian" in the sense that all of the mythology and practices for their worship of Satan are derived from Christianity, and they still believe in a Christian God.

It is just that these people believe that they are defying and opposing the Christian God (Fighting for the other team). They still believe in this God, just no longer have it as the object of their worship and devotion.

This is also the more traditional form of Satanist in our society, and one which the more modern Satanist tends to oppose. The Modern Satanist is a self-worshiping atheist, and as has been pointed out, tend to place everything in the context of self-interest. It is a highly utilitarian philosophy, but often marred in actual practice by ignorant fools who don't seem to understand the difference between just acting like a selfish dick and acting out of self-interest (doing things which improve one's condition in life, not things which worsen one's condition)

Comment author: NancyLebovitz 25 April 2010 09:38:39AM 1 point [-]

There's an Ayn Rand quote I don't have handy to the effect that if the virtues needed for life are considered evil, people are apt to embrace actual evils in response.

Comment author: wedrifid 24 April 2010 08:34:57AM 1 point [-]

Nope, worshipping the devil is right up there as far as meanings for 'Satanism' go.

Comment author: MatthewB 24 April 2010 02:07:41AM 0 points [-]

You mean, like a main page post? I'd love to.

You would be surprised about how rational the real Satanists (and their various offshoots and schisms) are (as the non-Christian-based Satanist is an atheist).

In fact, the very first Schism of the Church of Satan gave birth to the Temple of Set (Founded by the then head of the Army's Psychological Warfare Division), which was described as a "Hyper-Rational Belief System" (Although in reality it still had some rather unfortunately insane beliefs among its constituents). The Founder was very rational though. He even had quite a bit of science behind his position... It's just that his job caused him to be a rather creepy and scary guy.

Comment author: PhilGoetz 25 April 2010 08:11:51PM 0 points [-]

Has today's Satanism retained any connections to Aleister Crowley?

Comment author: Document 20 January 2011 11:19:38PM 1 point [-]

Most of them swung back and forth between the extremes of bad belief systems (From born-again Christian to Satanist, and back, many times)...

At least they're maintaining lightness.

Comment author: Yvain 20 April 2010 11:10:41PM 6 points [-]

I'm doing exactly what I would be doing if I had never found Less Wrong, but now I'm telling myself this is provably the best course because it will make me a lot of money which I can donate to the usual worthy projects. This argument raises enough red flags that I'm well aware of how silly it sounds, but I can't find any particular flaws in the logic.

Comment author: BenAlbahari 20 April 2010 01:40:46PM *  6 points [-]

Actions speak louder than words. A thousand "I love you"s doesn't equal one "I do". Perhaps our most important beliefs are expressed by what we do, not what we say. Daniel Dennett's Intentional Stance theory uses an action-oriented definition of belief:

Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same considerations, and finally you predict that this rational agent will act to further its goals in the light of its beliefs. A little practical reasoning from the chosen set of beliefs and desires will in most instances yield a decision about what the agent ought to do; that is what you predict the agent will do.

What we say or think we believe is vulnerable to distortion by a desire to signal, and by the fact that our consciousness has only partial access to our brain's map of reality. By ignoring our words and looking at our actions we can bypass these shortcomings. It's probably a major reason why prediction markets work so well.

Comment author: goldfishlaser 20 April 2010 12:10:42AM 4 points [-]

What am I doing?: I am studying to be an electrical engineer at SPSU.

Why am I doing it?: Because I want to make a lot of money to pay for Friendly AI, anti-aging, and existential risk research, while following my own interest in power and similar technology.

Of course, these are my LessWrong whats and whys, and not necessarily the whats and whys of other sectors of my life. If you took a look at the percentage of my time being devoted to these activities... let's just say my academic transcript would not flatter me....

(PS: Asking what and why is a good way to get far-mode about what you're doing, which increases motivation. So thanks for getting me kick-started on my studying ^_^)

Comment author: Kevin 20 April 2010 09:59:57AM 7 points [-]

You should go and donate $10 to the SIAI if you haven't, because people who donate any amount of money are much more likely to donate larger amounts later. Anti-akrasia, etc. etc.

Comment author: timtyler 22 April 2010 09:15:38AM *  1 point [-]

These seem like the most relevant links to the associated cognitive biases:



Comment author: aoxfordca 20 April 2010 05:15:17PM 0 points [-]

SPSU? I would expect lesswrong folk to be at GaTech ;). Glad to know there are others in the neighborhood (you are an uber-slingshot projectile away).

Comment author: JGWeissman 20 April 2010 01:16:48AM 0 points [-]

Have you considered participating in SIAI's Visiting Fellow Program? It is a good opportunity to learn what is being done for existential risk reduction.

Comment author: goldfishlaser 20 April 2010 02:02:26AM 0 points [-]

Oh, you bet I have! But I have a few more responsibilities to deal with here before I can join in the effort in California... but it is my goal to start helping you guys with the LessWrong wiki from my residence here at the very least, and I've always wanted to start a rationalist youtube video playlist! Oh procrastination...

Comment author: Kevin 20 April 2010 10:00:46AM 4 points [-]

At the SIAI house, surfing Less Wrong doesn't count as procrastination! Except when it does.

Comment author: thomblake 19 April 2010 05:12:53PM 10 points [-]

What are you doing?

Are our answers confined to 140 characters or less?

Comment author: CannibalSmith 20 April 2010 01:00:53PM 0 points [-]


Comment author: Morendil 19 April 2010 05:28:44PM *  9 points [-]

You forgot one high-leverage component in this kind of inquiry: ask the question "why" not once but five times or more.

Just before reading the above, I was looking at instructions for building a laser show from cheap parts and the Arduino microcontroller I've been playing with lately.

Why - because I'm getting an interest lately in programming that affects the physical world.

Why - because, in turn, I believe that will broaden my horizon as a programmer.

Why - because I think learning about programming is one of the more important things anyone interested in the origins and improvement of thinking can learn. (Post about this coming sometime in the next few weeks.)

Why - because I want to improve my thinking in general. Which is also the reason I stopped here after I figured I had collected enough information about the laser stuff.

Why - because my thinking is my highest leverage tool in dealing with the world.

(ETA: to be quite honest, another reason is "because it's fun", but that tends to apply to a lot of the things I do.)

Comment author: olimay 27 April 2010 09:18:03AM 3 points [-]

Background: two years ago, I dropped out of college with a tremendous amount of debt. I'd failed several classes right before I dropped out, and generally made a big mess of things.

Still alive today, I'm beginning to step free of a lot of social conventions, letting go of shame and the habit of groveling, and learning to really value (and not just know I should value) important things. I am searching for how to make my strongest contribution. In the short term, that probably has to do with making a lot of money, but on the side, I have an inkling that working on my writing and learning to take in and express complicated ideas in speech, prose, poetry, and myth could come in handy. I'm only okay at helping other people with their hangups, but I think it'd be a great thing if I could get really good at overcoming my own, especially the difficult seeming ones.

I owe Michael Vassar for some particularly good advice from a few months back. He also pointed me in the direction of the ancient Cynics--they've been a huge help to me, philosophically.

Comment author: Will_Newsome 23 April 2010 12:18:29AM *  3 points [-]

What am I doing? Working for SIAI. For the last hour or so I've been making a mindmap of the effects of 'weird cosmology' on strategies to reduce existential risk: whether or not the simulation hypothesis changes how we should be thinking about the probability of an existential win (conditional on the probability (insofar as probability is a coherent concept here) that something like all possible mathematical/computable structures exist); whether or not we should look more closely at possible inflationary-magnetic-monopole-infinite-universe-creation-horrors; how living in a spatially infinite universe might affect ethics (.pdf warning. Also, I found it a lot easier to think about it as infinite pizza instead of infinite ethics. I don't remember this leading to any significant problems besides a strong desire for pizza. YMMV.); et cetera.

Why am I doing it? I'm not sure if many of these ideas have been compiled into a single place to be synthesized and tested against each other. Weird things happen when you put a recursive 'simulation' in a Tegmark level 4 multiverse with an infinite amount of inflationary universes being formed out of magnetic monopoles with further universes coming into existence at the moment blackholes decohere and then play 'follow the measure' (with a heavy dose of anthropic reasoning, of course). (If someone has done something like this and found interesting results, please let me know, as an hour thinking up crazy stuff does not seem like nearly enough analysis.)

Why do I care about that? It seems like there's a good chance we're missing some much-needed information and it's hidden in a fog of metaphysics. And we really do need it if we want to maximize the probability of a continued humanity.

So what, why does that matter? Well, I love many people and many things, and I would like them to continue existing; and each millionth of a millionth of a percent chance that humanity can live on and flourish, reaching its greatest potential, whatever that may be, is worth whatever effort I can put into it.

Comment author: Metafire 24 April 2010 04:13:19PM *  1 point [-]

That's very interesting. It sounds like you're starting to dig into the problems of level 4 multiverse ethics that I plan to write a series of sci-fi novels about. What I have already written (not too much) can be read with Google Wave if you add "our-ascent-noofactory@googlegroups.com" to your contacts and then display the group waves. There are a couple of underlying concepts and questions which come together in my fictional work:

  1. What could a positive post-singularity world look like?
  2. What could sci-fi in a post-singularity world look like? -> My answer to that question is that "post-singularity sci-fi" doesn't make sense, and the corresponding works of fiction would be about alternative possible worlds (which exist by modal realism or "classical" multiverse theories) and their hypothetical interactions.
  3. How does modal realism multiverse ethics work, if at all?
  4. How to portray a civilization which ascends indefinitely in the most impressive and interesting way?
  5. How can a positive post-singularity world be reached after all?

Why do I care about the hypothetical consequences of modal realism? Because I think it's the best ontology for modelling the world as correctly as possible, and I'm pretty convinced that it's true (for reasons similar to those in the map that is the territory, but more grounded in mathematical logic and philosophy of mathematics). Trying to apply "pure" utilitarian reasoning to a modal realism multiverse leads to serious problems, for example:

A) The amounts of joy and suffering are actually infinite, which destroys the point of summing them up or integrating them. You could fix the problem by doing "local" computations, but then how do you define the "locations" in the right way if worlds are nested, or infinite in space or time? All of this is a huge headache for a convinced hedonistic utilitarian, which I used to be. (I find it hard to tell what I am, in an ethical sense, now. Possibly "confused" might be the best short description.)

B) Every configuration of sentience is actually realized somewhere in the modal realism omnicosmos, which I call Apeiron. From a purely abstract point of view, the mental states with positive and negative valence should be in one-to-one correspondence, which means that from a purely apeironal (maximally holistic) point of view there is a perfect balance of good and bad feelings in every respect. Interestingly, this observation supports the idea that theories like hedonic utilitarianism are only meaningful if applied "locally".

C) Above each (computable) world lies an infinite (!) chain of worlds in which the first one is simulated directly or indirectly (unless that is impossible, which I think is not the case). If you haven't considered the problems of simulation ethics yet, this is a good reason to start doing so.

D) Trying to define a probability measure over anything on a whole level 4 multiverse is rather hopeless. Maybe it's possible to define some fancy measures of some kind, but ultimately you have to face the problem of unbounded infinite cardinality.

Oh, I don't know how to solve these problems in the most convenient way. All those questions and thoughts have left me with some kind of meta-ethical nihilism. However, I tried to invent some new meta-ethical concepts like "(meta-)ethical synergism" (quantify ethical systems and use ensembles of those systems for making moral judgements) and "thelemanomics" (extract the underlying economic, social, political and ethical systems from the volitions of all people; roughly comparable to CEV), which could fix some meta-ethical problems.

I think we should stay in contact.

Comment author: PeerInfinity 25 April 2010 11:36:22PM *  1 point [-]

I should mention that Metafire and I have spent lots of time discussing these concepts, and I am familiar with the ideas he is presenting in this comment. The metaphysics that Metafire is trying to describe here is pretty much identical to what ata described in the post on The mathematical universe: the map that is the territory.

Oh, and please excuse the confusing grammar, English isn't Metafire's native language. Metafire lives in Germany. Metafire, please check today's post in my waveblog for some suggestions on how to make the grammar of that comment less confusing.

I know I've gotten negative feedback when I've tried to do polls before, but... did anyone actually understand what Metafire was saying in this comment? Even I didn't understand that last paragraph. It will need lots more explanation.

Oh, and Metafire and I disagree on the implications of this metaphysics. I believe that this metaphysics doesn't invalidate total hedonic utilitarianism, but Metafire thinks it does, afaik.

Oh, and I also disagree with ata about some of the moral conclusions drawn from that post, though these disagreements are probably a matter of intuition and interpretation. Though I suspect that ata did something similar to that mathematical proof that tries to prove that 1=0, by misusing the concept of infinity.

Comment author: Larks 09 August 2010 12:01:54AM 1 point [-]

I understood it, though if you're in regular contact, maybe I only think I do.

Essentially, the belief that to be definable is to exist leads to moral nihilism, much as some think Many-Worlds does (no matter what you do, it'll still be undone in other worlds; each 'good' world has an equal and opposite 'bad' world).

I understand the sentences of the second half, but I don't think Metafire has provided enough detail here. Certainly, I don't see how some kind of weighted function over all ethical systems or volitions could help expand their scope.

Comment author: james_edwards 22 April 2010 03:49:17AM 3 points [-]

What am I doing? Trying to write a few thousand words on legal reasoning about people's dispositions.

Why? To finish my dissertation, and graduate with an Honours degree in law.

Why am I doing that? To increase my status and career opportunities, but also due to inertia. I've almost finished this degree, and a few more weeks of work for this result seem worthwhile. Also, doing otherwise would make me look weird and potentially cut off valuable opportunities.

Why does that matter? Much intellectually interesting and well paid work seems to require signalling a certain level of competence, as well as a willingness to follow social norms.

And why does that matter? I want to have access to interesting work, as well as sufficient skills, money, and influence to usefully assist my favoured causes. This means the SIAI!

At the same time, I'm not sure how well-reasoned my career decisionmaking is. JRMayne's diagnosis may be on point:

One particular example is high-powered law students joining big firms for just a little while until they end up doing [thing they like/thing they think is good for society].

I could try to become a corporate lawyer within the next few years, which would help me gain wealth and thereby contribute financially to SIAI. The downside would be long working hours and a high-pressure, potentially conservative work environment. I worry that, working in such an environment, my attitudes would change dramatically.

Why does that matter? I like my attitudes the way they are! I also enjoy working with people who share an interest in being rational. This is a terminal value, or close to it.

The main use of these questions might be to imply a further one:

Could you be doing something more worthwhile?

It seems likely. Perhaps I should find out whether my skills are directly useful for the SIAI. Knowledge either way would be useful.

Comment author: magfrump 20 April 2010 08:55:53PM 3 points [-]

What am I doing?: Bug testing a program for my thesis. (A mathematical computation.)

Why?: Because I want it to work perfectly (and it doesn't).

Why?: Because I want to impress my advisor.

Why?: two reasons: a) to have a good working relationship since I will be at the same school for a while. (why?: because it's unpleasant to work with people you don't get along with; terminal value.)

b) to get better recommendations.

Why?: to have better career prospects.

Why?: to make more money.

Why?: a) to make my life more pleasant (terminal value) b) to donate to reducing existential risk (terminal value).

Why am I on Less Wrong meanwhile? My code is compiling.

Comment author: magfrump 20 April 2010 08:57:16PM 1 point [-]

Other questions: why am I studying math? Why am I staying in school?

Because it is interesting; because if I don't do something interesting I will be (a) less happy, (b) less effective.

Comment author: Alicorn 25 May 2010 09:12:52PM *  2 points [-]
Comment author: nhamann 17 June 2010 03:19:41AM 2 points [-]

For those of you as confused as I was, the above post should actually link here.

Comment author: Alicorn 17 June 2010 03:28:16AM 1 point [-]

Whoops, thanks.

Comment author: alasarod 21 April 2010 04:50:24PM 2 points [-]

I'm sorry to do this because I'm sure it's off topic, but Tim Minchin (comedian) just did a 10 minute piece that will make any skeptic who's had to sit through exchanges about auras, and magic, and how science is "just a theory too," just holler.


Isn't this enough? Just this world?

Comment author: Alicorn 21 April 2010 05:05:08PM 3 points [-]

This would make a good Open Thread comment. Also, "Storm" isn't especially new; you had me excited for a moment that Minchin had "just" done something.

Comment author: alasarod 22 April 2010 03:27:27AM 4 points [-]

You're right on both counts. I admit I'm new to commenting on LW. It's intimidating, but I've decided to learn from practice rather than observation. Thanks for the input!

Comment author: JoshB 20 April 2010 01:24:20PM 2 points [-]

Listening to Autechre's new album

Because it contains sufficient audial texture and sophisticated sound modulation combined with an intermittent hip-hop beat, so as to sit nicely within my current, self-imposed tolerances of what good electronic music should sound like....which results in a state of favorable brain chemistry.

Comment author: roland 19 April 2010 06:56:01PM 2 points [-]

I think these two questions are the basic questions of rationality that you should be asking yourself as much as possible. There is this great quote that I have on my desktop:

Only the real determinants of our beliefs can ever influence our real-world accuracy, only the real determinants of our actions can influence our effectiveness in achieving our goals. -- Eliezer Yudkowsky

Comment author: Metafire 22 April 2010 03:15:59PM *  3 points [-]

What I am doing: 0. I registered here on LW today, because this is the first posting I thought I really should comment on. Those two questions are among the most important and most thought-provoking of all. They are even more important than the questions "What do you believe, and why do you believe it?", because most of your beliefs might not be relevant for your actions at all.

    1. I study mathematics and physics, because
  • 1.1. I want to know what the world is and how it works.
  • 1.1.1. Understanding the world better is generally useful, although it might lead to pretty unsettling insights.
  • Actually, I go on exploring the world in a mathematical/philosophical sense, because my curiosity is stronger than my fears.
  • 1.1.2. If I understand how things work, I might be able to improve them.
  • I'm very fond of improving things, because perceiving suboptimality evokes negative emotional reactions in myself.
  • 1.1.3. I do want to understand things, because understanding things is fun.

  • 1.2. I suspect that studying could help me to earn more money.

  • 1.2.1. Earning more money increases my capabilities at changing the world according to my values.
  • 1.2.2. I'm pretty annoyed that I have to earn money! That fact is restricting my freedom (ok, there are alternatives to using money, but they don't look attractive enough to me). Earning money by using maths seems relatively convenient compared to most other alternatives.

  • 1.3. I want to write sci-fi stories and think that studying those subjects might help a bit.

  • 1.3.1. Most sci-fi stuff is not really compatible with my transhumanist views, so writing it myself seems like a good solution.
  • 1.3.2. I'm a hobby philosopher, but bare philosophy isn't sexy enough, so I do some kind of implicit philosophy by creating sci-fi settings and stories, in order to explain my views.
  • Explaining the way I think is important, because it enables real communication about interesting topics.
  • Because of a lack of such communication I sometimes feel lonely and misunderstood.
  • It's important to make my views popular, because I think they are too awesome to be restricted to a single individual.
  • I think my views are awesome, because I spent a lot of time pondering difficult philosophical questions, and had some insights that only a few other people, or maybe no one else, ever had.
  • 1.3.3. At the moment I'm not sure how to resolve some very difficult ethical problems, and I try exploring possible solutions by writing stories.
  • I think it would be great to come up with an "optimal" ethical system, but I've realized that you can't measure ethical optimality without already having an underlying ethical system. Oh, I'm pretty disenchanted, so I'm content with anything that "feels right".
  • Classifying different ethical systems might be a worthwhile goal, if there's no single best candidate.
  • 1.3.4. Writing can be more entertaining than less productive forms of entertainment.

  • 2. Unfortunately I don't feel that I have the necessary resources for finishing any stories, because I'm currently pretty preoccupied with doing maths. So, at the moment I am not working on them.

  • 2.1. I feel that I have been pretty inefficient at learning maths. Somehow I think I have to compensate for this by spending more time on learning, so I can finish reasonably soon and feel reasonably competent.
  • 2.1.1. I don't want to finish my thesis and exams as quickly as possible, because I want to have the feeling that I really understand all the stuff I am supposed to understand.
  • Not really having that feeling is annoying!
  • 2.2. I'm really not sure whether that's the best decision, but I'm afraid of getting too stressed out by trying to learn and write at the same time, while I still have a pretty full curriculum.
  • 2.2.1. After having written 2.2 I feel stupid, because my "full curriculum" was my own decision and I could reduce it or try writing nevertheless.

Umm, actually I suspected that I could end at a conclusion like that. Perhaps that's also the most important reason why I started this comment. I should become better at using my time efficiently. My hope is that LW could help me with that.

Comment author: PeerInfinity 25 April 2010 11:54:28PM *  0 points [-]

Is it too late to modify your curriculum? It sounds like you would be much better off leaving yourself enough time to study math and to write your stories.

Comment author: Metafire 26 April 2010 04:36:58AM 0 points [-]

Actually my curriculum is pretty flexible. The real issue isn't time, but setting clear priorities. I'm not really good at that. I find it pretty difficult to make myself write a story, compared to almost everything else. Also, I could use a better spoon-management system.

Comment author: PeerInfinity 26 April 2010 02:49:50PM 0 points [-]

Well, you just finished setting some clear priorities in that last comment, didn't you? It sounds like you would be better off leaving yourself enough free time or spoons or whatever to continue writing your stories whenever you feel inspired to do so. But I would advise against setting a schedule where you're forced to write your stories even if you don't feel inspired. That just seems like a bad idea.

Anyway, that's just my opinion, I'm no expert on these matters.

Oh, and I guess I should admit that my advice may be biased by the fact that I like your stories, and would like to read more of them.

Heh, I was about to say that I still think that utilitarianism is the optimal ethical system, but I just realized that utilitarianism is about what you value, not about how to ethically go about achieving those values. There's a rather extreme difference between a utilitronium shockwave scenario without any ethical restrictions, and a version of the utilitronium shockwave scenario with, for example, ethical restrictions against murder and coercion.

And, um... who downvoted Metafire's last comment, and why?

Comment author: SilasBarta 19 April 2010 05:41:49PM *  0 points [-]

What are you doing?

Voting this post down.

And why are you doing it?

Because it contributes nothing substantive to the site; we're aware of the problem of grounding actions, and this doesn't help solve it.

ETA: On reflection, I made this point way too harshly, in an attempt to be cute. Sorry about that.

ETA2: Someone seems to be modding down everyone who's replied to this. Just so you know, it's not me. I don't vote on comments in arguments I'm directly involved in, and I've made a big deal about adhering to this in the past.

Comment author: JGWeissman 19 April 2010 06:11:10PM 3 points [-]

I see a lot of value in the post, which can help a person to expose a disconnect between their normative ideal of what they should do and what they actually are doing. A large part of the point is that this rationality stuff is not just theoretical, it has practical implications, and we should remember to apply the practical lessons in our daily lives.

The short, easy-to-remember "What are you doing? And why are you doing it?" is great for prompting oneself to examine how well one is practically applying rationality, and how effective one is at achieving one's goals.

Comment author: SilasBarta 19 April 2010 06:28:35PM 4 points [-]

Perhaps, but in its current state, it still seems more appropriate for an open thread. For a top-level post, I would expect it to go into more detail, take us through examples, and generally provide a more thorough exposition of how best to go about the process.

Comment author: Oscar_Cunningham 19 April 2010 06:36:53PM 1 point [-]

Agreed, nothing is lost by posting something in the open thread first, and then posting an expanded version if it generates interest. Personally, I'd like to see the idea expanded.

Comment author: wedrifid 19 April 2010 06:42:17PM *  2 points [-]

Because it contributes nothing substantive to the site; we're aware of the problem of grounding actions, and this doesn't help solve it.

I think these questions are important. In fact, I have built the habit of asking myself the question "What do I want?", which prompts thinking along similar lines. (I did actually vote the post down, but just because it presents something as "fundamental to rationality" when it just isn't. It is rather a useful tool of applied rationality.)

What would have been particularly interesting is if MBlume presented some insights into what he has gained from this kind of introspection, including any changes he has made to "what he is doing" based on the "why he is doing it" sucking.

Comment author: SilasBarta 19 April 2010 06:56:47PM *  2 points [-]

I don't think our positions are very different on this. Like I said to Morendil and JGW, the fact that this is a good question for discussion makes it belong on the site -- but in an Open Thread. In its top-level form, it needs a more thorough handling of the topic.

Comment author: wedrifid 19 April 2010 07:13:28PM *  0 points [-]

I don't think our positions are very different on this.

Yes, I was expressing a similar position, so included it here to reduce clutter.

I wonder why the grandparent was downvoted. It wasn't a particularly controversial position. Presumably either because it was in reply to your comment, so someone voted systematically, assuming it was a fundamental disagreement (other replies to your comment were downvoted at some stage), or because of a parenthetical confession that I too downvoted the post (although I'd expect more umbrage to be taken at your burn!).

Comment author: Morendil 19 April 2010 05:50:36PM 0 points [-]

I agree that the post has little substantive content, but it may generate some interesting discussion even while sitting at 0.

You could fault it for timing - the flood of former lurkers saying Hi, sometimes with interesting commentary, is already too much to keep up with - but I'd rather downvote an overly long post than a short one. Less of a strain on my attention economy.

Comment author: SilasBarta 19 April 2010 05:52:09PM 1 point [-]

I agree that the post has little substantive content, but it may generate some interesting discussion even while sitting at 0.

Perhaps, but that would better justify making it an open thread comment.

Comment author: nhamann 19 April 2010 04:38:29PM 1 point [-]

Nitpicking, but this "doing" question can't possibly be of equal importance to the fundamental question of rationality, because answering the "Why are you doing it?" part obviously depends on you having come to terms with what you believe, and why you believe it.

That said, I think this "doing" question is fundamental as well, second in importance only to the Fundamental Question. Good post.

Comment author: khafra 20 April 2010 02:13:35PM 0 points [-]

The "Why" in "why are you doing it" could be interpreted as "for what purpose," or "as a result of what causal chain." Neither of these, at first blush, appears all that fundamental or difficult--but perhaps there's another sense I'm missing.

Comment author: nhamann 20 April 2010 04:14:47PM 2 points [-]

The "Why" in "why are you doing it" could be interpreted as "for what purpose," or "as a result of what causal chain."

Certainly. What's important, however, is that the process of repeating the "why?" question forces you to 1) think about what it is that you're doing, in detail, 2) understand what ends these actions serve, and 3) confront the beliefs that make these ends seem desirable in the first place. In effect, asking "what are you doing, and why are you doing it?" forces you to look not only at what you believe, but at whether or not your actions are in alignment with your beliefs. For example:

What are you doing?

I am studying information theory, Bayesian statistics and neuroscience.

and why are you doing that?

I am trying to understand how the brain works, and it appears that the former two areas of mathematics are useful tools in formulating theories about the brain. (Notice I've already had to confront a belief here. A "why do you believe this?" question should go here.)

and why are you doing that?

1) It is a very interesting problem. 2) Ultimately, having a good theory of the brain will likely contribute to both AI and WBE technologies (belief!), both of which I view as necessary to confront issues that will likely arise in the world as the world population increases and as more and more dangerous technologies get developed (synthetic biology, nanotechnology, etc.) (a tangled network of beliefs, here, all of which need explaining).

If I were to unravel this further, I would have to confront the fact that AI is itself a dangerous technology, so I should address whether my current course of action results in a net positive or net negative impact on the chances of a beneficial Singularity (there's a belief implicit here: that the Singularity is plausible enough to warrant thinking about. This too requires explanation).

Of course, this process quickly gets messy, but in my view "what are you doing and why are you doing it" is of fundamental importance to any rationalist.

Comment author: Matt_Simpson 26 April 2010 04:53:24AM *  1 point [-]

What am I doing?

Finally responding to this post on LessWrong.

Why am I doing it?

I don't quite feel tired yet, and I don't know which book to pick up for pre-sleep reading: Wicked, so I have some context when I see the musical with my girlfriend in June. The Ancestor's Tale, because I find evolution extremely interesting, and there's the off chance that it will be relevant to my future research (and I'm obligated to read it since it was a gift). Or The Theory of Moral Sentiments, because I find the moral sense theorists to be interesting precursors to Eliezer and others in metaethics, and I still haven't read it yet, and for Vassarian reasons.

Hmm, which one should I start reading?

Comment author: Matt_Simpson 26 April 2010 05:07:42AM 1 point [-]

I think I'm going with Wicked

Comment author: Thomas 20 April 2010 07:10:35AM 1 point [-]

I do this: http://critticall.com/SQU_cir.html

In fact, the machine on my left does it, I do something else.

Comment author: MartinB 19 April 2010 04:12:42PM 1 point [-]

That looks to me like applied rationality.

Comment author: RobinZ 19 April 2010 06:47:17PM 4 points [-]

Yes – and rationality must be applied, or else it collapses into sophistry. That was the essential idea behind Something to Protect, for example.

Secondarily, asking this question may prevent a subset of disputes about definitions.

Comment author: MartinB 19 April 2010 06:54:41PM 2 points [-]

Fully agreed!

I still struggle with applying what I learn, or even having it available at the right time. But I make progress. How does the question prevent disputes about definitions? I fail to see that.

Comment author: RobinZ 19 April 2010 07:16:58PM 2 points [-]

If Fred says, "The test scores of the group trained by method A are greater than those of the group trained by method B at a 98% significance level, and therefore method A should be preferred", and Sheila says, "The Bayes factor between hypothesis M1, which assumes that method A and method B produce a similar distribution of test results, and M2, which predicts superior results from A, is 1:38, suggesting that method A is superior" ... they don't actually disagree. Both Fred and Sheila would recommend training by method A.
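As a hedged illustration of Fred's and Sheila's agreement, here is a small stdlib-only Python sketch using invented test scores (all of the numbers and both analysis choices are assumptions, not from the comment): a permutation test stands in for Fred's significance test, and a crude BIC-based approximation stands in for Sheila's Bayes factor.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical test scores; method A really is better in this fake data.
a = [random.gauss(78, 6) for _ in range(40)]  # trained by method A
b = [random.gauss(70, 6) for _ in range(40)]  # trained by method B

# Fred's style: a one-sided permutation test on the difference in means.
observed = statistics.mean(a) - statistics.mean(b)
pooled = a + b
extreme = 0
trials = 2000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
    if diff >= observed:
        extreme += 1
p_value = extreme / trials

# Sheila's style: a rough Bayes factor via the BIC approximation,
# comparing M1 (both groups share one mean) with M2 (separate means).
def log_lik(xs, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

n = len(a) + len(b)
sigma = statistics.pstdev(a + b)  # crude shared-spread estimate
ll_m1 = log_lik(a + b, statistics.mean(a + b), sigma)
ll_m2 = log_lik(a, statistics.mean(a), sigma) + log_lik(b, statistics.mean(b), sigma)
bic_m1 = -2 * ll_m1 + 1 * math.log(n)  # one free mean parameter
bic_m2 = -2 * ll_m2 + 2 * math.log(n)  # two free mean parameters
bf_m2_over_m1 = math.exp((bic_m1 - bic_m2) / 2)

print(f"Fred: p = {p_value:.3f}; Sheila: BF for separate means = {bf_m2_over_m1:.1f}")
# Different vocabularies, same recommendation: prefer method A.
```

The point survives the toy setup: the two outputs are framed in incompatible statistical languages, yet both push toward the same decision.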

It's not a traditional dispute about definitions, but (for example) Sheila sniping at Fred for using frequentist methods would be inappropriate. If he genuinely deserves criticism, she will not need to wait long for an occasion where he is wrong.

Comment author: CronoDAS 19 April 2010 07:35:25PM 0 points [-]

I'm reading LessWrong, because I'm bored.

Comment author: Larks 20 April 2010 12:08:34AM 6 points [-]

Comparing your comment to PlaidX's, there is apparently a 2 karma premium for grammatical correctness.

- either that, or for signalling group loyalty.

Comment author: Kevin 20 April 2010 08:51:37AM -1 points [-]

Determinism, with a little bit of free will here and there

Comment author: PlaidX 19 April 2010 04:51:40PM 0 points [-]

Reading my RSS feeds, cuz I'm bored.

Comment deleted 16 June 2010 04:06:03AM [-]
Comment author: Alicorn 16 June 2010 04:52:17AM 1 point [-]
Comment author: abramdemski 17 June 2010 03:02:35AM 1 point [-]


Comment author: anonym 21 April 2010 07:07:41AM *  1 point [-]

A bit off topic, but you've got me thinking about Babylon 5, so here are a few more questions:

  • Who are you? (The Vorlon Question)
  • What do you want? (The Shadow Question)
  • Why are you here? & Do you have anything worth living for? (Lorien's Questions)
  • Where are you going? (Techno Mage's Question)
Comment author: Alicorn 21 April 2010 07:29:24AM 2 points [-]
Comment author: anonym 21 April 2010 07:35:29AM 1 point [-]

Ha, I'd totally forgotten about that excellent post of yours, which I did actually read at the time. Thanks for the reminder.

Comment author: MartinB 21 April 2010 10:21:00AM 0 points [-]

It continues with 'who do you serve and who do you trust'. http://www.youtube.com/watch?v=RWKNGNh1I-4

Now B5 is an amazingly well done show, but it's not particularly singularitarian. (http://en.wikipedia.org/wiki/Deathwalker)

Comment author: thatrenfrewguy 20 April 2010 12:18:55AM 1 point [-]

I was, before reading this, reading The Selfish Gene by Dawkins. Why? Because it is well written. Why? Because it relates to the topic I chose for my IB EE. Why? Because I enjoy learning about alternative descriptions of functional units of evolution and of organisms, and Dawkins treats genes like the puppet master of evolution. Why? Because I am trying to bridge a cultural gap with my father (reasons for that are for somewhere else). Why? Because it is refreshingly different from the magic realism I had been reading.

Comment author: Sperling 20 April 2010 07:04:07PM 0 points [-]

"Define
And thus expunge
The ought
The should
...
Truth's to be sought
In Does and Doesn't"

-B. F. Skinner (an interesting soundbite from an otherwise misguided disagreement with Chomsky over language acquisition)

Comment author: byrnema 19 April 2010 09:16:43PM 0 points [-]

I think that what you do (and why you do it) follow your beliefs, and that's why interrogating beliefs is the more fundamental question.

For example, you might do 'X' because you believe 'X' matters, or, more meta -- and more fundamental -- you might believe that whether you do 'X' or not matters because you believe that what you do matters. This is only true within a particular belief structure.

Comment author: JRMayne 21 April 2010 01:36:11PM 2 points [-]

I think what you do and why you do it generates beliefs and actions more than people think.

One particular example is high-powered law students joining big firms for just a little while until they end up doing [thing they like/thing they think is good for society]. Walking away from buckets of money once the buckets are coming is very, very hard and few can do it. At that point, rationalization sets in. The valuation of money increases, because that becomes a self-worth measurement.

As has been pointed out on LW, people do things that they want to do and then make up reasons in their head why that's good. (Being a jerk educates the other guy/The government's just going to waste the money if I pay the proper amount of taxes/The government uses money very efficiently, but I shouldn't pay extra because that would defeat the system/If he didn't want his money taken, he shouldn't have been so stupid as to trust me/I cheat because I should win, and only bad luck causes me not to win, so cheating brings a more just result.)

This also applies to jobs. People find reasons to value/overvalue their jobs because they've landed there. Part of this may be pre-existing belief, but this gets cemented in. I think people's actions and jobs end up morphing beliefs - which is one reason why examining actions is important.

Comment author: Oscar_Cunningham 19 April 2010 09:48:51PM 1 point [-]

The problem being that we often find ourselves doing things for reasons other than the ones we think we do. Robin Hanson will tell you that.

Comment author: byrnema 20 April 2010 12:09:29AM 0 points [-]

Why is this a problem? (Along the lines of, why do you need to accurately know the reasons why you do things?) I'm trying to relate. I see beliefs as something I need in order to decide what to do. As long as I'm doing what I decide to do, why would I worry about varied reasons for doing it?

Comment author: roundsquare 20 April 2010 08:57:59AM 1 point [-]

As long as I'm doing what I decide to do, why would I worry about varied reasons for doing it?

One reason that comes to mind is that you might be avoiding something you should be doing.

Comment author: XiXiDu 20 April 2010 11:36:06AM 0 points [-]

I'm doing my best.

"Narns, Humans, Centauri… we all do what we do for the same reason: because it seems like a good idea at the time." -- G’Kar, Babylon 5