Less Wrong: Open Thread, September 2010

3 Post author: matt 01 September 2010 01:40AM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Comments (610)

Comment author: matt 01 September 2010 01:27:30AM 8 points [-]

Singularity Summit AU
Melbourne, Australia
September 7, 11, 12 2010

More information including speakers at http://summit.singinst.org.au.
Register here.

Comment author: wedrifid 02 September 2010 02:46:15AM 1 point [-]

Wow. Next Tuesday and in my hometown! Nice.

Comment author: meta_ark 02 September 2010 11:58:24AM 0 points [-]

Sigh... I would consider flying down from Sydney to go to it, but sadly I'm in a show that whole week and have to miss out entirely. Ah well. Hopefully they'll have the audio online, but I would have loved to mingle with people who share my worldview.

Comment author: MartinB 01 September 2010 02:37:53AM 12 points [-]

[tl;dr: quest for some specific cryo data references]

I'm preparing to do my own, deeper evaluation of cryonics. For that I read through many of the case reports on the Alcor and CI pages. Due to my geographic situation I am particularly interested in whether it is actually possible to get a body from Europe (Germany) over to their respective facilities. The reports are quite interesting and provide lots of insight into the process, but what I am still looking for are the unsuccessful reports: cases in which a signed-up member was not brought in due to legal interference, next-of-kin decisions, and the like. Is anyone aware of a detailed log of those? I would also like to see how many of the signed clients are lost due to the circumstances of their death.

Comment author: Daniel_Burfoot 01 September 2010 02:48:18AM 4 points [-]

Anyone here working as a quant in the finance industry, and have advice for people thinking about going into the field?

Comment author: xamdam 01 September 2010 03:34:23AM *  3 points [-]

Ping Arthur Breitman on Facebook or LinkedIn. He is part of the NYC LW meetup, and a quant at Goldman.

Comment author: kim0 01 September 2010 09:08:23AM 3 points [-]

I am, and I am planning to leave it for higher, more typical pay elsewhere. From my viewpoint, the field is terribly overrated and underpaid.

Comment author: Daniel_Burfoot 01 September 2010 04:18:01PM 4 points [-]

Can you expand on this? Do you think your experience is typical?

Comment author: kim0 03 September 2010 08:19:18AM 4 points [-]

Most places I have worked, the reputation of the job has been quite different from the actual job. I have compared my experiences with those of friends and colleagues, and they are relatively similar. Having an M.Sc. in physics and lots of programming experience has made it possible for me to hold many different kinds of engineering jobs, and thus gain more varied experience.

My conclusion is that the anthropic principle holds for me in the work place, so that each time I experience Dilbertesque situations, they are representative of typical work situations. So yes, I do think my work situation is typical.

My current job doing statistical analysis for stock analysts pays $73,000, while the average pay elsewhere is $120,000.

Comment author: James_Miller 01 September 2010 04:36:53AM *  0 points [-]

Eliezer has been accused of delusions of grandeur for his belief in his own importance. But if Eliezer is guilty of such delusions then so am I and, I suspect, are many of you.

Consider two beliefs:

  1. The next millennium will be the most critical in mankind’s existence because in most of the Everett branches arising out of today mankind will go extinct or start spreading through the stars.

  2. Eliezer’s work on friendly AI makes him the most significant determinant of our fate in (1).

Let 10^N represent the average across our future Everett branches of the total number of sentient beings whose ancestors arose on earth. If Eliezer holds beliefs (1) and (2) then he considers himself the most important of these beings and the probability of this happening by chance is 1 in 10^N. But if (1) holds then the rest of us are extremely important as well through how our voting, buying, contributing, writing… influences mankind’s fate. Let's say that makes most of us one of the trillion most important beings who will ever exist. The probability of this happening by chance is 1 in 10^(N-12).

If N is at least 18 it’s hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.
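The arithmetic behind the comparison can be sketched quickly (N = 18 is just the illustrative value from above; exact rational arithmetic avoids floating-point noise):

```python
from fractions import Fraction

N = 18  # illustrative value from the argument above

# Belief (2): being the single most important of 10^N beings.
p_most_important = Fraction(1, 10 ** N)

# Being merely among the trillion (10^12) most important beings.
p_top_trillion = Fraction(1, 10 ** (N - 12))

# The two probabilities differ by exactly a factor of a trillion,
# yet both are astronomically small.
ratio = p_top_trillion / p_most_important
print(p_most_important)  # 1/1000000000000000000
print(p_top_trillion)    # 1/1000000
print(ratio)             # 1000000000000
```

The point of the sketch: on a log scale, the gap between 10^-18 and 10^-6 looks small next to the gap between either number and "ordinary" probabilities, which is what the argument leans on.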

Comment author: wedrifid 01 September 2010 04:58:45AM *  3 points [-]

If N is at least 18 it’s hard to think of a rational criterion under which believing you are 1 in 10^N is delusional whereas thinking you are 1 in 10^(N-12) is not.

Really? How about "when you are, in fact, 1/10^(N-12) and have good reason to believe it"? Throwing in a large N doesn't change the fact that 10^N is still 1,000,000,000,000 times larger than 10^(N-12) and nor does it mean we could not draw conclusions about belief (2).

(Not commenting on Eliezer here, just suggesting the argument is not all that persuasive to me.)

Comment author: James_Miller 01 September 2010 05:05:31AM 1 point [-]

To an extremely good approximation one in a million events don't ever happen.

Comment author: wedrifid 01 September 2010 05:11:32AM *  2 points [-]

To an extremely good approximation this Everett Branch doesn't even exist. Well, it wouldn't if I used your definition of 'extremely good'.

Comment author: James_Miller 01 September 2010 05:28:29AM *  1 point [-]

Your argument seems to be analogous to the false claim that it's remarkable that a golf ball landed exactly where it did (regardless of where it did land) because the odds of that happening were extremely small.

I don't think my argument is analogous because there is reason to think that being one of the most important people to ever live is a special happening clearly distinguishable from many, many others.

Comment author: gwern 01 September 2010 01:44:17PM 1 point [-]

Yet they are quite easy to generate - flip a coin a few times.

Comment author: Snowyowl 01 September 2010 12:03:21PM 1 point [-]

I agree. Somebody has to be the most important person ever. If Eliezer really has made significant contributions to the future of humanity, he's much more likely to be that most important person than a random person out of 10^N candidates would be.

Comment author: James_Miller 01 September 2010 02:25:24PM 1 point [-]

The argument would be that Eliezer should doubt his own ability to reason if his reason appears to cause him to think he is 1 in 10^N. My claim is that if this argument is true everyone who believes in (1) and thinks N is large should, to an extremely close approximation, have just as much doubt in their own ability to reason as Eliezer should have in his.

Comment author: Snowyowl 01 September 2010 03:12:45PM 1 point [-]

Agreed. Not sure if Eliezer actually believes that, but I take your point.

Comment author: KevinC 01 September 2010 05:02:39AM *  3 points [-]

Can you provide a cite for the notion that Eliezer believes (2)? Since he's not likely to build the world's first FAI in his garage all by himself, without incorporating the work of any of the thousands of other people working on FAI and its necessary component technologies, I think it would be a bit delusional of him to believe (2) as stated. Which is not to suggest that his work is not important, or even among the most significant work done in the history of humankind (even if he fails, others can build on it and find the way that works). But that's different from the idea that he, alone, is The Most Significant Human Who Will Ever Live. I don't get the impression that he's that cocky.

Comment author: James_Miller 01 September 2010 05:19:27AM 2 points [-]

Eliezer has been accused on LW of having or possibly having delusions of grandeur for essentially believing in (2). See here:

http://lesswrong.com/lw/2lr/the_importance_of_selfdoubt/

My main point is that even if Eliezer believes in (2) we can't conclude that he has such delusions unless we also accept that many LW readers also have such delusions.

Comment author: JamesAndrix 01 September 2010 05:41:48AM 4 points [-]

(2) is ambiguous. Getting to the stars requires a number of things to go right. Eliezer is of relatively little use in preventing a major nuclear exchange in the next 10 years, or bad nanotech, or garage-made bioweapons, or even UFAI development.

FAI is just the final thing that needs to go right, everything else needs to go mostly right until then.

Comment author: Snowyowl 01 September 2010 11:19:59AM 2 points [-]

And I can think of a few ways humanity can get to the stars even if FAI never happens.

Comment author: prase 01 September 2010 02:15:03PM 1 point [-]

I think I don't understand (1) and its implications. How does the fact that in most of the branches we go extinct imply that we are the most important couple of generations (this is how I interpret the trillion)? Our importance lies in our decisions. These decisions influence the number of branches in which people die out. If we take (1) as given, it means we weren't successful in mitigating the existential risk, leaving no place to exercise our decisions and thus our importance.

Comment author: rwallace 01 September 2010 04:54:57PM 2 points [-]

It's not about the numbers, and it's not about Eliezer in particular. Think of it this way:

Clearly, the development of interstellar travel (if we successfully accomplish this) will be one of the most important events in the history of the universe.

If I believe our civilization has a chance of achieving this, then in a sense that makes me, as a member of said civilization, important. This is a rational conclusion.

If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who does this. The problem is that nobody is going to do this, because building a starship in your garage is simply impossible; it's just too hard a job to be done that way.

Comment author: Houshalter 03 September 2010 01:30:04AM 0 points [-]

If I believe I'm going to build a starship in my garage, that makes me delusional. The problem isn't the odds against me being the one person who does this. The problem is that nobody is going to do this, because building a starship in your garage is simply impossible; it's just too hard a job to be done that way.

You assume it is. But maybe you will invent AI and then use it to design a plan for building a starship in your garage. So it's not simply impossible; it's just unknown, and even if you could, there's no reason to believe it would be a good decision. But hey, in a hundred years, who knows what people will build in their garages, or the equivalent thereof. I imagine people a hundred years ago would find our projects pretty strange.

Comment author: Liron 01 September 2010 05:50:16AM 14 points [-]

I made this site last month: areyou1in1000000.com

Comment author: Snowyowl 01 September 2010 11:54:33AM *  0 points [-]

It seems that I am not one in a million. Pity.

Comment author: Oscar_Cunningham 01 September 2010 12:56:03PM 0 points [-]

Me neither. :(

Comment author: Erik 03 September 2010 07:38:53AM 0 points [-]

At least you're not alone.

Comment author: billswift 01 September 2010 10:16:22AM *  6 points [-]

The key to persuasion or manipulation is plausible appeal to desire. The plausibility can be pretty damned low if the desire is strong enough.

Comment author: billswift 01 September 2010 10:27:38AM *  6 points [-]

In "The Shallows", Nicholas Carr makes a very good argument that replacing the deep reading of books with the necessarily shallower reading done online, or of hypertext in general, causes changes in our brains that make deep thinking harder and less effective.

Thinking about "The Shallows" later, I realized that laziness and other avoidance behaviors will also tend to become ingrained in your brain, at the expense of your self-direction/self-discipline behaviors they are replacing.

Another problem with the Web, that wasn't discussed in "The Shallows", is that hypertext channels you to the connections the author chooses to present. Wide and deep reading, such that you make the information presented yours, gives you more background knowledge that helps you find your own connections. It is in the creation of your own links within your own mind that information is turned into knowledge.

Carr actually has two other general theses in the book; that neural plasticity to some degree undercuts the more extreme claims of evolutionary psych, which I have some doubts about and am doing further reading on; and he winds up with a pretty silly argument about the implausibility of AI. Fortunately, his main argument about the problems with using hypertext is totally independent of these two.

Comment author: JohnDavidBustard 01 September 2010 03:43:05PM 2 points [-]

It is very difficult to distinguish rationalisations of the discomfort of change from actual consequences. If this belief that hypertext leads to a less sophisticated understanding than reading a book were true, what measurable behaviour would change?

Comment author: PhilGoetz 01 September 2010 04:34:46PM 7 points [-]

I haven't read Nicholas Carr, but I've seen summaries of some of the studies used to claim that book reading results in more comprehension than hypertext reading. All the ones I saw are bogus. They all use, for the hypertext reading, a linear extract from a book, broken up into sections separated by links. Sometimes the links are placed in somewhat arbitrary places. Of course a linear text can be read more easily linearly.

I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic.

A fairer test would be to give students a topic to study, with the same material, but some given books, and some given the book material organized and indexed in a competent way as hypertext.

Wide and deep reading, such that you make the information presented yours, gives you more background knowledge that helps you find your own connections.

Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.

Comment author: xamdam 01 September 2010 08:55:10PM 3 points [-]

I believe hypertext reading is deeper, and that this is obvious, almost true by definition. Non-hypertext reading is exactly 1 layer deep. Hypertext lets the reader go deeper. Literally. You can zoom in on any topic.

It has deeper structure, but that is not necessarily user-friendly. A great textbook will have different levels of explanation, an author-designed depth-diving experience. Depending on author, material, you and the local wikipedia quality that might be a better or worse learning experience.

Hypertext reading lets you find your own connections, and lets you find background knowledge that would otherwise simply be edited out of a book.

Yep, definitely a benefit, but not without a trade-off. Often a good author will set you up with connections better than you can.

Comment author: allenwang 01 September 2010 09:04:58PM *  7 points [-]

It seems to me that the main reason most hypertext sources produce shallower reading is not the hypertext itself, but that the barriers to publication are so low that the quality of most written work online is much lower than that of printed material. For example, this post is something I might have spent 3 minutes thinking about before posting, whereas a printed publication would have much more time to mature, and also many more filters, such as publishers, to take out the noise.

It is more likely that book reading seems deeper because the quality is better.

Also, it wouldn't be difficult to test this hypothesis with the print and online editions of a newspaper, since they both contain the same material.

Comment author: Kaj_Sotala 02 September 2010 09:28:09PM *  10 points [-]

It seems to me like "books are slower to produce than online material, so they're higher quality" would belong to the class of statements that are true on average but close to meaningless in practice. There's enormous variance in the quality of both digital and printed texts, and whether you absorb more good or bad material depends more on which digital/print sources you seek out than on whether you prefer digital or print sources overall.

Comment author: jacob_cannell 01 September 2010 09:15:54PM 2 points [-]

I like allenwang's reply below, but there is another consideration with books.

Long before hyperlinks, books evolved comprehensive indices and references, and these allow humans to relatively easily and quickly jump between topics in one book and across books.

Now are the jumps we employ on the web faster? Certainly. But the difference is only quantitative, not qualitative, and the web version isn't enormously faster.

Comment author: whpearson 01 September 2010 11:52:32AM 7 points [-]

I'm writing a post on systems to govern resource allocation, is anyone interested in having any input into it or just proof reading it?

This is the intro/summary:

How do we know what we know? This is an important question; however, there is another question which in some ways is more fundamental: why did we choose to devote resources to knowing those things in the first place?

For a physical entity, the production of knowledge takes resources that could be used for other things, so the problem expands to how to use resources in general. I'll call this the resource allocation problem (RAP). It is widespread, occurring in the design of organisations as well as computer systems.

The problem is this: we want to allocate resources in a fashion that enables us to achieve our goals. What makes the problem interesting is that making a decision about how to allocate resources itself takes resources. This makes the formalisation of optimal solutions to this problem seemingly impossible.

However, you can formalise potential near-optimality: that is, look at how to design systems that can change the amount of resources allocated to their different activities with a minimum of overhead.

Comment author: Snowyowl 01 September 2010 12:35:46PM 6 points [-]

This sounds interesting and relevant. Here's my input: I read this back in 2008 and am summarising it from memory, so I may make a few factual errors. One of the problems facing large Internet companies like Google is the size of their server farms, which need cooling, power, space, etc. Optimising the algorithms used can help enormously. A particular program was responsible for allocating system resources so that the systems which were operating ran at near full capacity, and the rest could be powered down to save energy. Unfortunately, this program was executed many times a second, to the point where the savings it created were much less than the power it used. The fix was simply to execute it less often. Running the program took about the same amount of time no matter how many inefficiencies it detected, so it was not worth checking the entire system for new problems if you only expected to find one or two.

My point: To reduce resources spent on decision-making, make bigger decisions but make them less often. Small problems can be ignored fairly safely, and they may be rendered irrelevant once you solve the big ones.
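The trade-off in the anecdote can be sketched with made-up numbers (the run cost, per-fix savings, and problem rate below are all hypothetical, chosen only to illustrate the shape of the curve):

```python
RUN_COST_J = 50.0          # joules burned per run of the allocator (assumed)
SAVING_PER_FIX_J = 200.0   # joules saved per inefficiency fixed (assumed)
PROBLEM_RATE = 1 / 60.0    # new inefficiencies appearing per second (assumed)

def net_savings_per_second(runs_per_second):
    # Each run fixes every outstanding inefficiency, so as long as we run
    # at all, gross savings track the rate at which problems appear, not
    # the rate at which we check; run cost, however, scales with frequency.
    gross = PROBLEM_RATE * SAVING_PER_FIX_J
    cost = runs_per_second * RUN_COST_J
    return gross - cost

# Checking many times per second burns far more than it saves;
# checking about as often as problems actually appear comes out ahead.
print(net_savings_per_second(10.0))      # negative: overhead dominates
print(net_savings_per_second(1 / 60.0))  # positive: overhead amortised
```

Under this toy model, the break-even run frequency is just (gross savings rate) / (cost per run), which is the "make bigger decisions less often" point in quantitative form.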

Comment author: Oscar_Cunningham 01 September 2010 01:07:42PM *  4 points [-]

I was having similar thoughts the other day while watching a reality TV show where designers competed for a job from Philippe Starck. Some of them spent ages trying to think of a suitable project, and then didn't have enough time to complete it; some of them launched into the first plan they had and it turned out rubbish. Clearly they needed some meta-planning. But how much? Well, they'll need to do some meta-meta planning...

I'd be happy to give your post a read through.

ETA: The buck stops immediately, of course.

Comment author: xamdam 01 September 2010 08:57:10PM 1 point [-]

Upvoted for importance of subject - looking forward to the post. Have you read up on Information Foraging?

Comment author: Spurlock 01 September 2010 12:33:27PM 20 points [-]

Not sure what the current state of this issue is, apologies if it's somehow moot.

I would like to say that I strongly feel Roko's comments and contributions (save one) should be restored to the site. Yes, I'm aware that he deleted them himself, but it seems to me that he acted hastily and did more harm to the site than he probably meant to. With his permission (I'm assuming someone can contact him), I think his comments should be restored by an admin.

Since he was such a heavy contributor, and his comments abound(ed) on the sequences (particularly Metaethics, if memory serves), it seems that a large chunk of important discussion is now full of holes. To me this feels like a big loss. I feel lucky to have made it through the sequences before his egress, and I think future readers might feel left out accordingly.

So this is my vote that, if possible, we should proactively try to restore his contributions up to the ones triggering his departure.

Comment author: Vladimir_Nesov 01 September 2010 03:29:13PM *  4 points [-]

He did give a permission to restore the posts (I didn't ask about comments), when I contacted him originally. There remains the issue of someone being technically able to restore these posts.

Comment author: matt 02 September 2010 04:16:28AM 4 points [-]

We have the technical ability, but it's not easy. We wouldn't do it without Roko's and Eliezer's consent, and a lot of support for the idea. (I wouldn't expect Eliezer to consent to restoring the last couple of days of posts/comments, but we could restore everything else.)

Comment author: wedrifid 02 September 2010 04:22:04AM 4 points [-]

It occurs to me that there is a call for someone unaffiliated to maintain a (scraped) backup of everything that is posted in order to prevent such losses in the future.

Comment author: Morendil 01 September 2010 01:23:13PM 6 points [-]

The journalistic version:

[T]hose who abstain from alcohol tend to be from lower socioeconomic classes, since drinking can be expensive. And people of lower socioeconomic status have more life stressors [...] But even after controlling for nearly all imaginable variables - socioeconomic status, level of physical activity, number of close friends, quality of social support and so on - the researchers (a six-member team led by psychologist Charles Holahan of the University of Texas at Austin) found that over a 20-year period, mortality rates were highest for those who had never been drinkers, second-highest for heavy drinkers and lowest for moderate drinkers.

The abstract from the actual study (on "Late-Life Alcohol Consumption and 20-Year Mortality"):

Controlling only for age and gender, compared to moderate drinkers, abstainers had a more than 2 times increased mortality risk, heavy drinkers had 70% increased risk, and light drinkers had 23% increased risk. A model controlling for former problem drinking status, existing health problems, and key sociodemographic and social-behavioral factors, as well as for age and gender, substantially reduced the mortality effect for abstainers compared to moderate drinkers. However, even after adjusting for all covariates, abstainers and heavy drinkers continued to show increased mortality risks of 51 and 45%, respectively, compared to moderate drinkers. Findings are consistent with an interpretation that the survival effect for moderate drinking compared to abstention among older adults reflects 2 processes. First, the effect of confounding factors associated with alcohol abstention is considerable. However, even after taking account of traditional and nontraditional covariates, moderate alcohol consumption continued to show a beneficial effect in predicting mortality risk.

(Maybe the overlooked confounding factor is "moderation" by itself, and people who have a more relaxed, middle-of-the-road attitude towards life's pleasures tend to live longer?)

Comment author: jimrandomh 01 September 2010 01:33:39PM 0 points [-]

That's very interesting, but I'm not sure I trust the article's statistics, and I don't have access to the full text. Could someone take a closer look and confirm that there are no shenanigans going on?

Comment author: Vladimir_M 02 September 2010 05:49:40AM *  4 points [-]

The study looks at people over 55 years of age. It is possible that there is some sort of selection effect going on -- maybe decades of heavy drinking will weed out all but the most alcohol-resistant individuals, so that those who are still drinking heavily at 55-60 without ever having been harmed by it are mostly immune to the doses they're taking. From what I see, the study controls for past "problem drinking" (which they don't define precisely), but not for people who drank heavily without developing a drinking problem, but couldn't handle it any more after some point and decided themselves to cut back.

Also, it should be noted that papers of this sort use pretty conservative definitions of "heavy drinking." In this paper, it's defined as more than 42 grams of alcohol per day, which amounts to about a liter of beer or three small glasses of wine. While this level of drinking would surely be risky for people who are exceptionally alcohol-intolerant or prone to alcoholism, lots of people can handle it without any problems at all. It would be interesting to see a similar study that would make a finer distinction between different levels of "heavy" drinking.
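As a rough sanity check on that conversion (the 5% and 12% ABV figures and the 150 mL glass size are my assumptions, not the paper's):

```python
ETHANOL_DENSITY_G_PER_ML = 0.789  # density of pure ethanol

def ethanol_grams(volume_ml, abv):
    """Grams of ethanol in a drink of the given volume and strength."""
    return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML

litre_of_beer = ethanol_grams(1000, 0.05)   # one litre at 5% ABV (assumed)
three_wines = ethanol_grams(3 * 150, 0.12)  # three 150 mL glasses at 12% (assumed)

# Both land in the neighbourhood of the paper's 42 g/day threshold.
print(litre_of_beer)  # roughly 39 g
print(three_wines)    # roughly 43 g
```

So the "liter of beer or three small glasses of wine" gloss checks out, give or take the assumed drink strengths.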

Comment author: Vladimir_M 02 September 2010 08:50:57PM *  1 point [-]

The discussion of the same paper on Overcoming Bias has reminded me of another striking correlation I read about recently:
http://www.marginalrevolution.com/marginalrevolution/2010/07/beer-makes-bud-wiser.html

It seems that for whatever reason, abstinence does correlate with lower performance on at least some tests of mental ability. The question is whether the controls in the study cover all the variables through which these lower abilities might have manifested themselves in practice; to me it seems quite plausible that the answer could be no.

Comment author: cousin_it 02 September 2010 09:16:16PM *  3 points [-]

These are fine conclusions to live by, as long as moderate drinking doesn't lead you to heavy drinking, cirrhosis and the grave. Come visit Russia to take a look.

Comment author: homunq 01 September 2010 03:52:49PM *  17 points [-]

I had a top-level post which touched on an apparently-forbidden idea downvoted to a net of around -3 and then deleted. This left my karma pinned (?) at 0 for a few months. I am not sure of the reasons for this, but suspect that the forbidden idea was partly to blame.

My karma is now back up to where I could make a top-level post. Do people think that a discussion forum on the moderation and deletion policies would be beneficial? I do, even if we all had to do silly dances to avoid mentioning the specifics of any forbidden idea(s). In my opinion, such dances are both silly and unjustified; but I promise that I'd do them and encourage them if I made such a post, out of respect for the evident opinions of others, and for the asymmetrical (though not one-sided) nature of the alleged danger.

I would not be offended if someone else "took the idea" and made such a post. I also wouldn't mind if the consensus is that such a post is not warranted. So, what do you think?

Comment author: PhilGoetz 01 September 2010 04:26:39PM 8 points [-]

If there's just one topic that's banned, then no. If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe. Moderation and deletion is very rare here.

I would like moderation or deletion to include sending an email to the affected person - but this relies on the user giving a good email address at registration.

Comment author: homunq 01 September 2010 04:32:40PM *  4 points [-]

My registration email is good, and I received no such email. I can also be reached under the same user name using English wikipedia's "contact user" function (which connects to the same email.)

Suggestions like your email idea would be the main purpose of having the discussion (here or in a top-level post). I don't think that some short-lived chatter would change a strongly-held belief, and I have neither the desire nor the capability to unseat the benevolent-dictator-for-life. However, I think that any partial steps towards epistemic glasnost, such as an email to deleted-post authors, or at least their ability to view the responses to their own deleted post, would be helpful.

Comment author: Emile 01 September 2010 04:40:01PM 5 points [-]

If it's increased to 2 topics - and "No riddle theory" is one I hadn't heard before - then maybe.

I'm pretty sure that "riddle theory" is a reference to Roko's post, not a new banned topic.

Comment author: Airedale 01 September 2010 04:35:21PM 4 points [-]

I think such discussion wouldn't necessarily warrant its own top-level post, but I think it would fit well in a new Meta thread. I have been meaning to post such a thread for a while, since there are also a couple of meta topics I would like to discuss, but I haven't gotten around to it.

Comment author: Emile 01 September 2010 04:37:54PM 2 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I don't. Possible downsides are flame wars among people who support different types of moderation policies (and there are bound to be some - self-styled rebels who pride themselves on challenging the status quo and going against groupthink are not rare on the net), and I don't see any possible upsides. Having a Benevolent Dictator For Life works quite well.

See this on Meatball Wiki, that has quite a few pages on organization of Online Communities.

Comment author: homunq 01 September 2010 05:58:07PM 7 points [-]

I don't want a revolution, and don't believe I'll change the mind of somebody committed not to thinking too deeply about something. I just want some marginal changes.

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did. I think everyone should. I suspect there may be others like me.

I also think that there should be public ground rules as to what is safe. I think it is possible to state such rules so that they are relatively clear to anyone who has stepped past them, somewhat informative to those who haven't, and not particularly inviting of experimentation. I think that the presence of such ground rules would allow some discussion as to the danger or non-danger of the forbidden idea and/or as to the effectiveness or ineffectiveness of suppressing it. Since I believe that the truth is "non-danger" and "ineffectiveness", and the truth will tend to win the argument over time, I think that would be a good thing.

Comment author: JGWeissman 01 September 2010 06:13:37PM *  0 points [-]

the truth is "non-danger"

Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.

Comment author: homunq 01 September 2010 06:49:44PM *  4 points [-]

Look, my post addressed these issues, and I'd be happy to discuss them further, if the ground rules were clear. Right now, we're not having that discussion; we're talking about whether that discussion is desirable, and if so, how to make it possible. I think that the truth will out; if you're right, you'll probably win the discussion. So although we disagree on danger, we should agree on discussing danger within some well-defined ground rules which are comprehensibly summarized in some safe form.

Comment author: wedrifid 02 September 2010 03:14:49AM -1 points [-]

I think that the truth will out

Really? Go read the sequences! ;)

Comment author: Emile 02 September 2010 08:21:43AM 2 points [-]

I think Roko got a pretty clear explanation of why his post was deleted. I don't think I did.

It's probably better to solve this by private conversation with Eliezer, than by trying to drum up support in an open thread.

Too much meta discussion is bad for a community.

Comment author: homunq 02 September 2010 09:30:06AM *  0 points [-]

The thing I'm trying to drum up support for is an incremental change in current policy; for instance, a safe and useful version of the policy being publicly available. I believe that's possible, and I believe it is more appropriate to discuss this in public.

(Actually, since I've been making noise about this, and since I've promised not to reveal it, I now know the secret. No, I won't tell you, I promised that. I won't even tell who told me, even though I didn't promise not to, because they'd just get too many requests to reveal it. But I can say that I don't believe in it, and also that I think [though others might disagree] that a public policy could be crafted which dealt with the issue without exacerbating it, even if it were real.)

Comment author: timtyler 02 September 2010 08:26:15AM *  2 points [-]
Comment author: homunq 02 September 2010 09:33:06AM 0 points [-]

Your sarcasm would not be obvious if I didn't recognize your username.

Comment author: timtyler 02 September 2010 09:57:02AM *  0 points [-]

Hmm - I added a link to the source, which hopefully helps to explain.

Comment author: homunq 02 September 2010 03:41:03PM 0 points [-]

Quotes can be used sarcastically or not.

Comment author: timtyler 02 September 2010 07:51:14PM *  0 points [-]

I don't think I was being sarcastic. I won't take the juices out of the comment by analysing it too completely - but a good part of it was the joke of comparing Less Wrong with Fight Club.

We can't tell you what materials are classified - that information is classified.

Comment author: Perplexed 01 September 2010 06:47:49PM *  14 points [-]

Do people think that a discussion forum on the moderation and deletion policies would be beneficial?

I would like to see a top-level post on moderation policy. But I would like for it to be written by someone with moderation authority. If there are special rules for discussing moderation, they can be spelled out in the post and commenters can abide by them.

As a newcomer here, I am completely mystified by the dark hints of a forbidden topic. Every hypothesis I can come up with as to why a topic might be forbidden founders when I try to reconcile with the fact that the people doing the forbidding are not stupid.

Self-censorship to protect our own mental health? Stupid. Secrecy as a counter-intelligence measure, to safeguard the fact that we possess some counter-measure capability? Stupid. Secrecy simply because being a member of a secret society is cool? Stupid, but perhaps not stupid enough to be ruled out. On the other hand, I am sure that I haven't thought of every possible explanation.

It strikes me as perfectly reasonable if certain topics are forbidden because discussion of such topics has historically been unproductive, has led to flame wars, etc. I have been wandering around the internet long enough to understand and even appreciate somewhat arbitrary, publicly announced moderation policies. But arbitrary and secret policies are a prescription for resentment and for time wasted discussing moderation policies.

Edit: typo correction - insert missing words

Comment author: homunq 01 September 2010 07:29:12PM 2 points [-]

I think it's safe to tell you that your second two hypotheses are definitely not on the right track.

Comment author: wnoise 01 September 2010 07:50:16PM *  5 points [-]

Self-censorship to protect our own mental health? Stupid.

My gloss on it is that this is at best a minor part, though it figures in.

The topic is an idea that has horrific implications that are supposedly made more likely the more one thinks about it. Thinking about it in order to figure out what it may be is a bad idea because you may come up with something else. And if the horrific is horrific enough, even a small rise in the probability of it happening would be very bad in expectation.

More explaining why many won't think it dangerous at all. This doesn't directly point anything out, but any details do narrow the search-space: V fnl fhccbfrqyl orpnhfr lbh unir gb ohl va gb fbzr qrpvqrqyl aba-znvafgernz vqrnf gung ner pbzzba qbtzn urer.

I personally don't buy this, and think the censorship is an overblown reaction. Accepting it is definitely not crazy, however, especially given the stakes, and I'm willing to self-censor to some degree, even though I hate the heavy-handed response.

Comment author: cata 01 September 2010 08:00:09PM 7 points [-]

Another perspective: I read the forbidden idea, understood it, but I have no sense of danger because (like the majority of humans) I don't really live my life in a way that's consistent with all the implications of my conscious rational beliefs. Even though it sounded like a convincing chain of reasoning to me, I find it difficult to have a personal emotional reaction or change my lifestyle based on what seem to be extremely abstract threats.

I think only people who are very committed rationalists would find that there are topics like this which could be mental health risks. Of course, that may include much of the LW population.

Comment author: Perplexed 01 September 2010 08:47:58PM 9 points [-]

How about an informed consent form:

  • (1) I know that the SIAI mission is vitally important.
  • (2) If we blow it, the universe could be paved with paper clips.
  • (3) Or worse.
  • (4) I hereby certify that points 1 & 2 do not give me nightmares.
  • (5) I accept that if point 3 gives me nightmares that points 1 and 2 did not give me, then I probably should not be working on FAI and should instead go find a cure for AIDS or something.
Comment author: wedrifid 02 September 2010 03:10:12AM 1 point [-]

I like it!

Although 5 could be easily replaced by "Go earn a lot of money in a startup, never think about FAI again but still donate money to SIAI because you remember that you have some good reason to that you don't want to think about explicitly."

Comment author: Snowyowl 02 September 2010 01:27:43PM *  0 points [-]

I feel you should detail point (1) a bit more (explain in more detail what the SIAI intends to do), but I agree with the principle. Upvoted.

Comment author: Kaj_Sotala 02 September 2010 08:57:51PM 6 points [-]

I read the idea, but it seemed to have basically the same flaw as Pascal's wager does. On that ground alone it seemed like it shouldn't be a mental risk to anyone, but it could be that I missed some part of the argument. (Didn't save the post.)

Comment author: homunq 01 September 2010 11:27:12PM 0 points [-]

My gloss on it is that this is at best a minor part, though it figures in.

I think that, even if this is a minor part of the reasoning for those who (unlike me) believe in the danger, it could easily be the best, most consensus* basis for an explicit deletion policy. I'd support such a policy, and definitely think a secret policy is stupid for several reasons.

*no consensus here will be perfect.

Comment author: JohnDavidBustard 01 September 2010 04:41:57PM 1 point [-]

Is there a rough idea of how the development of AI will be achieved? I.e., something like the whole brain emulation roadmap? Although we can imagine a silver-bullet-style solution, AI as a field seems stubbornly gradual. When faced with practical challenges, AI development follows the path of much of engineering, with steady development of sophistication and improved results, but few leaps. It is as if the problem itself is a large collection of individual challenges whose solution requires masses of training data and techniques that do not generalise well.

That is why I prefer the destructive scanning and brain emulation route: I can much more easily imagine the steps necessary to achieve it. Assuming an approximate model is sufficient, this would be a simple but world-changing achievement, and one that society seems completely unprepared for. Do any Less Wrong readers know of strong arguments against this view (assuming simple emulation is sufficient)? Or know of any hypothesised or fictional accounts of likely social outcomes?

Comment author: rwallace 01 September 2010 05:27:42PM 4 points [-]

Your assessment is along the right lines, though if anything a little optimistic; uploading is an enormously difficult engineering challenge, but at least we can see in principle how it could be done, and recognize when we are making progress, whereas with AI we don't yet even have a consensus on what constitutes progress.

I'm personally working on AI because I think that's where my talents can be best used, and I think it can deliver useful results well short of human equivalence, but if you figure you'd rather work on uploading, that's certainly a reasonable choice.

As for what uploads will do if and when they come to exist, well, there's going to be plenty of time to figure that out, because the first few of them are going to spend the first few years having conversations like,

"Uh... a hatstand?"

"Sorry Mr. Jones, that's actually a picture of your wife. I think we need to revert yesterday's bug fixes to your visual cortex."

But e.g. The Planck Dive is a good story set in a world where that technology is mature enough to be taken for granted.

Comment author: sketerpot 01 September 2010 07:48:47PM 4 points [-]

"Sorry Mr. Jones, that's actually a picture of your wife. I think we need to revert yesterday's bug fixes to your visual cortex."

The phrase "Fork me on GitHub" has just taken on a more sinister meaning.

Comment author: Houshalter 02 September 2010 09:33:49PM 1 point [-]

Emulating an entire brain, and finding out how the higher intelligence parts work and adapting them for practical purposes, are two entirely different achievements. Even if you could upload a brain onto your computer and let it run, it would be absurdly slow; however, simulating some kind of new optimization process we find in it might be plausible.

And either way, don't expect a singularity anytime soon with that. Scientists believe it took thousands of years after modern intelligence emerged for us to learn symbolic thought. Then thousands more before we discovered the scientific method. It's only now that we are finally discovering rational thinking. Maybe an AI could start where we left off, or maybe it would take years before it could even reach the level needed to do that, and then years more before it could make the jump to the next major improvement, assuming there even is one.

I'm not arguing against AI here at all. I believe a singularity will probably happen, and soon, but emulation is definitely not the way to go. Humans have way too many flaws that we don't even know would be possible to fix, even if we knew what the problem was in the first place.

What is the ultimate goal in the first place? To do something along the lines of replicating the brains of some of the most intelligent people and forcing them to work on improving humanity or developing AI? Has anyone considered that there is a far more realistic way of doing this through cloning, eugenics, education research, etc.? Of course no one would do it because it is immoral, but then again, what is the difference between the two?

Comment author: JohnDavidBustard 03 September 2010 09:34:27AM 0 points [-]

The question of the ultimate goal is a good one. I don't find arguments of value based on utilitarian values to be very convincing. In contrast I prefer enlightened self interest (other people are important because I like them and feel safe in a world where they are valued). So for me, some form of immortality is much more important than my capabilities (or something else's in the case of AI) in that state.

In addition, the efficiency gains of being able to 'step through' a simulation of a system, and the ability to perform repeatable automated experiments on such a system, convey enormous benefits (arguably this capability is what is driving our increasing productivity), so being able to simulate the brain may well lead to exponential improvements in our understanding of psychology and consciousness.

In terms of performance concerns, there is the potential for a step change in the economics of high-performance computing: while you may only be willing to spend a couple of thousand dollars on a computer to play games with, you may well take out a (lifetime?) mortgage to ensure you don't die. In terms of social consequences, one could imagine that the world economy would switch from supporting biology to supporting technology (it would be interesting to calculate the relative economic cost of supporting a simulated person rather than a biological one).

Recent work with brain-machine interfaces also points towards the enormous flexibility of the mind to adapt to new inputs and outputs. With the improved debugging capability of simulation, mental enhancement becomes substantially more feasible. As our understanding of such interactions improves, a virtual environment could be created which convincingly provides the illusion of a world of limitless abundance.

And then there is the possibility of replication: storing a person in a willing state and resetting them to that state after they complete a task. This leads to the enormous social consequence of convincingly disproving notions such as the soul, free will, etc., and creating a world where lives would lose their value in the same way that pirated software does. Such an event has the potential to change our entire culture, perhaps more than any other event, at least equivalent to the reduction in the influence of religion as a result of evolutionary theory and other scientific developments.

Comment author: Kaj_Sotala 01 September 2010 04:46:23PM 15 points [-]

Neuroskeptic's Help, I'm Being Regressed to the Mean is the clearest explanation of regression to the mean that I've seen so far.

Comment author: Vladimir_M 02 September 2010 04:00:14AM *  2 points [-]

When I tried making sense of this topic in the context of the controversies over IQ heritability, the best reference I found was this old paper:

Brian Mackenzie, Fallacious use of regression effects in the I.Q. controversy, Australian Psychologist 15(3):369-384, 1980

Unfortunately, the paper failed to achieve any significant impact, probably because it was published in a low-key journal long before Google, and it's now languishing in complete obscurity. I considered contacting the author to ask if it could be put for open access online -- it would be definitely worth it -- but I was unable to find any contact information; it seems like he retired long ago.

There is also another paper with a pretty good exposition of this problem, which seems to be a minor classic, and is still cited occasionally:

Lita Furby, Interpreting regression toward the mean in developmental research, Developmental Psychology, 8(2):172-179, 1973

Comment author: Snowyowl 02 September 2010 01:24:54PM *  6 points [-]

Wow. I thought I understood regression to the mean already, but the "correlation between X and Y-X" is so much simpler and clearer than any explanation I could give.
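The "correlation between X and Y-X" framing is also easy to check numerically. Here is a toy simulation (the model and all numbers are invented for illustration) in which two equally noisy measurements of the same underlying trait correlate positively with each other, yet the first measurement correlates negatively with the change score, which is regression to the mean in a nutshell:

```python
import random

random.seed(0)

# Toy model: X = trait + noise, Y = trait + independent noise.
n = 10_000
trait = [random.gauss(0, 1) for _ in range(n)]
x = [t + random.gauss(0, 1) for t in trait]
y = [t + random.gauss(0, 1) for t in trait]

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# X and Y correlate positively, but X and the change Y-X correlate
# negatively: high scorers tend to drop, low scorers tend to rise,
# with no causal "regressing" force acting on anyone.
print(corr(x, y))                                   # roughly +0.5
print(corr(x, [yi - xi for yi, xi in zip(y, x)]))   # roughly -0.5
```

With these variances the theoretical values are exactly +0.5 and -0.5, since Cov(X, Y-X) = Cov(X, Y) - Var(X).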

Comment author: steven0461 01 September 2010 10:07:46PM 1 point [-]

Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people's opinions and arguments? This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.

Comment author: Vladimir_Nesov 01 September 2010 10:11:33PM *  6 points [-]

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments; Aumann's agreement theorem, and so forth.

I don't think Aumann's agreement theorem has anything to do with taking people's opinions as evidence. Aumann's agreement theorem is about agents turning out to have been agreeing all along, given certain conditions, not about how to come to an agreement, or worse how to enforce agreement by responding to others' beliefs.

More generally (as in, not about this particular comment), the mentions of this theorem on LW seem to have degenerated into applause lights for "boo disagreement", having nothing to do with the theorem itself. It's easier to use the associated label, even if such usage would be incorrect, but one should resist the temptation.

Comment author: steven0461 01 September 2010 10:32:27PM *  2 points [-]

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis?

Comment author: Vladimir_Nesov 01 September 2010 10:43:58PM 0 points [-]

People sometimes use "Aumann's agreement theorem" to mean "the idea that you should update on other people's opinions", and I agree this is inaccurate and it's not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating.

The theorem doesn't involve any updating, so it's not a salient example in discussion of updating, much less proxy for that.

Should I have said Geanakoplos and Polemarchakis?

To answer literally, simply not mentioning the theorem would've done the trick, since there didn't seem to be a need for elaboration.

Comment author: Wei_Dai 01 September 2010 11:47:07PM 2 points [-]

I think LWers have been using "Aumann agreement" to refer to the whole literature spawned by Aumann's original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I'm not sure if it's standard outside of our community.

This community already hopefully accepts that one can learn from knowing other people's opinions without knowing their arguments

I'm not sure this is right... Here's what I wrote in Probability Space & Aumann Agreement:

But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is there a result in the literature that shows something closer to your "one can learn from knowing other people's opinions without knowing their arguments"?

Comment author: steven0461 02 September 2010 12:11:42AM 1 point [-]

I haven't read your post and my understanding is still hazy, but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence? If they do, then I don't see how it could be true that the probability the agents end up agreeing on is sometimes different from the one they would have had if they were able to share information. In this sort of setting I think I'm comfortable calling it "updating on each other's opinions".

Regardless of Aumann-like results, I don't see how:

one can learn from knowing other people's opinions without knowing their arguments

could possibly be controversial here, as long as people's opinions probabilistically depend on the truth.

Comment author: MBlume 02 September 2010 12:30:51AM 2 points [-]

for an ideal Bayesian, I think 'one can learn from X' is categorically true for all X....

Comment author: Perplexed 02 September 2010 12:52:02AM *  2 points [-]

... surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence?

They don't necessarily reconstruct all of each other's evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent's evidence samples ("4 reds and 4 blacks"), but they cannot reconstruct the exact sequences ("RRBRBBRB"). And they can update again to perfect agreement regarding the urn contents.

Edit: minor cleanup for clarity.

At least that is my understanding of Aumann's theorem.
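The urn example can be made concrete with a small calculation (a toy sketch; the urn size, prior, and sample counts here are invented for illustration). The key point is that the likelihood depends only on the counts, not the sequence, so counts are all that posteriors can ever reveal:

```python
# Urn with N balls; uniform prior over how many are red (hypothetical setup).
N = 10
prior = [1 / (N + 1)] * (N + 1)

def posterior(reds_seen, draws):
    """Posterior over the number of red balls, after sampling with replacement."""
    likelihoods = [(r / N) ** reds_seen * (1 - r / N) ** (draws - reds_seen)
                   for r in range(N + 1)]
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# The likelihood depends only on the counts ("4 reds and 4 blacks"),
# not on the order ("RRBRBBRB"): any sequence with the same counts
# yields the same posterior, so exchanged posteriors can reveal counts
# but never sequences.
p = posterior(4, 8)
print(max(range(N + 1), key=lambda r: p[r]))  # most probable red count: 5
```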

Comment author: steven0461 02 September 2010 01:16:45AM 1 point [-]

That sounds right, but I was thinking of cases like this, where the whole process leads to a different (worse) answer than sharing information would have.

Comment author: Perplexed 02 September 2010 02:22:06AM 1 point [-]

Hmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information.

In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.

Comment author: timtyler 02 September 2010 08:56:49AM 0 points [-]

That comment leaves me wondering what "pure Bayesianism" is.

I don't think Bayesianism is a recipe for action in the first place - so how can "pure Bayesianism" be telling agents how they should be spending their time?

Comment author: Perplexed 02 September 2010 01:21:54PM 1 point [-]

By "pure Bayesianism", I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled "Comments" and particularly the subsection at the very end entitled "Another dimension?". A pure "Jaynes Bayesian" seeks the truth, not because it is useful, but rather because it is truth.

By contrast, we might consider a "de Finetti Bayesian" who seeks the truth so as not to lose bets to Dutch bookies, or a "Wald Bayesian" who seeks truth to avoid loss of utility. The Wald Bayesian clearly is looking for a recipe for action, and the de Finetti Bayesian seeks at least a recipe for gambling.

Comment deleted 02 September 2010 01:41:32PM [-]
Comment author: Wei_Dai 02 September 2010 03:39:24AM *  2 points [-]

but surely at least the theorems don't depend on the agents being able to fully reconstruct each other's evidence?

You're right, sometimes the agreement protocol terminates before the agents fully reconstruct each other's evidence, and they end up with a different agreed probability than if they just shared evidence.

But my point was mainly that exchanging information like this by repeatedly updating on each other's posterior probabilities is not any easier than just sharing evidence/arguments. You have to go through these convoluted logical deductions to try to infer what evidence the other guy might have seen or what argument he might be thinking of, given the probability he's telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don't think humans can benefit from them because it's too hard to do these logical deductions in our heads.

Also, it seems pretty obvious that you can't offload the computational complexity of these protocols onto a third party. The problem is that the third party does not have full information of either of the original parties, so he can't compute the posterior probability of either of them, given an announcement from the other.

It might be that a specialized "disagreement arbitrator" can still play some useful role, but I don't see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.

Comment author: Stuart_Armstrong 02 September 2010 10:00:13AM 1 point [-]

You have to also be able to deduce how much of the other agent's information is shared with you. If you and them got your posteriors by reading the same blogs and watching the same TV shows, then this is very different from the case when you reached the same conclusion from completely different channels.

Comment author: Mitchell_Porter 02 September 2010 10:07:10AM 3 points [-]

If you and them got your posteriors by reading the same blogs and watching the same TV shows

Somewhere in there is a joke about the consequences of a sedentary lifestyle.

Comment author: JohnDavidBustard 02 September 2010 12:07:47PM 0 points [-]

I'm not sure about having a centralised group doing this, but I did experiment with making a tool that could help infer consequences from beliefs. Imagine something a little like this, but with chains of philosophical statements that have degrees of confidence. Users would assign confidence to axioms and construct trees of argument using them. The system would automatically determine the confidence of conclusions. It could even exist as a competitive game, with a community determining the confidence of axioms. It could also be used to rapidly determine differences in opinion, i.e. infer the main points of contention based on different axiom weightings. If anyone knows of anything similar or has suggestions for such a system, I'd love to hear them, including any reasons why it might fail, because I think it's an interesting solution to the problem of how to debate efficiently and reasonably.
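As a toy illustration of the kind of propagation such a tool might do (all statement names and confidence numbers below are invented, and treating premises as independent, so that a conclusion's confidence is just the product of its premises' confidences, is a strong simplification):

```python
# Hypothetical axioms with user-assigned confidences.
axioms = {
    "minds are physical processes": 0.95,
    "physical processes can be simulated": 0.90,
}

# Hypothetical argument tree: conclusion -> list of premises.
arguments = {
    "minds can be simulated": ["minds are physical processes",
                               "physical processes can be simulated"],
}

def confidence(statement):
    """Confidence of a statement: given for axioms, else the product of
    its premises' confidences (assuming premise independence)."""
    if statement in axioms:
        return axioms[statement]
    result = 1.0
    for premise in arguments[statement]:
        result *= confidence(premise)
    return result

print(round(confidence("minds can be simulated"), 3))  # 0.855
```

A real version would need to handle shared premises (so the independence assumption fails), multiple arguments for one conclusion, and cycles, which is presumably where the interesting design work lies.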

Comment author: SilasBarta 01 September 2010 10:44:31PM *  2 points [-]

Grab the popcorn! Landsburg and I go at it again! (See also Previous Landsburg LW flamewar.)

This time, you get to see Landsburg:

  • attempt to prove the existence of the natural numbers while explicitly dismissing the relevance of what sense he's using "existence" to mean!
  • use formal definitions to make claims about the informal meanings of the terms!
  • claim that Peano arithmetic exists "because you can see the marks on paper" (guess it's not a platonic object anymore...)!

(Sorry, XiXiDu, I'll reply to you on his blog if my posting privileges stay up long enough ... for now, I would agree with what you said, but am not making that point in the discussion.)

Comment author: DanielVarga 02 September 2010 12:30:31AM 3 points [-]

Wow, a debate where the most reasonable-sounding person is a sysop of Conservapedia. :)

Comment author: SilasBarta 02 September 2010 04:55:52AM 0 points [-]

Who?

Comment author: DanielVarga 02 September 2010 10:18:49AM *  1 point [-]

Roger Schlafly. Or Roger Schlafly, if you prefer that. His blog is Singular Values. His whole family is full of very interesting people.

Comment author: JamesAndrix 02 September 2010 01:49:42AM 3 points [-]

I would like to see more on fun theory. I might write something up, but I'd need to review the sequence first.

Does anyone have something that could turn into a top level post? or even a open thread comment?

Comment author: komponisto 02 September 2010 02:15:57AM *  5 points [-]

I've long had the idea of writing a sequence on aesthetics; I'm not sure if and when I'll ever get around to it, however. (I have a fairly large backlog of post ideas that have yet to be realized.)

Comment author: JohnDavidBustard 02 September 2010 08:10:52AM *  10 points [-]

I used to be a professional games programmer and designer and I'm very interested in fun. There are a couple of good books on the subject: A Theory of Fun and Rules of Play. As a designer I spent many months analyzing sales figures for both computer games and other conventional toys. The patterns within them are quite interesting: for example, children's toys pass from amorphous learning tools (bright objects and blobby humanoids), through mimicking parents (accurate baby dolls), to mimicking older children (sexualised dolls and makeup). My ultimate conclusion was that fun takes many forms whose source can ultimately be reduced to what motivates us. In effect, fun things are mental hacks of our intrinsic motivations. I gave a couple of talks on my take on what these motivations are. I'd be happy to repeat this material here (or upload and link to the videos if people prefer).

Comment author: JamesAndrix 02 September 2010 05:02:02PM 0 points [-]

Will upvote

Comment author: Mass_Driver 02 September 2010 05:13:26PM 3 points [-]

I found Rules of Play to be little more than a collection of unnecessary (if clearly-defined) jargon and glittering generalities about how wonderful and legitimate games are. Possibly an alien or non-neurotypical who had no idea what a game was might gather some idea of games from reading the book, but it certainly didn't do anything for me to help me understand games better than I already do from playing them. Did I miss something?

Comment author: JohnDavidBustard 02 September 2010 05:41:35PM *  5 points [-]

Yes, I take your point. There isn't a lot of material on fun, and game design analysis is often very genre-specific. I like Rules of Play not so much because it provides great insight into why games are fun, but more as a first step towards being a bit more rigorous about what game mechanics actually are. There is definitely a lot further to go, and there is a tendency to ignore the cultural and psychological motivations (e.g. why being a gangster and free-roaming mechanics work well together) in favour of analysing abstract games. However, it is fascinating to imagine a minimal game; in fact, some of the most successful game titles have stripped the interactions down to their most basic motivating mechanics (Farmville or Diablo, for example).

To provide a concrete example: I worked on a game (MediEvil: Resurrection) where the player controlled a crossbow in a minigame. By adjusting the speed and acceleration of the mapping between joystick and bow, the sensation of controlling it passed through distinct stages. As the parameters approached the sweet spot, my mind (and that of other testers) experienced a transition from feeling I was controlling the bow indirectly to feeling like I was holding the bow. Deviating slightly around this value adjusted its perceived weight, but there was a concrete point at which this sensation was lost. Although Rules of Play does not cover this kind of material, it did feel to me like an attempt to examine games in a more general way, so that these kinds of elements could be extracted from their genre-specific contexts and understood in isolation.

Comment author: wedrifid 02 September 2010 04:16:50AM 0 points [-]

Does anyone else ever browse through comments, spot one and think "why is the post upvoted to 1?" and then realise that the vote was from you? I seem to do that a lot. (In nearly every case I leave the votes stand.)

Comment author: Kaj_Sotala 02 September 2010 09:36:59PM 3 points [-]

I don't recall ever doing that.

Do you leave the votes stand because you remember/re-invent your original reason for upvoting, or because something along the lines of "well, I must've had a good reason at the time"?

Comment author: wedrifid 03 September 2010 03:32:57AM *  0 points [-]

you remember/re-invent your original reason for upvoting,

This one. And sometimes my surprise is because the upvoted comment is surrounded by other comments that are 'better' than it. This I can often fix by upvoting the context instead of removing my initial upvote.

(And, if I went around removing my votes I would quite possibly end up in an infinite loop of contrariness.)

Comment author: blogospheroid 02 September 2010 06:31:22AM 0 points [-]

I'd like to discuss, with anyone who is interested, the ideas of the Metaphysics of Quality by Robert Pirsig (laid out in Lila: An Inquiry into Morals).

There are many aspects to MOQ that might make a rationalist cringe, like moral realism and giving evolution a path and purpose. But there are many interesting concepts which I heard of for the first time when I read MOQ. The fourfold division of inorganic, biological, social and intellectual static patterns of quality is quite intriguing. Many things that the transhumanist community talks about actually interact at the edges of these definitions.

Nanotech runs at the border of inorganic quality and biological quality.

Evolutionary psychology runs at the border of biological and social quality.

At a much simpler level, a community like Less Wrong runs at the border of social and intellectual quality.

In spite of this, I find the layered nature of this understanding is probably useful in understanding present systems and designing new ones.

Maintaining stability at a lower level of quality is probably very important whenever new dynamic things are done at a higher level. Friedrich Hayek emphasises the rule of law and stable contracts, which are the basis of the dynamism of the free market.

Francis Fukuyama came out with the idea of "The End of History", with democratic liberalism being the final system, a permanent social static quality. This was an extremely bold view, but someone who understood even a bit of MOQ could see that changes at a lower level could still happen. No social structure can be permanent without the biological level being fixed. And Bingo! Fukuyama, being a smart man, understood this, and his next book was "Our Posthuman Future", which urged the extreme social control of biological manipulation, in particular, ceasing research.

In Pirsig's view, social quality overriding biological quality is moral. I don't agree that this is always so. It is societal pressure that creates the incentives for female infanticide in India, overriding the biological 50-50 sex ratio, and this will result in huge social problems in the future.

A proper understanding of the universe, when we arrive at it, would have all these intricate layers laid out in detail. But it is interesting to talk about even now, when the picture is incomplete.

Comment author: Snowyowl 02 September 2010 10:22:03AM *  1 point [-]

No social structure can be permanent without the biological level being fixed. And Bingo! Fukuyama, being a smart man, understood this, and his next book was "Our Posthuman Future", which urged the extreme social control of biological manipulation, in particular, ceasing research.

Really? I would have arrived at the opposite conclusion. No social structure can be permanent without the biological level being fixed, therefore we should do more research into biological alteration in order to stabilize our biology should it become unstable.

For instance, pre-implantation genetic diagnosis would enable us to almost eradicate most genetic diseases, thus maintaining our biological quality. I'm not saying it doesn't have corresponding problems, just that an attitude of "we should cease research in this field because we might find something dangerous" is an overreaction.

Comment author: blogospheroid 02 September 2010 11:34:17AM 0 points [-]

I don't support Fukuyama's conclusion. I was just mentioning that Fukuyama realised his "end of history" hypothesis was obsolete, as the biological quality patterns that he assumed were more or less unchanging are not fixed.

Genetic engineering is an intellectual + social pattern imposing on a biological pattern. By a naive reading of Pirsig, it appears moral. But if the biological pattern is not fully understood, then it might lead to many unanticipated consequences. I definitely support the eradication of genetic diseases, if the changes made are those that are present in many normal people and without much downside. I support intelligence amplification, but we simply don't know enough to do it without issues.

Eliezer's perspective is that humans are godshatter (a hodge podge of many biological, social and intellectual static patterns) and it will take a very powerful intelligence to understand morality and extrapolate it. I believe that thinking about Pirsig's work can inform us a little on areas we should choose to understand first.

Comment author: hegemonicon 02 September 2010 05:06:41PM 0 points [-]

No social structure can be permanent without the biological level being fixed.

This seems incorrect, as it's not hard to imagine a social structure supporting a wide variety of different biological/non-biological intelligences, as long as they were reasonably close to each other in morality-space. There's plenty of things at the level of biology that have no impact on morality that we'd certainly like to change.

Comment author: blogospheroid 03 September 2010 09:54:59AM 0 points [-]

During the process of creation of those non-biological intelligences or modification of the biological persons, the social structure would be in a flux. There will be some similarities maintained, but many changes would also be there.

According to our laws, murder is illegal, but erasing an upload with a backup current to the previous day would not be classified as being as grave a crime as murdering an un-backed-up person. These changes would be at the social level.

Comment author: JanetK 02 September 2010 07:54:27AM 1 point [-]

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Comment author: Emile 02 September 2010 08:15:56AM 2 points [-]

But now I sense an even more disturbing definition: rational as opposed to empirical.

I don't think that's how most people here understand "rationalism".

Comment author: JanetK 02 September 2010 09:09:40AM 1 point [-]

I don't think that's how most people here understand "rationalism".

Good

Comment author: wedrifid 02 September 2010 08:17:38AM *  1 point [-]

But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

Indeed. It is heretical in the extreme! Burn them!

Comment author: JanetK 02 September 2010 09:07:18AM 0 points [-]

Do you have a reason for the sarcasm? I notice a tendency that seems disturbing to me, and I am pointing it out to see if others have noticed it and have opinions, but I am not attacking. I am deciding whether I fit this group or not - hopefully I can feel comfortable on LW.

Comment author: wedrifid 02 September 2010 10:08:54AM *  3 points [-]

Do you have a reason for the sarcasm?

It felt like irony from my end - a satire of human behaviour.

As a general tendency of humanity, we seem to be more inclined to abhor beliefs that are similar to what we consider the norm but just slightly different. It is the rebels within the tribe that are the biggest threat, not the tribe that lives 20 km away.

I hope someone can give you an adequate answer to your question. The very short one is that empirical evidence is usually going to be the most heavily weighted 'bayesian' (rational) evidence. However everything else is still evidence, even though it is far weaker.

Comment author: FAWS 02 September 2010 08:27:58AM *  0 points [-]

In a certain sense, rationality is using evidence efficiently. Perhaps overemphasis on that type of rationality tempts one to be sparing with evidence - after all, if you use less evidence to reach your conclusion, you used whatever evidence you did use more efficiently! But not using evidence doesn't mean there is more evidence left afterwards; not using free or very cheap evidence is wasteful. So proper rationality, even in that sense, means using all easily available evidence when practical.

Comment author: kodos96 02 September 2010 08:33:28AM 1 point [-]

But now I sense an even more disturbing definition: rational as opposed to empirical.

Ummmmmmmm.... no.

The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.

Comment author: JanetK 02 September 2010 08:56:11AM 1 point [-]

According to my dictionary: rationalism 1. Philos. the theory that reason is the foundation of certainty in knowledge (opp. empiricism, sensationalism)

This is there as well as: rational 1. of or based on reasoning or reason

So although there are other (more everyday) definitions also listed at later numbers, the opposition to empirical is one of the literal definitions. The Bayesian updating thing is why it took me a long time to notice the other anti-scientific tendency.

Comment author: timtyler 03 September 2010 07:55:54AM *  2 points [-]

I wouldn't say "anti-scientific" - but it certainly would be good if scientists actually studied rationality more - and so were more rational.

With lab equipment like the human brain, you have really got to look into its strengths and weaknesses - and read the manual about how to use it properly.

Personally, when I see material like Science or Bayes - my brain screams: false dichotomy: Science and Bayes! Don't turn the scientists into a rival camp: teach them.

Comment author: JanetK 03 September 2010 01:35:19PM 0 points [-]

I think you may have misunderstood what I was trying to say. Because the group used Bayesian methods, I had assumed that they would not be anti-scientific. I was surprised when it seemed that they were willing to ignore evidence. I have been reassured that many in the group are rational in the everyday sense and not opposed to empiricism. Indeed it is Science AND Bayes.

Comment author: timtyler 02 September 2010 08:39:23AM *  1 point [-]

There is at least one post about that - though I don't entirely approve of it.

Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories, both of which are compatible with the evidence, without making further observations. It is not empirical in that sense.

Comment author: Kenny 03 September 2010 10:56:39PM 2 points [-]

Occam's razor isn't empirical, but it is the economically rational decision when you need to use one of several alternative theories (that are exactly "compatible with the evidence"). Besides, "further observations" are inevitable if any of your theories are actually going to be used (i.e. to make predictions [that are going to be subsequently 'tested']).
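The point that Occam's razor lets you choose between theories that are equally compatible with the evidence has a standard Bayesian reading: a more flexible model must spread its probability mass over more possible observations, so the same data gives it a lower likelihood, and the simpler model wins the posterior odds automatically. A toy sketch of this (the sequences and priors are invented for illustration):

```python
# Two models, both "compatible with the evidence": M1 predicts the observed
# sequence exactly, while M2 can accommodate any of the 16 possible 4-bit
# sequences, so it must spread its probability mass thinly across them.
observed = (1, 0, 1, 1)

def likelihood_m1(data):
    # M1: a specific deterministic hypothesis with no free parameters.
    return 1.0 if data == (1, 0, 1, 1) else 0.0

def likelihood_m2(data):
    # M2: a maximally flexible hypothesis that assigns equal probability
    # to every 4-bit sequence (16 of them).
    return 1.0 / 16.0

prior_m1 = prior_m2 = 0.5  # no prior preference between the models
posterior_odds = (likelihood_m1(observed) * prior_m1) / \
                 (likelihood_m2(observed) * prior_m2)
# posterior_odds comes out to 16.0 in favour of the simpler model,
# with no further observations needed.
```

The "razor" here is not an extra assumption bolted on; it falls out of the requirement that each model's predictions sum to 1.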

Comment author: Sniffnoy 02 September 2010 05:13:34PM *  0 points [-]

Here is our definition of rationality. See also the "unnamed virtue".

Comment author: thomblake 02 September 2010 05:16:33PM 5 points [-]

No, here is our definition of rationality.

For the canonical article, see What Do We Mean By "Rationality"?.

Comment author: Sniffnoy 02 September 2010 05:17:43PM 1 point [-]

Ah, that does seem to be better, yes.

Comment author: JanetK 02 September 2010 05:40:30PM 3 points [-]

Thank you. That seems clear. I will assume that my antennae were giving me the wrong impression. I can relax.

Comment author: thomblake 02 September 2010 05:19:34PM 3 points [-]

The philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.

Comment author: blogospheroid 02 September 2010 12:32:25PM 3 points [-]

Idea - Existential risk fighting corporates

We people of normal IQ are advised to work our normal day jobs at the best competency that we have and, after setting aside enough money for ourselves, contribute to the prevention of existential risk. That is a good idea if the skills of the people here are getting their correct market value, and if there is such a diversity of skills that they cannot form a sensible corporation together.

Also, consider that as we make the world's corporations more agile, we bring closer the moment where an unfriendly optimization process might just be let loose.

But just consider the small probability that some of the rationalists come together as a non-profit corporation to contribute to mitigating existential risk. There are many reasons our kind cannot cooperate. Also, the fact is that coordination is hard.

But if we could, then with the latest in decision theory, argument diagrams (1, 2, 3), and internal futarchy (after the size of the corporation gets big), we could create a corporation that wins. There are many people from the world of software here. Within the corporation itself, there is no need to stick to legacy systems. We could interact with the best of coordination software and keep the corporation "sane".

We can create products and services like any for-profit corporation and sell them at market rates, but use the surplus to mitigate existential risk. In other words, it is difficult, but in the Everett branches where x-rationalists manage a synergistic outcome, it might be possible to strengthen the funding of existential risk mitigation considerably.

Some criticisms of this idea which I could think of

  • The corporation becomes a lost cause. Goodhart's law kicks in and the original purpose of forming the corporation is lost.
  • People are polite when in a situation where no important decisions are being made (like an internet forum like lesswrong), but if actual productivity is involved, they might get hostile when someone lowers their corporate karma. Perfect internet buddies might become co-workers who hate each other's guts.
  • The argument that there is no possibility of synergy. The present situation, where rational people spread over the world and in different situations are money pumping from less rational people around them is better.
  • People outside the corporation might mentally slot existential risk as a kooky topic that "that creepy company talks about all the time" and not see it as a genuine issue that diverse persons from different walks of life are interested in.

and so on..

But still, my question is - shouldn't we at least consider the possibilities of synergy in the manner indicated?

Comment author: wedrifid 02 September 2010 01:45:11PM *  1 point [-]

This would be more likely to work if you completely took out the 'for existential risk' part. Find a way to cooperate with people effectively "to make money". No need to get religion all muddled up in it.

Comment author: JohnDavidBustard 02 September 2010 01:15:08PM 2 points [-]

Apologies if this question seems naive but I would really appreciate your wisdom.

Is there a reasonable way of applying probability to analogue inference problems?

For example, suppose two substances A and B are being measured using a device which produces an analogue value C. Given a history of analogue values, how does one determine the probability of each substance? Unless the analogue values match exactly, how can historical information contribute to the answer without making assumptions about the shape of the probability density function created by A or B? If this assumption must be made, how can it be reasonably determined, and crucially, what events could occur that would lead to it being changed?

A real example would be that the PDF is often modelled as a Gaussian distribution, but more recent approaches tend to use different distributions because of outliers. This seems like the right thing to do, because our visual sense of distribution can easily identify such points, but is there any more rigorous justification?

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Comment author: Perplexed 02 September 2010 01:57:10PM 4 points [-]

Is there a reasonable way of applying probability to analogue inference problems?

Your examples certainly show a grasp of the problem. The solution is first sketched in Chapter 4.6 of Jaynes.

Is, in effect, the selection of the underlying model the real challenge of rational decision making, not the inference rules?

Definitely. Jaynes finishes deriving the inference rules in Chapter 2 and illustrates how to use them in Chapter 3. The remainder of the book deals with "the real challenge". In particular Chapters 6, 7, 12, 19, and especially 20. In effect, you use Bayesian inference and/or Wald decision theory to choose between underlying models pretty much as you might have used them to choose between simple hypotheses. But there are subtleties, ... to put things mildly. But then classical statistics has its subtleties too.
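To make the mechanics concrete, here is a minimal sketch of the Bayesian calculation for the substance example, under the Gaussian assumption that the question itself flags as the questionable modelling choice. All the historical readings here are invented for illustration:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_gaussian(samples):
    """Estimate (mean, std) from historical readings for one substance."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / (n - 1)
    return mu, math.sqrt(var)

def posterior_substance_a(reading, history_a, history_b, prior_a=0.5):
    """P(substance A | analogue reading C), under the Gaussian model assumption."""
    mu_a, sd_a = fit_gaussian(history_a)
    mu_b, sd_b = fit_gaussian(history_b)
    like_a = gaussian_pdf(reading, mu_a, sd_a)
    like_b = gaussian_pdf(reading, mu_b, sd_b)
    evidence = like_a * prior_a + like_b * (1 - prior_a)
    return like_a * prior_a / evidence

# Invented historical calibration readings for substances A and B:
history_a = [1.0, 1.1, 0.9, 1.05, 0.95]
history_b = [2.0, 2.2, 1.9, 2.05, 1.85]

# A new reading of 1.2 never matched any historical value exactly, yet the
# fitted model still assigns it a posterior heavily favouring substance A.
p_a = posterior_substance_a(1.2, history_a, history_b)
```

Swapping `gaussian_pdf` for a heavier-tailed density (e.g. a Student-t) is exactly the outlier-robust variant mentioned in the question; the inference machinery around it stays the same, which is why the model choice, not the inference rule, carries the real weight.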

Comment author: b1shop 02 September 2010 05:01:47PM *  6 points [-]

I just listened to Robin Hanson's Pale Blue Dot interview. It sounds like he focuses more on motives than I do.

Yes, if you give most/all people a list of biases, they will use it less like a list of potential pitfalls and more like a list of accusations. Yes, most, if not all, aren't perfect truth-seekers for reasons that make evolutionary sense.

But I wouldn't mind living in a society where using biases/logical fallacies results in a loss of status. You don't have to be a truth-seeker to want to seem like a truth-seeker. Striving to overcome bias still seems like a good goal.

Edit: For example, someone can be a truth-seeking scientist if they are doing it to answer questions or if they're doing it for the chicks.

Comment author: Kaj_Sotala 02 September 2010 09:04:37PM *  22 points [-]

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something. Scott Adams' The Illusion of Winning might help counteract becoming too easily demotivated.

Let's say that you and I decide to play pool. We agree to play eight-ball, best of five games. Our perception is that what follows is a contest to see who will do something called winning.

But I don't see it that way. I always imagine the outcome of eight-ball to be predetermined, to about 95% certainty, based on who has practiced that specific skill the most over his lifetime. The remaining 5% is mostly luck, and playing a best of five series eliminates most of the luck too.

I've spent a ridiculous number of hours playing pool, mostly as a kid. I'm not proud of that fact. Almost any other activity would have been more useful. As a result of my wasted youth, years later I can beat 99% of the public at eight-ball. But I can't enjoy that sort of so-called victory. It doesn't feel like "winning" anything.

It feels as meaningful as if my opponent and I had kept logs of the hours we each had spent playing pool over our lifetimes and simply compared. It feels redundant to play the actual games.

I see the same thing with tennis, golf, music, and just about any other skill, at least at non-professional levels. And research supports the obvious, that practice is the main determinant of success in a particular field.

As a practical matter, you can't keep logs of all the hours you have spent practicing various skills. And I wonder how that affects our perception of what it takes to be a so-called winner. We focus on the contest instead of the practice because the contest is easy to measure and the practice is not.

Complicating our perceptions is professional sports. The whole point of professional athletics is assembling freaks of nature into teams and pitting them against other freaks of nature. Practice is obviously important in professional sports, but it won't make you taller. I suspect that professional sports demotivate viewers by sending the accidental message that success is determined by genetics.

My recommendation is to introduce eight-ball into school curricula, but in a specific way. Each kid would be required to keep a log of hours spent practicing on his own time, and there would be no minimum requirement. Some kids could practice zero hours if they had no interest or access to a pool table. At the end of the school year, the entire class would compete in a tournament, and they would compare their results with how many hours they spent practicing. I think that would make real the connection between practice and results, in a way that regular schoolwork and sports do not. That would teach them that winning happens before the game starts.

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

ETA: I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for, AFAIK. But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Comment author: Houshalter 02 September 2010 10:28:40PM 1 point [-]

Yes, I know that schools will never assign eight-ball for homework. But maybe there is some kid-friendly way to teach the same lesson.

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

Comment author: mattnewport 02 September 2010 10:34:37PM 9 points [-]

Make them play some kind of simplified RPG until they realise the only achievement is how much time they put into doing mindless repetitive tasks.

I imagine lots of kids play Farmville already.

Comment author: Kaj_Sotala 03 September 2010 08:53:06AM *  3 points [-]

Those games don't really improve any sort of skill, though, and neither does anyone expect them to. To teach kids this, you need a game where you as a player pretty much never stop improving, so that having spent more hours on the game actually means you'll beat anyone who has spent less.

Go might work.

Comment author: rwallace 03 September 2010 12:57:16PM 5 points [-]

There are schools that teach Go intensively from an early age, so that a 10-year-old student from one of those schools is already far better than a casual player like me will ever be, and it just keeps going up from there. People don't seem to get tired of it.

Every time I contemplate that, I wish all the talent thus spent could be spent instead on schools providing similarly intensive teaching in something useful like science and engineering. What could be accomplished if you taught a few thousand smart kids to be dan-grade scientists by age 10 and kept going from there? I think it would be worth finding out.

Comment author: Sniffnoy 03 September 2010 07:32:38PM *  3 points [-]

There's a large difference between the "leveling up" in such games, where you gain new in-game capabilities, and actually getting better, where your in-game capabilities stay the same but you learn to use them more effectively.

ETA: I guess perhaps a better way of saying it is, there's a large difference between the causal chains time->winning, and time->skill->winning.

Comment author: Wei_Dai 02 September 2010 11:51:21PM 1 point [-]

But I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

I'm not sure I agree with that. In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas?

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

Comment author: Kaj_Sotala 03 September 2010 08:45:02AM *  5 points [-]

In what areas do you see overvalue of intelligence relative to practice and why do you think there really is overvalue in those areas?

I should probably note that my overvaluing of intelligence is more of an alief than a belief. Mostly it shows up if I'm unable to master (or at least get a basic proficiency in) a topic as fast as I'd like to. For instance, on some types of math problems I get quickly demotivated and feel that I'm not smart enough for them, when the actual problem is that I haven't had enough practice on them. This is despite the intellectual knowledge that I could master them, if I just had a bit more practice.

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

That sounds about right, though I would note that there's a huge amount of background knowledge that you need to absorb on LW. Not just raw facts, either, but ways of thinking. The lack of improvement might partially be because some people have absorbed that knowledge when they start posting and some haven't, and absorbing it takes such a long time that the improvement happens too slowly to notice.

Comment author: wedrifid 03 September 2010 09:25:10AM *  3 points [-]

I've noticed for example that people's abilities to make good comments on LW do not seem to improve much with practice and feedback from votes (beyond maybe the first few weeks or so). Does this view represent an overvalue of intelligence?

That's interesting. I hadn't got that impression, but I haven't looked too closely at such trends either. There are a few people whose comments have improved dramatically, but the difference seems to be social development and not necessarily their rational thinking - so perhaps you have a specific kind of improvement in mind.

I'm interested in any further observations on the topic by yourself or others.

Comment author: Daniel_Burfoot 03 September 2010 03:47:37AM 4 points [-]

I don't mean to say that talent doesn't matter: things such as intelligence matter more than Adams gives them credit for

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task. A key problem is to identify tasks as intelligence-dominated (the smart guy always wins) vs. practice-dominated (the experienced guy always wins).

As a first observation about this problem, notice that clearly definable or objective tasks (chess, pool, basketball) tend to be practice-dominated, whereas more ambiguous tasks (leadership, writing, rationality) tend to be intelligence-dominated.

Comment author: Kaj_Sotala 03 September 2010 08:38:20AM 2 points [-]

I think the relative contribution of intelligence vs. practice varies substantially depending on the nature of the particular task.

This is true. Intelligence research has shown that intelligence is more useful for more complex tasks, see e.g. Gottfredson 2002.

Comment author: hegemonicon 03 September 2010 03:59:53AM *  6 points [-]

people in this community are unusually prone to feeling that they're stupid if they do badly at something

I suspect this is a result of the tacit assumption that "if you're not smart enough, you don't belong at LW". If most members are anything like me, this combined with the fact that they're probably used to being "the smart one" makes it extremely intimidating to post anything, and extremely de-motivational if they make a mistake.

In the interests of spreading the idea that it's ok if other people are smarter than you, I'll say that I'm quite certainly one of the less intelligent members of this community.

I've noticed in many people (myself included) a definite tendency to overvalue intelligence relative to practice.

Practice and expertise tend to be domain-specific - Scott isn't any better at darts or chess after playing all that pool. Even learning things like metacognition tend not to apply outside of the specific domain you've learned it in. Intelligence is one of the only things that gives you a general problem solving/task completion ability.

Comment author: xax 03 September 2010 09:07:19PM 1 point [-]

Intelligence is one of the only things that gives you a general problem solving/task completion ability.

Only if you've already defined intelligence as not domain-specific in the first place. Conversely, meta-cognition about a person's own learning processes could help them learn faster in general, which has many varied applications.

Comment author: jimrandomh 03 September 2010 01:30:47PM 6 points [-]

It seems to me, based on purely anecdotal experience, that people in this community are unusually prone to feeling that they're stupid if they do badly at something.

This is certainly true of me, but I try to make sure that the positive feeling of having identified the mistakes and improved outweighs the negative feeling of having needed the improvement. Tsuyoku Naritai!

Comment author: Morendil 02 September 2010 10:21:35PM 5 points [-]

I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial features of which it’s a byproduct.

Neil Van Leeuwen, Why Self-Deception Research Hasn’t Made Much Progress

Comment author: realitygrill 03 September 2010 04:18:06AM 5 points [-]

This is perhaps a bit facetious, but I propose we try to contact Alice Taticchi (Miss World Italy 2009) and introduce her to LW. Reason? When asked what qualities she would bring to the competition, she said she'd "bring without any doubt my rationality", among other things.

Comment author: Oscar_Cunningham 03 September 2010 09:14:09AM *  1 point [-]

Someone made a page that automatically collects high karma comments. Could someone point me at it please?

Comment author: wedrifid 03 September 2010 09:29:02AM 1 point [-]

They did? I've been wishing for something like that myself. I'd also like another page that collects just my high karma comments. Extremely useful feedback!

Comment author: Will_Newsome 03 September 2010 11:02:19AM *  10 points [-]

I want to write a post about an... emotion, or pattern of looking at the world, that I have found rather harmful to my rationality in the past. The closest thing I've found is 'indignation', defined at Wiktionary as "An anger aroused by something perceived as an indignity, notably an offense or injustice." The thing is, I wouldn't consider the emotion I feel to be 'anger'. It's more like 'the feeling of injustice' in its own right, without the anger part. Frustration, maybe. Is there a word that means 'frustration aroused by a perceived indignity, notably an offense or injustice'? Like, perhaps the emotion you may feel when you think about how pretty much no one in the world or no one you talk to seems to care about existential risks. Not that you should feel the emotion, or whatever it is, that I'm trying to describe -- in the post I'll argue that you should try not to -- but perhaps there is a name for it? Anyone have any ideas? Should I just use 'indignation' and then define what I mean in the first few sentences? Should I use 'adjective indignation'? If so, which adjective? Thanks for any input.

Comment author: wedrifid 03 September 2010 12:09:08PM *  4 points [-]

Should I just use 'indignation' and then define what I mean in the first few sentences?

That could work well when backed up with a description of just what you will be using the term to mean.

I will be interested to read your post - from your brief introduction here I think I have had similar observations about emotions that interfere with thought, independent of raw overwhelm from primitives like anger.

Comment author: Airedale 03 September 2010 03:08:20PM *  8 points [-]

The words righteous indignation in combination are sufficiently well-recognized as to have their own wikipedia page. The page also says that righteous indignation has overtones of religiosity, which seems like a reason not to use it in your sense. It also says that it is akin to a "sense of injustice," but at least for me, that phrase doesn't have as much resonance.

Edited to add this possibly relevant/interesting link I came across, where David Brin describes self-righteous indignation as addictive.

Comment author: Perplexed 03 September 2010 04:20:22PM 4 points [-]

which seems like a reason not to use it in your sense.

Strikes me as exactly the reason you should use it. What you are describing is indignation, it is righteous, and it is counterproductive in both rationalists and less rational folks for pretty much the same reasons.

Comment author: Airedale 03 September 2010 04:53:34PM 0 points [-]

I meant that the religious connotations might not be a reason to use the term if Will is trying to come up with the most accurate term for what he’s describing. To the extent the term is tied up in Christianity, it may not convey meaning in the way Will wants – although the more Will explains how he is using the term, the less problematic this would be. And I agree that what you say suggests an interesting way that Will can appropriate a religious term and make some interesting compare-and-contrast type points.

Comment author: Eliezer_Yudkowsky 03 September 2010 07:07:18PM 5 points [-]

Sounds related to the failure class I call "living in the should-universe".

Comment author: Will_Newsome 03 September 2010 10:51:03PM *  3 points [-]

It seems to be a pretty common and easily corrected failure mode. Maybe you could write a post about it? I'm sure you have lots of useful cached thoughts on the matter.

Added: Ah, I'd thought you'd just talked about it at LW meetups, but a Google search reveals that the theme is also in Above-Average AI Scientists and Points of Departure.

Comment author: jimrandomh 03 September 2010 07:08:17PM 6 points [-]

I noticed this emotion cropping up a lot when I read Reddit, and stopped reading it for that reason. It's too easy to, for example, feel outraged over a video of police brutality, but not notice that it was years ago and in another state and already resolved.

Comment author: komponisto 04 September 2010 12:54:13AM 4 points [-]

Interestingly enough, this sounds like the emotion that (finally) induced me to overcome akrasia and write a post on LW for the first time, which initiated what has thus far been my greatest period of development as a rationalist.

It's almost as if this feeling is to me what plain anger is to Harry Potter(-Evans-Verres): something which makes everything seem suddenly clearer.

It just goes to show how difficult the art of rationality is: the same technique that helps one person may hinder another.

Comment author: David_Allen 04 September 2010 06:02:09AM *  3 points [-]

In myself, I have labeled the rationality blocking emotion/behavior as defensiveness. When I am feeling defensive, I am less willing to see the world as it is. I bind myself to my context and it is very difficult for me to reach out and establish connections to others.

I am also interested in ideas related to rationality and the human condition. Not just about the biases that arise from our nature, but about approaches to rationality that work from within our human nature.

I have started an analysis of Buddhism from this perspective. At its core (ignoring the obvious mysticism), I see sort of a how-to guide for managing the human condition. If we are to be rational we need to be willing to see the world as it is, not as we want it to be.

Comment author: Taure 03 September 2010 12:56:25PM *  0 points [-]

An Introduction to Probability and Inductive Logic by Ian Hacking

Have any of you read this book?

I have been invited to join a reading group based around it for the coming academic year and would like the opinions of this group as to whether it's worth it.

I may join in just for the section on Bayes. I might even finally discover the correct pronunciation of "Bayesian". ("Bay-zian" or "Bye-zian"?)

Here's a link to the book: http://www.amazon.co.uk/Introduction-Probability-Inductive-Logic/dp/0521775019/ref=sr_1_2?ie=UTF8&s=books&qid=1283464939&sr=8-2

Comment author: Perplexed 03 September 2010 04:05:53PM *  1 point [-]

Over on a cognitive science blog named "Child's Play", there is an interesting discussion of theories regarding human learning of language. These folks are not Bayesians (except for one commenter who mentions Solomonoff induction), so some bits of it may make you cringe, but the blogger does provide links to some interesting research pdfs.

Nonetheless, the question that puzzles them about humans does raise some interesting questions regarding AIs, whether they be of the F persuasion or practicing uFs. The questions are:

  • Are these AIs born speaking English, Chinese, Arabic, Hindi, etc., or do they have to learn these languages?
  • If they learn these languages, do they have to pass some kind of language proficiency test before they are permitted to use them?
  • Are they born with any built in language capability or language learning capability at all?
  • Are the "objective functions" with which we seek to leash AIs expressed in some kind of language, or in something more like "object code"?
Comment author: taw 04 September 2010 06:04:29AM 1 point [-]

A question about modal logics.

Temporal logics are quite successful in terms of expressiveness and applications in computer science, so I thought I'd take a look at some other modal logics - in particular deontic logic that deal with obligations, rules, and deontological ethics.

It seems like an obvious approach, as we want to have "is"-statements, "ought"-statements, and statements relating what "is" with what "ought" to be.

What I found was rather disastrous, far worse than with neat and unambiguous temporal logics. Low expressiveness, ambiguous interpretations, far too many paradoxes that seem to be more about failing to specify the underlying logic correctly than about actual problems, and no convergence on a single deontic logic that works.

After reading all this, I made a few quick attempts at defining logic of obligations, just to be sure it's not some sort of collective insanity, but they all ran into very similar problems extremely quickly.

Now I'm in no way deontologically inclined, but if I were, it would really bother me. If it's really impossible to formally express obligations, this kind of ethics is built on an extremely flimsy basis. Consequentialism has plenty of problems in practice, but at least in hypothetical scenarios it's very easy to model correctly. Deontic logic seems to lack even that.

Is there any kind of deontic logic that works well that I missed? I'm not talking about solving FAI, constructing universal rules of morality or anything like it - just about a language that expresses exactly the kind of obligations we want, and which works well in simple hypothetical worlds.
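For concreteness, here is a minimal sketch (in Python, with made-up worlds and propositions) of the standard possible-worlds semantics behind Standard Deontic Logic: O(p) holds at a world iff p holds at every "ideal" world accessible from it. Even this tiny model exhibits one of the classic paradoxes alluded to above, Ross's paradox: O(mail) entails O(mail-or-burn).

```python
# A minimal Kripke-style model for Standard Deontic Logic (SDL).
# O(p) holds at world w iff p holds at every world deontically
# accessible ("ideal") from w. Worlds and propositions here are
# invented for illustration.

worlds = {"w0", "w1", "w2"}

# accessible[w] = the set of ideal alternatives to w
accessible = {"w0": {"w1"}, "w1": {"w1"}, "w2": {"w1"}}

# valuation: which atomic propositions hold at each world
val = {
    "w0": set(),      # actual world: letter not mailed
    "w1": {"mail"},   # ideal world: letter mailed, nothing burned
    "w2": {"burn"},   # a non-ideal alternative
}

def holds(formula, w):
    """Evaluate a formula (nested tuples) at world w."""
    op = formula[0]
    if op == "atom":
        return formula[1] in val[w]
    if op == "not":
        return not holds(formula[1], w)
    if op == "or":
        return holds(formula[1], w) or holds(formula[2], w)
    if op == "O":  # obligation: true at all ideal alternatives
        return all(holds(formula[1], v) for v in accessible[w])
    raise ValueError("unknown operator: %s" % op)

mail = ("atom", "mail")
burn = ("atom", "burn")

# O(mail) holds at the actual world w0 ...
print(holds(("O", mail), "w0"))                  # True
# ... and so, by Ross's paradox, does O(mail or burn):
print(holds(("O", ("or", mail, burn)), "w0"))    # True
```

This is roughly where the trouble starts: the semantics is clean, but it validates inferences (like the one above) that clash with intuitive readings of "ought", which is one source of the ambiguous interpretations mentioned in the comment.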

Comment author: CronoDAS 04 September 2010 07:00:13AM *  1 point [-]