
Comment author: pnrjulius 23 May 2012 03:57:22AM 1 point

Really? Got any examples?

I've read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can't think of any where it was just better, the way that actual technologies often are---would any of us willingly go back to the days before electricity and running water?

In response to comment by pnrjulius on Applause Lights
Comment author: Hul-Gil 23 May 2012 07:07:32AM 0 points

but I can't think of any where it was just better, the way that actual technologies often are

I find that a little irritating - for people supposedly open to new ideas, science fiction authors sure seem fearful and/or disapproving of future technology.

Comment author: John_Maxwell_IV 11 May 2012 05:39:52AM 4 points

It seems like everyone is talking about SL4; here is a link to what Richard was probably complaining about:

http://www.sl4.org/archive/0608/15895.html

Comment author: Hul-Gil 11 May 2012 07:24:24AM 8 points

Thanks. I read the whole debate, or as much of it as is there; I've prepared a short summary to post tomorrow if anyone is interested in knowing what really went on ("as according to Hul-Gil", anyway) without having to hack their way through that thread-jungle themselves.

(Summary of summary: Loosemore really does know what he's talking about - mostly - but he also appears somewhat dishonest, or at least extremely imprecise in his communication.)

Comment author: RomeoStevens 10 May 2012 08:32:53PM 2 points

any amount and quality of question answering is not.

"how do I build an automated car?"

Comment author: Hul-Gil 11 May 2012 03:44:40AM 3 points

That doesn't help you if you need a car to take you someplace in the next hour or so, though. I think jed's point is that sometimes it is useful for an AI to take action rather than merely provide information.

Comment author: metaphysicist 11 May 2012 01:49:07AM 7 points

So in summary, I am very curious about this situation; why would a community that has been - to me, almost shockingly - consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?

The answer is probably that you overestimate that community's dedication to rationality because you share its biases. The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?

My take is that neither side in this argument distinguished itself. Loosemore called for an "outside adjudicator" to solve a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off) in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these "sins" deserved a ban (no wonder the raw feelings come back to haunt); no honorable person would accept a position where he has the authority to exercise such power (a party to a dispute is biased). Or at the very least, he wouldn't use it the way Yudkowsky did, when he was the banned party's main antagonist.

Comment author: Hul-Gil 11 May 2012 03:29:11AM 4 points

The answer is probably that you overestimate that community's dedication to rationality because you share its biases.

That's probably no small part of it. However, even if my opinion of the community is tinted rose, note that I refer specifically to observation. That is, I've sampled a good number of posts and comments here on LessWrong, and I see people behaving rationally in arguments - appreciation of polite and lucid dissent, no insults or ad hominem attacks, etc. It's harder to tell what's going on with karma, but again, I've not seen any one particular individual harassed with negative karma merely for disagreeing.

The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?

Can you elaborate, please? I'm not sure what enormous conceit you refer to.

My take is that neither side in this argument distinguished itself. Loosemore called for an "outside adjudicator" to solve a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off) in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these "sins" deserved a ban

I think that's an excellent analysis. I certainly feel like Yudkowsky overreacted, and as you say, in the circumstances no wonder it still chafes; but as I say above, Richard's arguments failed to impress, and calling for outside help ("adjudication" for an argument that should be based only on facts and logic?) is indeed beyond obnoxious.

Comment author: Richard_Loosemore 10 May 2012 07:11:15PM 1 point

Holden, I think your assessment is accurate ... but I would venture to say that it does not go far enough.

My own experience with SI, and my background, might be relevant here. I am a member of the Math/Physical Science faculty at Wells College, in Upstate NY. I also have had a parallel career as a cognitive scientist/AI researcher, with several publications in the AGI field, including the opening chapter (coauthored with Ben Goertzel) in a forthcoming Springer book about the Singularity.

I have long complained about SI's narrow and obsessive focus on the "utility function" aspect of AI -- simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism that have very different properties. Even worse, the "utility function" mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.

Perhaps most important of all, though, is the fact that the alternative motivation mechanism might (and notice that I am being cautious here: might) lead to systems that are extremely stable. Which means both friendly and safe.

Taken in isolation, these thoughts and arguments might amount to nothing more than a minor addition to the points that you make above. However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky. After delivering this tirade, Yudkowsky then banned me from the discussion forum that he controlled, and instructed others on that forum that discussion about me was henceforth forbidden.

Since that time I have found that when I partake in discussions on AGI topics in a context where SI supporters are present, I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky's earlier outburst. This activity is now so common that when I occasionally post comments here, my remarks are very quickly voted down below a threshold that makes them virtually invisible. (A fate that will probably apply immediately to this very comment).

I would say that, far from deserving support, SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI's founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.

Comment author: Hul-Gil 11 May 2012 01:00:49AM 9 points

Can you provide some examples of these "abusive personal attacks"? I would also be interested in this ruthless suppression you mention. I have never seen this sort of behavior on LessWrong, and would be shocked to find it among those who support the Singularity Institute in general.

I've read a few of your previous comments, and while I felt that they were not strong arguments, I didn't downvote them, because they were intelligent and well-written, and competent constructive criticism is something we don't get nearly enough of. Indeed, it is usually welcomed. The number of downvotes given to the comments, therefore, does seem odd to me. (Any LW regular who is familiar with the situation is also welcome to comment on this.)

I have seen something like this before, and it turned out the comments were being downvoted because the person making them had gone over, and over, and over the same issues, unable or unwilling to either competently defend them, or change his own mind. That's no evidence that the same thing is happening here, of course, but I give the example because in my experience, this community is almost never vindictive or malicious, and is laudably willing to consider any cogent argument. I've never seen an actual insult levied here by any regular, for instance, and well-constructed dissenting opinions are actively encouraged.

So in summary, I am very curious about this situation; why would a community that has been - to me, almost shockingly - consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?

One final thing I will note is that you do seem to be upset about past events, and it seems like it colors your view (and prose, a bit!). From checking both here and on SL4, for instance, your later claims regarding what's going on ("dissent is ruthlessly suppressed") seem exaggerated. But I don't know the whole story, obviously - thus this question.

In response to comment by Hul-Gil on Circular Altruism
Comment author: Salivanth 01 May 2012 12:23:30PM 2 points

Ben Jones didn't recognise the dust speck as "trivial" on his torture scale; he identified it as "zero". There is a difference: if dust speck disutility is equal to zero, you shouldn't pay one cent to save 3^^^3 people from it. 0 * 3^^^3 = 0, and the disutility of losing one cent is non-zero. If you assign an epsilon of disutility to a dust speck, then 3^^^3 * epsilon is way more than 1 person suffering 50 years of torture. For all intents and purposes, 3^^^3 = infinity. The only way that infinity * X can avoid being worse than a finite number is if X is equal to 0. If X = 0.00000001, then torture is preferable to dust specks.
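Spelled out as a worked equation (a sketch only, with N standing in for 3^^^3 and d_speck, d_torture as ad hoc labels for the two disutilities, d_torture being finite), the dichotomy is:

\[
N \cdot d_{\text{speck}} =
\begin{cases}
N \cdot 0 = 0 < d_{\text{torture}}, & \text{if } d_{\text{speck}} = 0, \\
N \cdot \varepsilon \gg d_{\text{torture}}, & \text{if } d_{\text{speck}} = \varepsilon > 0,
\end{cases}
\]

so the comparison turns entirely on whether the speck's disutility is exactly zero; any positive epsilon, however small, is swallowed by the multiplier.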

Comment author: Hul-Gil 01 May 2012 06:22:43PM 10 points

Well, he didn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [them]", such that no amount of dust motes can add up to the experience of torture... even if they (the motes) have a nonzero amount of disutility.

I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billion different people, and Eliezer steals $100,000 from one person. The total amount of money I have stolen is greater than the amount that Eliezer has stolen; yet my victims will probably never even realize their loss, whereas the loss of $100,000 for one individual is significant. A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything, whereas Eliezer's victim has lost the ability to purchase many, many things.
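Taking the figures in this example at face value, the totals work out to

\[
10^9 \times \$0.01 \;=\; \$10{,}000{,}000 \;\gg\; \$100{,}000,
\]

so the thief of cents takes a hundred times more in total, yet no single one of those victims loses more than a cent.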

I believe utility for humans works in the same manner. Another thought experiment I found helpful is to imagine a certain amount of disutility, x, being experienced by one person. Let's suppose x is "being brutally tortured for a week straight". Call this situation A. Now divide this disutility among people until we have y people all experiencing (1/y)*x disutility - say, a dust speck in the eye each. Call this situation B. If we can add up disutility like Eliezer supposes in the main article, the total amount of disutility in either situation is the same. But now, ask yourself: which situation would you choose to bring about, if you were forced to pick one?

Would you just flip a coin?

I believe few, if any, would choose situation A. This brings me to a final point I've been wanting to make about this article, but have never gotten around to making. Mr. Yudkowsky often defines rationality as winning - a reasonable definition, I think. But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual.

I don't think this is winning; no one is happier with this situation. Like Eliezer says in reference to Newcomb's problem, if rationality seems to be telling us to go with the choice that results in losing, perhaps we need to take another look at what we're calling rationality.


*Well, assuming a population like our own, not every single individual would agree to experience a dust speck in the eye to save the to-be-tortured individual; but I think it is clear that the vast majority would.

Comment author: Johnicholas 27 April 2012 11:38:14AM 3 points

Analogous in that people once discriminated against other races and other sexes, but over time, with better ethical arguments, we decided it was better to treat other races and other sexes as worthy members of the "circle of compassion". I predict that if and when we interact with another species of fairly similar might (for example, if and when humans speciate), humancentrism will be considered as terrible as racism or sexism is now.

Moral realism (if I understand it correctly) is the position that moral truths like 'eating babies is wrong' are out in the world, something like the law of gravitation. Yudkowsky has argued convincingly in the Baby-Eater sequence against moral realism (and I agree with him). However, he implied a false fork: that if moral realism is false, then humancentrism is the answer. Yes, our sense of morality is based on our history. No, our history is not the same as our species.

DNA is one residue of our history, but libraries are a similar residue. There are two instances in our history of allying with a very alien form of life: viral eukaryogenesis, and the alliance with memes.

Does this help at all? I feel like I'm saying the same thing over again just with more words.

Comment author: Hul-Gil 30 April 2012 04:52:02AM 1 point

I feel like you're trying to say we should care about "memetic life" as well as... other life. But the parallel you draw seems flawed: an individual of any race and sex is still recognizably conscious, and an individual. Do we care about non-sentient life, memetic or otherwise? Should we care?

Is it okay to take toilet-pills? / Rationality vs. the disgust factor

5 Hul-Gil 25 July 2011 09:08AM

Well, I've a chance to prove my commitment to cold, hard rationality, unswayed by emotional concerns... I'm just not sure which route really is the more rational (assuming a desire to stay healthy).

In doubt as to the most logical course of action, I thought I'd get some LessWrongian input. To back up a bit and explain: I opened a pill bottle and was shaking one out into my hand, and since I'm a klutz the upshot was three pills in the (thankfully flushed) toilet. I fished them out, because these are three out of my last four pills; I take half a tablet a day, and don't get a refill until a week from now.

Now they're sitting on a dish in front of me, soaking for a few minutes in 91% isopropyl alcohol. Does LessWrong think they'll be okay to take? The alcohol should kill most germs, but I know it doesn't get all of them. What about viruses? Should I attempt to scrub the tablets to remove them? I've also always enjoyed informing my friends about various surfaces with more germs than toilet water (keyboard, phone), but that doesn't mean toilet water isn't horrifically toxic...

 

You decide. I promise to abide by the collective decision of LessWrong in this matter: should I take the toilet pills?

Overcoming Suffering & Buddhism

2 Hul-Gil 31 May 2011 04:35AM

The recent post (http://lesswrong.com/lw/5xx/overcoming_suffering_emotional_acceptance) by Kaj_Sotala is very reminiscent of Buddhism to me. Since no one has commented with similar sentiments, and since I get the impression Buddhism is not a common topic of discussion here, I thought I'd make a quick article for the curious. I'm not exactly a Buddhist myself, but I have a good few books about the topic and have experienced mild success with meditation.

Buddhism is one of the few religious belief systems not entirely repellent to me, for a couple of reasons. For one, Buddhism - or some traditions thereof, including the "original" (Theravada), I believe - encourages adherents to be skeptical. The emphasis is not on faith, gods, or symbolism, but rather on actual practice and experience: in other words, on obtaining evidence. You can see for yourself whether or not the system works, because the reward is not in another life. It is the cessation of suffering in this one.

For two, that emphasis on the problem of suffering seems very reasonable to me. Buddhism holds that the problem with this world is suffering, and that suffering can be alleviated by methods somewhat similar to the ones in Kaj_Sotala's post. (The choice of the word "mindfulness" - was that a coincidence, or a reference to the Buddhist concept of the same name?) The idea is that suffering results from unfulfilled desires, themselves a product of an uncontrolled mind. You become upset when the world is This Way, but you want it to be That Way; and even if you try to accept the world-as-it-is, your brain is rebellious. Unpleasant feelings arise, unbidden and unwelcome.

The solution, according to Buddhism, is meditation. There are many different types of meditation, both in technique and in topic meditated upon, but I won't go into them here. Meditation appears to be physically healthy just on its own; a quick Google search on "meditation brain" will bring up hundreds of articles about how it affects the thinking organ. However, the main goals of Buddhist meditation are a.) attaining control over your own mind (i.e., learning to separate sense impressions from emotions and values, so that harsh words or even blows cause no corresponding mental disturbance), and b.) attaining insight into Buddhist thought about subjects such as love, impermanence, mindfulness, or skillfulness.

Buddhist thought on some subjects (see next-to-final paragraph) I can leave, but mindfulness and skillfulness seem appropriate to LessWrong. As I understand it, the idea behind mindfulness is simply to be aware of what you're doing, rather than going through the motions - and to be aware of, and fix, cognitive biases. For beliefs and mental processes, failing to hit the "Explain" button (to steal from Mr. Yudkowsky) could be considered un-mindful. Things you don't think about are things you could be getting wrong. Skillfulness is related; it's not about skill at some particular task - it's about maximizing utility, to put it simply. The goal is no wasted or mistaken actions. Your actions should not result in unintended consequences, and your intended consequences should never fail to advance your goals in some way. Rationality is thus a very big part of Buddhism, since it is necessary to be rational to be mindful and skillful!

**One important note:** Buddhism has many traditions, and many, many different beliefs. A great deal of it is about as credible as any other religion. For instance, Buddhism holds that there is no "self", ultimately; however, it also holds that people are reincarnated... so what is it that is being reincarnated? I'm sure there is an apology for this somewhere, but the only explanation I've read made less sense than the question. Karma is also a silly idea, in my opinion. I've picked and chosen regarding Buddhist beliefs, and I'm no expert, so if it turns out what I've written isn't orthodox - well, I've warned you!

That's about all I have to say on the subject. Buddhist methods for overcoming suffering have served me well; it is from Buddhism that I first learned to fight depression over things I can do nothing about, and that regret is only useful insofar as it can inspire you to change, and that there is no excuse for being unskillful and unmindful even in the smallest task. I hope this post has served to impart some knowledge, and/or satisfy (or impart!) some curiosity.

A puzzle on the ASVAB

4 Hul-Gil 30 May 2011 04:01AM

 I was linked to this on another forum. No instructions were given, apparently - just this picture. What's the deal?

It seems to me the answer is clearly C, not A as the test indicates, and the members in the original thread appear to agree. However, attempted justifications of A have been made, none of which are very convincing to me - mainly because, if there are no instructions and an obvious answer, there's no real benefit for the test-makers in rewarding a different interpretation, which would almost certainly involve arbitrary assumptions about the rules they really want you to apply.

Trick questions on exams seem to rely on a failure to pay close attention to instructions, or to apply rules with sufficient rigor; when there are no instructions, what justification would anyone have for not choosing the most obvious interpretation? Any interpretation could be right!

What do the geniuses here at MoreRight think?
