Comment author: pnrjulius 23 May 2012 03:57:22AM 1 point

Really? Got any examples?

I've read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can't think of any where the technology was just better, the way that actual technologies often are - would any of us willingly go back to the days before electricity and running water?

In response to comment by pnrjulius on Applause Lights
Comment author: Hul-Gil 23 May 2012 07:07:32AM 0 points

but I can't think of any where the technology was just better, the way that actual technologies often are

I find that a little irritating - for people supposedly open to new ideas, science fiction authors sure seem fearful and/or disapproving of future technology.

Comment author: John_Maxwell_IV 11 May 2012 05:39:52AM *  4 points

It seems like everyone is talking about SL4; here is a link to what Richard was probably complaining about:

http://www.sl4.org/archive/0608/15895.html

Comment author: Hul-Gil 11 May 2012 07:24:24AM *  8 points

Thanks. I read the whole debate, or as much of it as is there; I've prepared a short summary to post tomorrow if anyone is interested in knowing what really went on ("as according to Hul-Gil", anyway) without having to hack their way through that thread-jungle themselves.

(Summary of summary: Loosemore really does know what he's talking about - mostly - but he also appears somewhat dishonest, or at least extremely imprecise in his communication.)

Comment author: RomeoStevens 10 May 2012 08:32:53PM 2 points

any amount and quality of question answering is not.

"how do I build an automated car?"

Comment author: Hul-Gil 11 May 2012 03:44:40AM *  3 points

That doesn't help you if you need a car to take you someplace in the next hour or so, though. I think jed's point is that sometimes it is useful for an AI to take action rather than merely provide information.

Comment author: metaphysicist 11 May 2012 01:49:07AM 7 points

So in summary, I am very curious about this situation; why would a community that has been - to me, almost shockingly - consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?

The answer is probably that you overestimate that community's dedication to rationality because you share its biases. The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?

My take is that neither side in this argument distinguished itself. Loosemore called for an "outside adjudicator" to settle a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off), in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these "sins" deserved a ban (no wonder the raw feelings come back to haunt); no honorable person would accept a position where he has the authority to exercise such power (a party to a dispute is biased). Or at the very least, he wouldn't use it the way Yudkowsky did, when he was the banned party's main antagonist.

Comment author: Hul-Gil 11 May 2012 03:29:11AM *  4 points

The answer is probably that you overestimate that community's dedication to rationality because you share its biases.

That's probably no small part of it. However, even if my opinion of the community is rose-tinted, note that I refer specifically to observation. That is, I've sampled a good number of posts and comments here on LessWrong, and I see people behaving rationally in arguments - appreciation of polite and lucid dissent, no insults or ad hominem attacks, etc. It's harder to tell what's going on with karma, but again, I've not seen any one particular individual harassed with negative karma merely for disagreeing.

The main post demonstrates an enormous conceit among the SI vanguard. Now, how is that rational? How does it fail to get extensive scrutiny in a community of rationalists?

Can you elaborate, please? I'm not sure what enormous conceit you refer to.

My take is that neither side in this argument distinguished itself. Loosemore called for an "outside adjudicator" to settle a scientific argument. What kind of obnoxious behavior is that, when one finds oneself losing an argument? Yudkowsky (rightfully pissed off), in turn, convicted Loosemore of a scientific error, tarred him with incompetence and dishonesty, and banned him. None of these "sins" deserved a ban

I think that's an excellent analysis. I certainly feel like Yudkowsky overreacted, and as you say, in the circumstances no wonder it still chafes; but as I say above, Richard's arguments failed to impress, and calling for outside help ("adjudication" for an argument that should be based only on facts and logic?) is indeed beyond obnoxious.

Comment author: Richard_Loosemore 10 May 2012 07:11:15PM 1 point

Holden, I think your assessment is accurate ... but I would venture to say that it does not go far enough.

My own experience with SI, and my background, might be relevant here. I am a member of the Math/Physical Science faculty at Wells College, in Upstate NY. I also have had a parallel career as a cognitive scientist/AI researcher, with several publications in the AGI field, including the opening chapter (coauthored with Ben Goertzel) in a forthcoming Springer book about the Singularity.

I have long complained about SI's narrow and obsessive focus on the "utility function" aspect of AI -- simply put, SI assumes that future superintelligent systems will be driven by certain classes of mechanism that are still only theoretical, and which are very likely to be superseded by other kinds of mechanism that have very different properties. Even worse, the "utility function" mechanism favored by SI is quite likely to be so unstable that it will never allow an AI to achieve any kind of human-level intelligence, never mind the kind of superintelligence that would be threatening.

Perhaps most important of all, though, is the fact that the alternative motivation mechanism might (and notice that I am being cautious here: might) lead to systems that are extremely stable. Which means both friendly and safe.

Taken in isolation, these thoughts and arguments might amount to nothing more than a minor addition to the points that you make above. However, my experience with SI is that when I tried to raise these concerns back in 2005/2006 I was subjected to a series of attacks that culminated in a tirade of slanderous denunciations from the founder of SI, Eliezer Yudkowsky. After delivering this tirade, Yudkowsky then banned me from the discussion forum that he controlled, and instructed others on that forum that discussion about me was henceforth forbidden.

Since that time I have found that when I partake in discussions on AGI topics in a context where SI supporters are present, I am frequently subjected to abusive personal attacks in which reference is made to Yudkowsky's earlier outburst. This activity is now so common that when I occasionally post comments here, my remarks are very quickly voted down below a threshold that makes them virtually invisible. (A fate that will probably apply immediately to this very comment).

I would say that, far from deserving support, SI should be considered a cult-like community in which dissent is ruthlessly suppressed in order to exaggerate the point of view of SI's founders and controllers, regardless of the scientific merits of those views, or of the dissenting opinions.

Comment author: Hul-Gil 11 May 2012 01:00:49AM *  9 points

Can you provide some examples of these "abusive personal attacks"? I would also be interested in this ruthless suppression you mention. I have never seen this sort of behavior on LessWrong, and would be shocked to find it among those who support the Singularity Institute in general.

I've read a few of your previous comments, and while I felt that they were not strong arguments, I didn't downvote them, because they were intelligent and well-written, and competent constructive criticism is something we don't get nearly enough of. Indeed, it is usually welcomed. The number of downvotes given to the comments, therefore, does seem odd to me. (Any LW regular who is familiar with the situation is also welcome to comment on this.)

I have seen something like this before, and it turned out the comments were being downvoted because the person making them had gone over, and over, and over the same issues, unable or unwilling either to competently defend them or to change his own mind. That's no evidence that the same thing is happening here, of course, but I give the example because in my experience, this community is almost never vindictive or malicious, and is laudably willing to consider any cogent argument. I've never seen an actual insult levied here by any regular, for instance, and well-constructed dissenting opinions are actively encouraged.

So in summary, I am very curious about this situation; why would a community that has been - to me, almost shockingly - consistent in its dedication to rationality, and honestly evaluating arguments regardless of personal feelings, persecute someone simply for presenting a dissenting opinion?

One final thing I will note is that you do seem to be upset about past events, and it seems like it colors your view (and prose, a bit!). From checking both here and on SL4, for instance, your later claims regarding what's going on ("dissent is ruthlessly suppressed") seem exaggerated. But I don't know the whole story, obviously - thus this question.

In response to comment by Hul-Gil on Circular Altruism
Comment author: Salivanth 01 May 2012 12:23:30PM 2 points

Ben Jones didn't recognise the dust speck as "trivial" on his torture scale; he identified it as "zero". There is a difference: if dust speck disutility is equal to zero, you shouldn't pay one cent to save 3^^^3 people from it. 0 * 3^^^3 = 0, and the disutility of losing one cent is non-zero. If you assign an epsilon of disutility to a dust speck, then 3^^^3 * epsilon is way more than 1 person suffering 50 years of torture. For all intents and purposes, 3^^^3 = infinity, and the only way infinity * X can fail to outweigh a finite harm is if X is equal to 0. If X = 0.00000001, then torture is preferable to dust specks.
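
A minimal sketch of that arithmetic (all figures are hypothetical stand-ins, and a merely astronomical N replaces 3^^^3, which no computer can represent):

    # Hypothetical numbers; no real units of disutility are implied.
    TORTURE = 1_000_000.0  # assumed disutility of 50 years of torture
    N = 10**100            # stand-in for 3^^^3 (vastly understated)

    for speck in (0.0, 0.00000001):
        total = speck * N
        worse = "specks" if total > TORTURE else "torture"
        print(f"speck disutility {speck}: total {total:.3g} -> {worse} is worse")

    # With speck disutility 0.0 the total is 0 and torture is worse; with
    # any positive epsilon the specks total dwarfs the torture figure.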

Comment author: Hul-Gil 01 May 2012 06:22:43PM *  10 points

Well, he doesn't actually identify dust mote disutility as zero; he says that dust motes register as zero on his torture scale. He goes on to mention that torture isn't on his dust-mote scale, so he isn't just using "torture scale" as a synonym for "disutility scale"; rather, he is emphasizing that there is more than just a single "(dis)utility scale" involved. I believe his contention is that the events (torture and dust-mote-in-the-eye) are fundamentally different in terms of "how the mind experiences and deals with [them]", such that no amount of dust motes can add up to the experience of torture... even if they (the motes) have a nonzero amount of disutility.

I believe I am making much the same distinction with my separation of disutility into trivial and non-trivial categories, where no amount of trivial disutility across multiple people can sum to the experience of non-trivial disutility. There is a fundamental gap in the scale (or different scales altogether, à la Jones), a difference in how different amounts of disutility work for humans. For a more concrete example of how this might work, suppose I steal one cent each from one billion different people, and Eliezer steals $100,000 from one person. The total amount of money I have stolen is greater than the amount Eliezer has stolen; yet my victims will probably never even realize their loss, whereas the loss of $100,000 for one individual is significant. A cent does have a nonzero amount of purchasing power, but none of my victims have actually lost the ability to purchase anything, whereas Eliezer's victim has lost the ability to purchase many, many things.

I believe utility for humans works in the same manner. Another thought experiment I found helpful is to imagine a certain amount of disutility, x, being experienced by one person. Let's suppose x is "being brutally tortured for a week straight". Call this situation A. Now divide this disutility among people until we have y people all experiencing (1/y)*x disutility - say, a dust speck in the eye each. Call this situation B. If we can add up disutility like Eliezer supposes in the main article, the total amount of disutility in either situation is the same. But now, ask yourself: which situation would you choose to bring about, if you were forced to pick one?

Would you just flip a coin?

I believe few, if any, would choose situation A. This brings me to a final point I've been wanting to make about this article, but have never gotten around to doing. Mr. Yudkowsky often defines rationality as winning - a reasonable definition, I think. But with this dust speck scenario, if we accept Mr. Yudkowsky's reasoning and choose the one-person-being-tortured option, we end up with a situation in which every participant would rather that the other option had been chosen! Certainly the individual being tortured would prefer that, and each potentially dust-specked individual* would gladly agree to experience an instant of dust-speckiness in order to save the former individual.

I don't think this is winning; no one is happier with this situation. Like Eliezer says in reference to Newcomb's problem, if rationality seems to be telling us to go with the choice that results in losing, perhaps we need to take another look at what we're calling rationality.


*Well, assuming a population like our own, not every single individual would agree to experience a dust speck in the eye to save the to-be-tortured individual; but I think it is clear that the vast majority would.
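
A toy sketch of the two aggregation rules contrasted above (the cutoff and all figures are hypothetical; this illustrates the structure of the argument, not anyone's endorsed model):

    # Two hypothetical ways to total disutility across people.
    TRIVIAL_CUTOFF = 1.0  # assumed threshold below which a harm is "trivial"

    def linear_total(harm, people):
        # Straight summation: disutility adds across people.
        return harm * people

    def thresholded_total(harm, people):
        # "Separate scales": non-trivial harm dominates, and trivial harm
        # only breaks ties (compare the pairs lexicographically).
        total = harm * people
        return (total, 0.0) if harm >= TRIVIAL_CUTOFF else (0.0, total)

    torture = (1_000_000.0, 1)     # situation A: one person, huge harm
    specks = (0.00000001, 10**30)  # situation B: tiny harm, many people

    # Summation says the specks are worse in total (so choose torture)...
    print(linear_total(*torture) < linear_total(*specks))            # True
    # ...while the thresholded rule says torture is worse (so choose specks).
    print(thresholded_total(*torture) > thresholded_total(*specks))  # True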

Comment author: FrankAdamek 30 April 2012 03:14:06PM *  1 point

I would summarize the main points as:

  • The process behind deprecation
  • The role of social considerations in rationality and dysrationality
  • More information on how the unconscious works (and what it can do when we understand it)
  • A more detailed overview of the ways we can improve unconscious thinking, along with examples of actually doing so.
  • Information on the process of investigating this thinking

The remainder is unsupported folk psychology, repetition, and superfluous elaboration.

There should be a "looks like" in there somewhere, at least with regard to "unsupported folk psychology" (repetition and superfluous elaboration... I wouldn't put the latter in those terms, but those may be an issue). Again, this may look similar in some ways. But it is the product of multiple revisions of the ideas: looking for different ways to think of them that help me use them more productively, cutting things down to their fundamentals, and removing elements from the model that didn't buy me any bits of prediction. (Mostly) everything here is load-bearing.

I think you need to improve your own writing, rather than using someone else to fix it up afterwards. A programmer has to fix his own code, and a writer likewise.

Obviously that would be better! While I've received moderate compliments on my writing in the past, I wish I were much better. I would love to be able to phrase an idea more clearly, simply, and accurately, while keeping the reader engaged and perhaps even entertained. These posts are my current best efforts, and I know that despite this the writing isn't going to be excellent, and that a more experienced writer would probably be able to put together something much better, with less work. I would love to know how to do that!

But that doesn't mean I'm not going to try to use whatever tools I find available to improve that writing, such as looking at a professionally edited version of the very thing I worked on, if I get the chance to read something like that.

Comment author: Hul-Gil 01 May 2012 02:03:32AM *  5 points

I think you're a good writer, in that you form sentences well, you understand how the language works, and your prose is not stilted or boring. The problem I personally had, mostly with the previous two entries in this series, was that the "meat" - the interesting bits telling me what you had concluded, and why, and how to apply it, and how (specifically) you have applied it - seemed very spread out among a lot of filler or elaboration. I couldn't tell what you were eventually going to arrive at, or whether it'd be of use or interest to me. Too much generality, perhaps: compare "this made my life better" with "by doing X I caught myself thinking Y and changed this to result in the accomplishment of Z."

I tell you this only in case you are interested in constructive criticism from yet another perspective; some undoubtedly consider the things I have mentioned virtues in an author. In any case, I have upvoted this article; it doesn't deserve a negative score, I think - long-winded, maybe; poorly done or actively irrational, certainly not. The ideas are interesting, the methodology is reasonable, and the effort is appreciated.

Comment author: RichardKennaway 30 April 2012 10:34:25AM *  8 points

Was there an obvious way to cut it to 1/3 the length? If a professional editor was able to do so and you were willing to send it to me, that would probably be really helpful for me.

1/3 the length would still be far too long. Does the following leave anything out?

To improve your performance in any sphere:

  1. Observe and learn what works.

  2. Most goals are subgoals of higher goals. Conflicts among them can often be resolved by looking for the higher goals and asking what will really serve them.

There. 41 words instead of 4388. The remainder is unsupported folk psychology, repetition, and superfluous elaboration.

I think you need to improve your own writing, rather than using someone else to fix it up afterwards. A programmer has to fix his own code, and a writer likewise.

Comment author: Hul-Gil 01 May 2012 01:42:58AM *  0 points

That's nicely done! Clear, concise, and immediately applicable. I think Frank himself is an intelligent person with good and interesting ideas, but the "meat" of these posts seems spread out among a lot of filler/elaboration - possibly why they're hard to skim. I wasn't even sure, for quite a while, what the whole series was really about, beyond "general self-improvement."

This latest article is much more "functional" than the previous two, though, so I think we're moving in the right direction.

One thing your comment brings to mind - Frank notes something about unconscious mental processes being trainable, and the suggestion is that one can train them to be rational, or at least more accurate. (If I remember correctly.) Is this idea included in your comment? Perhaps under "folk psychology"?

It seems like an interesting concept, though I was unable to find any instruction on how to actually accomplish it. (But I haven't looked too hard yet.)

Comment author: dlthomas 01 May 2012 12:20:07AM *  4 points

While I don't have anything in particular to recommend in its place, it's perhaps worth noting that the contributors over at Language Log don't think terribly highly of Strunk & White; to paraphrase from my recollection, I think the criticism runs that the authors frequently ignore their own advice, much of which isn't any good anyway.

Comment author: Hul-Gil 01 May 2012 01:28:13AM 1 point

Upvoted both this and its parent, because the quoted bit of Strunk and White seems like good advice, and because the linked criticism of Strunk and White is lucid and informative as well as entertaining. I learned about two new but related things, one right after the other; my conclusions about Strunk and White swung from one position to the opposite in quick succession. Quite an experience! ("Oh look, there are these two folks who are recognized authorities on English, and they're presenting good writing advice. Strunk and White... must remember. Wait; here's a response... Oh - turns out not much of their advice is that good after all! Passive voice IS acceptable! Language Log... must remember.")

Comment author: Ghazzali 28 April 2012 01:38:05AM -3 points

There are any number of anomalies that can be discussed; let's just name the Cambrian Explosion as one of the main ones, albeit a very general one. Where would you put the problem of the Cambrian Explosion? A, B, or C? But more importantly, why?

Not sure what you mean by 'if something happens as a predictable, inevitable consequence of the rules regarding how things behave, it makes little sense to call it a consequence of chance'. All you would have to do is keep going back to the source of the rock's behavior in order to see if it was by chance or design. Are those rules you are talking about designed, or by chance? And so on... If you agree that those rules that govern the falling of the rock, and the rock's existence itself (and yes, any rules that governed how it came into existence), came about by chance, then you hold one side of the dialectic; that is, you have a worldview that believes existence is produced through chance. You can't say "I don't believe in that, because I believe existence has come about through natural laws," and so on, because in the end you would have come to some kind of conclusion as to whether those laws are by chance or designed.

You cannot escape these two conclusions; you must pick one or the other. If you pick the chance worldview, you are heavily reliant on evolution to validate your worldview.

Comment author: Hul-Gil 01 May 2012 01:12:18AM *  2 points

If you pick the chance worldview, you are heavily reliant on evolution to validate your worldview.

No, not at all. Evolution is one aspect of one field of one discipline. One can argue that existence came about by chance (and I'm not comfortable with that term) without referring to evolution at all; there are many other reasons to reject the idea of a designer.

See Desrtopa's reply, below, regarding chance and design and whether a designer helps here. S/he said it better than I could!
