Comment author: Robin_Hanson2 18 May 2007 10:20:24PM 14 points [-]

Also, whoever saves a person to live another fifty years, it is as if they had saved fifty people to live one more year. Whoever saves someone who very much enjoys life, it is as if they saved many people who are not sure they really want to live. And whoever creates a life that would not have otherwise existed, it is as if they saved someone who had an entire lifetime yet to live.

Comment author: adamisom 24 January 2013 05:33:26PM 3 points [-]

Which is why I'm still puzzled by a simplistic moral dilemma that just won't go away for me: are we morally obligated to have children, and as many as we can? Sans using that energy or money to more efficiently "save" lives, of course. It seems to me we should encourage people to have children (a common thing that many more people will actually do than donate philanthropically), in addition to other philanthropy encouragements.

Comment author: Eliezer_Yudkowsky 11 December 2008 10:46:12PM 2 points [-]

Venu: Given the tiny minority of AIs that will FOOM at all, what is the probability that an AI which has been designed for a purpose other than FOOMing, will instead FOOM?

It seems to me like a pretty small probability that an AI not designed to self-improve will be the first AI that goes FOOM, when there are already many parties known to me who would like to deliberately cause such an event.

Why not anti-predict that no AIs will FOOM at all?

A reasonable question from the standpoint of antiprediction; here you would have to refer back to the articles on cascades, recursion, the article on hard takeoff, etcetera.

Re Tim's "suddenly develop the ability to reprogram and improve themselves all-at-once" - the issue is whether something happens efficiently enough to be local, or fast enough to accumulate advantage between the leading Friendly AI and the leading unFriendly AI, not whether things can happen with zero resources or instantaneously. But the former position seems to be routinely distorted into the straw latter.

Comment author: adamisom 30 December 2012 07:27:22PM 0 points [-]

It seems to me like a pretty small probability that an AI not designed to self-improve will be the first AI that goes FOOM, when there are already many parties known to me who would like to deliberately cause such an event.

I know this is four years old, but this seems like a damn good time to "shut up and multiply" (thanks for that thoughtmeme by the way).

Comment author: Davorak 12 September 2012 02:25:03PM 8 points [-]

If I remember correctly the second quote was edited to be something along the lines of "will_newsome is awesome."

Comment author: adamisom 11 December 2012 11:03:14PM -1 points [-]

That is cute... no? More childish than evil. He should just be warned that that's trolling.

There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma.

Comment author: Vaniver 10 December 2012 08:51:44PM 9 points [-]

Consider Bob. Bob, like most unreflective people, settles many moral questions by "am I disgusted by it?" Bob is disgusted by, among other things, feces, rotten fruit, corpses, maggots, and men kissing men. Internally, it feels to Bob like the disgust he feels at one of those stimuli is the same as the disgust he feels at the other stimuli, and brain scans show that they all activate the insula in basically the same way.

Bob goes through aversion therapy (or some other method) and eventually his insula no longer activates when he sees men kissing men.

When Bob remembers his previous reaction to that stimulus, I imagine he would remember being disgusted, but not be disgusted when he recalls the stimulus. His positions on, say, same-sex marriage or the acceptability of gay relationships have changed, and he is aware that they have changed.

Do you think this example agrees with your account? If/where it disagrees, why do you prefer your account?

Comment author: adamisom 11 December 2012 06:32:19PM *  3 points [-]

I just wanted to tell everyone that it is great fun to read this in the voice of that voice actor for the Enzyte commercial :)

Comment author: shminux 11 December 2012 06:03:31AM 0 points [-]

They have an agenda (prewritten bottom line), which effectively nullifies anything they say.

Comment author: adamisom 11 December 2012 06:11:50PM *  1 point [-]

This is wrong.

If you discard the emotionally laden word "agenda" (in my experience, its usage always signals negative affect toward the thing with the "agenda"), what you're basically saying is this: anyone, or any organization, that concludes the evidence for something is strong and that it matters, and who consequently takes a stand, should have their conclusions thrown out a priori. You did say "effectively nullifies anything they say", and those are damn strong words. So what you're implying, AFAICT, is that you only listen to 'what someone has to say' if they don't come to a strong conclusion and become an advocate for change, even though one could argue they then have a moral obligation to do exactly that.

I'm disappointed to find this kind of thinking on LessWrong, to be honest, not least from one of the regulars.

Edit: specifically on the topic at hand, my initial response to yourbrainonporn.com is positive not only because of the comprehensive and well-cited posts I read on the homepage, but because of Gary Wilson's response (about halfway down) here: http://www.yourbrainrebalanced.com/index.php?topic=2754.0 -- It's clear that he really knows what he's talking about, even when the average neurologist doesn't. (I'm not saying I believe it's perfect--I can see motivated cognition going on, and am disappointed in the lack of mention of selection bias--but from what I can tell he is... (removes sunglasses).... less wrong than the average expert.)

Comment author: shminux 15 October 2011 06:49:02PM 17 points [-]

You say "porn" like it's a bad thing.

There are some useful bits on that site, but it seems too one-sided (with the "porn is addictive and bad" bottom line already written) to be taken seriously. Neither of the authors has any formal training in neuroscience or psychology, which does not help their case, either.

Comment author: adamisom 11 December 2012 05:54:27AM 0 points [-]

I know this is old. What is really meant by "does not help their case, either" is "it hurts their case that they don't have formal training". I vehemently disagree. Not that I think formal training is bad. Just that I think giving emphasis to this indirect indicator of their competence is misleading, because there's plenty of direct evidence--if you read the site--that they 'know what they're talking about'.

Comment author: ChristianKl 04 December 2012 05:12:23PM 0 points [-]

Most predictions in daily life aren't about sports or about which politician gets elected. Most meaningful predictions that I make in my daily life aren't of the type you would find on Intrade.

How often do you make a decision in your daily life where it matters which sports team wins? In my life that doesn't happen. Most of my personal decisions also don't depend on which politicians win an election.

To get educated, you send students to university, where they try to learn the knowledge in textbooks. Students who study sports science don't focus on memorizing sports statistics, and students who study politics don't focus on memorizing which politician won which election.

Most of the knowledge that people can acquire is outside the category of predictions you find on Intrade.

If people want to learn how the world works, reading textbooks is better than reading the news. By the same token, it makes sense to calibrate on textbook knowledge.

Calibrating on actual personal events is also good; it means you get better at predicting other personal events.

Comment author: adamisom 10 December 2012 07:01:53PM *  1 point [-]

It seems to me this could be a smartphone app. Whenever you want to make a prediction about a personal event, you open the app and speak, with a pause between the event and how likely you think it is. The app could store the verbatim text, separating question from answer, and timestamp recordings in case you want to update your prediction later. If you learn to specify when you think the outcome will occur, it can make a sound to remind you to check off whether it happened; otherwise it could remind you periodically, say at the end of every day. Why couldn't it have data analysis tools to let you visualize your calibration, or find useful patterns and alert you? Seems a plausible app to me.
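A minimal sketch of the data model such an app might use (all names here are hypothetical, not an existing app's API): log predictions with a stated probability, resolve them later, and bucket resolved predictions by confidence to compare stated probability against observed frequency.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Prediction:
    question: str
    probability: float                 # stated confidence that the event happens
    created: float = field(default_factory=time.time)
    outcome: Optional[bool] = None     # filled in once the event resolves

class PredictionLog:
    def __init__(self):
        self.predictions = []

    def record(self, question, probability):
        p = Prediction(question, probability)
        self.predictions.append(p)
        return p

    def resolve(self, prediction, happened):
        prediction.outcome = happened

    def calibration(self, bins=10):
        """For each confidence bin with data, return
        (mean stated confidence, observed frequency, count)."""
        resolved = [p for p in self.predictions if p.outcome is not None]
        table = []
        for i in range(bins):
            lo, hi = i / bins, (i + 1) / bins
            in_bin = [p for p in resolved
                      if lo <= p.probability < hi
                      or (hi == 1.0 and p.probability == 1.0)]
            if in_bin:
                stated = sum(p.probability for p in in_bin) / len(in_bin)
                observed = sum(p.outcome for p in in_bin) / len(in_bin)
                table.append((stated, observed, len(in_bin)))
        return table
```

A well-calibrated user would see the observed frequency in each bin track the stated confidence; speech recognition, reminders, and visualization would sit on top of a log like this.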

Comment author: Vladimir_Nesov 04 December 2012 12:32:55PM 22 points [-]

The world is complicated.

Comment author: adamisom 06 December 2012 09:10:01AM *  1 point [-]

The only time I've ever read a vague four-word sentence that deserves an upvote. Such things tickle me.

Comment author: chaosmosis 06 December 2012 08:10:49AM *  2 points [-]

The Best Way Anyone Has Found So Far By A Fair Margin.

This also seems problematic, for the same reasons.

Comment author: adamisom 06 December 2012 09:04:10AM 3 points [-]

And what if it is? I am not claiming this is so. It is rhetorical. What then?

Comment author: Kawoomba 04 December 2012 06:09:16PM *  5 points [-]

Which is why everyone should just provide the result of a certified IQ test, just so there's less incentive to signal intellectual superiority, with the lines already drawn.

(Heh, that was smart signalling!)

(Also, that last sentence.)

(And this one?)

(Diminishing returns probably.)

Comment author: adamisom 04 December 2012 08:42:20PM *  1 point [-]

Darn it.

Even though you are talking explicitly about signaling, I still couldn't help myself from liking it.

I also like chaosmosis's comment. It expressed what I should have... Though his comment might also be sinister meta-signaling-signaling trolling :P

God, I hate signaling.

(Wait, am I doing it right now?)

(Oh shit, and now.)

(THERE IS NO RELEASE FROM THE KRAKEN! RUN FOR YOUR LIFE AND NEVER LOOK BACK!!)
