Comment author: evand 20 June 2017 05:32:49AM 0 points [-]

I hope you have renter's insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.

Comment author: ThoughtSpeed 19 June 2017 02:11:54AM 0 points [-]

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That's a lot more people having cancer than getting nuked!

(It would be hilarious if Zubon could actually respond after almost a decade)

Comment author: Jiro 16 June 2017 08:52:50AM *  0 points [-]

Or, in less binary terms, why do you assign things the probabilities that you do?

I'm assuming that you assign it a high probability.

I personally am assigning it a high probability only for the sake of argument.

Since I am doing it for the sake of argument, I don't have, and need not have, any reason for doing so (other than its usefulness in argument).

In response to comment by Jiro on Nonperson Predicates
Comment author: John_Mlynarski 16 June 2017 03:37:30AM 0 points [-]

Eliezer suggested that, in order to avoid acting unethically, we should refrain from casually dismissing the possibility that other entities are sentient. I responded that I think that's a very good idea and we should actually implement it. Implementing that idea means questioning assumptions that entities aren't sentient. One tool for questioning assumptions is asking "What do you think you know, and why do you think you know it?" Or, in less binary terms, why do you assign things the probabilities that you do?

Now do you see the relevance of asking you why you believe what you do as strongly as you do, however strongly that is?

I'm not trying to "win the debate", whatever that would entail.

Tell you what though, let me offer you a trade: If you answer my question, then I will do my best to answer a question of yours in return. Sound fair?

Comment author: Jiro 15 June 2017 10:40:35PM *  0 points [-]

Having confidence in the belief is irrelevant. Assuming that you agree with it is relevant, because

1) Arguments should be based on premises that the other guy accepts. You probably accept the premise that video game characters aren't conscious.

2) It is easy to filibuster an argument by questioning things that you don't actually disagree with. Because the belief that video game characters aren't conscious is so widespread, this is probably such a filibuster. I wish to avoid those.

Comment author: cousin_it 15 June 2017 08:20:19AM 0 points [-]

Welcome! You can ask your question in the open thread as well.

Comment author: VAuroch 15 June 2017 05:54:37AM 0 points [-]


Comment author: research_prime_space 14 June 2017 09:40:43PM 2 points [-]

Hi! I'm 18 years old, female, and a college student (don't want to release personal information beyond that!). I'm majoring in math, and I hopefully want to use those skills for AI research :D

I found you guys from EA, and I started reading the sequences last week, but I really do have a burning question I want to post to the Discussion board so I made an account.

Comment author: entirelyuseless 13 June 2017 01:34:45PM 1 point [-]

I desire to be the greatest man of the 21st century.

A good preliminary estimate of the probability of this happening would be one in ten billion, given the number of people who will live during the 21st century.

Comment author: DragonGod 13 June 2017 12:43:42PM *  1 point [-]

I’m a 19-year-old Nigerian male. I am strictly heterosexual and an atheist. I am a strong narcissist, and I may have Narcissistic Personality Disorder (though I am cognizant of this vulnerability and work against it, which should lower the probability of me suffering from NPD). I am ambitious, and my goal in life is to plant my flag on the sands of time; engrave my emblem in history; immortalise myself in the memory of humanity. I desire to be the greatest man of the 21st century. I am a transhumanist and intend to live indefinitely, but failing that, being the greatest man of the 21st century would suffice. I fear death.

I'm an insatiably curious person. My interests are broad: rationality, science, mathematics, philosophy, economics, computing, and literature.

My hobbies include discourse and debate, writing, reading, anime and manga, strategy games, problem-solving, and learning new things.

I find intelligence the most attractive quality in a potential partner—ambition and drive form a close second.

In response to comment by Jiro on Nonperson Predicates
Comment author: John_Mlynarski 12 June 2017 01:12:03PM *  0 points [-]

Firstly, it seems more accurate to say that the standard default belief is that video game characters possess awareness. That the vast majority rationalize their default belief as false doesn't change that.

Secondly, that's argumentum ad populum, which is evidence (common beliefs do tend to be true) but not very strong evidence. I asked why you're as confident in your belief as you are. Are you as convinced of this belief as you are of most beliefs held by 99.99% of people? If you're more (or less) convinced, why is that?

Thirdly, you seem to be describing a reason for believing that I share your belief that video game characters aren't sentient, which is different from a reason for thinking that your belief is correct. I was asking why you think you're right, not why you assumed that I agree with you.

Comment author: adamzerner 11 June 2017 04:59:12PM 0 points [-]

I don't. I'm not scope insensitive. The alarm system is working fine; it's just overly sensitive to people cooking (I think). I'm eager to move out ASAP though.

Comment author: Lumifer 10 June 2017 07:33:13PM *  2 points [-]

Counterpoint: do you understand the magnitude of how bad it would be if there was a fire and you ended up getting seriously injured or dying?

You continue to live in the apartment building which already had two fires and which has a malfunctioning alarm system.

In response to Scope Insensitivity
Comment author: adamzerner 10 June 2017 12:14:33AM *  0 points [-]

Real-world example: the fire alarm goes off in my apartment building at night about once every two weeks. Many people decide to stay in their rooms rather than evacuate the building. They don't appreciate the magnitude of how bad it would be if there were a real fire and they ended up seriously injured or dead. (There have been two real fires so far; the chance of a real fire is not trivial.)

In response to The Power of Agency
Comment author: cousin_it 09 June 2017 11:34:15AM *  2 points [-]

Coming back to this post, I feel like it's selling a dream that promises too much. I've come to think of such dreams as Marlboro Country ads. For every person who gets inspired to change, ten others will be slightly harmed because it's another standard they can't achieve, even if they buy what you're selling. Figuring out more realistic promises would do us all a lot of good.

Comment author: Elo 09 June 2017 09:44:52AM 0 points [-]

done for you

In response to The Power of Agency
Comment author: AshwinV 09 June 2017 09:30:59AM 0 points [-]

I want to upvote this again.

Comment author: strangepoop 08 June 2017 03:05:14PM 0 points [-]

Isn't this just a special case of Berkson's paradox?

Comment author: DragonGod 08 June 2017 01:41:07PM 0 points [-]

I doubt this.

In response to The Moral Void
Comment author: themusicgod1 05 June 2017 01:58:52AM 0 points [-]

This post is generalizable. Even if you don't think it's wrong to kill people as a general rule, there's probably some other moral act #G30429 that you *don't* think would be appropriate, and the point still holds: Rowhammering the bit that says "Don't do #G30429" is probably not as impossible as it seems in the long run.

(Meta: when thinking about this, I found it difficult to recall all of the applicable arguments I've learned in moral philosophy over the past 16 years. I knew roughly where you were going, but it was like traveling through a city I hadn't visited in years: I only half-recognized the territory. This gave me an extra impression that this bit could be easily flipped.)
