Comment author: Gondolinian 21 February 2015 12:47:27AM 5 points [-]

[META]

Is anyone in favor of going back to the old system of having one discussion thread per HPMOR chapter instead of the current system based on number of comments?

Comment author: see 21 February 2015 05:39:31AM 6 points [-]

Note that this poll only samples people who care about these threads enough to read them. People who avoid these threads and don't like them cluttering /discussion will not see it.

Comment author: spencerth 18 January 2015 01:24:13PM 3 points [-]

Right. I think this is one of the key issues. When things are 'natural' or 'random' (in where, when, and how often they happen), or are otherwise uncontrollable, humans are much more willing to accept them. When agency comes into play, it changes the perspective completely: "how could we have changed culture/society/national policies/our surveillance system/educational system/messaging/nudges/pick your favorite human-controllable variable" to have prevented this, or to prevent it in the future? It's the very idea that we could influence it, and/or that it's perpetuated by 'one of us', that makes it so salient and disturbing. From a consequentialist perspective, it's definitely not rational, and ideally it shouldn't affect our allocation of resources to combat threats.

Is there a particular bias that covers "caring about something more, however irrelevant/not dangerous, just because a perceived intelligent agent was responsible?"

Comment author: see 20 January 2015 10:55:02AM 2 points [-]

Well, there are definitely forms that are irrational, but there's also the perfectly rational factor of having to account for feedback loops.

We don't have to consider that shifting resources from lightning death prevention to terrorism prevention will increase the base rate of lightning strikes; we do have to consider that a shift in the other direction can increase (or perhaps decrease) the base rate of terrorist activity. It is thus inherently hard to compare the expected effect of a dollar of lightning strike prevention against a dollar of terrorism prevention, over and above the uncertainties involved in comparing the expected effect of (say) a dollar of lightning strike prevention against a dollar of large asteroid collision protection.

Comment author: Daniel_Burfoot 17 January 2015 08:16:02PM 16 points [-]

in the USA you're four times more likely to be struck by thunder than by terrorists

Our minds are actually picking up on a valid statistical issue here, which is that the number of people killed by terrorists is much more variable than the number of people killed by lightning. Since lightning strikes are almost completely uncorrelated random events, the distribution of deaths by lightning is governed by the Central Limit Theorem and so is nearly Gaussian. If X people died from lightning in 2014, then it is very unlikely that 2X people will die from lightning in 2015, and astronomically unlikely that 100X people will so die.

In contrast, if X people die from terrorism in 2014, you cannot deduce very much about the probability that 100X people will die from terrorism in 2015. Nassim Taleb would say that lightning deaths happen in Mediocristan while terrorism deaths happen in Extremistan.
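The Mediocristan/Extremistan contrast can be sketched in a quick simulation. All the numbers below are invented for illustration: a binomial stand-in for many uncorrelated lightning deaths, and a Pareto draw (tail exponent just above 1) for heavy-tailed yearly terrorism tolls. The point is only that the worst year dwarfs the typical year in one case and not the other:

```python
import random
import statistics

random.seed(42)
YEARS = 2000

# Lightning: ~50 expected deaths/year, each an independent rare event.
# A binomial(1000, 0.05) stands in for the sum of many uncorrelated strikes;
# by the CLT its distribution concentrates tightly around its mean.
lightning = [sum(random.random() < 0.05 for _ in range(1000)) for _ in range(YEARS)]

# Terrorism: yearly tolls drawn from a heavy-tailed Pareto distribution,
# where a single year can dwarf all typical years combined.
terror = [random.paretovariate(1.1) * 10 for _ in range(YEARS)]

def spread(xs):
    """Ratio of the worst year to the median year."""
    return max(xs) / statistics.median(xs)

print(f"lightning max/median: {spread(lightning):.1f}")  # close to 1
print(f"terrorism max/median: {spread(terror):.1f}")     # orders of magnitude
```

Under these assumptions the thin-tailed series never strays far from its median, while the heavy-tailed series produces rare years that are hundreds of times the typical toll, which is exactly why last year's count tells you little about next year's worst case.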

Comment author: see 17 January 2015 11:15:45PM 7 points [-]

Further, of course, we know that lightning strikes are not controlled by intelligent beings, while terrorist strikes are.

If there's a major multi-fatality lightning strike, it's unlikely to encourage weather phenomena to engage in copycat attacks. Nor will all sorts of counter-lightning measures dissuade clouds from generating static electricity and instead dumping more rain or something.

Comment author: see 27 December 2014 08:48:17PM 0 points [-]

Some people (including me) have made comments along these lines before. There's nothing theoretically wrong with the view that evolutionary history may have created multiple less-than-coordinated utility functions that happen to share one brain.

The consequences have some serious implications, though. If a single human has multiple utility functions, it is highly unlikely (for reasons similar to Arrow's Paradox) that these work out compatibly enough that you can have an organism-wide utility expressed as a real number (as opposed to a hypercomplex number or matrix). And if you have to map utility to a hypercomplex number or matrix, you can't "shut up and multiply", because while 7*3^^^3 is always a really big number, matrix math is a lot more complicated. Utilitarianism becomes mathematically intractable as a result.
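A minimal sketch of why the scalar case is special, using made-up two-component utilities (one component per hypothetical sub-agent): real numbers are totally ordered, so any two outcomes are comparable, but vectors under Pareto dominance are not, so there may be no "biggest utility" to pick.

```python
# Scalar utilities are totally ordered: "shut up and multiply" works
# because any two real numbers are comparable.
assert 7 * 3 > 20

# Vector utilities: one component per sub-agent sharing the brain.
# These outcome scores are invented purely for illustration.
outcome_a = (5, 1)   # sub-agent 1 loves it, sub-agent 2 doesn't
outcome_b = (1, 5)   # the reverse

def dominates(u, v):
    """Pareto dominance: at least as good on every component, strictly better on one."""
    return all(x >= y for x, y in zip(u, v)) and any(x > y for x, y in zip(u, v))

# Neither outcome dominates the other, so there is no organism-wide
# "bigger number" to maximize: simple expected-utility comparison stalls.
print(dominates(outcome_a, outcome_b), dominates(outcome_b, outcome_a))
```

The two outcomes are simply incomparable under this ordering, which is the intractability the comment is pointing at: once utility stops being a single real number, maximization needs some extra (and contestable) rule for trading components off against each other.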

Comment author: Yvain 02 November 2014 01:52:17AM 2 points [-]

I stated that all disputes would be resolved by Wikipedia, and here is Wikipedia's verdict on the matter: http://en.wikipedia.org/wiki/List_of_best-selling_PC_games

Comment author: see 02 December 2014 11:55:36PM 0 points [-]

The contention that "'computer games', as defined by Wikipedia" is "PC games" is, of course, true.

However, did you deliberately intend that people who knew with high confidence Tetris was (by far) the best-selling game played on computers (as computers are defined by Wikipedia) would get caught by not knowing that Wikipedia redirects "computer game" to "PC game" rather than to "video game"?

Comment author: see 18 November 2014 09:40:42PM *  0 points [-]

1) Conscious beings reasonably often try to predict their own future state or the state of other minds.

2) In order to successfully mimic a conscious being, a p-zombie would have to also engage in this behavior, predicting its own future states and the future states of other minds.

3) In order to predict such future states, it would seem necessary that a p-zombie would have to have at least some ability to model the states of minds, including its own.

Now, before we go any further, how does consciousness differ from having a model of the internal states of one's own mind?

Comment author: Adriano_Mannino 27 January 2013 08:53:18PM 7 points [-]

Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don't include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don't include them only in virtue of that fact). We include them because to them as sentient beings, their subjective well-being matters. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would want to be still included in case we lost the ability to follow ethical rules. In other words: Moral agency is not necessary for the status of a moral patient, i.e. of a being that matters morally.

The question is how we should treat humans and chickens (i.e. whether and how our decision-making algorithm should take them and their interests into account), not what social behavior we find among humans and chickens.

Comment author: see 28 January 2013 05:57:58PM 1 point [-]

Constructing an ethics that demands that a chicken act as a moral agent is obviously nonsense; chickens can't and won't act that way. Similarly, constructing an ethics that demands humans value chickens as much as they value their own children is nonsense; humans can't and won't act that way. If you're constructing an ethics for humans to follow, you have to start by figuring out humans.

It's not until after you've figured out humans that you can determine how much weight the interests of chickens should get in how humans act. And how much humans should weigh the value of chickens is by necessity determined by what humans are.

Comment author: Andreas_Giger 27 January 2013 02:47:38PM 3 points [-]

Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us

It does not imply that there exists even one basic moral/ethical statement any human being would agree with, and to me that seems to be a requirement for any kind of humanity-wide system of ethics. Your 'one size fits all' approach does not convince me, and your reasoning seems superficial and based on words rather than actual logic.

Comment author: see 27 January 2013 07:47:04PM 0 points [-]

All humans as they currently exist, no. But is there a system of ethics as a whole that humans, even while currently disagreeing with some parts of it, would recognize as so much better at doing what they really want from an ethical system that they would switch to it? Even in the main? Maybe, indeed, human ethics are so dependent on alleles that vary within the population and on chance environmental factors that CEV is impossible. But there's no solid evidence that requires assuming that a priori, either.

By analogy, consider the person who in 1900 wanted to put together the ideal human diet. Obviously, the diets in different parts of the world differed from each other extensively, and merely averaging all of them that existed in 1900 would not be particularly conducive to finding an actual ideal diet. The person would have to do all the sorts of research that discovered the roles of various nutrients and micronutrients, et cetera. Indeed, he'd have to learn more than we currently know about them. He'd have to work out the variations to react to various medical conditions, and he'd have to consider flavor (both innate response pathways and learned ones), et cetera. And then there's the limit of what foods can be grown where, what shipping technologies exist, and how to approximate the ideal diet in differing circumstances.

It would be difficult, but eventually you probably could put together a dietary program (including understood variations) that would, indeed, suit humans better than any of the existing diets in 1900, both in nutrition and pleasure. It wouldn't suit sharks at all; it would not be a universal nutrition. But it would be an objectively determined diet just the same.

Comment author: randallsquared 26 January 2013 09:30:40PM 5 points [-]

The point you quoted is my main objection to CEV as well.

You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.

Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of "knowing more about [...] ethics" either presumes an objective ethics or merely points at you or where you wish you were.

Comment author: see 27 January 2013 07:11:57AM -1 points [-]

Objective? Sure, without being universal.

Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us, which could be objectively determined by a thorough and competent enough analysis of humans. The edges of the bell curve on various factors might have certain variances. There might be a multi-modal distribution of fit (bimodal on men and women, for example), too. But, basically, one objective ethics for humans.

This ethics would clearly be unsuited for cats, sharks, bees, or trees. It seems vanishingly unlikely that sapient minds from other evolutions would also be suited for such an ethics, either. So it's not universal, it's not a code God wrote into everything. It's just the best way to be a human . . . as humans exposed to it would in fact judge, because it's fitted to us better than any of our current fumbling attempts.

Comment author: buybuydandavis 19 December 2012 05:25:09AM *  1 point [-]

That's how I read it as well.

Snape saw that Voldemort had kept his word, and only killed Lily when she attacked him first.

It seemed to me that Harry hadn't learned his lesson about his talks with Snape. He even noted that Snape's allegiance was wavering, and yet he shows him that Voldie had given Lily her chance.

Comment author: see 20 December 2012 05:30:35AM 5 points [-]

Harry didn't learn, no. But is that an advantage or a disadvantage? To go back to Chapter 76:

"It's strange," Snape said quietly. "I have had two mentors, over the course of my days. Both were extraordinarily perceptive, and neither one ever told me the things I wasn't seeing. It's clear enough why the first said nothing, but the second..." Snape's face tightened. "I suppose I would have to be naive, to ask why he stayed silent."

Now, yes, this separates Snape from Dumbledore. But Dumbledore is not the protagonist. Harry is the protagonist. And what Snape can learn from Harry's actions is one of two things:

Harry Potter will tell him the truth; Snape can trust Harry Potter. -or- Harry Potter is a brilliant plotter; so good that even at age eleven he outclasses both Voldemort and Dumbledore with his ability to fake being honest and trustworthy.

If the first is true, Snape can put his trust in Harry, where he cannot trust Voldemort or Dumbledore. In a world where the prophecy clearly declares Harry Potter a power that ranks with Voldemort, isn't the obvious move to align oneself with the power you can trust? When looking at the future, do you want it dominated by someone who let you wallow in foolishness and pain for their own advantage, or by someone who treated you as you would wish to be treated? (Well, it might just mean the boy doesn't have enough guile to win, of course, but that suggests merely not burning your bridges. You're already in the other camp, after all . . .)

If the second is true, the only sensible course is to make oneself as useful to Harry as possible, because Harry is unstoppable.
