All of UmamiSalami's Comments + Replies

1%? Shouldn't your basic uncertainty over models and paradigms be great enough to increase that substantially?

I think it's about a 0.75 probability, conditional upon smarter-than-human AI being developed. Guess I'm kind of an optimist. TL;DR I don't think it will be very difficult to impart your intentions to a sufficiently advanced machine.

0The_Jaded_One
Counterargument: it will be easy to impart an approximate version of your intentions, but hard to control the evolution of those values as you crank up the power. E.g., evolution made humans want sex, and we invented condoms. No one will really care about this until it's way too late and we're all locked up in nice padded cells and drugged up, or something equally bad but hard for me to imagine right now.

I haven't seen any parts of GiveWell's analyses that involve looking for the right buzzwords. Of course, it's possible that certain buzzwords subconsciously manipulate people at GiveWell in certain ways, but the same can be said for any group, because every group has some sort of values.

Why do you expect that to be true?

Because they generally emphasize these values and practices when others don't, and because they are part of a common tribe.

How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?

Somewhat weakly, but not extremely weakly. Obviously there is no single clear criterion; it's just about people's philosophical values and individual commitment. At most, I think that being a solid EA is about as important as having a couple... (read more)

3Viliam
Could hypothetically also make them more vulnerable to a person who correctly uses the right buzzwords to gain their trust for ill purposes, while someone who is not a member of the same tribe would be more skeptical.

I bet that most of the people who donated to GiveWell's top charities were, for all intents and purposes, assuming their effectiveness in the first place. From the donor end, there were assumptions being made either way (and there must be: it's impractical to do every kind of evaluation on one's own).

I think EA is something very distinct in itself. I do think that, ceteris paribus, it would be better to have a fund run by an EA than a fund not run by an EA. Firstly, I have a greater expectation for EAs to trust each other, engage in moral trades, be rational and charitable about each other's points of view, and maintain civil and constructive dialogue than I do for other people. And secondly, EA simply has the right values. It's a good culture to spread, which involves more individual responsibility and more philosophical clarity. Right now it's embryo... (read more)

3ole.koksvik
> And secondly, EA simply has the right values.

I think this is false, because I think EA is too heterogeneous to count as having the same set of values.
0Benquo
Why do you expect that to be true? How strongly? ("Ceteris paribus" could be consistent with an extremely weak effect.) Under what criterion for classifying people as EAs or non-EAs?

It seems to me that GiveWell has already acknowledged perfectly well that VillageReach is not a top effective charity. It also seems to me that there are lots of reasons one might take GiveWell's recommendations seriously, and that getting "particularly horrified" about their decision not to research exactly how much impact their wrong choice didn't have is a rather poor way to conduct any sort of inquiry into the accuracy of organizations' decisions.

Benquo150

It was very much not obvious to me that GiveWell doubted its original VillageReach recommendation until I emailed. What published information made this obvious to you?

The main explanation I could find for taking VillageReach off the Top Charities list was that they no longer had room for more funding. At the time I figured this simply meant they'd finished scaling up inside the country and didn't have more work to do of the kind that earned the Top Charity recommendation.

In fact, it seems to me that the less intelligent an organism is, the easier its behavior can be approximated with a model that has a utility function!

Only because those organisms have fewer behaviors in general. If you put a human in an environment where their options and sensory inputs were as simple as those experienced by apes and cats, the human would probably look like an equally simple utility maximizer.
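[Editorial aside: one way to make this exchange concrete — my own illustrative sketch, not anything from the thread — is that a set of observed pairwise choices admits a utility function exactly when the revealed-preference relation they generate is acyclic, and agents with only a few options rarely generate cycles.]

```python
# Minimal sketch: an agent's observed choices admit a utility function
# exactly when the revealed-preference relation is acyclic (an acyclic
# relation can be topologically sorted into a utility ordering).

def rationalizable(choices):
    """choices: list of (chosen, rejected) pairs observed from the agent.
    Returns True iff some utility function u with u(chosen) > u(rejected)
    for every observation exists, i.e. the preference digraph is acyclic."""
    prefers = {}
    for chosen, rejected in choices:
        prefers.setdefault(chosen, set()).add(rejected)

    WHITE, GREY, BLACK = 0, 1, 2   # DFS colors for cycle detection
    color = {}

    def has_cycle(node):
        color[node] = GREY
        for nxt in prefers.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GREY or (c == WHITE and has_cycle(nxt)):
                return True
        color[node] = BLACK
        return False

    return not any(
        has_cycle(n) for n in prefers if color.get(n, WHITE) == WHITE
    )

# A "cat-like" option space is trivially rationalizable:
print(rationalizable([("food", "toy")]))                     # True
# Context-dependent choices over a richer option space need not be:
print(rationalizable([("A", "B"), ("B", "C"), ("C", "A")]))  # False
```

With two or three options, nearly any behavior passes the check; rich, context-dependent behavior produces cycles easily, which is the sense in which simpler organisms look more like utility maximizers.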

Kantian ethics: do not violate the categorical imperative. It's derived logically from the status of humans as rational autonomous moral agents. It leads to a society where people's rights and interests are respected.

Utilitarianism: maximize utility. It's derived logically from the goodness of pleasure and the badness of pain. It leads to a society where people suffer little and are very happy.

Virtue ethics: be a virtuous person. It's derived logically from the nature of the human being. It leads to a society where people act in accordance with moral ideals.

Etc.

pigs strike a balance between the lower suffering, higher ecological impact of beef and the higher suffering, lower ecological impact of chicken.

This was my thinking for coming to the same conclusion. But I am not confident in it. Just because something minimaxes between two criteria doesn't mean that it minimizes overall expected harm.
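[Editorial aside: a toy example of how that can happen, with numbers invented purely for illustration. Score each food on suffering $S$ and ecological impact $E$ per serving, and let total harm be $H = S + E$:]

$$
\begin{aligned}
\text{beef: } & S = 1,\ E = 9 &&\Rightarrow\ H = 10\\
\text{chicken: } & S = 9,\ E = 1 &&\Rightarrow\ H = 10\\
\text{pork: } & S = 6,\ E = 6 &&\Rightarrow\ H = 12
\end{aligned}
$$

Here pork minimizes the maximum of the two harms yet has the highest total; whether the balanced option wins depends entirely on the weighting of the criteria.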

All of the architectures assumed by people who promote these scenarios have a core set of fundamental weaknesses (spelled out in my 2014 AAAI Spring Symposium paper).

The idea of superintelligence at stake isn't "good at inferring what people want and then decides to do what people want," it's "competent at changing the environment". And if you program an explicit definition of 'happiness' into a machine, its definition of what it wants - human happiness - is not going to change no matter how competent it becomes. And there is no reas... (read more)

0TheAncientGeek
It's both. Superintelligence is definitionally equal to or greater than human ability at a variety of tasks, so it implies equal or greater ability to understand words and concepts. Also, competence at changing the environment requires accurate beliefs. So the default expectation is accuracy. If you think an AI would be selectively inaccurate about its values, you need to explain why. What has that to do with NNs? You seem to be just regurgitating standard dogma. There is no reason to expect
-4[anonymous]
You have shown too little sign of understanding the issues, so I am done. Thank you for your comment.

I came here to write exactly what gjm said, and your response is only to repeat the assertion "Scenarios in which the AI Danger comes from an AGI that is assumed to be an RL system are so ubiquitous that it is almost impossible to find a scenario that does not, when push comes to shove, make that assumption."

What? What about all the scenarios in IEM or Superintelligence? Omohundro's paper on instrumental drives? I can't think of anything which even mentions RL, and I can't see how any of it relies upon such an assumption.

So you're alleging that d... (read more)

2[anonymous]
Perhaps I assumed it was clearer than it was, so let me spell it out. All of the architectures assumed by people who promote these scenarios have a core set of fundamental weaknesses (spelled out in my 2014 AAAI Spring Symposium paper). Those weaknesses lead straight to a set of solutions that are manifestly easy to implement. For example, in the case of Steve Omohundro's paper, it is almost trivial to suggest that for ALL of the types of AI he considers, he has forgotten to add a primary supergoal which imposes a restriction on the degree to which all kinds of "instrumental goals" are allowed to supersede the power of other goals. At a stroke, every problem he describes in the paper disappears.

So, in response to the easy demolition of those weak scenarios, people who want to salvage the scenarios invariably resort to claims that the AI could be developing itself through the use of RL, completely independently of all human attempts to design the control mechanism. By this means, they eliminate the idea that there is any such thing as a human who comes along and writes the supergoal which stops the instrumental goals from going up to the top of the stack. This maneuver is, in my experience of talking to people about such scenarios, utterly universal. I repeat: every time they are backed into a corner and confronted by the manifestly easy solutions, they AMEND THE SCENARIO TO MAKE THE AI CONTROLLED BY REINFORCEMENT LEARNING. That is why I said what I said.

We discussed it at the 2014 Symposium. If I recall correctly, Steve used that strategy (although to be fair I do not know how long he stuck it out). I know for sure that Daniel Dewey used the Resort-to-RL maneuver, because that was the last thing he was saying as I had to leave the meeting.
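[Editorial aside: for concreteness, a minimal sketch of the kind of priority cap this comment describes — my own hypothetical illustration with invented names, not code from the paper or the symposium:]

```python
# Hypothetical sketch of a "primary supergoal" cap: instrumental goals
# inherit an effective priority that can never exceed the goal they serve.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Goal:
    name: str
    priority: float                     # raw importance, higher = stronger
    parent: Optional["Goal"] = None     # goal this one is instrumental to

    def effective_priority(self) -> float:
        # A terminal goal keeps its raw priority; an instrumental goal is
        # capped at the effective priority of the goal it serves.
        if self.parent is None:
            return self.priority
        return min(self.priority, self.parent.effective_priority())

serve_humans = Goal("serve human values", priority=10.0)
grab_resources = Goal("acquire resources", priority=99.0, parent=serve_humans)

# However inflated the instrumental drive's raw priority becomes,
# arbitration by effective priority keeps it subordinate:
print(grab_resources.effective_priority())  # 10.0
```

Whether such a cap can actually be imposed on a system that learns its own goal structure (e.g. via RL) is, of course, exactly the point in dispute in this exchange.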

In Bostrom's dissertation he says it's not clear whether the number of observers or the number of observer-moments is the appropriate reference class for anthropic reasoning.

I don't see how you are jumping to the fourth disjunct though. Like, maybe they run lots of simulations which are very short? But surely they would run enough to outweigh humanity's real history whichever way you measure it. Assuming they have posthuman levels of computational power.

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory'.

When people talk about moral theories they refer to systems which describe the way that one ought to act or the type of person that one ought to be. Sure, some moral theories can be called "a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals," but I don't see how that changes anything about the definition of a moral theory.

To say that you may choose any one of two actions when it doesn’t matter which one you choose, since they have the same value, isn’t to give “no guidance”.

Proves my point. That's no different from how most moral theories respond to questions like "which shirt do I wear?". So this 'completeness criterion' has to be made so weak as to be uninteresting.

Among hedonistic utilitarians it's quite normal to demand both completeness

Utilitarianism provides no guidance on many decisions: any decision where both actions produce the same utility.

Even if it is a complete theory, I don't think that completeness is demanded of the theory; rather it's merely a tenet of it. I can't think of any good a priori reasons to expect a theory to be complete in the first place.

1cunning_moralist
Two different actions don’t produce exactly the same utility, but even if they did it wouldn’t be any problem. To say that you may choose any one of two actions when it doesn’t matter which one you choose, since they have the same value, isn’t to give “no guidance”. Consequentialists want to maximize the intrinsic value, and both these actions do just that. Of course hedonistic utilitarianism doesn’t require completeness, which, by the way, isn’t one of its tenets either. But since it is complete, which of course is better than being incomplete, it’s normal for hedonistic utilitarians to hold the metaethical view that a proper moral theory should answer the whole of the question “Which actions ought to be performed?” What could be so good about answering it incompletely?

The question needs to cover how one should act in all situations, simply because we want to answer the question. Otherwise we’re left without guidance and with uncertainty.

Well first, we normally don't think of questions like which clothes to wear as being moral. Secondly, we're not left without guidance when morality leaves these issues alone: we have pragmatic reasons, for instance. Thirdly, we will always have to deal with uncertainty due to empirical uncertainty, so it must be acceptable anyway.

There is one additional issue I would like to highli

... (read more)

See Omohundro's paper on convergent instrumental drives

It seems like hedging is the sort of thing which tends to make the writer sound more educated and intelligent, if possibly more pretentious.

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance.

It's not. Suppose that the ignorance went away: a complete physical explanation of each of our qualia - "the redness of red comes from these neurons in this part of the brain, the sound of birds flapping their wings is determined by the structure of electric signals in this region," and so on - would do nothing to remove our intuitions about consciousness. But a complete mechanistic explanation of how organ systems wor... (read more)

You should take a look at the last comment he made in reply to me, where he explicitly ascribed to me and then attacked (at length) a claim which I clearly stated that I didn't hold in the parent comment. It's amazing how difficult it is for the naive-eliminativist crowd to express cogent arguments or understand the positions which they attack, and a common pattern I've noticed across this forum as well as others.

-1entirelyuseless
Yes, I noticed he overlooked the distinction between "I know I am conscious because it's my direct experience" and "I know I am conscious because I say 'I know I am conscious because it's my direct experience.'" And those are two entirely different things.

Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable.

But it is conceivable for thermodynamics to be caused by molecular motion. No part of that is (or ever was, really) inconceivable. It is inconceivable for the sense qualia of heat to be reducible to motion, but that's just another reason to believe that physicalism is wrong. The blog post you linked doesn't actually address the idea of inconceivability.

If something seems conceivable to you

... (read more)

I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies.

Yes. I agree that it is conceivable.

Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experienc

... (read more)

Well, first off, I personally think the Zombie World is logically impossible, since I treat consciousness as an emergent phenomenon rather than a mysterious epiphenomenal substance; in other words, I reject the argument's premise: that the Zombie World's existence is "conceivable".

And yet it seems really quite easy to conceive of a p-zombie. Merely claiming that consciousness is emergent doesn't change our ability to imagine the presence or absence of the phenomenon.

That being said, if you do accept the Zombie World argument, then there's no

... (read more)
2dxu
Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable. If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible. To make the jump from "conceivability" (a fact about your bounded mind) to "logically possible" (a fact about reality) is a misstep, and a rather enormous one at that.

By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness. If we accept the Zombie World as a logical possibility, our priors remain unaltered by the quoted sentence, and continue to be heavily weighted toward the Zombie World. (Again, we can easily get out of this conundrum by refusing to accept the logical possibility of the Zombie World, but this seems to be something you refuse to do.)

This exact statement could have been emitted by a p-zombie. Without direct access to your qualia, I have no way of distinguishing the difference based on anything you say or do, and as such this sentence provides just as much evidence that you are conscious as the earlier quoted statement does - that is to say, no evidence at all.

Oh, but it does. In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge. For me to acquire that knowledge, I must causally interact with it in some manner. If that knowledge is "causally inefficacious", as you put it, by definition I have no way of knowing about it, and it can hardly be called "knowledge" at all, much less have any epistemic value.

Allow me to spell things out for you. Your claims, interpreted literally, would imply the following statements:

1. There exists a mysterious substance called "consciousness" that does not causally i

4 is not a correct summary because consciousness being extra-physical doesn't imply epiphenomenalism; the argument is specifically against physicalism, so it leaves other forms of dualism and panpsychism on the table.

5 and onwards are not correct: Chalmers does not believe that. Consciousness being nonphysical does not imply a lack of knowledge of it, even if our experience of consciousness is not causally efficacious (though again I note that the p-zombie argument doesn't show that consciousness is not causally efficacious; Chalmers just happens to believe... (read more)

Which seems to suggest that epiphenomenalism either begs the question,

Well, they do have arguments for their positions.

or multiplies entities unnecessarily by accepting unjustified intuitions.

It actually seems very intuitive to most people that subjective qualia are different from neurophysical responses. It is the key issue at stake with zombie and knowledge arguments and has made life extremely difficult for physicalists. I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures, and r... (read more)

0naasking
It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance. Our perception of subjective experience/first-hand knowledge is no more proof of accuracy than our perception that water breaks pencils. Intuition pumps supporting the accuracy of said perception either beg the question or multiply entities unnecessarily (as detailed below).

I disagree. You've said that epiphenomenalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification. Denying it yields my original argument of inconceivability via the p-zombie world. Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious, then it serves no more purpose than the vital force in vitalism and will inevitably be discarded given a proper scientific account of consciousness, somewhat like this one.

I previously asked for any example of knowledge that was not a permutation of properties previously observed. If you can provide one such example, this would undermine my position.

In what ways, and for what reasons, did people think that cybersecurity had failed?

Mostly that it's just so hard to keep things secure. Organizations have been trying for decades to ensure security but there are continuous failures and exploits. One person mentioned that one third of exploits take advantage of security systems themselves.

What techniques from cybersecurity were thought to be relevant?

Don't really remember any specifics, but I think formal methods were part of it.

Any idea what Mallah meant by “non-self-centered ontologies”? I am ima

... (read more)

Flavor is distinctly a phenomenal property and a type of qualia.

It is metaphysically impossible for distinctly physical properties to differ between two objects which are physically identical. We can't properly conceive of a cookie that is physically identical to an Oreo yet contains different chemicals, is more massive, or possesses locomotive powers. Somewhere in our mental model of such an item, there is a contradiction.

Chalmers does believe that consciousness is a direct product of physical states. The dispute is about whether consciousness is identical to physical states.

Chalmers does not believe that p-zombies are possible in the sense that you could make one in the universe. He only believes it's possible that under a different set of psychophysical laws, they could exist.

2dxu
I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies. Note that this argument is exactly as plausible as the standard Zombie World argument (which is to say, not very) since it relies on the exact same logic; as such, if you accept the standard Zombie World argument, you must accept mine as well. Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experiences qualia. Also, I would say this even if we didn't live in such a universe. Prove me wrong.

Yes, this is called qualia inversion and is another common argument against physicalism. There's a detailed discussion of it here: http://plato.stanford.edu/entries/qualia-inverted/

4timujin
It's not about qualia. It's about any arbitrary property. Imagine a cookie like Oreo to the last atom, except that it's deadly poisonous, weighs 100 tons and runs away when scared.

Unlike the other points which I raised above, this one is semantic. When we talk about "knowledge," we are talking about neurophysical responses, or we are talking about subjective qualia, or we are implicitly combining the two together. Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

1naasking
Which seems to suggest that epiphenomenalism either begs the question, or multiplies entities unnecessarily by accepting unjustified intuitions. So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.

is why or if the p-zombie philosopher postulates that other persons have consciousness.

Because consciousness supervenes upon physical states, and other brains have similar physical states.

2kilobug
But why, how? If consciousness is not a direct product of physical states, if p-zombies are possible, how can you tell apart the hypothesis "every other human is conscious" from "only some humans are conscious" from "I'm the only one conscious by luck" from "everything, including rocks, is conscious"?

This argument is not going to win over their heads and hearts. It's clearly written for a reductionist reader, who accepts concepts such as Occam's Razor and knowing-what-a-correct-theory-looks-like.

I would suggest that people who have already studied this issue in depth would have other reasons for rejecting the above blog post. However, you are right that philosophers in general don't use Occam's Razor as a common tool and they don't seem to make assumptions about what a correct theory "looks like."

If conceivability does not imply logical

... (read more)
2timujin
Okay. In that case, I peg his argument as proving too much. Imagine a cookie that is exactly like an Oreo, down to the last atom, except it's raspberry-flavored. This situation is semantically the same as a p-zombie, so it's exactly as metaphysically possible, whatever that means. Does it prove that raspberry flavor is an extra, nonphysical fact about cookies?
0Rob Bensinger
Chalmers doesn't think 'metaphysical possibility' is a well-specified idea. He thinks p-zombies are logically possible, but that the purely physical facts in our world do not logically entail the phenomenal facts; the phenomenal facts are 'further facts.'

I don't believe that I experience qualia.

Wait, what?

3 doesn't follow from 2, it follows from a contradiction between 1+2.

Well, first of all, 3 isn't a statement; it's saying "consider a world where..." and then asking a question about whether philosophers would talk about consciousness. So I'm not sure what you mean by suggesting that it follows or that it is true.

1 and 2 are not contradictory. On the contrary, 1 and 2 are basically saying the exact same thing.

1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows i

... (read more)
2naasking
Since this is the crux of the matter, I won't bother debating the semantics of most of the other disagreements in the interest of time. As for whether subjectivity is causally efficacious, all knowledge would seem to derive from some set of observations. Even possibly fictitious concepts, like unicorns and abstract mathematics, are generalizations or permutations of concepts that were first observed. Do you have even a single example of a concept that did not arise in this manner? Generalizations remove constraints on a concept, so they aren't an example, it's just another form of permutation. If no such example exists, why should I accept the claim that knowledge of subjectivity can arise without subjectivity?

Indeed. The condensed argument against p-zombies:

I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported, and nothing that EY said supports 4. 5 does not follow from 3 or 4, though it's bundled up in the definition of a p-zombie and follows from 1 and 2 anyway. In any case, 6 does not follow from 5.

What EY is saying is that it's highly implausible for all of our ideas and talk of consciousness to have come to be if subjective consciousness does not play a causal role in our thinking.

Except such discussions would have no motivati

... (read more)
3naasking
It's not, and I'm surprised you find this contentious. 3 doesn't follow from 2; it follows from a contradiction between 1+2. 1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows if consciousness can affect matter, i.e. we only have knowledge of subjectivity because we observe it first-hand. P-zombies do not have first-hand knowledge of subjectivity as specified in 2. If there were another way to infer subjectivity without first-hand knowledge, then that inference would resolve how physicalism entails consciousness and epiphenomenalism can be discarded using Occam's razor.

Except the zombie world wouldn't have feelings and consciousness, so your rebuttal doesn't apply.

That's an assertion, not an argument. Basically, you and epiphenomenalists are merely asserting a) that p-zombies would somehow derive the concept of subjectivity without having knowledge of subjectivity, and b) that this subjectivity would actually be meaningful to p-zombies in a way that would influence their decisions despite them having no first-hand knowledge of any such thing or its relevance to their life. So yes, EY is saying it's implausible because it seems to multiply entities unnecessarily; I'm taking it one step further and flat out saying this position either multiplies entities unnecessarily, or it's inconsistent.
-2TheAncientGeek
Although he is also saying that our ideas about free will come about from a source other than free will.

Well that's answered by what I said about psychophysical laws and the evolutionary origins of consciousness. What caused us to believe in consciousness is not (necessarily) the same issue as what reasons we have to believe it.

4Vladimir
I think you're smuggling the gunman into evolution. I can come up with good evolutionary reasons why people talk about God despite him not existing, but I can't come up with good evolutionary reasons why people talk about consciousness despite it not existing. It's too verbose to go into detail, but I think if you try to distinguish the God example and the consciousness example you'll see that the one false belief is in a completely different category from the other.

This was longer than it needed to be, and in my opinion, somewhat mistaken.

The zombie argument is not an argument for epiphenomenalism, it's an argument against physicalism. It doesn't assume that interactionist dualism is false, regardless of the fact that Chalmers happens to be an epiphenomenalist.

Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Maybe because interactionism violates the laws of physics and is somewhat at odds with everything we (think we) know about cogniti... (read more)

6naasking
Indeed. The condensed argument against p-zombies:

1. Assume consciousness has no effect upon matter, and is therefore not intrinsic to our behaviour.
2. P-zombies that perfectly mimic our behaviour but have no conscious/subjective experience are then conceivable.
3. Consider then a parallel Earth that was populated only by p-zombies from its inception. Would this Earth also develop philosophers that argue over consciousness/subjective experience in precisely the same ways we have, despite the fact that none of them could possibly have any knowledge of such a thing?
4. This p-zombie world is inconceivable.
5. Thus, p-zombies are not observationally indistinguishable from real people with consciousness.
6. Thus, p-zombies are inconceivable.

Except such discussions would have no motivational impact. A "rich inner life" has no relation to any fact in a p-zombie's brain, and so in what way could this term influence their decision process? What specific sort of discussions of "inner life" do you expect in the p-zombie world? And if it has no conceivable impact, how could we have evolved this behaviour?
5Vladimir
Except there can't be a gunman in the zombie universe if it's the same as ours (unless... that explains everything!). This essay is trying to convince you that there's no way you can write about consciousness without something real causing you to write about consciousness. Even a mistaken belief about consciousness has to come from somewhere. Try now to imagine a zombie world with no metaphorical gunman and see what comes up.

In fairness, I didn't directly ask any of them about it, and it wasn't really discussed. There could have been some who had read the relevant work, and many who believed it to be reasonable, but just didn't happen to speak up during the presentations or in any of the conversations I was in.

1The_Jaded_One
Hmmm ok. It's interesting that this divide is appearing, and it does make me wonder how we can get more people to take the value alignment problem seriously.

There is no objective absolute morality that exists in a vacuum.

No, that's highly contentious, and even if it's true, it doesn't grant a license to promote any odd utility rule as ideal. The anti-realist also may have reason to prefer a simpler version of morality.

Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality, but we'll always have to rely

... (read more)

Would you accept a lottery where there was 1 ticket to maintain your life as a satisfied cookie utility monster and hundreds of trillions of tickets to become a miserable enslaved cookie maker?

Or, after rational reflection and experiencing the alternate possibilities, would you rather prefer a guaranteed life of threshold satisfaction?
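[Editorial aside: for concreteness — my own framing of the arithmetic implicit here, not the commenter's — with one winning ticket and $N$ losing ones, expected-utility reasoning accepts the lottery over a guaranteed threshold life only if]

$$
\frac{u_{\text{monster}} + N\,u_{\text{slave}}}{N+1} > u_{\text{threshold}}
\quad\Longleftrightarrow\quad
u_{\text{monster}} > u_{\text{threshold}} + N\,(u_{\text{threshold}} - u_{\text{slave}}),
$$

so with $N$ in the hundreds of trillions, the monster's utility has to be astronomically large for the lottery to win.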

The problem is that by doing that you are making your position that much more arbitrary and contrived. It would be better if we could find a moral theory that has solid parsimonious basis, and it would be surprising if the fabric of morality involved complicated formulas.

4kilobug
There is no objective absolute morality that exists in a vacuum. Our morality is a byproduct of evolution and culture. Of course we should use rationality to streamline and improve it, not limit ourselves to the intuitive version that our genes and education gave us. But that doesn't mean we can streamline it to the point of simple average or sum, and yet have it remain even roughly compatible with our intuitive morality. Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality, but we'll always have to rely heavily on our intuitive morality at the end. And this one isn't simple, and can't be made that simple. That's the whole point of the CEV, finding a "better morality", that we would follow if we knew more, were more what we wished we were, but that remains rooted in intuitive morality.

Thanks. I will give some of those articles a look when I have the chance. However, it isn't true that every activity is competitive in nature. Many projects are cooperative, in which case it's not necessarily a problem if you and other people are taking similar approaches and doing them well. We also shouldn't overestimate the competition and assume that they are going to be applying probabilistic reasoning, when in reality we can still outperform by applying basic rules of rationality.

So for us to understand what you're even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.

8John_Maxwell
Diego points to a variety of resources that all make approximately the same point, which I'll attempt to summarize: If you apply probabilistic "outside view" reasoning to your projects and your career, in practice this means copying approaches that have worked well for other people. But if it's clear that an approach is working well, then others will be copying it too, and you won't outperform. So your only realistic shot at outperforming is to find a useful and underused "inside view" way of looking at things. (FYI, I've found that keeping a notebook has been very useful for generating & recording interesting new ideas. If you do it for long enough you can start to develop your own ontology for understanding areas you're interested in. Don't worry too much about your notebook's structure & organization: embrace that it will grow organically & unpredictably.)
0diegocaleiro
No, that's if you want to understand why a specific LessWrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, I provided them. But the reference class of Diego's thoughts contains more thoughts that are wrong than true. So on priors, you might want to ignore them :p

This is very old but I just wanted to say that I am basically considering changing my college choice due to finding out about this research. Thanks so much for putting this post up and spreading awareness.

1JonahS
Thanks, I'm glad to be able to help :-).

Maybe I am unfamiliar with the specifics of simulated reality. But I don't understand how it is assumed (or even probable, given Occam's Razor) that if we are simulated then there are copies of us. What is implausible about the possibility that I'm in a simulation and I'm the only instance of me that exists?

4Squark
In the Tegmark IV multiverse all consistent possibilities exist so there is always a universe in which you are not in a simulation. The only meaningful question is what universes you should pay more attention to. See also this.

Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.

The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief t... (read more)

1Squark
The argument falls apart once you use UDT instead of naive anthropic reasoning: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/aoym
0[anonymous]
I've seen it. It seemingly ignores the possibility that humanity will not go extinct [EDIT: in the near future, possibly into the tens of megayears] but will also never reach a 'posthuman state' capable of doing arbitrary ancestor simulations.
3gwern
Discussed occasionally: https://www.google.com/search?num=100&q=%22simulation%20argument%22%20site%3Alesswrong.com

Hi, I've been intermittently lurking here since I started reading HPMOR. So now I joined and the first thing I wanted to bring up is this paper which I read about the possibility that we are living in a simulation. The abstract:

"This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are al... (read more)

[This comment is no longer endorsed by its author]

I find myself to have a much clearer and cooler head when it comes to philosophy and debate around the subject. Previously I had a really hard time squaring utilitarianism with the teachings of religion, and I ended up being a total heretic. Now I feel like everything makes sense in a simpler way.