All of Perplexed's Comments + Replies

I think that the point is that emergence is in the mind of the observer. If the observer is describing the situation at the particle level, then superconductivity is not there regardless of the size of the collection of particles considered. But, when you describe things at the flowing-electric-fluid level, then superconductivity may emerge.

0DanielLC
Conductivity isn't there either unless you describe them at the flowing-electric-fluid level.
2lessdazed
Aren't the labels arbitrary? Let's use sharpness. Let's use bluntness. That humans say "sharp", "blunt", "conductive", and "non-conductive" in English is due to circumstances of culture, technology, what minerals are abundant on Earth, etc. At least, I don't know the word, if there is one, for non-conductive. To the extent "sharp" and "blunt" are not opposites, I apologize for the imperfect example.

Right - but there are surely also ultimate values.

Those are the ones that are expected to be resistant to change.

Correct. My current claim is that almost all of our moral values are instrumental, and thus subject to change as society evolves. And I find the source of our moral values in an egoism which is made more effective by reciprocity and social convention.

0timtyler
I think these guys have a point. So, from my perspective, Egoism is badly named.

My position here is roughly that all 'moral' values are instrumental in this sense. They are ways of coordinating so that people don't step on each other's toes.

Not sure I completely believe that, but it is the theory I am trying on at the moment. :)

1timtyler
Right - but there are surely also ultimate values. Those are the ones that are expected to be resistant to change. It can't be instrumental values all the way down.

I think you are right to call attention to the issue of drift.

Drift is bad in a simple value - at least in agents that consider temporal consistency to be a component of rationality. But drift can be acceptable in those 'values' which are valued precisely because they are conventions.

It is not necessarily bad for a teen-age subculture if their aesthetic values (on makeup, piercing, and hair) drift. As long as they don't drift so fast that nobody knows what to aim for.

3timtyler
Those are instrumental values. Nobody cares very much if those change, because they were just a means to an end in the first place.

I think the argument is interesting and partly valid. Explaining which part I like will take a bit of setup.

Many of our problems thinking about morality, I think, arise from a failure to make a distinction between two different things.

  • Morality in daily life
  • Morality as an ideal

Morality of daily life is a social convention. It serves its societal and personal (egoistically prudent) function precisely because it is a (mostly) shared convention. Almost any reasonable moral code, if common knowledge, is better than no common code.

Morality as an ideal... (read more)

I think you misinterpreted the context. I endorsed kin selection, together with discounting the welfare of non-kin. Someone (not me!) wishing to be a straight utilitarian and wishing to treat kin and non-kin equally needs to endorse group selection in order to give their ethical intuitions a basis in evolutionary psychology. Because it is clear that humans engage in kin recognition.

0timtyler
Now I see how you are reading the "kind of claim that a utilitarian could make" bit. As you previously observed, the actual answer to this involves cultural evolution - not group selection. The "evolutionary psychology" explanation is that humans developed sophisticated culture which was - on average - beneficial, but which allowed all kinds of deleterious memes in with the beneficial ones. A utilitarian could claim: ...on the grounds that their evolution involved gene-meme coevolution - and that inevitably involves a certain amount of memetic hijacking by deleterious memes - such as utilitarianism.

I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order. ...

As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues get introduced. I will shy away from these.

My position is that, to the extent that the notion of purpose is at all spooky, that spookiness was already present in a virus. The profound part of teleology is already there in teleonomy.

Which is not to say that hu... (read more)

...seemed to me to be a kind of claim that a utilitarian could make with equal credibility.

Well, he could credibly make that claim if he could credibly assert that the ancestral environment was remarkably favorable for group selection.

... you're now saying that you feel noble and proud that your values come from biological instead of cultural evolution...

What I actually said was "my own (genetic) instincts derive a kind of nobility from their origin ...". The value itself claims a noble genealogy, not a noble essence. If I am proud on i... (read more)

0timtyler
Not group, surely: kin. He quoted you as saying: "welfare (fitness) of kin".

... except possibly for the part about no prior metaphysical meaning.

I think I see the source of the difficulty now. My fault. BobTheBob mentioned the mistake of replicating with errors. I took this to be just one example of a possible mistake by a virus, and thought of several more - inserting into the wrong species of host, for example, or perhaps incorporating an instance of the wrong peptide into the viral shell after replicating the viral genome.

I then sought to define 'mistake' to capture the common fitness-lowering feature of all these possi... (read more)

...teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this?

No, I cannot. It presumes (or is it argues?) that human rationality is not part of nature.

My apologies for using the phrase "illusion of teleology in nature". It seems to have created confusion. Tabooing that use of the word "teleology", what I really meant was the illusion that living things were fashioned by some rational agent for some purpose of that agent. Tabooing your use of th... (read more)

0BobTheBob
Thanks, yes. This is very clear. I can buy this. Sorry if I'm slow to be getting it, but my understanding of your view is that the sort of purpose that a bacterium has, on the one hand, and the purpose required to be a candidate for rationality, on the other, are, so to speak, different in degree but not in kind. They're the same thing, just orders of magnitude more sophisticated in the latter case (involving cognitive systems). This is the idea I want to oppose. I have tried to suggest that bacterial purposes are 'merely' teleonomic -to borrow the useful term suggested by timtyler- but that human purposes must be of a different order.

Here's one more crack at trying to motivate this, using very evidently non-scientific terms.

On the one hand, I submit that you cannot make sense of a thing (human, animal, AI, whatever) as rational unless there is something that it cares about. Unless, that is, there is something which matters or is important to it (this something can be as simple as survival or reproduction). You may not like to see a respectable concept like rationality consorting with such waffly notions, but there you have it. Please object to this if you think it's false.

On the other hand, nothing in nature implies that anything matters (etc) to a thing. You can show me all of the behavioural/cognitive correlates of X's mattering to a thing, or of a thing's caring about X, and provide me detailed evolutionary explanations of the behavioural correlates' presence, but these correlates simply do not add up to the thing's actually caring about X. X's being important to a thing, X's mattering, is more than a question of mere behaviour or computation. Again, if this seems false, please say. If both hands seem false, I'd be interested to hear that, too.

As soon as we start to talk about symbols and representation, I'm concerned that a whole new set of very thorny issues get introduced. I will shy away from these.

"It requires a different, non-reductionist ... way

A utilitarian might well be indifferent to the self-serving nature of the meme. But, as I recall, you brought up the question in response to my suggestion that my own (genetic) instincts derive a kind of nobility from their origin in the biological process of natural selection for organism fitness. Would our hypothetical utilitarian be proud of the origin of his meme in the cultural process of selection for meme self-promotion?

1Wei Dai
I don't think you mentioned "nobility" before. What you wrote was just: which seemed to me to be a kind of claim that a utilitarian could make with equal credibility. If you're now saying that you feel noble and proud that your values come from biological instead of cultural evolution... well I've never seen that expressed anywhere else before, so I'm going to guess that most people do not have that kind of feeling.

Hmmm. I may be using "metaphysical" inappropriately here. I confess that I am currently reading something that uses "metaphysical" as a general term of deprecation, so some of that may have worn off. :)

Let me try to answer your excellent question by analogy to geometry, without abandoning "metaphysical". As is well known, in geometry, many technical terms are given definitions, but it is impossible to define every technical term. Some terms (point, line, and so on are examples) are left undefined, though their meanings are su... (read more)

With apologies to Ludwig Wittgenstein, if we can't talk about the singularity, maybe we should just remain silent. :)

I happen to agree with you that the SIAI mission will never be popular. But a part of the purpose of this website is to create more people willing and able to work (directly or indirectly) on that mission. So, not mentioning FAI would be a bit counterproductive - at least at this stage.

0[anonymous]
This is the "point that we can't see beyond" business? Surely that is just a crock.

Is this fair?

Not really. You started by making an argument that listed a series of stages (virus, bacterium, nematode, man) and claimed that at no stage along the way (before the last) were any kind of normative concepts applicable. Then, when I suggested the standard evolutionary explanation for the illusion of teleology in nature, you shifted the playing field. In option 1, you demand that I supply standard scientific expositions of the natural history of your chosen biological examples. In option 2 you suggest that you were just kidding in even m... (read more)

0BobTheBob
I appreciate your efforts to spell things out. I have to say I'm getting confused, though.

I meant to say that at no stage -including the last!- does the addition of merely naturalistic properties turn a thing into something subject to norms -something of which it is right to say it ought, for its own sake, to do this or that. I also said that the sense of right and wrong and of purpose which biology provides is merely metaphorical.

When you talk about "the illusion of teleology in nature", that's exactly what I was getting at (or so it seems to me). That is, teleology in nature is merely illusory, but the kind of teleology needed to make sense of rationality is not - it's real. Can you live with this?

I think a lot of people are apt to think that illusory teleology sort of fades into the real thing with increasing physical complexity. I see the pull of this idea, but I think it's mistaken, and I hope I've at least suggested that adherents of the view have some burden to try to defend it.

Now that is a whole other can of worms... This is a fair and a difficult question. Roughly, another individual becomes suitable for normative appraisal when and to the extent that s/he becomes a recognizably rational agent -ie, capable of thinking and acting for her/himself and contributing to society (again, very roughly). All kinds of interesting moral issues lurk here, but I don't think we have to jump to any conclusions about them.

In case I'm giving the wrong impression, I don't mean to be implying that people are bound by norms in virtue of possessing some special aura or other spookiness. I'm not giving a theory of the nature of norms - that's just too hard. All I'm saying for the moment is that if you stick to purely natural science, you won't find a place for them.

As you may have noticed, that definition was labeled as a "first attempt". It captures some of our intuitions about morality, but not all. In particular, its biggest weakness is that it fails to satisfy moral realists for precisely the reason you point out.

I have a second quill in my quiver. But before using it, I'm going to split the concept of morality into two pieces. One piece is called "de facto morality". I claim that the definition I provided in the grandparent is a proper reductionist definition of de facto morality and c... (read more)

Outstanding comment - particularly the point at the end about the candle cooling the room.

It might be worthwhile to produce a sequence of postings on the control systems perspective - particularly if you could use better-looking block diagrams as illustrations. :)

Consider then a virus particle ... Surely there is nothing in biochemistry, genetics or other science which implies there is anything our very particle ought to do. It's true that we may think of it as having the goal to replicate itself, and consider it to have made a mistake if it replicates itself inaccurately, but these conceptions do not issue from science. Any sense in which it ought to do something, or is wrong or mistaken in acting in a given way, is surely purely metaphorical (no?).

No. The distinction between those viral behaviors that tend to... (read more)

0timtyler
The main trick seems to be getting people to agree on a definition. For instance this: ...aims rather low. That just tells people to do what they would do anyway. Part of the social function of morality is to give people an ideal to personally aim towards. Another part of the social function of morality is to provide people with an ideal form of behaviour, in order to manipulate others into behaving "better". Another part of the social function of morality is to allow people to signal their goodness by broadcasting their moral code. Done right, that makes them seem more trustworthy and predictable. Your proposal does not score very well on these fronts.
0torekp
I think this is right, except possibly for the part about no prior metaphysical meaning. The later explanation of that part didn't clarify it for me. Instead, I'll just indicate what prior meaning I find attached to the idea that "the virus replicated wrongly."

In biology, the idea that organs and behaviors and so on have functions is quite common and useful. The novice medical student can make many correct inferences about the heart by supposing that its function is to pump blood, for example. The idea preceded Darwin, but post-Darwin, we can give a proper naturalistic reduction for it. Roughly speaking, an organ's function is F iff in the ancestral environment, the organ's performance of F is what it was selected for. Various RNA features in a virus might have functions in this sense, and if so, that gives the meaning of saying that in a particular case, the viral reproduction mechanism failed to operate correctly.

That's not a moral norm. It's not even the kind of norm relating to an agent's interests, in my view. But it is a norm.

There was a pre-existing meaning of "biological function" before Darwin came around. So, a Darwinian definition of biological function was not a purely stipulative one. It succeeded only because it captured enough of the tentatively or firmly accepted notions about "biological function" to make reasonably good sense of all that.
5[anonymous]
I'm having trouble with the word "metaphysical". In order for me to make sense of the claim that "mistake" and "exothermic" do not have prior metaphysical meanings, I would like to see some examples of words that do have prior metaphysical meanings, so that I can try to figure out from contrasting examples of having and not having prior metaphysical meanings what it means to have a prior metaphysical meaning. Because at the moment I don't know what you're talking about.
0BobTheBob
If I bet higher than 1/6th on a fair die's rolling 6 because in the last ten rolls 6 hasn't come up -meaning it's now 'due'- I make a mistake. I commit an error of reasoning; I do something wrong; I act in a manner I ought not to.

What about the virus particle which, in the course of sloshing about in an appropriate medium, participates in the coming into existence of a particle composed of RNA which, as it happens, is mostly identical but differs from itself in a few places. Are you saying that this particle makes a mistake in the same sense of 'mistake' as I do in making my bet?

Option (1): The sense is precisely the same (and it is unproblematically naturalistic). In this case I have to ask what the principles are by which one infers to conclusions about a virus's mistakes from facts about replication. What are the physical laws, how are their consequences (the consequences, again, being claims about what a virus ought to do) measured or verified, and so on?

Option (2): The senses are different. This was the point of calling the RNA mistake metaphorical. It was to convey that the sense is importantly different than it is in the betting case. The idea is that the sense, if any, in which a virus makes a 'mistake' in giving rise to a non-exact replica of itself is not enough to sustain the kind of norms required for rationality. It is not enough to sustain the conclusions about my betting behaviour.

Is this fair?
1Peterdjones
That doesn't work. It would mean conformists are always in the right, irrespective of what they are conforming to.

This article lists the top Google+ users by # of followers. Worth a chuckle.

ETA: in general, bare links are usually not appreciated here. Still, here are two more links to interesting articles in the tech blogosphere discussing Google+.

You may have missed a subtlety in my comment. In your grandparent, you said "people's thoughts and words are a byproduct ...". In my comment, I suggested "Thoughts are at one with ...". I didn't mention words.

If we are going to focus on words rather than thoughts, then I am more willing to accept your model. Spoken words are indeed behaviors - behaviors that purport to be accurate reports of thoughts, but probably are not.

Perhaps we should taboo "thought", since we may not be intending the word to designate the same phenomenon.

I take this to be an elliptical way of suggesting that Yvain is offering a false dichotomy in suggesting a choice between the notion of thoughts being in control of the processes determining behavior and the notion of thoughts being a byproduct of those processes.

I agree. Thoughts are at one with (are a subset of) the processes that determine behavior.

2Scott Alexander
I'm not so sure. Using the analogy of a computer program, we could think of thoughts either as like the lines of code in the program (in which case they're at one with, or in control of, the processes generating behavior, depending on how you want to look at it) or you could think of thoughts as like the status messages that print "Reticulating splines" or "50% complete" to the screen, in which case they're byproducts of those processes (very specific, unnatural byproducts, to boot).

My view is closer to the latter; they're a way of allowing the brain to make inferences about its own behavior and to communicate those inferences. Opaque processes decide to go to Subway tonight because they've heard it's low calorie, then they produce the verbal sentence "I should go to Subway tonight because it's low calorie", and then when your friend asks you why you went to Subway, you say "Because it's low calorie".

The tendency of thoughts to appear in a conversational phrasing ("I think I'll go to Subway tonight") rather than something like "Dear Broca's Area - Please be informed that we are going to Subway tonight, and adjust your verbal behavior accordingly - yours sincerely, the prefrontal cortex" is a byproduct of their use in conversation, not their internal function.

Right now I'm just asserting that this is a possibility and that it's distinct from thoughts being part of the decision-making structure. I'll try to give some evidence for it later.
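A minimal sketch of that distinction in Python, with hypothetical function names chosen only for illustration: the decision logic runs whether or not any report is produced, and the report describes (and may confabulate about) the choice without influencing it.

```python
import random

def decide_dinner(options):
    """Opaque decision process: picks a restaurant from weighted preferences.

    The weights stand in for whatever non-verbal machinery actually drives
    the choice; nothing here consults the verbal report below.
    """
    weights = {"Subway": 0.6, "Pizza": 0.3, "Skip dinner": 0.1}
    usable = [o for o in options if o in weights]
    return random.choices(usable, weights=[weights[o] for o in usable])[0]

def verbal_report(choice):
    """Status-message view of a 'thought': a byproduct produced for
    conversation and self-modeling, not part of the decision itself."""
    return f"I think I'll go to {choice} tonight because it's low calorie."

if __name__ == "__main__":
    choice = decide_dinner(["Subway", "Pizza", "Skip dinner"])
    # The report describes, but did not cause, the choice.
    print(verbal_report(choice))
```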

Your just-so story is more complicated than you seem to think. It involves an equilibrium of at least two memes: an evangelical utilitarianism which damages the host but propagates the meme, plus a cryptic egoism which presumably benefits the host but can't successfully propagate (it repeatedly arises by spontaneous generation, presumably).

I could critique your story on grounds of plausibility (which strategy do crypto-egoists suggest to their own children?) but instead I will ask why someone infected by the evangelical utilitarianism meme would argue a... (read more)

4Wei Dai
What does "subverted" mean in this context? For example I devote a lot of resources into thinking about philosophical problems which does not seem to contribute to my genetic fitness. Have I been "subverted" by a selfish meme (i.e., the one that says "the unexamined life is not worth living")? If so, I don't feel any urge to try to self-modify away from this. Couldn't a utilitarian feel the same?
0timtyler
I struggle to understand what is going on there as well. I think some of these folk have simultaneously embraced a kind of "genes=bad, memes=good" memeplex. This says something like: nature red in tooth and claw is evil, while memes turn brutish cavemen into civilized humans. The memes are the future, and they are good. That is a meme other memes want to associate with. Obviously if you buy into such an idea, then that promotes the interests of all of your memes, often at the expense of your genes.

I'm convinced by mathematical arguments that utility should be additive. If the value of N things in the real world is not N times the value of 1 thing, then I handle that in how I assign utility to world states.

I don't disagree. My choice of slogan wording - "utility is not additive" - doesn't capture what I mean. I meant only to deny that the value of something happening N times is (N x U) where U is the value of it happening once.

what would you say to a utilitarian who says: "Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes."

There are two separate issues here. I assume that by "linearly" you are referring to the subject that started this conversation: my claim that utilities "are not additive", an idea also expressed as "diminishing returns", or diminishing marginal utility of additional people. ... (read more)

2Wei Dai
I'm guessing mostly at the meme level. It seems pretty obvious, doesn't it? Utilitarianism makes a carrier believe that they should act to maximize social welfare and that more people believing utilitarianism would help toward that goal, so carriers think they should try to propagate the meme. Also, many egoists may believe that utilitarians would be more willing to contribute to the production of public goods, which they can free ride upon, so they would tend to not argue publicly against utilitarianism, which further contributes to its propagation.
Perplexed100

... the mistake began as soon as we started calling it a "blue-minimizing robot".

Agreed. But what kind of mistake was that?

Is "This robot is a blue-minimizer" a false statement? I think not. I would classify it as more like the unfortunate selection of the wrong Kuhnian paradigm for explaining the robot's behavior. A pragmatic mistake. A mistake which does not bode well for discovering the truth, but not a mistake which involves starting from objectively false beliefs.

I think that we should follow Jaynes and insist upon 'probability' as the name of the subjective entity. But so-called objective probability should be called 'propensity'. Frequency is the term for describing actual data. Propensity is objectively expected frequency. Probability is subjectively expected frequency. That is the way I would vote.

As I understand it, EY's commitment to MWI is a bit more principled than a choice between soccer teams. MWI is the only interpretation that makes sense given Eliezer's prior metaphysical commitments. Yes, rational people can choose a different interpretation of QM, but they probably need to make other metaphysical choices to match in order to maintain consistency.

3[anonymous]
MWI distinguishes itself from Copenhagen by making testable predictions. We simply don't have the technology yet to test them to a sufficient level of precision to distinguish which meta-theory models reality. See: http://www.hedweb.com/manworld.htm#unique In the meantime, there are strong metaphysical reasons (Occam's razor) to trust MWI over Copenhagen.
0[anonymous]
Aumann's agreement theorem.
-2Peterdjones
He still shouldn't be stating it as a fact when it is based on "commitments".

Does your utility function treat "a life saved by Perplexed" differently from just "a life"?

I'm torn between responding with "Good question!" versus "What difference does it make?". Since I can't decide, I'll make both responses.

Good question! You are correct in surmising that the root justification for much of the value that I attach to other lives is essentially instrumental (via channels of reciprocity). But not all of the justification. Evolution has instilled in me the instinct of valuing the welfare (fit... (read more)

1Wei Dai
Thanks for pointing me to Binmore's work. It does sound very interesting. This is tangential to your point, but what would you say to a utilitarian who says: "Evolution (memetic evolution, that is) has instilled in me the idea that I should linearly value the welfare of others regardless of kinship, regardless of what instincts I got from my genes." By "orthodox position" are you referring to TDT-related ideas? I've made the point several times that I doubt they apply to humans. (I don't vote myself, actually.) I don't see how Binmore could have "demolished" those ideas as they relate to AIs since he couldn't have learned about them when he wrote his books.

Correct. In fact, I probably confused things here by using the word "discount" for what I am suggesting here. Let me try to summarize the situation with regard to "discounting".

Time discounting means counting distant future utility as less important than near future utility. EY, in the cited posting, argues against time discounting. (I disagree with EY, for what it is worth.)

"Space discounting" is a locally well-understood idea that utility accruing to people distant from the focal agent is less important than utility accr... (read more)

0timtyler
How about: "Large utilities are not additive for humans".

What benelliot said.

Sheesh! Please don't assume that everyone who disagrees with one point you made is doing so because he disagrees with the whole thrust of your thinking.

Baez: ... you shouldn’t always maximize expected utility if you only live once.

BenElliot: [Baez is wrong] Expected utilities do not work like that.

XiXiDu: If a mathematician like John Baez can be that wrong ...

A mathematician like Baez can indeed be that wrong, when he discusses technical topics that he is insufficiently familiar with. I'm sure Baez is quite capable of understanding the standard position of economists on this topic (the position echoed by BenElliot). But, as it apparently turns out, Baez has not yet done so. No big deal. Treat... (read more)

1jsteinhardt
He isn't wrong, he's just used to using different language than you are. And I might add that the language he is using is, as far as I can tell, the far more commonly accepted notion of utility, rather than VNM utility, which is what I assume you are talking about. By "commonly accepted" I mean that the average technical person who uses the word utility probably is not thinking about VNM utility. So if you want to write Baez's views off, you should at least first agree on the same definition and then ask the same question. See my other comment here. I originally misattributed the Baez quote to XiXiDu, so the reply was addressed to him directly.
2XiXiDu
What about Robin Hanson? See for example his post here and here. What is it that he is insufficiently familiar with? Or what about Katja Grace who has been a visiting fellow of the SIAI? See her post here (there are many other posts by her). And the people from GiveWell even knew about Pascal's Mugging; what is it that they are insufficiently familiar with?

I mean, those people might disagree for different reasons. But I think that too often the argument is used that people just don't know what they are talking about, rather than trying to find out why else they might disagree. As I said in the OP, none of them doubts that there are risks from AI; they just hold that we don't know enough to take them too seriously at this moment. Whereas the SIAI says that the utility associated with AI-related matters outweighs those doubts.

So if we were going to pinpoint the exact nature of disagreement, would it maybe all come down to how seriously we should take vague possibilities? And if you are right that the whole problem is that they are insufficiently familiar with the economics of existential risks, then isn't that something that should be improved by putting some effort into raising the awareness of why it is rational not to disregard risks from AI even if one believes that they are very unlikely?

People shouldn't neglect small probabilities. The math works for small probabilities.

People should discount large utilities. Utilities are not additive. Nothing in economic theory suggests that they are additive.

It is well understood that the utility of two million dollars is not necessarily twice the utility of one million dollars. Yet it is taken here as axiomatic that the utility of two saved lives is twice the utility of one saved life. Two people tortured is taken to be twice the disutility of one person tortured. Why? As far as I can tell, ... (read more)
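To make the contrast concrete, here is a toy sketch in Python. Logarithmic utility of money is a standard textbook illustration of diminishing marginal utility; the concave valuation of saved lives is purely hypothetical and is not attributed to anyone in this thread.

```python
import math

def u_money(dollars):
    """Textbook example of diminishing marginal utility: log utility of wealth."""
    return math.log(dollars)

def u_lives(lives_saved, concavity=0.5):
    """A hypothetical concave valuation of lives saved, illustrating the
    alternative to treating each additional life as worth the same amount."""
    return lives_saved ** concavity

# Two million dollars is worth less than twice one million dollars...
print(u_money(2_000_000) < 2 * u_money(1_000_000))  # True

# ...and under this (purely illustrative) concave choice, the same holds for lives.
print(u_lives(2) < 2 * u_lives(1))  # True
```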

4Wei Dai
Does your utility function treat "a life saved by Perplexed" differently from just "a life"? I could understand an egoist who does not terminally value other lives at all (as opposed to instrumentally valuing saving lives as a way to obtain positive emotions or other benefits for oneself), but a utility function that treats "a life saved by me" differently from just "a life" seems counterintuitive. If the utility of a life saved by Perplexed is not different from the utility of another life, then unless your utility function just happens to have a sharp bend at the current world population level, the utility of two saved lives can't be much less than twice the utility of one saved life. (See Eliezer's version of this argument, and more along this vein, here.)
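A rough way to spell out the "sharp bend" step (a sketch of the general argument, assuming a smooth utility function; it is not a quotation of Eliezer's version): write $U(n)$ for the utility of a world in which $n$ additional lives have been saved. If $U$ has no kink at $n = 0$, then to first order

$$U(2) - U(0) \approx 2\,[\,U(1) - U(0)\,],$$

so the second saved life is worth nearly as much as the first unless the function bends sharply at the current level.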
-6Matt_Simpson
0Jonathan_Graehl
The value of saved vs. new vs. cloned lives is a worthwhile question to introspect on (and yes, it's only one example).

I'd gain more satisfaction out of saving a group of people by defeating the cause directly - safely killing or capturing the kidnappers rather than paying the ransom. I'd rather save all those at risk by defeating the entire threat, permanently. If I can only save a small fraction of the group threatened by a single cause, that's less satisfying.

But maybe in what you'd think would be a nearly-linear region (you can save a few people from starvation today, for sure), I'd be more than half as satisfied by helping one identifiable person and being able to monitor the consequences than I would by helping two (out of an ocean of a billion).

Further, in those "drop in a bucket" cases, I'd expect some desire to save people from diverse threats, as long as the reduced efficiency wasn't too high to justify the thrill of novelty. This desire would be in tension with conserving research/decision effort (just save one more life in the way already researched, prepared, and tested), consistency, and a desire for complete victory (but I postulated that my maximal impact was too small - but becoming part of an alliance that achieves complete victory would be nice).

Part of the value of saving existing lives is that I feel a sense of security knowing that I and people like me are fighting such threats as might someday affect me - a reflexive feeling of having allies in the world who might help me - not as a result of anonymous charity (which would be irrational), but as a result of my being the type of person who, when having resources to spare, helps where it's needed more.

But I'm convinced by mathematical arguments that utility should be additive. If the value of N things in the real world is not N times the value of 1 thing, then I handle that in how I assign utility to world states. I want to use additive utility, and as far as I can tell I'm immune to argum
-4XiXiDu
Doesn't Eliezer say we shouldn't discount?

You are correct that my comments are missing the mark. Still, there is a sense in which the kinds of non-determinism represented by Born probabilities present problems for Si. I agree that Si definitely does not pretend to generate its predictions based on observation of the whole universe. And it does not pretend to predict everything about the universe. But it does seem to pretend that it is doing something better than making predictions that apply to only one of many randomly selected "worlds".

Can anyone else - Cousin_it perhaps - explain why deterministic evolution of the wave function seems to be insufficient to place Si on solid ground?

0timtyler
They would represent problems for determinism - if they were "real" probabilities. However, the idea around here is that probabilities are in the mind. Here is E T Jaynes on the topic: [*] Translation: "A fundamental uncertainty, not only obscurity"

Hmmm. Would you be happier if I changed my last line to read "... we need to discard the whole Si concept as inappropriate to our imperfectly-observed universe."?

0timtyler
I don't think so. Solomonoff induction applies to streams. The most common application is to streams of sense data. There is no pretense of somehow observing the whole of the universe in the first place.

My implicit point was this: Nesov2006 probably did not realize that Nesov2006 was a fool and Nesov2008 probably did not judge himself to be a crackpot. Therefore, a naive extrapolation ("obvious prediction") suggests that Nesov2011 has some cognitive flaws which he doesn't yet recognize; he will recognize this, though, in a few years.

JoshuaZ, as I understood him, was suggesting that one improves one's tolerance by enlarging one's prior for the hypothesis that one is currently flawed oneself. He suggests the time-machine thought experiment as a way of doing so.

You, as I understood you, claimed that JoshuaZ's self-hack doesn't work on you. I'm still puzzled as to why not.

1Vladimir_Nesov
To a significant extent, both would agree with my judgment. The problem was not so much inability to see the presence of a problem; they just didn't know its nature in enough detail to figure out how to get better. So the situation is not symmetric in the way you thought. See The Modesty Argument for a discussion of why I won't believe myself crazy just because there are all those crazy people around who don't believe themselves crazy.

Write suggestion-driven fiction.

What is "suggestion-driven fiction"? Googling was unhelpful.

What it sounds like is fiction in which the author has no particular story in mind as (s)he begins the narration, but rather the author generates plot and characters in response to reader suggestions as each chapter is published.

If that is the kind of thing you are talking about, it sounds very intriguing. But I wonder how a beginner captures enough initial readers to generate the suggestions. Reciprocity? If someone wants to organize a circle of th... (read more)

0AdeleneDawner
You can also write stories set in a collaboratively-created world. I think this is commonly called 'conworlding', but I don't know enough about it to know if I'm actually using the term correctly. Here are two examples of the kind of thing I'm talking about. (If you sign up for the former, which has a rather high proportion of transhumanists playing, let them know that Adelene sent you - I get imaginary currency for recruiting.)
1Armok_GoB
Yea, that's basically it, or at least that falls squarely into the category together with some other things. The most common by far is a sort of communal roleplaying where the actions of the protagonist are determined by the community but you do everything else.

There are a lot of sites with communities that do these things, usually the games/roleplaying section of various forums or image boards. There are also sites that specialize in them, in which case it's usually some specific type of them such as illustrated ( http://www.mspaforums.com/forumdisplay.php?85-Forum-Adventures ), branching+anonymous+collaborative ( http://www.epicsplosion.com/epicsploitation/adventures ), etc.

If you wait a week or so, the lesswrong forums will probably be a good place for you to start a tradition of them, in case you don't want to learn the culture of some existing place. You could also run it on that blog you already have and rely on the comments functionality.

I follow a lot of these things, in a lot of places, and know a fair bit about how to make one successful, so if you're ever in doubt, or interest or inspiration is inexplicably dying, feel free to ask me. Oh, and please post a link here whenever you start so I can read it and suggest things.

What should I do?

Step 1: Stop being frustrated with them for not knowing/understanding. Instead, direct your frustration at yourself for not successfully communicating.

Step 2: Come to know that the reason for your failure to communicate is not a lack of mastery over your own arguments. It is a lack of understanding of their arguments. Cultivate the skill of listening. Ask which school of martial arts presents the best metaphor for your current disputation habits. Which school best matches the kind of disputation you would like to achieve?

Step 3: ... (read more)

I would easily bite the bullet and say that Nesov2008 was a crackpot and Nesov2006 was a shallow naive fool.

Ah. But would you make the obvious predictions about the opinion Nesov2013 and Nesov2015 will have regarding Nesov2011?

4Vladimir_Nesov
You are being overly cryptic (obvious predictions?). Judgments like this are not relative. I don't think I'm currently in anywhere close to that much trouble, and haven't been since about summer 2010 (2009 was so-so, since I regarded some known-to-be-confused hypotheses I entertained then at an unjustified level of confidence, and was too quick to make cryptic statements that weren't justified by much more detailed thoughts). I'm currently confused about some important questions in decision theory, but that's the state of my research, and I understand the scope of my confusion well enough.

I disagree. Alicorn's version is more mathematically meaningful, to my mind, than WeiDai's. But to return to the original problem:

A. Two-boxing yields more money than would be yielded by counterfactually one-boxing.
B. Taboo "counterfactually". ...

"Recommends" is math?

0Wei Dai
Sorry, I thought it would be clear that it just means [the CDT formula] = 'two-box'.
2Vladimir_Nesov
It refers to the math that can be filled in on demand (more or less). In Alicorn's dialog, the intended math is not clear from the context, and indeed it seems that there was no specific intended math.

I'm confused. Assuming that I "believe in" the validity of what I have been told of quantum mechanics, I fully expect that a million quantum coin tosses will generate an incompressible string. Are you suggesting that I cannot simultaneously believe in the validity of QM and also believe in the efficacy of Solomonoff induction - when applied to data which is "best explained" as causally random?

Off the top of my head, I am inclined to agree with this suggestion, which in turn suggests that Si is flawed. We need a variant of Si which al... (read more)

0timtyler
I don't think the universe shows any signs of being non-deterministic. The laws of physics as we understand them (e.g. the wave equation) are deterministic. So, Solomonoff induction is not broken.
3cousin_it
Possibly ironically relevant. Eliezer quoting Robyn Dawes:

The universal prior implies you should say "substantially less than 1 million".

Why do you say this? Are you merely suggesting that my prior experience with quantum coins is << a million tosses?

3cousin_it
No, I'm not suggesting that. I think the statement stays true even if you've already seen 100 million quantum coinflips and they looked "fair". The universal prior still thinks that switching to a more ordered generator for the next million coinflips is more likely than continuing with the random generator, because at that point the algorithmic complexity of preceding coinflips is already "sunk" anyway, and the algorithmic complexity of switching universes is just a small constant. ETA: after doing some formal calculations I'm no longer so sure of this. Halp.
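A back-of-the-envelope version of this counting argument (a sketch only; the ETA above already flags that the careful calculation is unsettled): under the universal prior, a continuation describable by a program of length $K$ bits gets weight on the order of $2^{-K}$. Any one particular incompressible string of $10^6$ further coinflips takes about $10^6$ bits to specify, while a simple ordered continuation (say, all zeros) takes only some small constant $c$ bits, so

$$2^{-c} \gg 2^{-10^6}.$$

That favors any single simple continuation over any single random-looking one. Whether the ordered continuations win in aggregate is a separate question, since roughly $2^{10^6}$ random-looking strings share the remaining weight between them, which is presumably where the formal calculation gets murky.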

Similarly, in 2000, a man named "Alex" (fake name, but real case) suddenly developed pedophilic impulses at age 40, and was eventually convicted of child molestation. Turns out he also had a brain tumor, and once it was removed, his sexual interests went back to normal. ...

At the very least, it seems like we would certainly be justified in judging Charles and Alex differently from people who don't suffer from brain tumors.

Alex was not punished for the impulses he felt. Rather, he was punished for molesting a child. Judge Charles and Alex dif... (read more)

2asr
In general, we don't punish people purely for actions. For most crimes, having the appropriate criminal intent is a required component of the crime. No mens rea, no crime. That's why we don't prosecute people who have a stroke and lose control of their car for vehicular assault. I think the tumor case is analogous -- the tumor was an unexpected factor, not caused in any direct way by the perpetrator's prior choices or beliefs.

Maybe you should start by trying to figure out actual helpful suggestions, and then we can worry about whether people will be offended by them.

Good advice, I think. It is hard to tell (without an example) whether any gimmicks that work to get women into bed with you also work to get women to meetups. I would guess that any convincing way of communicating the message "Come on! It will be fun!!" would help with both.

Anyone have any brilliant ideas?

Call his attention to the stuff under the guise of soliciting a male opinion on whether it is... (read more)

0taryneast
I agree.

do they speculate that our universe is being computed (at a high-level) by a cellular automaton?

What does "at a high level" mean in this context?

Btw, thanks for posting this.

Perplexed-20

I'm not following you. Why is evil action XYZ going to be done regardless? Are you imagining that deontologists seek to have other people do their dirty deeds for them?

2Will_Sawin
Well, exactly. It's a possible situation in the mathematical framework of who-did-what-to-whom you created. I thought of it before I thought of a reason why. For many definitions of what "who-did-what-to-whom" means, a sufficiently clever reason why would be constructed. Maybe it must be done to prevent bad stuff. Maybe it's a fact of the psychology of these two individuals that one of them is going to do it. Maybe an AI in a box is going to convince one of two people with the power to release it, to release it - this is sort of like the last one?

Will was rather more patient than he could have been.

Rather less careful, I would say. He failed to notice the typo above until nsheperd pointed it out - the original source of the confusion. And then later he began a comment with:

No, this is not the case. You have to cleverly choose B.

I have no idea at all what "is not the case". And I also don't know when anyone was offered the opportunity to cleverly choose B.

Will's description of his own limited motivation to communicate is the only portion of this thread which is crystal clear.

Yes,... (read more)

-1Peterdjones
It seems to me the substance of Mr Sawin's objection could have been expressed more briefly and clearly as "Deontologists would not steal under any circumstances". (Or even the familiar "Deontologists would not lie under any circumstances, even to save a life").

isn't preventing the existence of people who have stolen a consequentialist goal?

Taking into account the existence of people who have stolen is one way for a consequentialist to model the thinking of deontologists. If a consequentialist includes history of who-did-what-to-whom in his world states, he is capturing all of the information that a deontologist considers. Now, all that is left is to construct a utility function that attaches value to the history in the way that a deontologist would.

Voila! Something that approximates successful communication between deontologist and consequentialist.
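A minimal sketch of that construction in Python, with made-up event records and an arbitrary penalty; it only illustrates how a consequentialist utility function can take the who-did-what-to-whom history as part of its input.

```python
from typing import List, Tuple

# Each history entry records (agent, action, patient), e.g. ("Omega", "stole_from", "Bill").
Event = Tuple[str, str, str]

def utility(world_value: float, history: List[Event], theft_penalty: float = 100.0) -> float:
    """Utility over a world-state plus its history.

    world_value stands for whatever the consequentialist already cares about in
    the end state; the history term attaches deontology-flavored value to how
    that state came about. The penalty size is arbitrary.
    """
    thefts = sum(1 for (_agent, action, _patient) in history if action == "stole_from")
    return world_value - theft_penalty * thefts

# Two histories ending in the same world-state score differently once a theft is on record.
print(utility(1000.0, []))                                    # 1000.0
print(utility(1000.0, [("Omega", "stole_from", "Bill")]))     # 900.0
```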

0Will_Sawin
Unfortunately, all I can do is imagine a heated contest between two people over which of them is going to do some evil action XYZ that is going to be done regardless. They each want to ensure that they don't do it, but for some reason it will necessarily be done, so they come to blows over it. I may, in fact, be constitutionally incapable of successful communication with deontologists.
0orthonormal
According to the search tool, this was Less Wrong's first use of "STFU" directed at another contributor. I'm pretty proud of the site for having avoided this term, and I'm pretty chagrined at you for having broken the streak.
0wedrifid
It should be no surprise that this outburst made me far more inclined to view the grandparent in a positive light. In this case the actual content of Will's comment seems easy to understand. Given Peterdjones' aggressive use of his own incomprehension, Will was rather more patient than he could have been. He could have linked to a Wikipedia article on the subject so that he could get a grasp of the basics.

Does world B contain someone who stole Bill's money? Does world C contain someone who stole Alicorn's money?

One reason that you are having trouble seeing the world as a deontologist sees it is that you stubbornly refuse to even try.

0Will_Sawin
In the example, yes, Omega, and yes, peterdjones. But isn't preventing the existence of people who have stolen a consequentialist goal?
Perplexed110

Why does my intuition reject wireheading? Well, I think it has something to do with the promotion of instrumental values to terminal values.

Some pleasures I value for themselves (terminal) - the taste of good food, for example. As it happens, I agree with you that there is no true justification for rejecting wireheading for these kinds of pleasures. The semblance of pleasure is pleasure.

But some things are valued because they provide me with the capability, ability, or power (instrumental) to do what I want to do, including experiencing those terminal p... (read more)

0[anonymous]
But you just said you value those things instrumentally, so you can get pleasurable sensations. Raw power itself doesn't do anything for you, just sitting there. I can see how, when considering being wireheaded, you would come to reject it based on that. Essentially, you'd see (e.g.) wireheaded power as not actually instrumentally useful, so you reject the offer. It sounds like snake-oil. But isn't that a false conclusion? It might feel like it, but you won't actually feel any worse off when you're completely wireheaded. Fake capabilities are a problem when interacting with agents who might exploit you, so the heuristic is certainly useful, but it fails in the case of wireheading that actually delivers on its promises. You won't need knowledge and power and so on when you're in wirehead heaven, so wireheading can simply ignore or fake them. (Disclaimer: muflax does not advocate giving up your autonomy to Omegas claiming to provide wirehead heaven. Said Omegas might, in fact, be lying. Caution is advised.) It does.