All of CoffeeStain's Comments + Replies

Living in the same house and coordinating lives isn't a method for ensuring that people stay in love; being able to do so is proof that they are already in love. An added social construct is a perfectly reasonable option for making it harder to change your mind.

4Shmi
The point of the quote is that it tends to make it harder to stay in love, which is the opposite of what people want when they get married.

It sometimes seems to me that those of us who actually have consciousness are in a minority, and everyone else is a p-zombie.

When I myself run across apparent p-zombies, they usually look at my arguments as if I am being dense over my descriptions of consciousness. And I can see why, because without the experience of consciousness itself, these arguments must sound like they make consciousness out to be an extraneous hypothesis to help explain my behavior. Yet, even after reflecting on this objection, it still seems there is something to explain besid... (read more)

Perhaps ambiguity aversion is merely a good heuristic.

Well of course. Finite ideal rational agents don't exist. If you were designing a decision-theoretically optimal AI, its optimality would be a property of its environment, not of some ideal abstract computing space. I can think of at least one reason why ambiguity aversion could be the optimal algorithm in environments with limited computing resources:

Consider a self-modification algorithm that adapts to new problem domains. Restructuring (learning) is considered the hardest of tasks, and so the AI modifies scarcel... (read more)

Shouldn't this post be marked [Human] so that uploads and AIs don't need to spend cycles reading it?

...I'd like to think that this joke bears the more subtle point that a possible explanation for the preparedness gap in your rationalist friends is that they're trying to think like ideal rational agents, who wouldn't need to take such human considerations into account.

I have a friend with Crohn's Disease, who often struggles with the motivation to even figure out how to improve his diet in order to prevent relapse. I suggested he find a consistent way to not have to worry about diet, such as prepared meals, a snack plan, meal replacements (Soylent is out soon!), or dietary supplements.

As usual, I'm pinging the rationalists to see if there happens to be a medically inclined recommendation lurking about. Soylent seems promising, and doesn't seem the sort of thing that he and his doctor would have even discussed. ... (read more)

2spqr0a1
Consider helminthic therapy. Hookworm infection down-regulates bowel inflammation and my parasitology professor thinks it is a very promising approach. NPR has a reasonably good popularization. Depending on the species chosen, one treatment can control symptoms for up to 5 years at a time. It is commercially available despite lack of regulatory approval. Not quite a magic bullet, but an active area of research with good preliminary results.
2tut
There is no known "cure all solution not entirely endorsed by the FDA that will solve all of your motivational and health problems in one fell swoop." A lot of people with Crohn's seem to get some benefit from changing their diet. But the conclusions they draw always seem to contradict each other and in general the improvements are temporary. What it looks like to me (and at least one person with her own experience of the problem) is that radically changing your diet every few years is what you need to do.

There has been mathematically proven software, and the space shuttle came close, though that was not proven as such.

Well... If you know what you wish to prove, then it's possible that there exists a chain of logic that begins with a computer program and ends with the thing you wished to prove following as a necessity. But that's not really exciting. If you could code in the language of proof theory, you would already have the program. The mathematical proof of a real program is just a translation of the proof into machine code and then showing that it goes both ways.

You can potentially prove a space ... (read more)
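As a minimal sketch of what "coding in the language of proof theory" looks like (the `double` function and its theorem are hypothetical examples, written in Lean 4 with the built-in `omega` tactic assumed available): the program and the claim about it live in the same artifact, so writing the proof-carrying version presupposes already having the program.

```lean
-- A toy program together with a machine-checked claim about it.
def double (n : Nat) : Nat := n + n

-- The specification is just another term in the same language;
-- proving it means the program and its correctness argument coexist.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

Scaling this from a toy function to a real program is exactly the translation-and-back step the comment describes.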

It depends on whether you count future income. The highest-paying careers are often so because only those willing to put in extra effort at their previous jobs get promoted. This is at least true in my field, software engineering.

3Antisuji
I more or less agree, but note that extra effort does not necessarily mean extra hours. Though, depending on who you work for the latter might be a good proxy for the former.

The film's trailer strikes me as being aware of the transhumanist community in a surprising way, as it includes two themes that are otherwise not connected in the public consciousness: uploads and superintelligence. I wouldn't be surprised if a screenwriter found inspiration from the characters of Sandberg, Bostrom, or of course Kurzweil. Members of the Less Wrong community itself have long struck me as ripe for fictionalization... Imagine if a Hollywood writer actually visited.

3Dr_Manhattan
LW fictionalization already happened http://lesswrong.com/lw/2ti/greg_egan_disses_standins_for_overcoming_bias/

They can help with depression.

I've personally tried this and can confirm it, but will add the caveat that the expectation that I will force myself into a morning cold shower often causes oversleeping, which rather exacerbates depression.

Often in Knightian problems you are just screwed and there's nothing rational you can do.

As you know, this attitude isn't particularly common 'round these parts, and while I fall mostly in the "Decision theory can account for everything" camp, there may still be a point there. "Rational" isn't really a category so much as a degree. Formally, it's a function on actions that somehow measures how much that action corresponds to the perfect decision-theoretic action. My impression is that somewhere there's a Gödelian consideration lurki... (read more)

Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases.

Ah! I didn't quite pick up on that. I'll note that infinite regress problems aren't necessarily defeaters of an approach. Good minds that could fall into that trap implement a "Screw it, I'm going to bed" trigger to keep from wasting cycles even when using an otherwise helpful heuristic.

Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the boxes could possi

... (read more)
2David_Chapman
Yes—this is part of what I'm driving at in this post! The kinds of problems that probability and decision theory work well for have a well-defined set of hypotheses, actions, and outcomes. Often the real world isn't like that. One point of the black box is that the hypothesis and outcome spaces are effectively unbounded. Trying to enumerate everything it could do isn't really feasible. That's one reason the uncertainty here is "Knightian" or "radical." In fact, in the real world, "and then you get eaten by a black hole incoming near the speed of light" is always a possibility. Life comes with no guarantees at all. Often in Knightian problems you are just screwed and there's nothing rational you can do. But in this case, again, I think there's a straightforward, simple, sensible approach (which so far no one has suggested...)

But the point about meta probability is that we do not have the nodes. Each meta level corresponds to one nesting of networks in nodes.

Think of Bayesian graphs as implicitly complete, with the set of nodes being everything to which you have a referent. If you can even say "this proposition" meaningfully, a perfect Bayesian implemented as a brute-force Bayesian network could assign it a node connected to all other nodes, just with trivial conditional probabilities that give the same results as an unconnected node.

A big part of this discussion... (read more)
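A minimal sketch of the "trivial conditional probabilities" point, with made-up numbers: wiring a node to a parent with a constant conditional probability table yields exactly the same joint distribution as leaving it unconnected.

```python
import numpy as np

p_a = np.array([0.4, 0.6])  # P(A) for A in {0, 1}

# Unconnected node: B carries its own marginal, independent of A.
p_b = np.array([0.3, 0.7])
joint_unconnected = np.outer(p_a, p_b)

# "Connected" node with a trivial CPT: P(B | A) is identical for every value of A.
cpt_b_given_a = np.array([[0.3, 0.7],   # row: A = 0
                          [0.3, 0.7]])  # row: A = 1
joint_connected = p_a[:, None] * cpt_b_given_a

# Same joint distribution, so the edge carries no information.
print(np.allclose(joint_unconnected, joint_connected))  # True
```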

1Gunnar_Zarncke
Sure, you can always use the total net of all possible propositions. But the set of all propositions is intractable. It may not even be sensibly enumerable. For nested nets at least you can construct the net of the powerset of the nodes and that will do the job - in theory. In practice even that is horribly inefficient. And even though our brain is massively parallel, it surely doesn't do that.

It is helpful, and was one of the ways that helped me to understand One-boxing on a gut level.

And yet, when the problem space seems harder, when "optimal" becomes uncomputable and wrapped up in the fact that I can't fully introspect, playing certain games doesn't feel like designing a mind. Although, this is probably just due to the fact that games have time limits, while mind-design is unconstrained. If I had an eternity to play any given game, I would spend a lot of time introspecting, changing my mind into the sort that could play iterations... (read more)

"How often do listing sorts of problems with some reasonable considerations result in an answer of 'None of the above' for me?"

If "reasonable considerations" are not available, then we can still:

"How often did listing sorts of problems with no other information available result in an answer of 'None of the above' for me?"

Even if we suppose that maybe this problem bears no resemblance to any previously encountered problem, we can still ask (because the fact that it bears no resemblance is itself a signifier):

"How often did problems I'd encountered for the first time have an answer I never thought of?"

My LessWrongian answer is that I would ask my mind that was created already in motion what the probability is, then refine it with as many further reflections as I can come up with. Embody an AI long enough in this world, and it too will have priors about black boxes, except that reporting that probability in the form of a number is inherent to its source code rather than strange and otherworldly like it is for us.

The point that was made in that article (and in the Metaethics sequence as a whole) is that the only mind you have to solve a problem is the o... (read more)

0David_Chapman
Part of the motivation for the black box experiment is to show that the metaprobability approach breaks down in some cases. Maybe I ought to have made that clearer! The approach I would take to the black box does not rely on metaprobability, so let's set that aside. So, your mind is already in motion, and you do have priors about black boxes. What do you think you ought to do in this case? I don't want to waste your time with that... Maybe the thought experiment ought to have specified a time limit. Personally, I don't think enumerating things the boxes could possibly do would be helpful at all. Isn't there an easier approach?

The idea of metaprobability still isn't particularly satisfying to me as a game-level strategy choice. It might be useful as a description of something my brain already does, and thus give me more information about how my brain relates to or emulates an AI capable of perfect Bayesian inference. But in terms of picking optimal strategies, perfect Bayesian inference has no subroutine called CalcMetaProbability.

My first thought was that your approach elevates your brain's state above states of the world as symbols in the decision graph, and calls the differ... (read more)

0Gunnar_Zarncke
But the point about meta probability is that we do not have the nodes. Each meta level corresponds to one nesting of networks in nodes. Only in so far as you approximate yourself simply as per above. This discards information.
4Vaniver
I find it helpful to think of "the optimal way to play game X" as "design the mind that is best at playing game X." Does that not seem helpful to you?
0David_Chapman
Well, regardless of the value of metaprobability, or its lack of value, in the case of the black box, it doesn't seem to offer any help in finding a decision strategy. (I find it helpful in understanding the problem, but not in formulating an answer.) How would you go about choosing a strategy for the black box?

Right down the middle: 25-75

Hmm, come to think of it, deciding the size of the cash prize (for it being interesting) is probably worth more to me as well. I'll just have to settle for boring old cash.

I defected, because I'm indifferent to whether the prize-giver or prize-winner has 60 * X dollars, unless the prize-winner is me.

3Nornagest
I cooperated, because I'm more or less indifferent to monetary prizes of less than twenty dollars or so, and more substantial prizes imply widespread cooperation. I view it as unlikely that I can get away with putting myself into a separate reference class, so I might as well contribute to that.

Am I walking the wrong path?

Eh, probably not. Heuristically, I shy away from modes of thought that involve intentional self-deception, but that's because I haven't been mindful of myself long enough to know ways I can do this systematically without breaking down. I would also caution against letting small-scale pride translate into larger domains where there is less available evidence for how good you really are. "I am successful" has a much higher chance of becoming a cached self than "I am good at math." The latter is testable ... (read more)

For certain definitions of pride. Confidence is a focus on doing what you are good at, enjoying doing things that you are good at, and not avoiding doing things you are good at around others.

Pride is showing how good you are at things "just because you are able to," as if to prove to yourself what you supposedly already know, namely that you are good at them. If you were confident, you would spend your time being good at things, not demonstrating that you are so.

There might be good reasons to manipulate others. Just proving to yourself that yo... (read more)

1timujin
Maybe that's just my personal quirk (is it?) but my pride is a good motivator for me to become stronger. If I think I am more able in some area than I actually am, then when evidence for the contrary comes knocking, I try as much as I can to defend the 'truth' I believe in by actually training myself in that area until I match that belief. And since I can't keep my mouth shut and thus I tell and demonstrate everyone how awesome I am when I am not actually that good, there is really no way out but to make myself match what other people think of me. Maybe that's not a very good rationality habit, but I am fully mindful of the process, and if I ever need to know my actual level at expense of that motivational factor, it is no trouble to sit down with a pencil and figure out the truth. It can hurt (because my real level almost always is way less than my expectations of it most of the time), but is probably worth it. Manipulating people just out of pride and sense of domination was actually the factor that developed my social skills more than anything else. I became more polite, started to watch my appearance, posture and facial expressions (because it's easier to trick those who like me), became better at detecting lies and other people's attempts to manipulate me. Also, I believe, it helped me to avoid conformity (when you see people making dumb mistakes on a regular basis just because you told them something, the belief in their sanity vanishes quickly). And I am safe from losing friends' trust, because I strive to never trick or deceive close people (in a very broad sense) and maintain something close to (but not quite) Radical Honesty policy with those whom I value. Am I walking the wrong path?

Because your prior for "I am manipulating this person because it satisfies my values, rather than my pride" should be very low.

If it isn't, then here are four words for you:

"Don't value your pride."

3Moss_Piglet
Sorry to keep adding to the "why?" pile but do you mind explaining this one too?

Whenever I have a philosophical conversation with an artist, invariably we end up talking about reductionism, with the artist insisting that if they give up on some irreducible notion, they feel their art will suffer. I've heard, from some of the world's best artists, notions ranging from "magic" to "perfection" to "muse" to "God."

It seems similar to the notion of free will, where the human algorithm must always insist it is capable of thinking about itself one level higher. The artist must always think of his art o... (read more)

0anandjeyahar
Elizabeth Gilbert presents a reasonably practical justification for the use of such a concept. See [here](http://www.youtube.com/watch?v=86x-u-tz0MA). Warning: TED talk and generous use of "reasonable"
4Ishaan
I don't think that this is an artist problem; I think this is a human problem, which a few scientists and philosophers have been forced to overcome in pursuit of truth. Too many people have straw-vulcan notions of reductionism. (tvtropes warning)

The closest you can come to getting an actual "A for effort" is through creating cultural content, such as a Kickstarter project or starting a band. You'll get extra success when people see that you're interested in what you're doing, over and beyond its value as an indicator that what you'll produce will be of quality. People want to be part of something that is being cared for, and in some cases would prefer it to lazily created perfection.

I'd still call it though an "A for signalling effort."

A bunch of 5th grade kids taught you how to convert decimals to fractions?

2Panic_Lobster
... I really don't think my syntax is that unclear.

EDIT: All right then, if you downvoters are so smart, what would you bet if you were in sleeping beauty's place?

This is a fair point. Yours is an attempt at a real answer to the problem. My answer, and most answers here, seem to say something like: the problem is ill-defined, or the physical situation described by the problem is impossible. But if you were actually Sleeping Beauty waking up with a high prior to trust the information you've been given, what else could you possibly answer?

If you had little reason to trust the information you've been given, the apparent impossibility of your situation would update that belief very strongly.

The expected value for "number of days lived by Sleeping Beauty" is an infinite series that diverges to infinity. If you think this is okay, then the Ultimate Sleeping Beauty problem isn't badly formed. Otherwise...

If you answered 1/3 to the original Sleeping Beauty Problem, I do not think that there is any sensible answer to this one. I do not however consider this strong evidence that the answer of 1/3 is incorrect for the original problem.

To expand on this: 1/3 is also the answer to the "which odds should I precommit myself to take?" question, and it uses the same math as SIA to yield that result for the original problem. So which odds one should take in this problem is likewise undefined. Precommitting to odds seems less controversial, so we should transplant our indifference to the apparent paradox there onto the problem here.
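For reference, here is the per-awakening odds calculation behind both the SIA answer and the precommitment answer in the original problem (a standard derivation; the notation is only illustrative). When the expected number of awakenings diverges, as in the Ultimate version, the denominator is undefined and the calculation returns no answer.

```latex
P(\text{heads}\mid\text{awake})
  = \frac{\tfrac{1}{2}\,\mathbb{E}[\text{awakenings}\mid\text{heads}]}
         {\tfrac{1}{2}\,\mathbb{E}[\text{awakenings}\mid\text{heads}]
          + \tfrac{1}{2}\,\mathbb{E}[\text{awakenings}\mid\text{tails}]}
  = \frac{\tfrac{1}{2}\cdot 1}{\tfrac{1}{2}\cdot 1 + \tfrac{1}{2}\cdot 2}
  = \frac{1}{3}
```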

On your account, when we say X is a pedophile, what do we mean?

Like other identities, it's a mish-mash of self-reporting, introspection (and extrospection of internal logic), value function extrapolation (from actions), and ability in a context to carry out the associated action. The value of this thought experiment is to suggest that the pedophile clearly thought that "being" a pedophile had something to do not with actually fulfilling his wants, but with wanting something in particular. He wants to want something, whether or not he gets it... (read more)

0TheOtherDave
Um.... OK. Thanks for clarifying.

That's a 'circular' link to your own comment.

It was totally really hard, I had to use a quine.

It might decide to do that - if it meets another powerful agent, and it is part of the deal they strike.

Is it not part of the agent's (terminal) value function to cooperate with agents when doing so provides benefits? Does the expected value of these benefits materialize from nowhere, or do they exist within some value function?

My claim entails that the agent's preference ordering of world states consists mostly in instrumental values. If an agent's value... (read more)

So, OK, X is a pedophile. Which is to say, X terminally values having sex with children.

I'm not sure that's a good place to start here. The value of sex is at least more terminal than the value of sex according to your orientation, and the value of pleasure is at least more terminal than sex.

The question is indeed one about identity. It's clear that our transhumans, as traditionally notioned, don't really exclusively value things so basic as euphoria, if indeed our notion is anything but a set of agents who all self-modify to identical copies of the h... (read more)

3TheOtherDave
Well, as I said initially, I prefer to toss out all this "terminal value" stuff and just say that we have various values that depend on each other in various ways, but am willing to treat "terminal value" as an approximate term. So the possibility that X's valuation of sex with children actually depends on other things (e.g. his valuation of pleasure) doesn't seem at all problematic to me. That said, if you'd rather start somewhere else, that's OK with me. On your account, when we say X is a pedophile, what do we mean? This whole example seems to depend on his pedophilia to make its point (though I'll admit I don't quite understand what that point is), so it seems helpful in discussing it to have a shared understanding of what it entails. Regardless, wrt your last paragraph, I think a properly designed accompanying AI replies "There is a large set of possible future entities that include you in their history, and which subset is "really you" is a judgment each judge makes based on what that judge values most about you. I understand your condition to mean that you want to ensure that the future entity created by the modification preserves what you value most about yourself. Based on my analysis of your values, I've identified a set of potential self-modification options I expect you will endorse; let's review them." Well, it probably doesn't actually say all of that.

Example of somebody making that claim.

It seems to me a rational agent should never change its self-consistent terminal values. To act out that change would be to act according to some other value and not the terminal values in question. You'd have to say that the rational agent floats around between different sets of values, which is something that humans do, obviously, but not ideal rational agents. The claim then is that ideal rational agents have perfectly consistent values.

"But what if something happens to the agent which causes it too see that... (read more)

1timtyler
That's a 'circular' link to your own comment. It might decide to do that - if it meets another powerful agent, and it is part of the deal they strike.
-2Lumifer
Only a static, an unchanging and unchangeable rational agent. In other words, a dead one. All things change. In particular, with passage of time both the agent himself changes and the world around him changes. I see absolutely no reason why the terminal values of a rational agent should be an exception from the universal process of change.

I'm not sure that both these statements can be true at the same time.

If you take the second statement to mean, "There exists an algorithm for Omega satisfying the probabilities for correctness in all cases, and which sometimes outputs the same number as NL, which does not take NL's number as an input, for any algorithm Player taking NL's and Omega's numbers as input," then this ...seems... true.

I haven't yet seen a comment that proves it, however. In your example, let's assume that we have some algorithm for NL with some specified probabili... (read more)

Instead of friendliness, could we not code, solve, or at the very least seed boxedness?

It is clear that any AI strong enough to solve friendliness would already be using that power in unpredictably dangerous ways, in order to provide the computational power to solve it. But is it clear that this amount of computational power could not fit within, say, a one kilometer-cube box outside the campus of MIT?

Boxedness is obviously a hard problem, but it seems to me at least as easy as metaethical friendliness. The ability to modify a wide range of complex envir... (read more)

0Eugene
A slightly bigger "large risk" than Pentashagon puts forward is that a provably boxed UFAI could indifferently give us information that results in yet another UFAI, just as unpredictable as itself (statistically speaking, it's going to give us more unhelpful information than helpful, as Robb point out). Keep in mind I'm extrapolating here. At first you'd just be asking for mundane things like better transportation, cures for diseases, etc. If the UFAI's mind is strange enough, and we're lucky enough, then some of these things result in beneficial outcomes, politically motivating humans to continue asking it for things. Eventually we're going to escalate to asking for a better AI, at which point we'll get a crap-shoot. An even bigger risk than that -though - is that if it's especially Unfriendly, it may even do this intentionally, going so far as to pretend it's friendly while bestowing us with data to make an AI even more Unfriendly AI than itself. So what do we do, box that AI as well, when it could potentially be even more devious than the one that already convinced us to make this one? Is it just boxes, all the way down? (spoilers: it isn't, because we shouldn't be taking any advice from boxed AIs in the first place) The only use of a boxed AI is to verify that, yes, the programming path you went down is the wrong one, and resulted in an AI that was indifferent to our existence (and therefore has no incentive to hide its motives from us). Any positive outcome would be no better than an outcome where the AI was specifically Evil, because if we can't tell the difference in the code prior to turning it on, we certainly wouldn't be able to tell the difference afterward.
4Pentashagon
A large risk is that a provably boxed but sub-Friendly AI would probably not care at all about simulating conscious humans. A minor risk is that the provably boxed AI would also be provably useless; I can't think of a feasible path to FAI using only the output from the boxed AI; a good boxed AI would not perform any action that could be used to make an unboxed AI. That might even include performing any problem-solving action.
9wedrifid
Yes, that is possible and likely somewhat easier to solve than friendliness. It still requires many of the same things (most notably provable goal stability under recursive self improvement.)

Is LSD like a thing?

Most of my views on drugs and substances are formed, unfortunately, due to history and invalid perceptions of their users and those who appear to support their legality most visibly. I was surprised to find the truth about acid at least a little further to the side of "safe and useful" than my longtime estimation. This opens up a possibility for an attempt at recreational and introspectively therapeutic use, if only as an experiment.

My greatest concern would be that I would find the results of a trip irreducibly spiritual, o... (read more)

0hyporational
Another data point here. I've done LSD a couple of times, and didn't find the experience "spiritual" at all. The experience was mostly visual: illusion of movement in static objects when eyes open, and intense visualization when eyes closed. It's hard to describe these images, but it felt like my visual cortex was overstimulated and randomly generated geometric patterns intertwined with visual memories and newly generated constructs and sceneries. This all happened while travelling through a fractal-like pattern, so I felt the word "trip" was quite fitting. The trip didn't seem to affect my thinking much during or after. I can see why a susceptible (irrational) mind could find this chemical alteration of consciousness a godly revelation, but I can't imagine taking the stuff for anything else than entertainment purposes. A couple of friends of mine had similar experiences. LSD is known to cause persistent psychosis, apparently in people who already have latent or diagnosed mental health problems. This is what they teach in my med school, but the epidemiology of the phenomenon was left vague.
0FiftyTwo
Datapoint: another hallucinogen, ketamine, has been shown to effectively treat depression. Not sure if mechanisms of LSD are similar.
2gattsuru
I don't imbibe (for that matter, pretty much anything stronger than caffeine), so I can't offer any information about the experience of its effects on rationality. From the literature, it has a relatively high ratio of activity threshold to lethal dose (even assuming the lowest supported toxic doses), but that usually doesn't include behavioral toxicity. Supervision is strongly recommended. There's some evidence that psychoactive drugs (even weakly psychoactive drugs like marijuana) can aggravate preexisting conditions or even trigger latent conditions like depression, schizophrenia, and schizoid personality disorder.

One data point here. I've taken a few low-to-lowish dose trips. I'm still the same skeptic/pragmatist I was.

When I'd see the walls billowing and more detail generating out of visual details, I didn't think "The universe is alive!" I thought "my visual system is alive".

I did have an experience which-- to the extent I could put it into words-- was that my sense of reality was something being generated. However, it didn't go very deep-- it didn't have aftereffects that I can see. I'm not convinced it was false, and it might be worth exploring to see what's going on with my sense of reality.

7AndyWood
I won't be able to do it justice in words, but I like to try. If you value your current makeup as a "rationalist" - LSD will not necessarily help with that. Whatever your current worldview, it is not "the truth", it is constructed, and it will not be the same after you come down. You can't expect a trip to do anything in particular, except maybe blow your mind. A trip is like finding out you were adopted. It's discovering a secret hidden in plain sight. It's waking up to realize you've never been awake before - you were only dreaming you were awake. It's finding out that everything familiar, everything you took for granted, was something else all along, and you had no idea. No matter how much you've invested in the identity of "rationalist", no matter how much science you've read... Even if you know how many stars there are in the visible universe, and how many atoms. Even if you've cultivated a sense for numbers like that, real reality is so much bigger than whatever your perception of it is. I don't know how acid works, but it seems to open you in a way that lets more of everything in. More light. More information. Reality is not what you think it is. Reality is reality. Acid may not be able to show you reality, but it can viscerally drive home that difference. It can show you that you've been living in your mind all your life, and mistaking it for reality. It will also change your sense of self. You may find that your self-concept is like a mirage. You may experience ego-loss, which is like becoming nobody and nothing in particular, only immediate sensory awareness and thought, unconnected to what you think of as you, the person. I don't know about health dangers. I never experienced any. Tripping does permanently change the way you view the world. It's a special case of seeing something you can't un-see. Whether it's a "benefit" ... depends a lot on what you want.

On Criticism of Me

I don't mean to be antagonistic here, and I apologize for my tone. I'd prefer my impressions to be taken as yet-another-data-point rather than a strongly stated opinion on what your writings should be.

I'm interested in what in my writing is coming across as indicating I expect a stubborn audience.

The highest rated comment to your vegetarianism post and your response demonstrate my general point here. You acknowledge that the points could have been in your main essay, but your responses are why you don't find them to be good objec... (read more)

1Peter Wildeford
I think there's something to be said about making the essay too long by analyzing absolutely every consideration that could ever be brought up. There are dozens of additional considerations that I could have elaborated on at length in my essay (the utilitarianism of it, other meta-ethics, free range, whether nonhuman animal lives actually aren't worth living, logic of the larder, wild animal suffering, etc.), so many that it would be impossible to cover them all. Therefore, I preferred them to come up in the comments. But generally, should I hedge my claims more in light of more possible counterarguments? Yeah, probably. ~ I did read a large list of essays in this realm prior to writing this essay. A lot played on the decision theory angle and the concern with experts, but none mentioned the potential for biases in favor of x-risk or the history of commonsense. ~ To be fair, the essay did include quite a lot more extended argument than just that. I do agree I could have engaged better with other essays on the site, though. I was mostly concerned with issues of length and amount of time spent, but maybe I erred too much on the side of caution.

A criticism I have of your posts is that you seem to view your typical audience member as somebody who stubbornly disagrees with your viewpoint, rather than as an undecided voter. More critically, you seem to view yourself as somebody capable of changing the former's opinion through (very well-written) restatements of the relevant arguments. But people like me want to know why previous discussions haven't yet resolved the issue even in discussions between key players. Because they should be resolvable, and posts like this suggest to me that at least som... (read more)

2Peter Wildeford
On Criticism of Me I think I am doing persuasive writing (i.e. advocating for my point of view), but I would model myself as talking to an undecided voter or at least someone open minded, not someone stubborn. I'm interested in what in my writing is coming across as indicating I expect a stubborn audience. I think that's the case, yes. But I'm not sure they're restatements so much as a synthesis of many arguments that had not previously been all in one place, and even arguments that had never before been articulated in writing (as is the case in this piece). It's difficult to offer an answer to that question. I think one problem is many of these discussions haven't (at least as far as I know) taken place in writing yet. I'm confused. What's wrong with how they're currently laid out? Do you think there are certain arguments I'm not engaging with? If so, which ones? ~ On X-Risk Arguments I don't understand what you're saying here. It sounds like you're advocating for learned helplessness, but I don't think that's the case. What do you mean? Can you give me an example? I think that's equivocating between two different definitions of "proven".

Congrats! What is her opinion on the Self Indication Assumption?

9Stuart_Armstrong
She hasn't indicated it yet herself...

Attackers could cause the unit to unexpectedly open/close the lid, activate bidet or air-dry functions, causing discomfort or distress to user.

Heaven help us. Somebody get X-risk on this immediately.

3NancyLebovitz
To be fair, the article also mentions repeated flushing, which can raise utility bills. I think this could get quite expensive in regions with water shortages.

Can somebody explain a particular aspect of Quantum Mechanics to me?

In my readings of the Many Worlds Interpretation, which Eliezer fondly endorses in the QM sequence, I must have missed an important piece of information about when it is that amplitude distributions become separable in timed configuration space. That is, when do wave-functions stop interacting enough for the near-term simulation of two blobs (two "particles") to treat them independently?

One cause is spatial distance. But in Many Worlds, I don't know where I'm to understand thes... (read more)

5Manfred
First: check this out. Second: Suppose I want to demonstrate decoherence. I start out with an entangled state - two electrons that will always be magnetically aligned, but don't have a chosen collective alignment. This state is written like |up, up> + |down, down> (the electrons are both "both up" and "both down" at the same time; the |> notation here just indicates that it's a quantum state). Now, before introducing decoherence, I just want to check that I can entangle my two electrons. How do I do that? I repeat what's called a "Bell measurement," which has four possible indications: (|up,up>+|down,down>) , (|up,up>-|down,down>) , (|up,down>+|down,up>) , (|up,down>-|down,up>). Because my state is made of 100% Bell state 1, every time I make some entangled electrons and then measure them, I'll get back result #1. This consistency means they're entangled. If the quantum state of my particles had to be expressed as a mixture of Bell States, there might not be any entanglement - for example state 1 + state 2 just looks like |up,up>, which is boring and unentangled. To create decoherence, I send the second electron to you. You measure whether it's up or down, then re-magnetize it and send it back with spin up if you measured up, and spin down if you measured down. But since you remember the state of the electron, you have now become entangled with it, and must be included. The relevant state is now |up, up, saw up> + |down, down, saw down>. This state is weird, because now you, a human, are in a superposition of "saw up" and "saw down." But we'll ignore that for the moment - we can always replace you with with a third electron if it causes philosophical problems :) The question at hand is: what happens when we try to test if our electrons are still entangled? Again, we do this a bunch of times and do a repeated Bell measurement. If we get result #1 every time, they're entangled just like before. To predict the outcome ahead of time, we can factor our state into B
2Emile
(Warning: I am not a physicist; I learnt a bit about QM from my physics classes, the Sequences, the Feynman Lectures on Physics, and Good and Real, but I don't claim to even understand all that's in there) I'm not sure I totally understand your question, but I'll take a stab at answering: The important thing is configuration space, and spatial distance is just one part of that; there is just one configuration space over which the quantum wave-function is defined, and points in configuration space correspond to "universe states" (the position, spin, etc. of all particles). So two points in configuration space A and B "interfere" if they are similar enough that both can "evolve" into state C, i.e. state C's amplitude will be function of A and B's amplitudes. The more different A and B are, the less likely they are to have shared "descendant states" (or more precisely, descendant states of non-infinitesimal amplitude), so the more they can be treated like "parallel branches of the universe". Differences between A and B can be in physical distance of particles, but also of polarity/spin, etc. - as long as the distance is significant on one axis (say spin of a single particle), physical distance shouldn't matter. I think spin could be an example of "another axis" you're looking for (though even thinking in terms of axes may be a bit misleading, since all the attributes aren't nice and orthogonal like positions in cartesian space).
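A small numerical sketch of the decoherence story described above (numpy, with a hypothetical electron-pair-plus-observer state): tracing out the "observer" qubit removes the off-diagonal coherence between |up,up> and |down,down>, which is what lets the two branches be treated as separate worlds.

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Entangled pair: (|up,up> + |down,down>) / sqrt(2)
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)
rho_pair = np.outer(bell, bell)  # 4x4 density matrix with off-diagonal coherences

# Observer measures the second electron and remembers the result:
# (|up,up,saw up> + |down,down,saw down>) / sqrt(2)
state = (np.kron(np.kron(up, up), up) + np.kron(np.kron(down, down), down)) / np.sqrt(2)
rho_total = np.outer(state, state)  # 8x8 density matrix of pair + observer

# Reduced state of the pair: trace out the observer qubit.
rho_reduced = rho_total.reshape(4, 2, 4, 2).trace(axis1=1, axis2=3)

print(np.round(rho_pair, 3))     # 0.5 on the |uu><dd| corners: still coherent
print(np.round(rho_reduced, 3))  # those corners are now 0: decohered into two branches
```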

I suspect that those would be longer than should be posted deep in a tangential comment thread.

Yeah, probably. To be honest I'm still rather new to the rodeo here, so I'm not amazing at formalizing and communicating intuitions, which might just be a roundabout way of saying you shouldn't listen to me :)

I'm sure it's been hammered to death elsewhere, but my best prediction for what side I would fall on, if I had all the arguments laid out, would be the hard-line CS-theoretical approach, as is often the case for me. It's probably not obvious why there would be problems with ... (read more)

They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer.

I do appreciate the willingness to shut up and do the impossible here. Your certainty that there is no amount of chickens equal to the worth of your grandmother makes you believe you need to give up one of 3 plausible-seeming axioms, and you're not willing to think there i... (read more)
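For reference, the Archimedean step quoted above can be made explicit (generic symbols, nothing specific to grandmothers or chickens):

```latex
N > 0,\; M > 0
\;\Longrightarrow\;
k = \left\lceil \tfrac{N}{M} \right\rceil + 1
\;\Longrightarrow\;
kM \;\ge\; \tfrac{N}{M}\,M + M \;=\; N + M \;>\; N
```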

2Said Achmiz
I see. I confess that I don't find your "preferred ethical self" concept to be very compelling (and am highly skeptical about your claim that this is "what rationality is"), but I'm willing to hear arguments. I suspect that those would be longer than should be posted deep in a tangential comment thread. You shouldn't take me to have any kind of "theory that takes M = 0"; that is, IMO, a misleading way to talk about this. Setting M = 0 is merely the (apparently, at-first-glance) best resolution of a particular problem that arises when one starts with a certain set of moral intuitions and attempts to resolve them with a certain moral system (total utilitarianism). Does this resolution cause further issues? Maybe; it depends on other moral intuitions that we might have. Can we resolve them? Maybe; perhaps with a multi-tier valuation system, perhaps with something else. My primary point, way back at the beginning of this comment thread, is that something has to give. I personally think that giving up nonzero valuation of chickens is the least problematic on its own, as it resolves the issue at hand, most closely accords with my other moral intuitions, and does not seem, at least at first glance, to create any new major issues. Then again, I happen to think that we have other reasons to seriously consider giving up additive aggregation, especially over the real numbers. By the time we're done resolving all of our difficulties, we might end up with something that barely resembles the simple, straightforward total utilitarianism with real-number valuation that we started with, and that final system might not need to assign the real number 0 to the value of a chicken. Or it still might. I don't know. (For what it's worth, I am indifferent between the worm and the chicken, but I would greatly prefer a Mac SE/30 to either of them.)

So I don't think I ought to just say "eh, let's call grandma's worth a googolplex of chickens and call it a day".

Why not? Being wrong about what ideally-solved-metaethics-SaidAchmiz would do isn't by itself disutility. Disutility is X dead grandmas, where X = N / googleplex.

If we are using real-valued utilities, then we're back to either assigning chickens 0 value or abandoning additive aggregation.

Why? I take it that for the set of all possible universe-states under my control, my ideal self could strictly order those states by prefere... (read more)

0Said Achmiz
Because, as you say: Indeed, and the right answer here is choosing my grandmother. (btw, it's "googolplex", not "googleplex") Indeed; but... They do not, because if I value grandma N, a chicken M, where N > 0, M > 0, and N > M, then there exists some positive integer k for which kM > N. This means that for sufficiently many chickens, I would choose the chickens over my grandmother. That is the incorrect answer. Something has to change. Setting M = 0 is easiest and most consistent with my moral intuitions, and leads to correct results in all choices involving humans. (Of course we might have other motivations for choosing a different path, such as abandoning real-valued utilities or abandoning additive aggregation.) Now here, I am not actually sure what you're saying. Could you clarify? What theory?

Automation could reduce the cost of hiring.

Take Uber, Sidecar, and Lyft as examples. I can't find any data, but anecdotally these services appear to reduce the cost, and increase the wages, for patrons and drivers respectively by between 20 and 50%, with increased convenience for both. You know it's working when entrenched, competing sectors of the industry are protesting and lobbying.

Eliezer's suggestion about forgotten industries (maids and butlers) seems much more on point if automatic markets can remove hiring friction. Ride sharing has a rapidly... (read more)

8gwern
Er... mail-order is old. Very old. Like, ordinary farmers in the Midwest would easily send off to Montgomery Ward orders for hugely expensive things like farming equipment. http://en.wikipedia.org/wiki/Mail_order#Ward:_mail_order_pioneer : Unless of course you were putting your emphasis on 2-day mailing. I suspect Ward couldn't sell you a new house and deliver it in 2 days.
7So8res
Let's. I'm on the east coast until Aug 11. Perhaps we can meet up after work on the week of the 12th. (Context for others: The two of us met briefly at a meetup in June and exchanged usernames, but haven't spoken much.)

"What is the part of me that is preventing me from moving forward worried about?"

Be careful not to be antagonistic about the answer. The goal is to make that part of you less worried, thus making you more productive overall, not just on your blocked task. The roadblock is telling you something that you haven't yet explicitly acknowledged, so acknowledge it, thank it, incorporate it into your thinking, and resolve it.

Example: "I'm not smart enough to solve this math problem." Worry: "I would need to learn a textbook's worth of mat... (read more)

Does anybody have any data or reasoning that tracks the history of the relative magnitude of ideal value of unskilled labor versus ideal minimum cost of living? Presumably this ratio has been tracking favorably, even if in current practical economies the median available minimum wage job is in a city with a dangerously tight actual cost of living.

What I'd like to understand is, outside of minimum wage enforcement and solvable inefficiencies that affect the cost of basic goods, how much more economic output does an unskilled worker have over the cost what ... (read more)

Conversely, it is also good to limit reading about what other people are grateful for, especially if you're feeling particularly ungrateful and they have things you don't. Facebook is a huge offender here, because people tend to post about themselves when they're doing well, rather than when they're needing support. Seeing other people as more happy than they are leaves you wondering why you aren't as happy as they are. It also feeds the illusion that others do not need your help.

Vaniver110

Facebook is a huge offender here, because people tend to post about themselves when they're doing well, rather than when they're needing support.

My suspicion is that people are more likely to be specific in positive than negative comments. "Vaguebooking," even if you know it represents serious pain, doesn't give you as vivid an image as someone celebrating a new job.

-2David_Gerard
Could be just your group of friends. I have many Facebook associates who post just like that, but others whine and yet others have problems they're seeking advice on, etc.

Doesn't the act of combining many outside views and their reference classes turn you into somebody operating on the inside view? This is to say, what is the difference between this and the type of "inside" reasoning about a phenomenon's causal structure?

Is it that inside thinking involves the construction of new models whereas outside thinking involves comparison and combination of existing models? From a machine intelligence perspective, the distinction is meaningless. The construction of new models is the extension of old models, albeit mo... (read more)

0Eliezer Yudkowsky
Yes. So does the act of selecting just one outside view as your Personal Favorite, since it gives the right answer.