Comment author: amit 05 August 2012 05:28:14PM *  0 points [-]

Our values determine our beliefs

I don't think the ugly duckling theorem (i.e. the observation that any pair of elements from a finite set share exactly half of the powerset elements that they belong to) goes far towards proving that "our values determine our beliefs". Some offhand reasons why I think that:

  • It should be more like "our values determine our categories".
  • There's still Solomonoff induction.
  • It seems like people with different values should still be able to have a bona fide factual disagreement that's not just caused by their differing values.
  • It could be true in a theoretical sense but have little bearing on beliefs, values and disagreements in an everyday human context.
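The combinatorial observation itself is at least easy to verify directly. A quick sketch (my own illustration, using a 4-element set as a stand-in for any finite set):

```python
from itertools import combinations

# For a finite set S and any two distinct elements x, y:
# of the subsets of S that contain x, exactly half also contain y.
S = {"a", "b", "c", "d"}
subsets = [set(c) for r in range(len(S) + 1)
           for c in combinations(S, r)]

for x in S:
    for y in S - {x}:
        with_x = [t for t in subsets if x in t]
        with_both = [t for t in with_x if y in t]
        assert 2 * len(with_both) == len(with_x)

print("checked all pairs over", len(subsets), "subsets")  # 16 subsets for |S| = 4
```

The symmetry is exact: for |S| = n, there are 2^(n-1) subsets containing x, of which 2^(n-2) also contain y, for any distinct pair.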

(And even if we grant something like that, I see no reason to think that a "philosopher's mindset" would make you lean towards religion (because I don't know any convincing philosophical arguments for religious propositions, for one).)

Comment author: Will_Newsome 22 June 2012 12:54:11PM *  -2 points [-]

Fortunately this isn't that common but there is an occasional tendency by some prominent commenters to dismiss personal experience as anecdotes.

(On that note but totally unrelated to gay shit like "objectification": It's amazing how difficult it is to talk to someone sane, reasonable, intelligent, well-intentioned, honest, without obvious incentives to lie, &c. who reports an experience that, if it actually happened, could only be explained by psi. There are anecdotes where pseudo-explanations like "memory bias" just don't cut it—in order for you to confidently deny psi you have to confidently accuse them of lying, and in order to confidently accuse them of lying you have to have a significantly better model of human psychology than I do. I think not realizing that such people are in fact numerous is what kept me from even considering psi for Aumannesque reasons—like most LessWrong types I'd implicitly assumed all reports of psi were either fuzzy in their details such that cognitive biases were a defensible explanation, or were provided by people who were less than credible. Once you eliminate those two categories the skeptic is left with a lot of uncomfortable evidence just waiting to be examined. Of course the evidence will never be very communicable to a wide audience, per the law of conservation of trolling.)

Comment author: amit 22 June 2012 06:05:48PM *  1 point [-]

Of course the evidence will never be very communicable to a wide audience

Why not? First obvious way that comes to mind: take someone that the audience trusts to be honest and to judge people correctly and have them go around talking to people who've had experiences and report back their findings.

In response to Computation Hazards
Comment author: amit 16 June 2012 08:35:23AM *  0 points [-]

An example of a computation that runs most algorithms is a mathematical formalism called Solomonoff induction.

Solomonoff induction is uncomputable, so it's not a computation. It would be correct if you had written:

An example of a computation that runs most algorithms could be some program that approximates a mathematical formalism called Solomonoff induction.

Also, strictly speaking no real-world computation could run "most" algorithms, since there are infinitely many and it could only run a finite number. It would make more sense to use an expression like "computations that search through the space of all possible algorithms".
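The "search through the space of all possible algorithms" phrasing can be made concrete with the standard dovetailing trick: interleave ever more programs for ever more steps, so that every program eventually gets an unbounded step budget, while at any finite time only finitely many programs have been run at all. A toy sketch (the `make_prog` helper and the step-budget interface are my own illustrative stand-ins, not anything from the post):

```python
def dovetail(programs, stages):
    """Stage n runs each of the first n programs with a budget of n steps.
    Every program eventually gets arbitrarily large budgets, yet after any
    finite number of stages only finitely many programs have been touched."""
    results = {}
    for n in range(1, stages + 1):
        for i in range(min(n, len(programs))):
            out = programs[i](n)  # run program i with budget n
            if out is not None:
                results[i] = out
    return results

def make_prog(halting_time, output):
    # Toy stand-in for a program: halts with `output` once its budget
    # reaches `halting_time`; otherwise reports "still running" (None).
    return lambda budget: output if budget >= halting_time else None

progs = [make_prog(t, t * t) for t in (1, 4, 2)]
print(dovetail(progs, 4))  # → {0: 1, 2: 4, 1: 16}
```

With infinitely many enumerable programs, the same loop runs forever and visits each one unboundedly often, which is the honest version of "running most algorithms".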

In response to Computation Hazards
Comment author: amit 16 June 2012 08:04:05AM *  0 points [-]

A function that could evaluate an algorithm and return 0 only if it is not a person is called a nonperson predicate. Some algorithms are obviously not people. For example, any algorithm whose output is repeating with a period less than gigabytes...

Is this supposed to be about preventing the algorithms from simulating suffering people, or about preventing them from doing something dangerous to the outside world? An algorithm could obviously simulate a person while still having a short output, so I'm thinking it has to be the second one. But then the notion of nonperson predicates doesn't apply, because it's about avoiding simulating people (who might suffer, and who will die when the simulation ends). Also, a dangerous algorithm could probably do serious damage with well under a gigabyte of output, so having less than a gigabyte of output doesn't really protect you from anything.
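For concreteness, the quoted criterion could be rendered as a toy predicate like the following (the name and interface are my own invention for illustration). Note what it certifies is a property of the output alone, which is exactly the gap pointed out above: a program with trivially periodic output can still have simulated a person internally.

```python
def output_is_periodic(output: bytes, max_period: int) -> bool:
    """True iff `output` is a repetition of some block of length
    <= max_period. Certifies triviality of the *output* only; it
    says nothing about the internal computation that produced it."""
    for p in range(1, min(max_period, len(output)) + 1):
        if all(output[i] == output[i % p] for i in range(len(output))):
            return True
    return False

assert output_is_periodic(b"abcabcabc", max_period=4)
assert not output_is_periodic(b"abcabcabx", max_period=4)
```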

Comment author: amit 02 June 2012 12:09:32PM *  2 points [-]

So you're searching for "the most important thing", and reason that this is the same as searching for some utility function; then you note that one reason this question seems worth thinking about is that it's interesting; then you refer to Schmidhuber's definition of interestingness (which would yield a utility function) and note that it is itself interesting; so maybe importance is the same as interestingness, because importance has to be itself important, and (Schmidhuberian) interestingness satisfies this requirement by being itself interesting.

At this point I'm not very impressed. This seems to be the same style of reasoning that gets people obsessed with "complexity" or "universal instrumental values" as the ultimate utility functions.

At the end you say you doubt that interestingness is the ultimate utility function too, but apparently you still think engaging in this style of reasoning is a good idea; we just have to take it even further.

At this point I'm thinking that it could go either way: you could come up with an interesting proposal in the class of CEV or "Indirect Normativity", which definitely are in some sense the result of going meta about values, or you could come up with something that turns out to be just another fake utility function in the class of "complexity" and "universal instrumental values".

In response to That Alien Message
Comment author: cousin_it 11 May 2012 09:21:10AM *  1 point [-]

The inferred physics of the 3+2 universe is not fully known, at this point; but it seems sure to allow for computers far more powerful than our quantum ones.

(...)

Then we cracked their equivalent of the protein folding problem over a century or so

Nope. If their physics has more computational power than ours, we can't solve their protein folding. And we can't even make efficient inferences about their universe. Anyone care to come up with a more realistic scenario?

Comment author: amit 11 May 2012 04:24:01PM 2 points [-]

But their proteins aren't necessarily making use of the extra computational power. And we can imagine that the physics of our universe allows for super powerful computers, but we can still obviously make efficient inferences about our universe.

Comment author: Will_Newsome 05 April 2012 11:33:22PM *  9 points [-]

(Not sure where to put this:) Yvain's position doesn't seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you've never heard of quantum suicide or where for whatever reason you thought it was a stupid idea. Anticipating ending up in a world with basically no measure just doesn't make sense: you're literally making yourself counterfactual. If you decided to carve up experience space into bigger chunks of continuity then this problem goes away, but most people agree that (as Katja put it) "anthropics makes sense with shorter people". Suicide only makes sense if you want to shift your experience backwards in time or into other branches, not in order to have extremely improbable experiences. I mean, that's why those branches are extremely improbable: there's no way you can experience them, quantum suicide or no.

Comment author: amit 16 April 2012 12:32:54AM 2 points [-]

You're not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)

Comment author: Wei_Dai 15 April 2012 10:42:13PM 7 points [-]

Part of my concern about Eliezer trying to build FAI also stems from his treatment of metaethics. Here's a caricature of how his solution looks to me:

Alice: Hey, what is the value of X?

Bob: Hmm, I don't know. Actually I'm not even sure what it means to answer that question. What's the definition of X?

Alice: I don't know how to define it either.

Bob: Ok... I don't know how to answer your question, but what if we simulate a bunch of really smart people and ask them what the value of X is?

Alice: Great idea! But what about the definition of X? I feel like we ought to be able to at least answer that now...

Bob: Oh that's easy. Let's just define it as the output of that computation I just mentioned.

Comment author: amit 16 April 2012 12:08:27AM *  1 point [-]

I thought the upshot of Eliezer's metaethics sequence was just that "right" is a fixed abstract computation, not that it's (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).

(Indeed just saying that it's a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it's some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts. It could also mean that Eliezer claiming that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn't constitute as much progress as it might seem.)

Comment author: Will_Newsome 06 April 2012 01:47:56AM *  6 points [-]

I guess I didn't clearly state the relevant hypothesis. The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter paradox (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars. If the stars are out there, we should pluck them—but are they out there? They're like a stack of twenties on the ground, and it seems plausible they've already been plucked without our knowing. Maybe my previous comment will make more sense now. I'm wondering whether your reason for focusing on eating all the galaxies is that you think the galaxies actually haven't already been eaten, or whether it's that, even if they probably have been eaten and our images of them are an illusion, most of the utility we can get is still concentrated in worlds where the galaxies haven't been eaten, so we should focus on those worlds. (This is sort of orthogonal to the simulation argument because it doesn't necessitate that our metaphysical ideas about how simulations work make sense; the mechanism for the illusion works by purely physical means.)

Comment author: amit 15 April 2012 09:05:14PM 3 points [-]

The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter paradox (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars.

If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?

In response to comment by [deleted] on Our Phyg Is Not Exclusive Enough
Comment author: David_Gerard 14 April 2012 10:55:55PM 11 points [-]

As far as I can tell (low votes, some in the negative, few comments), the QM sequence is the least read of the sequences, and yet makes a lot of EY's key points used later on identity and decision theory. So most LW readers seem not to have read it.

Suggestion: a straw poll on who's read which sequences.

Comment author: amit 14 April 2012 11:41:37PM 5 points [-]

used later on identity

Yes.

and decision theory

No, as far as I can tell.
