Comment author: amit 05 August 2012 05:28:14PM *  0 points [-]

Our values determine our beliefs

I don't think the ugly duckling theorem (i.e. the observation that any pair of elements from a finite set share exactly half of the powerset elements that each belongs to) goes far towards proving that "our values determine our beliefs". Some offhand reasons why I think that:

  • It should be more like "our values determine our categories".
  • There's still solomonoff induction.
  • It seems like people with different values should still be able to have a bona fide factual disagreement that's not just caused by their differing values.
  • It could be true in a theoretical sense but have little bearing on beliefs, values and disagreements in an everyday human context.
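(The combinatorial fact itself is easy to check directly; here's a quick sketch for a 4-element set, where each element belongs to 2^(n-1) subsets and any pair shares 2^(n-2) of them:)

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

elements = {"a", "b", "c", "d"}
subsets = powerset(elements)

# Subsets containing a given element: 2^(n-1) of them.
containing_a = [t for t in subsets if "a" in t]
# Subsets containing both elements of a pair: 2^(n-2), i.e. exactly half.
containing_ab = [t for t in containing_a if "b" in t]

assert len(containing_a) == 2 ** (len(elements) - 1)   # 8
assert len(containing_ab) == len(containing_a) // 2    # 4
```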

(And even if we grant something like that, I see no reason to think that a "philosopher's mindset" would make you lean towards religion (for one thing, because I don't know any convincing philosophical arguments for religious propositions).)

Comment author: Will_Newsome 22 June 2012 12:54:11PM *  -2 points [-]

Fortunately this isn't that common, but there is an occasional tendency by some prominent commenters to dismiss personal experience as anecdotes.

(On that note but totally unrelated to gay shit like "objectification": It's amazing how difficult it is to talk to someone sane, reasonable, intelligent, well-intentioned, honest, without obvious incentives to lie, &c. who reports an experience that, if it actually happened, could only be explained by psi. There are anecdotes where pseudo-explanations like "memory bias" just don't cut it—in order for you to confidently deny psi you have to confidently accuse them of lying, and in order to confidently accuse them of lying you have to have a significantly better model of human psychology than I do. I think not realizing that such people are in fact numerous is what kept me from even considering psi for Aumannesque reasons—like most LessWrong types I'd implicitly assumed all reports of psi were either fuzzy in their details such that cognitive biases were a defensible explanation, or were provided by people who were less than credible. Once you eliminate those two categories the skeptic is left with a lot of uncomfortable evidence just waiting to be examined. Of course the evidence will never be very communicable to a wide audience, per the law of conservation of trolling.)

Comment author: amit 22 June 2012 06:05:48PM *  1 point [-]

Of course the evidence will never be very communicable to a wide audience

Why not? First obvious way that comes to mind: take someone that the audience trusts to be honest and to judge people correctly, and have them go around talking to people who've had such experiences and report back their findings.

In response to comment by [deleted] on You're Calling *Who* A Cult Leader?
Comment author: private_messaging 22 June 2012 03:45:39PM *  5 points [-]

Posted on this before:

http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/5y64

The payoff calculation for an exploit is trivial: if every donor (flawed evaluations and all) diversifies among 5 charities, then the payoff for determining and using an exploit is 1/5 of the payoff when everyone pays the single 'top' one. Of course some things can go wrong with this; for instance, it may be easier to exploit just well enough to get into the top 5. That is why it is hard to do applied mathematics on this kind of topic: not a lot of data.

What I believe would happen if people were to adopt the 'choose the top charity, donate everything to it' strategy: since people are pretty bad at determining top charities, judge them by various proxies of performance, and make systematic errors in evaluation, most people would end up donating to some sort of superstimulus of caring, with which no one with truly the best intentions can compete (or can compete only by expending a lot of effort on imitating the superstimulus).

I once made a turret in a game that would shoot precisely where it expected you to be. Unfortunately, you could easily outsmart the turret's model of where you would be. Adding random noise to the bullet velocity dramatically increased the turret's lethality, even though under its model of your behaviour it was no longer shooting at the point with the highest expected damage. It is very common to add noise or fuzzy spread to eliminate the undesirable effects of a predictable systematic error. I believe one should likewise diversify among several of the subjectively 'best' charities: those within a range of the best comparable in size to the systematic error in the process of determining the best charity.
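The turret example can be put as a toy simulation (a sketch with made-up numbers; the dodge distance, blast radius, and Gaussian noise are all assumptions). A player who knows the turret's model sidesteps the predicted point by more than the blast radius, so the deterministic turret always misses, while the noisy one sometimes hits:

```python
import random

random.seed(0)

HIT_RADIUS = 1.0   # a shot hits if it lands within this distance of the player
DODGE = 1.5        # the player sidesteps this far from the predicted point

def hit_rate(noise_sigma, trials=10_000):
    hits = 0
    for _ in range(trials):
        predicted = 0.0                     # turret's model of where you'll be
        actual = predicted + DODGE          # you exploit the model and sidestep
        shot = predicted + random.gauss(0.0, noise_sigma)
        if abs(shot - actual) <= HIT_RADIUS:
            hits += 1
    return hits / trials

print(hit_rate(0.0))   # deterministic turret: never hits the sidestepping player
print(hit_rate(1.0))   # noisy turret: a substantial fraction of shots land
```

With these numbers the noisy turret hits roughly 30% of the time against an opponent the deterministic turret can never hit.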

Comment author: amit 22 June 2012 05:11:29PM *  5 points [-]

From this list

It follows from the assumption that you're not Bill Gates, don't have enough money to actually shift the marginal expected utilities of the charitable investment, and that charities themselves do not operate in an efficient market for expected utilons, so that the two top charities do not already have marginal expected utilities in perfect balance.

the assumption whose violation your argument relies on is that you don't have enough money to shift the marginal expected utilities, where "you" are taken to control the choices of all the donors who choose in a sufficiently similar way. Given the right assumptions about the initial marginal expected utilities and about how more money would change the marginal (expected) utilities, I would agree that this assumption might sometimes be violated, so this doesn't look like an entirely frivolous objection to a naively construed strategy of "give everything to your top charity".
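The coordination point can be illustrated with a toy model (the square-root utility curves and all the numbers are assumptions for illustration, not anything from the thread). If utilities have diminishing returns and "you" control a large coordinated pool of donations, concentrating everything on the marginally-best charity does worse than splitting so that marginal utilities equalize:

```python
import math

a = [1.0, 0.9]        # charity 0 looks slightly better at the margin
pool = 1_000_000.0    # total given by all donors who reason alike

def utility(x0, x1):
    """Assumed diminishing-returns utility of donations (x0, x1)."""
    return a[0] * math.sqrt(x0) + a[1] * math.sqrt(x1)

all_to_top = utility(pool, 0.0)

# Optimal split equalizes marginal utilities:
#   a0 / (2*sqrt(x0)) = a1 / (2*sqrt(x1))  =>  x0/x1 = (a0/a1)^2
r = (a[0] / a[1]) ** 2
x0 = pool * r / (1 + r)
split = utility(x0, pool - x0)

print(all_to_top < split)  # True: a large coordinated pool should diversify
```

For a single small donor the same curves are locally flat, so "give everything to the top charity" is still approximately optimal; the conclusion flips only when the pool is large enough to move the margins.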

(BTW, it's not clear to me why mistrust in your ability to evaluate the utility of donations to different charities should end up balancing out to produce very close expected utilities. It would seem to have to involve something like Holden's normal distribution for charity effectiveness, or something else that makes it so that whenever large utilities are involved, the corresponding probabilities are necessarily correspondingly small.)

(edit: quickly fixed some errors)

In response to Computation Hazards
Comment author: amit 16 June 2012 08:35:23AM *  0 points [-]

An example of a computation that runs most algorithms is a mathematical formalism called Solomonoff induction.

Solomonoff induction is uncomputable, so it's not a computation. The sentence would be correct if you had written:

An example of a computation that runs most algorithms could be some program that approximates a mathematical formalism called Solomonoff induction.

Also, strictly speaking no real-world computation could run "most" algorithms, since there are infinitely many and it could only run a finite number. It would make more sense to use an expression like "computations that search through the space of all possible algorithms".
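Such a search is usually formalized as a dovetailer: enumerate programs and interleave their execution so that every program eventually gets arbitrarily much runtime. A toy sketch (with a trivial stand-in for the n-th program, since actually decoding program indices is beside the point):

```python
def program(n):
    """Toy stand-in for the n-th program: a generator that yields its steps."""
    i = 0
    while True:
        yield (n, i)   # "output" of program n at step i
        i += 1

def dovetail(stages):
    """At stage k, start program k and run one step of every started program."""
    running = []
    trace = []
    for k in range(stages):
        running.append(program(k))
        for p in running:
            trace.append(next(p))
    return trace

trace = dovetail(4)
# stage 0: (0,0); stage 1: (0,1),(1,0); stage 2: (0,2),(1,1),(2,0); ...
```

At any finite time only finitely many programs have been started, which is the point above: the full space is covered only in the limit.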

In response to Computation Hazards
Comment author: amit 16 June 2012 08:04:05AM *  0 points [-]

A function that could evaluate an algorithm and return 0 only if it is not a person is called a nonperson predicate. Some algorithms are obviously not people. For example, any algorithm whose output is repeating with a period less than gigabytes...

Is this supposed to be about avoiding algorithms that simulate suffering people, or about avoiding them doing something dangerous to the outside world? Obviously an algorithm could simulate a person while still having a short output, so I'm thinking it has to be the second. But then the notion of a nonperson predicate doesn't apply, because that is about avoiding simulating people (who might suffer, and who will die when the simulation ends). Also, a dangerous algorithm could probably do some serious damage with under a gigabyte of output, so having less than a gigabyte of output doesn't really protect you from anything.

Comment author: amit 02 June 2012 12:09:32PM *  2 points [-]

So you're searching for "the most important thing", and reason that this is the same as searching for some utility function. Then you note that one reason this question seems worth thinking about is that it's interesting, refer to Schmidhuber's definition of interestingness (which would yield a utility function), and note that it is itself interesting; so maybe importance is the same as interestingness, because importance has to be itself important, and (Schmidhuberian) interestingness satisfies this requirement by being itself interesting.

At this point I'm not very impressed. This seems to be the same style of reasoning that gets people obsessed with "complexity" or "universal instrumental values" as the ultimate utility functions.

At the end you say you doubt that interestingness is the ultimate utility function, too, but apparently you still think engaging in this style of reasoning is a good idea; we just have to take it even further.

At this point I'm thinking that it could go either way: you could come up with an interesting proposal in the class of CEV or "Indirect Normativity", which definitely are in some sense the result of going meta about values, or you could come up with something that turns out to be just another fake utility function in the class of "complexity" and "universal instrumental values".

In response to That Alien Message
Comment author: cousin_it 11 May 2012 09:21:10AM *  1 point [-]

The inferred physics of the 3+2 universe is not fully known, at this point; but it seems sure to allow for computers far more powerful than our quantum ones.

(...)

Then we cracked their equivalent of the protein folding problem over a century or so

Nope. If their physics has more computational power than ours, we can't solve their protein folding. And we can't even make efficient inferences about their universe. Anyone care to come up with a more realistic scenario?

Comment author: amit 11 May 2012 04:24:01PM 2 points [-]

But their proteins aren't necessarily making use of the extra computational power. And we can imagine that the physics of our universe allows for super powerful computers, but we can still obviously make efficient inferences about our universe.

Comment author: Will_Newsome 05 April 2012 11:33:22PM *  9 points [-]

(Not sure where to put this:) Yvain's position doesn't seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you've never heard of quantum suicide or where for whatever reason you thought it was a stupid idea. Anticipating ending up in a world with basically no measure just doesn't make sense: you're literally making yourself counterfactual. If you decided to carve up experience space into bigger chunks of continuity then this problem goes away, but most people agree that (as Katja put it) "anthropics makes sense with shorter people". Suicide only makes sense if you want to shift your experience backwards in time or into other branches, not in order to have extremely improbable experiences. I mean, that's why those branches are extremely improbable: there's no way you can experience them, quantum suicide or no.

Comment author: amit 16 April 2012 12:32:54AM 2 points [-]

You're not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)

Comment author: Wei_Dai 15 April 2012 10:42:13PM 7 points [-]

Part of my concern about Eliezer trying to build FAI also stems from his treatment of metaethics. Here's a caricature of how his solution looks to me:

Alice: Hey, what is the value of X?

Bob: Hmm, I don't know. Actually I'm not even sure what it means to answer that question. What's the definition of X?

Alice: I don't know how to define it either.

Bob: Ok... I don't know how to answer your question, but what if we simulate a bunch of really smart people and ask them what the value of X is?

Alice: Great idea! But what about the definition of X? I feel like we ought to be able to at least answer that now...

Bob: Oh that's easy. Let's just define it as the output of that computation I just mentioned.

Comment author: amit 16 April 2012 12:08:27AM *  1 point [-]

I thought the upshot of Eliezer's metaethics sequence was just that "right" is a fixed abstract computation, not that it's (the output of) some particular computation that involves simulating really smart people. CEV is not even mentioned in the sequence (EDIT: whoops it is.).

(Indeed, just saying that it's a fixed abstract computation is at the right level of abstraction to qualify as metaethics; saying that it's some particular computation would be more like just plain ethics. The upshot does feel kind of underwhelming and obvious. This might be because I just don't remember how confusing the issue looked before I read those posts. It could also mean that Eliezer's claim that metaethics is a solved problem is not as questionable as it might seem. And it could also mean that metaethics being solved doesn't constitute as massive progress as it might seem.)

Comment author: Will_Newsome 06 April 2012 01:47:56AM *  6 points [-]

I guess I didn't clearly state the relevant hypothesis. The hypothesis is that the stars aren't real; they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter puzzle (Fermi's paradox) and would imply that if we build an AI, that doesn't necessarily mean it'll get to eat all the stars. If the stars are out there, we should pluck them, but are they out there? They're like a stack of twenties on the ground, and it seems plausible they've already been plucked without our knowing. Maybe my previous comment will make more sense now. I'm wondering whether your focus on eating all the galaxies is because you think the galaxies actually haven't already been eaten, or because, even if it's probable that they've already been eaten and our images of them are an illusion, most of the utility we can get is still concentrated in worlds where the galaxies haven't been eaten, so we should focus on those worlds. (This is sort of orthogonal to the simulation argument, because it doesn't require that our metaphysical ideas about how simulations work make sense; the mechanism for the illusion works by purely physical means.)

Comment author: amit 15 April 2012 09:05:14PM 3 points [-]

The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter paradox (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars.

If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?
