Comment author: Sniffnoy 27 July 2010 07:57:34PM 3 points

I may be repeating what Vladimir said here, but it seems to me your objection is basically "Oh shit! We can diagonalize!" (Which, if we then collapse the levels, can get us a Berry paradox, among others...)

So, yes, it follows that for any system of description we can think of, there's some potential truth its corresponding "universal prior" (question: do those exist in general?) won't be able to infer. But the fact that this applies to any such system means we can't use it as a criterion to decide between them. At some point we have to just stop and say, "No, you are not allowed to refer to this concept in formulating descriptions." Maybe computability isn't the best one, but you don't seem to have actually given evidence that would support any other such system over it.
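
For concreteness, here is a minimal Python sketch of the diagonalization move; the three predictors are hypothetical toy stand-ins for "systems of description," and cycling through them makes each one err infinitely often:

```python
# Toy diagonalization: given a countable family of predictors, construct a
# bit sequence that defeats each of them. The predictors below are made-up
# examples, not any real induction scheme.

def diagonal_sequence(predictors, length):
    """At step n, ask predictor n (mod len) for its guess given the prefix
    seen so far, then emit the opposite bit."""
    seq = []
    for n in range(length):
        guess = predictors[n % len(predictors)](seq)
        seq.append(1 - guess)
    return seq

predictors = [
    lambda prefix: 0,                            # always guesses 0
    lambda prefix: 1,                            # always guesses 1
    lambda prefix: prefix[-1] if prefix else 0,  # guesses the last bit repeats
]

print(diagonal_sequence(predictors, 9))  # -> [1, 0, 1, 1, 0, 1, 1, 0, 1]
```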

Or am I just missing something big here?

Comment author: Dre 28 July 2010 07:10:24AM 1 point

The thing I got out of it was that human brain processes appear to be able to do something (assign a nonzero probability to a non-computable universe) that our current formalization of general induction cannot do, and we can't really explain why.

Comment author: Mass_Driver 01 July 2010 05:58:45AM 2 points

Roko, do you think you could lay out, in some detail, the argument for why rational people should busy themselves with getting rich? I'm familiar with some of the obvious arguments at a basic level (entrepreneurship is usually win-win, money can be used to help fund or attract attention for just about any other project or argument you care to have succeed, getting rich should be relatively easy in a world full of both arbitrage opportunities and irrational people), but still don't quite find them convincing.

Comment author: Dre 01 July 2010 06:53:35AM 3 points

As I understand it, it is a comparative advantage argument. More rational people are likely to have a comparative advantage at making money compared to less rational people, so the utility-maximizing setup is for the more rational people to make money and pay the less rational people to do the day-to-day work of implementing the charitable organization. That's the basic form of the argument, at least.
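
A toy calculation makes the comparative-advantage point concrete; all the wage numbers here are made up for illustration:

```python
# Hypothetical wages: what the rational person can earn on the market vs.
# what it costs to hire someone for the hands-on charity work.
market_wage = 200   # $/hour the skilled person can earn (assumed)
charity_wage = 20   # $/hour to hire hands-on charity labor (assumed)

hours = 10
hours_working_directly = hours                               # work done in person
hours_funded_by_earning = hours * market_wage / charity_wage # work paid for instead

print(hours_working_directly)   # 10 hours of hands-on work
print(hours_funded_by_earning)  # 100 hours of hands-on work: a 10x multiplier
```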

Comment author: timtyler 30 June 2010 02:59:56PM 1 point

Re: "In MWI one maximizes the fraction of future selves experiencing good outcomes."

Note that the MWI is physics - not morality, though.

Comment author: Dre 01 July 2010 06:18:50AM 1 point

You are right, I should have said something like "implementing MWI over some morality."

Comment author: Dre 25 June 2010 08:13:08PM 8 points

I don't think MWI is analogous to creating extra simultaneous copies. In MWI one maximizes the fraction of future selves experiencing good outcomes. I don't care about parallel selves, only future selves. As you say, looking back at my self-tree I see a single path, and looking forward I have expectations about future copies, but looking sideways just sounds like daydreaming, and I don't place a high marginal value on that.
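
One way to read "maximize the fraction of future selves experiencing good outcomes" is as branch-measure-weighted utility over forward branches only; a minimal sketch with made-up weights and utilities:

```python
# Hypothetical future branches for two plans: (branch weight, outcome utility).
# Weights stand in for the measures of the branches and sum to 1 per plan;
# sideways (parallel) selves simply never enter the sum.
branches_plan_a = [(0.7, 10), (0.3, -5)]
branches_plan_b = [(0.5, 20), (0.5, -20)]

def branch_weighted_utility(branches):
    """Measure-weighted utility over future selves."""
    return sum(weight * utility for weight, utility in branches)

print(branch_weighted_utility(branches_plan_a))  # 5.5 -> prefer plan A
print(branch_weighted_utility(branches_plan_b))  # 0.0
```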

Comment author: Douglas_Knight 10 June 2010 10:40:08PM * 3 points

People who know a little bit of statistics - enough to use statistical techniques, not enough to understand why or how they work - often end up horribly misusing them.

How often do people harm themselves with statistics, rather than further their goals through deception? Scientists data-mining get publications; financiers get commissions; reporters get readers.

ETA: the people who are fooled are harming themselves with statistics. But I think the people who want to understand for themselves generally only use statistics that they understand.
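
The data-mining failure mode is easy to simulate; a minimal sketch (fair coins standing in for null hypotheses, a normal approximation for the p-value, all parameters arbitrary):

```python
import math
import random

random.seed(0)

def two_sided_p(heads, n):
    """Normal-approximation two-sided p-value for 'is this coin fair?'."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    return math.erfc(abs(z) / math.sqrt(2))

n_tests, n_flips = 100, 100
false_positives = sum(
    two_sided_p(sum(random.random() < 0.5 for _ in range(n_flips)), n_flips) < 0.05
    for _ in range(n_tests)
)

# Roughly 5 of 100 perfectly fair coins will look "significant" at p < 0.05,
# so testing many hypotheses and reporting only the hits manufactures results.
print(f"{false_positives}/{n_tests} fair coins gave p < 0.05")
```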

Comment author: Dre 11 June 2010 01:25:21AM 1 point

There is also an opportunity cost to using statistics poorly rather than properly. The cost may be purely an externality (the person running the test may actually benefit more from the deception), but overall the world would be better off if all statistics were used correctly.

Comment author: Douglas_Knight 05 June 2010 03:03:21PM 1 point

"This isn't a bug in CEV, it's a bug in the universe. Once the majority of conscious beings are Dr. Evil clones, then Dr. Evil becomes a utility monster and it gets genuinely important to give him what he wants."

I think that's wrong. At the very least, I don't think it matches the scenario in the post. In particular, I think "how many people are there?" is a factual question, not a moral question. (and the answer is not an integer)

Comment author: Dre 05 June 2010 03:11:40PM 0 points

But the important (and moral) question here is "how do we count the people for utility purposes?" We also need a normative way to aggregate their utilities, and one vote per person would need to be justified separately.
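
A toy aggregation (all utilities made up) shows how much turns on the counting rule:

```python
# Hypothetical stakes: each of Dr. Evil's clones gains 1 from his plan,
# each ordinary person loses 1. The normative question is the weight per clone.
n_clones, n_others = 1000, 100
clone_gain, other_loss = 1.0, -1.0

per_capita = n_clones * clone_gain + n_others * other_loss  # each clone counts fully
merged     = 1 * clone_gain + n_others * other_loss         # near-identical clones count once

print(per_capita)  # 900.0: the plan looks obligatory
print(merged)      # -99.0: the plan looks terrible
```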

Comment author: Dre 22 May 2010 06:19:38AM 3 points

I don't know game theory very well, but wouldn't this only work as long as not everyone did it? Using the car example, if these contracts were common practice, you could have one for $4000 and the dealer could have one for $5000, in which case you could not reach the Pareto optimum.
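
A minimal sketch of the clash (prices from the car example; the predicate and the $3500 figure are hypothetical):

```python
def trade_possible(buyer_cap, seller_floor):
    """A deal exists only if some price satisfies both commitments."""
    return buyer_cap >= seller_floor

# One-sided commitment: buyer binds himself to pay at most $4000, and a
# flexible dealer would accept anything above $3500 -> deal at $4000.
print(trade_possible(4000, 3500))  # True: buyer captures the surplus

# Duelling commitments: buyer bound at $4000, dealer bound at $5000.
# Every mutually beneficial price is now forbidden to one side.
print(trade_possible(4000, 5000))  # False: the Pareto improvement is destroyed
```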

In general, doesn't this regress infinitely up meta levels? Adopting precommitments is beneficial, so everyone adopts them; then pre-precommitments are beneficial... (up to some constraint from reality, like being too young, although then parents might become involved)

Is this (like some of Schelling's stuff I've read) more instrumental than pure game theory? I can see how this would work in the real world, but I'm not sure that it would work in theory. (Please feel free to correct any and all of my game theory)

Comment author: Oscar_Cunningham 10 May 2010 05:50:06PM * 2 points

Surely the incentive to build an AGI is so great that additional incentives are somewhat meaningless?

Comment author: Dre 10 May 2010 08:59:01PM 0 points

I think the majority of people don't evaluate AGI incentives rationally, especially failing to see its full possibilities, whereas this is an easy-to-imagine benefit.

Comment author: cupholder 29 April 2010 03:33:15AM * 1 point

I avoided this problem by using a hard-to-Google pseudonym, figuring that I could always make a new account or just stop posting if I majorly screwed up. I don't know if pseudonymity alone would reassure other lurkers, though; framing it as fictional roleplaying might be more useful for people who aren't me.

ETA: perhaps adding a reminder to the FAQ that pseudonymity is acceptable would help? And linking the FAQ more prominently.

Comment author: Dre 29 April 2010 03:51:30AM 2 points

Personally, pseudonymity wasn't that helpful. It's not that I didn't want to risk my good name or something, as much as that I just didn't want to be publicly wrong among intelligent people. Even if people didn't know that the comment was from me per se, they were still (hypothetically) disagreeing with my ideas, and I would still know that the post was mine. For me it was more hyperbolic discounting than rational cost-benefit analysis.

Comment author: Alicorn 27 April 2010 09:07:17PM 5 points

This sounds like fun; I'm not sure how useful it would be, but it might be fun enough to warrant trying even if it probably won't help.

Comment author: Dre 29 April 2010 02:41:57AM 1 point

As a semi-lurker, this likely would have been very helpful for me. One problem I had was the lack of an introduction to posting. You can read everything, but it's hard to learn how to post well without practice. As others have remarked, bad posts get smacked down fairly hard, which makes it hard for people to get practice... a vicious cycle. Having this could create an area where people who are not confident enough to post to the full site could get practice and confidence.
