Comment author: Sysice 12 October 2014 04:49:05PM *  2 points [-]

HPMOR is an excellent choice.

What's your audience like? A book club (presumed interest in books, but not significantly higher maturity or interest in rationality than baseline), a group of potential LW readers, some average teenagers?

The Martian (Andy Weir) would be a good choice for a book-club-level group- it's very entertaining to read and promotes useful values. Definitely not of the "awareness raising" genre, though.

If you think a greater-than-average number of them would be interested in rationality, I'd consider spending some time on Ted Chiang's work- he has only written short stories so far, but they're very well received, great to read, and bring up some very good points that I'd bet most of your audience hasn't considered.

Edit: Oh, also think about Speaker for the Dead.

Comment author: Sysice 24 September 2014 11:40:35PM *  8 points [-]

Giving What We Can recommends donating at least 10% of income. I currently donate what I can spare when I don't need the money, and have precommitted to donating 50% of my post-tax income in the event that I get a job paying over $30,000 a year (read: once I graduate college). The problem in your case is that you already have a steady income and have arranged your life around it- it's much easier to avoid raising expenses in response to new income than it is to lower expenses from an established income.

As EStokes said, however, the important thing isn't to get caught up in how much you should be donating in order to meet some moral requirement. It's to actually give in whatever way you, yourself, can. We all do what we can :)

Comment author: jimrandomh 24 September 2014 06:04:26PM 1 point [-]

Your problem setup contains a contradiction. You said that X and X*_i are identical copies, and then you said that they have different utility functions. This happened because you defined the utility function over the wrong domain; you specified it as (world-history, identity)=>R when it should be (world-history)=>R.
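
A minimal sketch of the distinction, as I read it (my own toy illustration in Python; the names and payoff values are made up, not from the post):

    # If utility is a function of the world-history alone, identical copies
    # necessarily share a single utility function:
    def u_shared(world_history: str) -> float:
        return 1.0 if world_history == "door opened" else 0.0

    # Defining utility over (world-history, identity) is what allows "identical"
    # copies to be assigned different values, which is the contradiction:
    def u_indexed(world_history: str, identity: str) -> float:
        return 1.0 if (world_history, identity) == ("door opened", "original") else 0.2

    print(u_shared("door opened"))               # same number for X and every X*_i
    print(u_indexed("door opened", "original"))  # 1.0
    print(u_indexed("door opened", "X*_1"))      # 0.2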

Comment author: Sysice 24 September 2014 11:34:10PM 6 points [-]

How I interpreted the problem: it's not that identical agents have different utility functions, it's just that different things happen to them. In reality, what's behind the door is behind the door, while the simulation rewards X* with something else. X* is only unaware of whether or not he's in a simulation before he presses the button- obviously, once he actually receives the utility, he can tell the difference. The fact that nobody else has stated this makes me unsure, though. OP, can you clarify a little bit more?

Comment author: gjm 24 September 2014 12:58:07PM 9 points [-]

So on the face of it it seems that the only accessible outcomes are:

  • original-X chooses "sim" and gets +1; all simulated copies also choose "sim" and get +0.2 (and then get destroyed?)
  • original-X chooses "not sim" and gets +0.9; no simulated copies are made

and it seems like in fact everyone does better to choose "sim" and will do so. This is also fairly clearly the best outcome on most plausible attitudes to simulated copies' utility, though the scenario asks us to suppose that X doesn't care about those.

I'm not sure what the point of this is, though. I'm not seeing anything paradoxical or confusing (except in so far as the very notion of simulated copies of oneself is confusing). It might be more interesting if the simulated copies get more utility when they choose "not sim" rather than less as in the description of the scenario, so that your best action depends on whether you think you're in a simulation or not (and then if you expect to choose "sim", you expect that most copies of you are simulations, in which case maybe you shouldn't choose "sim"; and if you expect to choose "not sim", you expect that you are the only copy, in which case maybe you should choose "sim").

I'm wondering whether perhaps something like that was what pallas intended, and the current version just has "sim" and "not sim" switched at one point...

Comment author: Sysice 24 September 2014 02:09:32PM *  19 points [-]

It's tempting to say that, but I think pallas actually meant what he wrote. Basically, hitting "not sim" gets you a guaranteed 0.9 utility, while hitting "sim" gets you an expected utility of about 0.2, approaching that value as the number of copies increases. Even though each person strictly prefers "sim" to "not-sim," and a CDT agent would choose "sim," it appears that choosing "not-sim" gets you more expected utility.

Edit: "not-sim" has higher expected utility for an entirely selfish agent who does not know whether he is simulated or not, because his choice affects not only his utility payout, but also acausally affects whether he is a simulation in the first place. Of course, this depends on my interpretation of anthropics.
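
A rough sketch of the arithmetic, under my interpretation (a toy Python example; it assumes N simulated copies are created iff everyone presses "sim", with the 1.0 / 0.2 / 0.9 payoffs from the scenario as described above):

    # Expected utility for a selfish agent who can't tell whether he is the
    # original or one of the N simulated copies.

    def eu_sim(n_copies: int) -> float:
        # Everyone presses "sim": the one original gets 1.0, each copy gets 0.2,
        # and you are the original with probability 1/(N+1).
        return (1.0 + 0.2 * n_copies) / (n_copies + 1)

    def eu_not_sim() -> float:
        # Everyone presses "not sim": no copies are ever made, so you are
        # certainly the original and receive 0.9.
        return 0.9

    for n in (1, 10, 1000):
        print(n, round(eu_sim(n), 3))   # 0.6, 0.273, 0.201 -> approaches 0.2
    print(eu_not_sim())                 # 0.9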

In response to CEV-tropes
Comment author: Sysice 23 September 2014 03:38:33AM 9 points [-]

Most of what I know about CEV comes from the 2004 Yudkowsky paper. Considering how many of his views have changed over similar timeframes, and how the paper states multiple times that CEV is a changing work in progress, this seems like a bad sign for my knowledge of the subject. Have there been any significant public changes since then, or are we still debating based on that paper?

Comment author: KatjaGrace 23 September 2014 02:53:51AM 1 point [-]

Why do you think the scale of the bias is unlikely to be more than a few decades?

Because the differences between estimates made by people who should be highly selected for optimism (e.g. AGI researchers) and people who should be much less so (other AI researchers, and more importantly but more noisily, other people) are only a few decades.

Comment author: Sysice 23 September 2014 03:30:27AM *  1 point [-]

I'm interested in your statement that "other people" have estimates that are only a few decades off from optimistic trends. Although not very useful for this conversation, my impression is that a significant portion of informed but uninvolved people place a <50% chance of significant superintelligence occurring within the century. For context, I'm a LW reader and a member of that personality cluster, but none of the people I am exposed to are. Can you explain why your contacts make you feel differently?

Comment author: HopefullyCreative 13 August 2014 12:44:24AM 1 point [-]

Suppose we created an AGI, the greatest mind ever conceived, and we created it to solve humanity's greatest problems. An ideal methodology for the AGI would be to ask for factories to produce physical components so it can copy itself over and over. The AGI then networks its copies all over the world, creating a global mind, and generates a horde of "mobile platforms" from which to observe, study, and experiment with the world for its designed purpose.

The "robbery" is not intentional; the machine is not trying to make mankind meaningless. It is merely meeting its objective of doing its utmost to find solutions to humanity's problems. The horror is that, as the machine mind expands by networking its copies together and sends its mobile platforms out into the world, human discovery and invention would eventually be dwarfed by this being. Setting aside the possibility of social and political forces destroying or dismantling the machine (quite likely), human beings would ultimately be faced with a problem: with the machine thinking of everything for us, and its creations doing all the hard work, we really have nothing to do. In order to have anything to do, we must improve ourselves to at the very least have a mind that can compete.

Basically, this is all a look at what the world would be like if our current AGI researchers did succeed in building their ideal machine, and what it would mean for humanity.

Comment author: Sysice 13 August 2014 07:22:34AM 2 points [-]

I don't disagree with you- this would, indeed, be a sad fate for humanity, and certainly a failed utopia. But the failing here is not inherent to the idea of an AGI that takes action on its own to improve humanity- it's a failure of one that doesn't do what we actually want it to do, a failure to actually achieve friendliness.

Speaking of what we actually want, I want something more like what's hinted at in the fun theory sequence than an AI that only slowly improves humanity over decades, which seems to be what you're talking about here. (Tell me if I misunderstood, of course.)

Comment author: Sysice 09 August 2014 04:05:21AM *  1 point [-]

The answer is, as always, "it depends." Seriously, though- I time-discount to an extent, and I don't want to stop entirely. I prefer more happiness to less, and I don't want to stop. (I don't care about the ending date, and I'm not sure why I would want to.) If a trade-off exists between the starting date, quality, and duration of a good situation, I'll prefer one situation over the other based on my utility function. A better course of action would be to try to get more information about my utility function, rather than debating which value is more sacred than the rest.

Comment author: Sysice 09 August 2014 04:07:42AM 0 points [-]

...Which, of course, this post also accomplishes. On second thought, continue!

Comment author: KnaveOfAllTrades 03 August 2014 06:09:16PM *  5 points [-]

If you donated because of this post, please choose the relevant option, and feel free to reply to this comment for upvotes. Note that if you prefer you can answer this poll anonymously.

(I'm interested in how much of the iceberg of donations is below the surface, since it helps get a better idea of the value of threads like this.)

Comment author: Sysice 04 August 2014 04:07:50AM *  4 points [-]

I've voted, but for the sake of clear feedback- I just made my first donation ($100) to MIRI, directly as a result of both this thread and the donation-matching. This thread alone would not have been enough, but I would not have found out about the donation-matching without this thread. I had no negative feelings about having this thread in my recent posts list.

Consider this a positive pattern reinforced :)
