Comment author: lukeprog 13 April 2011 02:47:02AM 5 points [-]

Good question! One way to achieve both things is to spend time anticipating relatively certain future pleasures while lowering your expectations about how complex (and thus uncertain) future events will play out.

Comment author: LukeStebbing 13 April 2011 05:54:09AM 2 points [-]

Good point, but since an accurate model of the future is helpful, this may be a case where you should purchase your warm fuzzies separately.

(Since people tend to make overly optimistic plans, the two strategies might be similar in practice.)

Comment author: PhilGoetz 09 April 2011 04:44:26PM 3 points [-]

As Eliezer pointed out, if it's fairness, then you probably have a curved but continuous utility function - and with the numbers involved, it has to be a curve specifically tailored to the example.

Comment author: LukeStebbing 09 April 2011 05:44:15PM 2 points [-]

Where did Eliezer talk about fairness? I can't find it in the original two threads.

This comment talked about sublinear aggregation, but there's a global variable (the temperature of the, um, globe). Swimmer963 is talking about personally choosing specks and then guessing that most people would behave the same. Total disutility is higher, but no one catches on fire.

If I were forced to choose between the two possible events, and if killing people for organs had no unintended consequences, I'd go with the utilitarian choice, with a side order of a severe permanent guilt complex.

On the other hand, if I were asked to accept the personal benefit, I would behave the same as Swimmer963 and with similar expectations. Interestingly, if people are similar enough that TDT applies, my personal decisions become normative. There's no moral dilemma in the case of torture vs specks, though, since choosing torture would result in extreme psychological distress times 3^^^3.
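The "curve specifically tailored to the example" point above can be made concrete with a toy calculation. All numbers below are illustrative, and 1e30 stands in for 3^^^3 (which no computer can represent): any unbounded aggregator, even a strongly concave one like a square root, eventually lets enough specks outweigh the torture; only a bounded aggregator keeps the speck total below it no matter how many people are involved.

```python
import math

TORTURE = 1e6   # disutility of 50 years of torture (illustrative units)
SPECK = 1e-6    # disutility of one dust speck, same units
N = 1e30        # stand-in for 3^^^3, which is far too large to represent

# Linear aggregation: total speck disutility dwarfs the torture.
linear = N * SPECK

# Concave but unbounded (square root): still exceeds the torture.
concave = math.sqrt(N) * SPECK

# Bounded aggregation: total speck disutility can never pass the cap,
# so torture always loses -- but the cap must sit below TORTURE,
# i.e. the curve is tailored to the example.
CAP = 1e3
bounded = CAP * (1 - math.exp(-N / 1e9))

print(linear > TORTURE, concave > TORTURE, bounded > TORTURE)
# → True True False
```

The takeaway is that mere concavity isn't enough to make specks preferable; the aggregation curve has to be bounded, with the bound placed below the torture's disutility.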

Comment author: LukeStebbing 01 April 2011 02:59:06AM 2 points [-]

I loved Erfworld Book 1, and a few months ago I was racking my brains for more rationalist protagonists, so I can't believe I missed that.

I was originally following it on every update, but there was a lull and I stopped reading for a while. When I started again, Book 1 was complete so I read it straight through from the beginning. As good as it was as serial fiction, it was even better as a book. Anyone else experience that?

Comment author: LukeStebbing 31 March 2011 08:49:59PM 0 points [-]

I'll be there.

Comment author: atucker 22 March 2011 10:44:55PM 0 points [-]

"My expectation is that the presently small fields of machine ethics and neuroscience of morality will grow rapidly and will come into contact, and there will be a distributed research subculture which is consciously focused on determining the optimal AI value system in the light of biological human nature."

Is SIAI working on trying to cause that?

It seems like it would do more good than harm, since it does a lot of work for FAI, and almost none for AI.

Comment author: LukeStebbing 27 March 2011 11:40:49PM 2 points [-]

Without speaking to its plausibility, I'm pretty happy with a scenario where we err on the side of figuring out FAI before we figure out seed AIs.

Comment author: LukeStebbing 23 March 2011 12:21:49AM 0 points [-]

I'll be there. Morgan_Catha: have an upvote!

Comment author: lukeprog 21 March 2011 07:56:50PM *  13 points [-]

I don't get it. When low-hanging fruit is covered on Less Wrong, it's considered useful stuff. When low-hanging fruit comes from mainstream philosophy, it supposedly doesn't help show that mainstream philosophy is useful. If that's what's going on, it's a double standard, and a desperate attempt to "show" that mainstream philosophy isn't useful.

Also, saying "Well, we already know about lots of mainstream philosophy that's useful" is direct support for the central claim of my original post: That mainstream philosophy can be useful and shouldn't be ignored.

Comment author: LukeStebbing 21 March 2011 08:14:51PM 2 points [-]

What's the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.

Comment author: NancyLebovitz 18 March 2011 12:16:46AM 20 points [-]

The fascinating thing about this situation is that Eliezer is about as high status here as it's possible for a human being to be in a non-religious group, and it's still extremely difficult for him to get people to take what he says about his experiences with food and exercise seriously.

Comment author: LukeStebbing 20 March 2011 10:53:16PM 1 point [-]

"it's still extremely difficult for him to get people to take what he says about his experiences with food and exercise seriously."

For how many people was it extremely easy?

I maintain a healthy weight with zero effort, and I have a friend for whom The Hacker's Diet worked perfectly. I thought losing weight was a matter of eating less than you burn.

Then I read Eliezer's two posts. Oops, I thought. There's no reason intake reduction has to work without severe and continuing side-effects.

Comment author: Mario 12 February 2011 06:10:56AM 1 point [-]

Sorry this is so late, but I honestly completely forgot about this after I wrote it, so I never came back to see what transpired.

Anyway, I'm aware of how the marginal propensity to consume affects tax incidence, but in this case, where payroll taxes apply to every employee at every business, the only choices involved are whether to work and whether to hire, and companies have far more leeway in that decision. You can avoid the fizzlesprot tax by consuming an untaxed equivalent or finding a different, fizzlesprotless sexual fetish. You can only avoid a payroll tax by being unemployed; in practice, I don't think there is such a thing as one's marginal job. By contrast, employers look at the tax as part of the cost of hiring an additional employee, and simply won't hire the marginal worker if his or her cost is above the expected benefit. I can't imagine a situation where any significant portion of a payroll tax (as opposed to the corporate income tax) falls on the employer, so I didn't bring it up.

In response to comment by Mario on Optimal Employment
Comment author: LukeStebbing 12 February 2011 07:23:48AM *  -1 points [-]

Hmm, and yet only two-thirds of the working age population chooses to work, and some of that is part-time, which reduces the amount of labor available to employers. Labor can also move between sectors, leaving some relatively starved of workers. People who accumulate enough savings can choose to retire early and have to be enticed back into the labor market with higher wages, if they can be enticed at all. That doesn't look like a fixed supply of working hours that must be sold at any price -- the supply looks somewhat elastic.
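For concreteness, the textbook partial-equilibrium result behind this can be sketched: the more inelastic side of the market bears the larger share of a small tax. The elasticity numbers below are invented for illustration, not actual labor-market estimates.

```python
def incidence_shares(supply_elasticity, demand_elasticity):
    """Split a small per-unit tax between sellers and buyers:
    the more inelastic side bears the larger share."""
    total = supply_elasticity + demand_elasticity
    sellers = demand_elasticity / total   # here: workers
    buyers = supply_elasticity / total    # here: employers
    return sellers, buyers

# Perfectly inelastic labor supply: workers bear the whole payroll tax.
print(incidence_shares(0.0, 1.5))   # → (1.0, 0.0)

# Somewhat elastic supply: employers bear part of it.
print(incidence_shares(0.5, 1.5))   # → (0.75, 0.25)
```

So the claim that no significant portion of a payroll tax falls on employers is equivalent to the claim that labor supply is (nearly) perfectly inelastic, which is what the two-thirds participation rate calls into question.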

Edit: Sorry about the tone in my original comment -- tax incidence doesn't seem to be common knowledge and I failed to consider that you might be aware of it already.

Comment author: Nornagest 11 February 2011 11:55:30PM 0 points [-]

I don't have the astrophysics background to say for sure, but if subjective time is a function of total computational resources and computational resources are a function of energy input, then you might well get more subjective time out of a highly luminous supernova precursor than a red dwarf with a lifetime of a trillion years. Existential risk isn't going to be seen in the same way in a CPU-bound civilization as in a time-bound one.

Comment author: LukeStebbing 12 February 2011 12:06:34AM 1 point [-]

If computation is bound by energy input and you're prepared to take advantage of a supernova, you still only get one massive burst and then you're done. Think of how many future civilizations could be supercharged and then destroyed by supernovae if only you'd launched that space colonization program first!
