As Eliezer pointed out, if it's fairness, then you probably have a curved but continuous utility function - and with the numbers involved, it has to be a curve specifically tailored to the example.
Where did Eliezer talk about fairness? I can't find it in the original two threads.
This comment talked about sublinear aggregation, but there's a global variable (the temperature of the, um, globe). Swimmer963 is talking about personally choosing specks and then guessing that most people would behave the same. Total disutility is higher, but no one catches on fire.
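To make the aggregation point concrete, here's a toy sketch of how the verdict flips between linear and sublinear aggregation. Every number is an invented placeholder, and 3^^^3 is far too large to represent, so a big float stands in:

```python
# Toy illustration of linear vs. bounded (sublinear) aggregation.
# All figures are invented placeholders, not claims about real disutilities.
SPECK = 1e-6     # assumed disutility of one dust speck
TORTURE = 1e9    # assumed disutility of 50 years of torture
N = 1e30         # stand-in for 3^^^3 (understated by an absurd margin)

def linear_total(per_person, n):
    """Straight summation: many tiny harms grow without bound."""
    return per_person * n

def bounded_total(per_person, n, cap=1e8):
    """One sublinear rule: aggregate disutility asymptotes toward `cap`."""
    return cap * (1 - (1 - per_person / cap) ** n)

# Linear: the specks' total dominates, so torture is the lesser harm.
print(linear_total(SPECK, N) > TORTURE)   # True
# Bounded: the specks can never outweigh torture, so you pick specks.
print(bounded_total(SPECK, N) > TORTURE)  # False
```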
If I were forced to choose between two possible events, and if killing people for organs had no unintended consequences, I'd go with the utilitarian cases, with a side order of a severe permanent guilt complex.
On the other hand, if I were asked to accept the personal benefit, I would behave the same as Swimmer963 and with similar expectations. Interestingly, if people are similar enough that TDT applies, my personal decisions become normative. There's no moral dilemma in the case of torture vs specks, though, since choosing torture would result in extreme psychological distress times 3^^^3.
I loved Erfworld Book 1, and a few months ago I was racking my brains for more rationalist protagonists, so I can't believe I missed that.
I was originally following it on every update, but there was a lull and I stopped reading for a while. When I started again, Book 1 was complete so I read it straight through from the beginning. As good as it was as serial fiction, it was even better as a book. Anyone else experience that?
My expectation is that the presently small fields of machine ethics and neuroscience of morality will grow rapidly and will come into contact, and there will be a distributed research subculture which is consciously focused on determining the optimal AI value system in the light of biological human nature.
Is SIAI working on trying to cause that?
It seems like it would do more good than harm, since it does a lot of work for FAI and almost none for AI in general.
Without speaking to its plausibility, I'm pretty happy with a scenario where we err on the side of figuring out FAI before we figure out seed AIs.
I don't get it. When low-hanging fruit is covered on Less Wrong, it's considered useful stuff. When low-hanging fruit comes from mainstream philosophy, it supposedly doesn't help show that mainstream philosophy is useful. If that's what's going on, it's a double standard, and a desperate attempt to "show" that mainstream philosophy isn't useful.
Also, saying "Well, we already know about lots of mainstream philosophy that's useful" is direct support for the central claim of my original post: That mainstream philosophy can be useful and shouldn't be ignored.
What's the low-hanging fruit mixed with? If I have a concentrated basket of low-hanging fruit, I call that an introductory textbook and I eat it. Extending the tortured metaphor, if I find too much bad fruit in the same basket, I shop for the same fruit at a different store.
The fascinating thing about this situation is that Eliezer is about as high status here as it's possible for a human being to be in a non-religious group, and it's still extremely difficult for him to get people to take what he says about his experiences with food and exercise seriously.
"it's still extremely difficult for him to get people to take what he says about his experiences with food and exercise seriously."
For how many people was it extremely easy?
I maintain a healthy weight with zero effort, and I have a friend for whom The Hacker's Diet worked perfectly. I thought losing weight was a matter of eating less than you burn.
Then I read Eliezer's two posts. Oops, I thought. There's no reason intake reduction has to work without severe and continuing side-effects.
Sorry this is so late, but I honestly completely forgot about this after I wrote it, so I never came back to see what transpired.
Anyway, I'm aware of how the marginal propensity to consume affects tax incidence, but in this case, where payroll taxes apply to every employee at every business, the only choices involved are whether to work and whether to hire, and companies have far more leeway in the hiring decision than workers have in the decision to work. You can avoid the fizzlesprot tax by consuming an untaxed equivalent or finding a different, fizzlesprotless sexual fetish; you can only avoid a payroll tax by being unemployed, and in practice I don't think there is such a thing as one's marginal job. By contrast, employers treat the tax as part of the cost of hiring an additional employee, and simply won't hire the marginal worker if his or her cost is above the expected benefit. I can't imagine a situation where any significant portion of a payroll tax (as opposed to the corporate income tax) falls on the employer, so I didn't bring it up.
Hmm, and yet only two-thirds of the working age population chooses to work, and some of that is part-time, which reduces the amount of labor available to employers. Labor can also move between sectors, leaving some relatively starved of workers. People who accumulate enough savings can choose to retire early and have to be enticed back into the labor market with higher wages, if they can be enticed at all. That doesn't look like a fixed supply of working hours that must be sold at any price -- the supply looks somewhat elastic.
Edit: Sorry about the tone in my original comment -- tax incidence doesn't seem to be common knowledge and I failed to consider that you might be aware of it already.
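Since we're both gesturing at elasticities, the standard textbook split is easy to write down: the less elastic side of the market bears more of the tax. The elasticity values below are made up purely for illustration:

```python
# Partial-equilibrium payroll-tax incidence: the side with the less
# elastic curve bears more of the tax. Elasticities are illustrative.

def incidence(e_supply, e_demand):
    """Return (worker_share, employer_share) of a payroll tax."""
    worker = e_demand / (e_demand + e_supply)
    return worker, 1.0 - worker

# Nearly fixed labor supply: workers eat almost the whole tax.
print(incidence(e_supply=0.1, e_demand=1.0))  # ~(0.91, 0.09)

# Somewhat elastic supply (early retirement, part-time work, sector
# switching): a visible slice lands on employers.
print(incidence(e_supply=0.5, e_demand=1.0))  # ~(0.67, 0.33)
```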
I don't have the astrophysics background to say for sure, but if subjective time is a function of total computational resources and computational resources are a function of energy input, then you might well get more subjective time out of a highly luminous supernova precursor than a red dwarf with a lifetime of a trillion years. Existential risk isn't going to be seen in the same way in a CPU-bound civilization as in a time-bound one.
If computation is bound by energy input and you're prepared to take advantage of a supernova, you still only get one massive burst and then you're done. Think of how many future civilizations could be supercharged and then destroyed by supernovae if only you'd launched that space colonization program first!
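For a rough sense of scale, here's a back-of-envelope comparison using standard order-of-magnitude figures (my numbers, not anything from the thread):

```python
# Back-of-envelope energy budgets: red dwarf vs. supernova.
# Order-of-magnitude figures only; treat everything as approximate.
L_SUN = 3.8e26              # solar luminosity, watts
SECONDS_PER_YEAR = 3.15e7

# A dim red dwarf (~0.1% of solar luminosity) shining for a trillion years:
red_dwarf_joules = 1e-3 * L_SUN * 1e12 * SECONDS_PER_YEAR
print(f"red dwarf total:     {red_dwarf_joules:.1e} J")  # ~1.2e43 J

# A core-collapse supernova releases ~3e46 J, but ~99% escapes as
# neutrinos; kinetic energy plus light is closer to ~1e44 J.
print(f"supernova (usable):  {1e44:.1e} J")
```

On these (very rough) numbers, the one-shot burst and the trillion-year burn land within an order of magnitude of each other, which is part of why the choice isn't obvious.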
Good question! One way to achieve both is to spend time anticipating relatively certain future pleasures, while lowering your expectations about how complex (and thus uncertain) future events will play out.
Good point, but since an accurate model of the future is helpful, this may be a case where you should purchase your warm fuzzies separately.
(Since people tend to make overly optimistic plans, the two strategies might be similar in practice.)