That plan worked for you, but you're very unusual. You'd probably be an even bigger intellectual celebrity if you'd taken the academic path.
Someone closer to average, like me, cannot do research alone, only in a group of like-minded people.
I wonder how this bleak picture might change if we throw cheap cognitive enhancement into the mix. Especially considering Eliezer's idea that increased intelligence should make the poor folks better at cooperating with each other.
When the robot revolution happens, we need to have many supporters of efficient charity among the ruling class, at least while the ruling class still consists of humans.
Yeah, I like this solution too. It doesn't have to be based on the universal distribution; any distribution will work. You must have some way of distributing your single unit of care across all creatures in the multiverse. What matters is not the large number of creatures affected by the mugger, but their total weight according to your care function, which is less than 1 no matter what outlandish numbers the mugger comes up with. The "leverage penalty" is just the measure of your care for not losing $5, which is probably more than 1/3^^^^3.
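To make that concrete, here's a small sketch (my own toy numbers, not anything from the original discussion) of a care function whose weights sum to 1: no matter how many creatures the mugger names, the care-weighted value of his offer stays below the probability that he's telling the truth, which you can then compare against your care for keeping $5.

    # Toy sketch: creature i gets care weight 2^-(i+1), so the weights over
    # all creatures sum to 1. The particular numbers below are assumptions
    # made up for illustration.

    def care_weight_of_first_n(n):
        # total care weight of creatures 1..n is 1 - 2^-n, always below 1
        return 1.0 - 2.0 ** -n

    def value_of_offer(n_creatures, p_mugger_honest):
        # care-weighted value of saving n_creatures, discounted by the
        # probability that the mugger is telling the truth
        return p_mugger_honest * care_weight_of_first_n(n_creatures)

    care_for_five_dollars = 1e-9   # assumed care weight of keeping your $5
    p_honest = 1e-12               # assumed probability the mugger is honest

    # No matter how large n_creatures gets, the offer's value is capped at
    # p_honest * 1, so naming bigger numbers doesn't help the mugger.
    print(value_of_offer(10 ** 6, p_honest) < care_for_five_dollars)  # True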
I'm not sure you have avoided the question completely. When culture tells you, "X is the most important thing on which I have no particular belief", do you believe it?
So you're saying we need to mathematically model our desire for a mathematical model of X? :-) That might be a line of attack, but I don't yet see what it gives us, compared to attacking the problem directly...
One could try to create a model of agents that respond to such exhortations. Maybe such agents could be uncertain about their own utility function, as in Dewey's value learning paper.
If we try to translate sentences involving "should" into descriptive sentences about the world, they will probably sound like "action A increases the value of utility function U". If I were a consistent utility maximizer, and U were my utility function, then believing such a statement would make me take action A. No further verbal convincing would be necessary.
Since we are not consistent utility maximizers, we run an approximate implementation of that mechanism which is vulnerable to verbal manipulation, often by sentences involving "should". So the murkiness in the meaning of "should" is proportional to the difference between us and utility maximizers. Does that make sense?
(It may or may not be productive to describe a person as a utility maximizer plus error. But I'm going with that because we have no better theory yet.)
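A minimal sketch of what such an agent might look like (my own toy example, only loosely inspired by Dewey's setup; the candidate utility functions and likelihoods are made-up assumptions): the agent is uncertain which utility function is really its own, treats a "should" exhortation as evidence about that, and can change its behavior after updating.

    # Two candidate utility functions the agent might have (illustrative).
    candidate_utilities = {
        "U_selfish":  {"donate": 0.0, "keep": 1.0},
        "U_altruist": {"donate": 1.0, "keep": 0.2},
    }

    # Prior belief over which candidate is the agent's true utility function.
    belief = {"U_selfish": 0.5, "U_altruist": 0.5}

    def update_on_exhortation(belief, likelihoods):
        # Bayesian update: "you should donate" is more likely to be said
        # to the agent if U_altruist is its true utility function.
        posterior = {u: belief[u] * likelihoods[u] for u in belief}
        total = sum(posterior.values())
        return {u: p / total for u, p in posterior.items()}

    def best_action(belief):
        # pick the action with the highest expected utility under the belief
        def expected(action):
            return sum(belief[u] * candidate_utilities[u][action] for u in belief)
        return max(["donate", "keep"], key=expected)

    print(best_action(belief))  # 'keep' under the prior
    belief = update_on_exhortation(belief, {"U_selfish": 0.2, "U_altruist": 0.8})
    print(best_action(belief))  # 'donate' after hearing the exhortation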
Interestingly, the boxes from the movie Primer can be made to avoid that problem.
A short recap of how they work. You switch the box on, walk away from it so you don't run into your other self, come back to the box several hours later, switch it off, climb inside, sit there for several hours, and climb out at the moment the box was switched on. One reason this model is cool is that it avoids a common problem with fictional time travel: the changing location of the Earth. You don't end up in interplanetary space, because you travel back along the path of the box in spacetime.
So here's how you make the Primer boxes obey conservation of mass as well. The idea is that a box containing a time-reversed human should weigh less than an empty box. Let's say you weigh 70kg, and the box weighs 100kg when empty and switched off. When you switch the box on, a future version of you climbs out, and the box now weighs 30kg. Several hours later, you climb in and the box is back to 100kg, at which point it switches off and sits there as empty as before.
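Spelling out the bookkeeping as a quick check (a sketch of my own that just encodes the 70kg/100kg numbers above, treating the time-reversed occupant as negative mass):

    BOX_EMPTY = 100.0   # kg, box switched off and empty
    PERSON = 70.0       # kg

    def total_mass(phase):
        if phase == "before_switch_on":
            # one copy of you in the room, box off and empty
            return BOX_EMPTY + PERSON
        if phase == "box_on":
            # box holds a time-reversed you (-70kg), two copies of you outside
            return (BOX_EMPTY - PERSON) + 2 * PERSON
        if phase == "after_switch_off":
            # the older copy stays in the room, box off and empty again
            return BOX_EMPTY + PERSON
        raise ValueError(phase)

    phases = ["before_switch_on", "box_on", "after_switch_off"]
    assert len({total_mass(p) for p in phases}) == 1   # 170kg throughout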
At first I felt pretty smart for figuring this out because this whole issue never came up in the movie at all. And then I remembered the small detail that the boxes in the movie were an accidental invention, whose original purpose was to reduce the mass of objects. Wow.
That got me thinking about the other possible hole in the movie, namely all the abandoned timelines. Can this model of time travel be made to work correctly with not just spacetime paths and conservation of mass, but also causality and probabilistically branching timelines? For example, if you travel back in time and kill your past self, can that yield a unique consistent assignment of probabilities to timelines, where all time travelers "come from somewhere" and can't affect their "probability weight"? The result was this comment, for which I later found a proof of consistency which this margin is too small to contain ;-)
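To give a flavor of what "coming from somewhere" can mean, here's a made-up toy model of my own (not the model from that comment): in a branch where a traveler arrives, past-you gets killed and nobody departs from it, while a branch with no arrival sends exactly one traveler back. Requiring the departing probability mass to equal the arriving mass pins the branch weights down uniquely.

    def departing_mass(p_arrival):
        # travelers only depart from the branch with no arrival
        return 1.0 - p_arrival

    def arriving_mass(p_arrival):
        return p_arrival

    # Solve departing_mass(p) == arriving_mass(p) by bisection.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if departing_mass(mid) > arriving_mass(mid):
            lo = mid
        else:
            hi = mid

    print(round((lo + hi) / 2, 6))   # 0.5 is the unique consistent weight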
It seems to me that private_messaging is right and explains his point beautifully. Here's a Robin Hanson post making a similar point. Also see this discussion, especially Wei Dai's comment.