Less Wrong is a community blog devoted to refining the art of human rationality.
The Alternative Vote is Instant Runoff Voting (IRV), right? If so, then it's bad, because it fails the monotonicity criterion. That means that ranking a candidate higher on your ballot doesn't necessarily do the obvious thing — it can even cause that candidate to lose.
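A minimal sketch of the failure mode (my own illustration — the ballot counts are invented for the example): three voters raise A from the bottom to the top of their ballots, and A goes from winning to losing, because the change flips which candidate gets eliminated first.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant Runoff: repeatedly eliminate the candidate with the fewest
    first-choice votes until someone has a strict majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Count each ballot for its highest-ranked remaining candidate.
        counts = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, top = counts.most_common(1)[0]
        if top * 2 > len(ballots):
            return leader
        remaining.remove(min(counts, key=counts.get))  # drop last place

# 7 ballots A>B>C, 8 ballots B>C>A, 6 ballots C>A>B (21 voters)
before = [("A", "B", "C")] * 7 + [("B", "C", "A")] * 8 + [("C", "A", "B")] * 6
# Three of the B>C>A voters *raise* A to the top of their ballots:
after = [("A", "B", "C")] * 10 + [("B", "C", "A")] * 5 + [("C", "A", "B")] * 6

print(irv_winner(before))  # A — C is eliminated first and transfers to A
print(irv_winner(after))   # C — now B is eliminated first and transfers to C
```

So supporting A more strongly made A lose: that's the monotonicity failure.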
Personally, I favor Approval Voting, since it seems to be the simplest possible change to our voting system that would still produce large gains.
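To illustrate how simple Approval Voting is (a hypothetical sketch of mine, with made-up ballots): each voter approves any set of candidates, and the most-approved candidate wins — no rankings, no elimination rounds.

```python
from collections import Counter

def approval_winner(ballots):
    """Approval Voting: each ballot is a set of approved candidates;
    the candidate approved on the most ballots wins."""
    tally = Counter(c for ballot in ballots for c in ballot)
    return tally.most_common(1)[0][0]

ballots = [{"A", "B"}, {"B"}, {"B", "C"}, {"A"}]
print(approval_winner(ballots))  # B, with 3 approvals
```

Note that unlike IRV, adding your approval to a candidate can never cause that candidate to lose, so monotonicity holds trivially.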
(Also, it would be nice if we (the US, that is) could switch to algorithmic redistricting and completely get rid of the whole gerrymandering nonsense.)
Hrm... But "self-interest" is itself a fairly broad category, including many subcategories like emotional state, survival, fulfillment of curiosity, self-determination, etc. Given the evolutionary pressures there have been toward cooperation and such, it seems like it wouldn't be that hard a step for this to be implemented via actually caring about the other person's well-being, instead of it secretly being just a concern for your own. It'd perhaps be simpler to implement that way. It might be partly implemented by the same emotional reinforcement system, but that's not the same thing as saying that the only thing you care about is your own reinforcement system.
Why would actual altruism be a "new kind" of motivation? What makes it a "newer kind" than self-interest?
Re your checking method to construct/simulate an acausal universe: it won't work, near as I can tell.
Specifically, the very act of verifying that a string is a valid Life history (or Life + time travel, or whatever) requires actually computing the CA rules, doesn't it? So in the act of verification, if nothing else, all the computation needed for a string that contains minds to actually contain those minds would have to occur, near as I can make out.
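A minimal sketch of the point (my own illustration; the helper names and the blinker example are mine): the only way to check that consecutive states obey Conway's Life rules is to run the update rule itself — the verifier literally contains the simulator.

```python
from collections import Counter

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def life_step(live):
    """One step of Conway's Life: a cell is live next generation iff it has
    3 live neighbors, or 2 live neighbors and is currently live."""
    counts = Counter(n for cell in live for n in neighbors(cell))
    return {c for c, k in counts.items() if k == 3 or (k == 2 and c in live)}

def verify_history(history):
    """'Verifying' a candidate history just *is* computing the CA rules."""
    return all(life_step(history[i]) == history[i + 1]
               for i in range(len(history) - 1))

# A blinker oscillating between horizontal and vertical:
blinker = [{(1, 0), (1, 1), (1, 2)}, {(0, 1), (1, 1), (2, 1)}]
print(verify_history(blinker))  # True
```

Note that `verify_history` calls `life_step` on every state — so verifying a history containing minds performs exactly the computation that would instantiate them.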
He wasn't endorsing that position. He was saying "pebblesorters should not do so, but they pebblesorter::should do so."
I.e., "should" and "pebblesorter::should" are two different concepts. "should" appeals to that which is moral; "pebblesorter::should" appeals to that which is prime. The pebblesorters should not have killed him, but they pebblesorter::should have killed him.
Think of it this way: imagine the murdermax function that scores states/histories of reality based on how many people were murdered. Then people shouldn't be murdered, but they murdermax::should be murdered. This is not an endorsement of doing what one murdermax::should do. Not at all. Doing the murdermax thing is bad.
Looking down the thread, I think one or two others may have beaten me to it too. But yes, it seems at least that Omega would be handing the programmers a really nice toy, and (conditional on the programmers having the skill to wield it), well...
Yes, there is that catch, hrm... We could put something into the code that makes the inhabitants occasionally work on the problem, thus really deeply intertwining the two things.
Game3 has an entirely separate strategy available to it: don't worry initially about trying to win... instead, code a nice simulator etc. for all the inhabitants of the simulation, one that can grow without bound and allows them to improve (and control the simulation from inside).
You might not "win", but a version of the three players will go on to found a nice large civilization. :) (Take that, Omega.)
(In the background, have it also run a thread computing increasingly large numbers, plus some way to randomly decide which of some set of numbers to output, to effectively randomize which one of the three original players wins. Of course, that's a small matter compared to the simulated world, which, by hypothesis, has unbounded computational power available to it.)
You know, I want to say you're completely and utterly wrong. I want to say that it's safe to at least release The Actual Explanation of Consciousness if and when you should solve such a thing.
But, sadly, I know you're absolutely right re the existence of trolls which would make a point of using that to create suffering. Not just to get a reaction, but some would do it specifically to have a world they could torment beings.
My model is not that all those trolls are identical: I've seen some that will explicitly and unambiguously draw the line, recognizing that egging on suicidal people is something that One Does Not Do. But I've also seen that all too many gleefully do exactly that.
I'm sorry. *offers a hug* Not sure what else to say.
For what it's worth, in response to this, I just sent $20 to each of SENS and SIAI.