
I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.

I'll replace it without the spacing so it's more compact. Sorry about that; I'll work on my comment etiquette.

Maybe I'm wrong, but it seems most people here follow the decision theory discussions just for fun. Until we were introduced to it, we just didn't know it was so interesting! That's my take, anyway.

Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn't been noticeably paperclipped has got to be evidence for one of the following:
a) the unlikelihood of superintelligences,
b) quantum immortality,
c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid),
d) the non-existence of intelligent aliens, or
e) superintelligences tending not to optimize things that are astronomically visible (related to c).
Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace's Demon?

I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.

^ Yossarian, a character in the novel Catch-22 by Joseph Heller.

I am probably in way over my head here, but...

The closest thing to teleportation I can imagine is uploading my mind and sending the information to my intended destination at lightspeed. I wouldn't mind if, once the information was copied, the teleporter deleted the old copy. If instead of 1 copy the teleporter made 50 redundant copies just in case, and destroyed 49 once the teleportation was confirmed successful, would that be like killing me 49 times? Are 50 copies of the same mind being tortured any different from 1 mind being tortured? I don't think so. It is just redundant information; there is no real difference in experience. Thus, in my mind, only 1 of the 50 minds matters (or the 50 minds are essentially 1 mind). The degree to which the other 49 matter is equal only to the difference in the information they encode. (Of course, a superintelligence would see about as much relative difference in information between humans as we humans see between ants; but we must take an anthropocentric view of state complexity.)

The me in other quantum branches can be very, very similar to the me in this one. I don't mind dying in one quantum branch all that much if the me not dying in other quantum branches is very similar to the me that is dying. The reason I would like there to be more mes in more quantum branches is that other people care about the mes. That is why I wouldn't play quantum immortality games (along with the standard argument that in the vast majority of worlds you would end up horribly maimed).

If the additional identical copies count for something, despite my intuitions, then at the very least I don't think their value should aggregate linearly. I would hazard a guess that a utility function which does that has something wrong with it. If you had 9 identical copies of Bob and 1 copy of Alice, and you had to kill off 8 copies, there must be some terminal value for complexity that keeps you from randomly selecting 8 and instead automatically decides to kill off 8 Bobs (given that Alice isn't a serial killer, the utility of Alice and Bob being equal, yada yada yada).

I think that maybe instead of minds it would be easier and less intuition-fooling to think about information. I also think that, like I said, I am probably missing the point of the post.

I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.

I'm an 18-year-old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into a position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields, but I rarely understand the math, which is currently very limiting.)

Point taken, I just think that it's normally not good. I also think that maybe, for instance, libertarians and liberals have different conceptions of selfishness that lead the former to go 'yay, selfishness!' and the latter to go 'boo, selfishness!'. Are they talking about the same thing? Are we talking about the same thing? In my personal experience, selfishness has always been demanding half of the pie when fairness is one-third, leading to conflict and bad experiences that could have been avoided. We might just have different conceptions of selfishness.

I liked this comment, but as anonym points out far below, the original blog post is really talking about "pre-scientific and scientific ways of investigating and understanding the world." So 'just a few centuries ago' might not be very accurate in the context of the post. The author's fault, not yours; but just sayin'.
