Yorick_Newsome
Yorick_Newsome has not written any posts yet.

I'll replace it without the spacing so it's more compact. Sorry about that; I'll work on my comment etiquette.
Maybe I'm wrong, but it seems most people here follow the decision theory discussions just for fun. Until we were introduced to it, we just didn't know it was so interesting! That's my take, anyway.
Big Edit: Jack formulated my ideas better, so see his comment.
This was the original:
The fact that the universe hasn't been noticeably paperclipped has got to be evidence for one of the following: a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) superintelligences tending not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: if we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace's Demon?
I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.
^ Yossarian, a character in the novel Catch-22 by Joseph Heller.
I am probably in way over my head here, but...
The closest thing to teleportation I can imagine is uploading my mind and sending the information to my intended destination at lightspeed. I wouldn't mind if, once the information was copied, the teleporter deleted the old copy. If instead of 1 copy the teleporter made 50 redundant copies just in case, and destroyed 49 once it was confirmed the teleportation was successful, would that be like killing me 49 times? Are 50 copies of the same mind being tortured any different from 1 mind being tortured? I do not think so. It is just redundant information; there is...
I'm slowly waking up to the fact that people at the Singularity Institute, as well as on Less Wrong, are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.
I'm an 18-year-old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content...
Point taken; I just think that it's normally not good. I also think that maybe, for instance, libertarians and liberals have different conceptions of selfishness that lead the former to go 'yay, selfishness!' and the latter to go 'boo, selfishness!'. Are they talking about the same thing? Are we talking about the same thing? In my personal experience, selfishness has always meant demanding half of the pie when a fair share is one-third, leading to conflict and bad experiences that could have been avoided. We might just have different conceptions of selfishness.
I liked this comment, but as anonym points out far below, the original blog post is really talking about "pre-scientific and scientific ways of investigating and understanding the world." So 'just a few centuries ago' might not be very accurate in the context of the post. The author's fault, not yours, but just sayin'.
I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.