Comment author: [deleted] 07 December 2009 05:43:06AM 3 points [-]

Just thought I'd mention this: as a child, I detested praise. (I'm guessing it was too strong a stimulus, along with such things as asymmetry, time being a factor in anything, and a mildly loud noise ceasing.) I wonder how it's affected my overall development.

Incidentally, my childhood dislike of asymmetry led me to invent the Thue-Morse sequence, on the grounds that every pattern ought to be followed by a reversal of that pattern.
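The construction described (every pattern followed by its "reversal") matches the standard way of building the Thue-Morse sequence: start with 0 and repeatedly append the bitwise complement of everything generated so far. A minimal sketch in Python (the function name is just illustrative):

```python
def thue_morse(n_steps):
    """Build a prefix of the Thue-Morse sequence by repeatedly
    appending the bitwise complement of the sequence so far."""
    seq = [0]
    for _ in range(n_steps):
        seq += [1 - bit for bit in seq]  # 0 -> 1, 1 -> 0
    return seq

print(thue_morse(3))  # → [0, 1, 1, 0, 1, 0, 0, 1]
```

Each doubling step produces the characteristic "pattern, then anti-pattern" shape (01, then 0110, then 01101001, and so on).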

In response to comment by [deleted] on Open Thread: December 2009
Comment author: Yorick_Newsome 08 December 2009 04:39:32AM 0 points [-]

I spent much of my childhood obsessing over symmetry. At one point I wanted to be a millionaire solely so I could buy a mansion, because I had never seen a symmetrical suburban house.

Comment author: Jack 02 December 2009 12:32:58PM 0 points [-]

Some of that was probably needed to contextualize my comment.

Comment author: Yorick_Newsome 02 December 2009 12:57:34PM 0 points [-]

I'll replace it without the spacing so it's more compact. Sorry about that; I'll work on my comment etiquette.

Comment author: Wei_Dai 02 December 2009 08:23:55AM 7 points [-]

This is a bit off topic, but I find it strange that for years I was unable to find many people interested in decision theory and anthropic reasoning (especially a decision theoretic approach to anthropic reasoning) to talk with, and now they're hot topics (relatively speaking) because they're considered matters of existential risk. Why aren't more people working on these questions just because they can't stand not knowing the answers?

Comment author: Yorick_Newsome 02 December 2009 11:25:56AM 1 point [-]

Maybe I'm wrong, but it seems most people here follow the decision theory discussions just for fun. Until we were introduced to the topic, we just didn't know it was so interesting! That's my take anyway.

Comment author: Yorick_Newsome 02 December 2009 11:06:32AM *  2 points [-]

Big Edit: Jack formulated my ideas better, so see his comment.
This was the original: The fact that the universe hasn't been noticeably paperclipped has got to be evidence for a) the unlikelihood of superintelligences, b) quantum immortality, c) our universe being the result of a non-obvious paperclipping (the theists were right after all, and the fine-tuned universe argument is valid), d) the non-existence of intelligent aliens, or e) that superintelligences tend not to optimize things that are astronomically visible (related to c). Which of these scenarios is most likely? Related question: If we built a superintelligence without worrying about friendliness or morality at all, what kind of things would it optimize? Can we even make a guess? Would it be satisfied to be a dormant Laplace's Demon?

Comment author: steven0461 02 December 2009 03:13:10AM *  10 points [-]

After some doubts as to my ability to contribute and the like, I went to be an intern in this year's summer program. It was fun and I'm really glad I went. At the moment, I'm back there as a volunteer, mostly doing various writing tasks, like academic papers.

Getting to talk a lot to people immersed in these ideas has been both educational and motivating, much more so than following things through the internet. So I'd definitely recommend applying.

Also, the house has an awesome library that for some reason isn't being mentioned. :-)

Comment author: Yorick_Newsome 02 December 2009 06:40:50AM 2 points [-]

I had a dream where some friends and I invaded the "Less Wrong Library", and I agree it was most impressive. ...in my dream.

Comment author: Kaj_Sotala 01 December 2009 02:02:19PM 3 points [-]

This quote always reminds me of another choice one: "I want to live forever, or die trying".

Comment author: Yorick_Newsome 01 December 2009 02:10:16PM 1 point [-]

^ Yossarian, a character in Joseph Heller's novel Catch-22.

Comment author: Yorick_Newsome 01 December 2009 01:17:40PM *  3 points [-]

I am probably in way over my head here, but...

The closest thing to teleportation I can imagine is uploading my mind and sending the information to my intended destination at lightspeed. I wouldn't mind if, once the information was copied, the teleporter deleted the old copy. If instead of 1 copy the teleporter made 50 redundant copies just in case, and destroyed 49 once it was confirmed the teleportation was successful, would that be like killing me 49 times? Are 50 copies of the same mind being tortured any different from 1 mind being tortured? I do not think so. It is just redundant information; there is no real difference in experience. Thus, in my mind, only 1 of the 50 minds matters (or the 50 minds are essentially 1 mind). The degree to which the other 49 matter is only equal to the difference in information they encode. (Of course, a superintelligence would see about as much relative difference in information between humans as we humans see in ants; but we must take an anthropocentric view of state complexity.)

The me in other quantum branches can be very, very similar to the me in this one. I don't mind dying in one quantum branch all that much if the me not-dying in other quantum branches is very similar to the me that is dying. The reason I would like there to be more mes in more quantum branches is that other people care about the mes. That is why I wouldn't play quantum immortality games (along with the standard argument that in the vast majority of worlds you would end up horribly maimed).

If the additional identical copies count for something, despite my intuitions, at the very least I don't think their value should aggregate linearly. I would hazard a guess that a utility function which does that has something wrong with it. If you had 9 identical copies of Bob and 1 copy of Alice, and you had to kill off 8 copies, there must be some terminal value for complexity that keeps you from randomly selecting 8, and instead automatically decides to kill off 8 Bobs (given that Alice isn't a serial killer, utility of Alice and Bob being equal, yada yada yada.)

I think that maybe instead of minds it would be easier and less intuition-fooling to think about information. I also think that, like I said, I am probably missing the point of the post.

Comment author: Yorick_Newsome 01 December 2009 06:39:15AM *  19 points [-]

I'm slowly waking up to the fact that people at the Singularity Institute as well as Less Wrong are dealing with existential risk as a Real Problem, not just a theoretical idea to play with in an academic way. I've read many essays and watched many videos, but the seriousness just never really hit my brain. For some reason I had never realized that people were actually working on these problems.

I'm an 18-year-old recent high school dropout, about to nab my GED. I could go to community college, or I could go along with my plan of leading a simple life working a simple job, which I would be content doing. I'm a sort of tabula rasa here: if I wanted to get into the position where I would be of use to the SIAI, what skills should I develop? Which of the 'What we're looking for' traits would be most useful in a few years? (The only thing I'm good at right now is reading very quickly and retaining large amounts of information about various fields: but I rarely understand the math, which is currently very limiting.)

Comment author: DanArmak 30 November 2009 05:21:19PM 4 points [-]

I can imagine a super giant mega list of situations where love is a bad thing, too. Like when people kill themselves or others. That doesn't mean its default connotations should be negative.

The reason "selfishness" has negative connotations are at least partly due to Western culture (with Christian antecedents in "man is fundamentally evil" and "seek not pleasure in this life"). They're not objectively valid.

Comment author: Yorick_Newsome 01 December 2009 01:27:56AM 3 points [-]

Point taken, I just think that it's normally not good. I also think that maybe, for instance, libertarians and liberals have different conceptions of selfishness that lead the former to go 'yay, selfishness!' and the latter to go 'boo, selfishness!'. Are they talking about the same thing? Are we talking about the same thing? In my personal experience, selfishness has always been demanding half of the pie when fairness is one-third, leading to conflict and bad experiences that could have been avoided. We might just have different conceptions of selfishness.

Comment author: alyssavance 30 November 2009 04:11:02AM 16 points [-]

I don't buy a lot of that, at least if we're referring to the 18th century.

  • The founders of America knew damn well that there were no such things as gods, at least not ones that actively intervened in any way we could detect.

  • They were wrong about some details of astronomy, but they had most of the basic outlines right (Lagrange's works describe the celestial mechanics of the solar system in quite some detail).

  • The theories of classical mechanics were known and well understood. Quantum mechanics and relativity weren't, of course, but I am hesitant to refer to this as people being wrong, as there were very few observations available to them which required these to be explained (the perihelion advance of Mercury, for instance, wasn't discovered until 1859).

  • The 18th century view of cosmology was essentially ours, except that it lacked knowledge about how it was organized on a larger scale (galaxies within clusters within superclusters and all that) due to the lack of sufficiently powerful telescopes, and many supposed the universe to be infinite instead of beginning with the Big Bang.

  • The structure of democratic government invented during this period works pretty darn well, by comparison with everything that came before. There have, for instance, been no wars in Western Europe for sixty years, something that has never happened before.

  • Lavoisier and Lomonosov's theories of chemistry were, in fact, largely correct. The periodic table wasn't known, but there was no widely used wrong system of grouping the elements.

  • The full theory of evolution was not known (people still believed in spontaneous generation, for instance), but the idea that groups of similar species arose from a common ancestor by descent with modification was widely known and accepted.

The proper extrapolation from this is not "everything you know is wrong", but "there are lots of things you don't know, and lots of non-technical things you 'know' are wrong."

Comment author: Yorick_Newsome 30 November 2009 06:34:12AM 2 points [-]

I liked this comment, but as anonym points out far below, the original blog post is really talking about "pre-scientific and scientific ways of investigating and understanding the world." - anonym. So 'just a few centuries ago' might not be very accurate in the context of the post. The author's fault, not yours; but just sayin'.
