Comment author: Capla 30 June 2015 11:21:42PM 2 points [-]

I am curious to what extent the info in this post is common knowledge. Are these things familiar to people?

Comment author: Mirzhan_Irkegulov 30 June 2015 08:35:35PM 7 points [-]

I'm definitely interested in subsequent posts on sleep, so please continue posting. I don't want to practice polyphasic sleep or super-optimize my sleep anyway, just generally improve its quality, because I sleep a lot and still wake up feeling like crap.

I kindly ask you, however, to change the font to the defaults and to separate paragraphs with blank lines instead of indentation; it makes them much more readable.

Comment author: Capla 30 June 2015 10:12:07PM 5 points [-]

Better?

Comment author: Mike_Blume 15 July 2008 08:39:05AM 23 points [-]

You mentioned rationalist fiction, and my mind immediately jumped to this - are you familiar with the graphic short story "Fleep"? Main character passes out, comes to in a phone booth encased in concrete, with a phonebook full of gibberish, a letter in his pocket he can't read, a few coins and various sundries. From inside the booth he experiments and calculates, manages to work out where he is, *who* he is, what's happened, and what to do next.

Comment author: Capla 09 June 2015 12:04:55AM *  3 points [-]

Ok. I just read another comic by the same author, Demon, about a (sociopathic) character who discovers that he can't die (in an interesting way). It's great! The protagonist does exactly the sort of experimentation I would do in his situation, and several characters make plans that are authentically clever and legitimately surprising.

Highly recommended.

Comment author: Capla 08 June 2015 09:03:53PM 1 point [-]

That was fantastic!

Comment author: Dan 23 November 2007 07:55:15AM -1 points [-]

Most apples aren't good to eat. Only those specifically bred for that purpose.

In response to comment by Dan on Leaky Generalizations
Comment author: Capla 01 June 2015 09:05:45PM 0 points [-]

Aren't most of the apples on Earth precisely the ones we bred to be edible (and tasty)?

Comment author: Jiro 04 May 2015 02:28:38PM 2 points [-]

People shooting other people with blaster rifles and flying spaceships sounds cool too.

Comment author: Capla 09 May 2015 08:30:23PM 0 points [-]

I'm not sure what your point is.

Comment author: Gunnar_Zarncke 06 May 2015 09:39:46PM 2 points [-]

I think people acquire a belief that a post or comment of a certain felt quality deserves a rough number of upvotes or downvotes, so they don't add or subtract karma when the post or comment hits that level.

Sounds familiar and could indeed explain why some posts do not continue to accumulate votes after some time.

Let's check:

I think a post deserves a certain number of votes/karma and up/downvote accordingly

Comment author: Capla 06 May 2015 10:26:14PM 6 points [-]

I generally don't care what level a post is at if I'm going to upvote it, but when I see something has a negative score that I think is unfair, I'll bump it up by one.

Comment author: jacob_cannell 29 April 2015 07:08:57AM 1 point [-]

I mean that the physical information which defines - or alternatively is required to reconstruct - a human mind is not strictly localized in space to the confines of a single brain.

Using the hardware/software analogy, the brain is the hardware, the mind is the software, but the mind is distributed software: each mind program runs mainly on a single brain, but it also has partial cached copies distributed on other brains.

For example, if two people spend a bunch of time together, they are going to have many shared memories. Later if both die and the brain of one is preserved, the shared memories are useful for constructing both minds. With many preserved brains, you get multiple viewpoints for many overlapping memories which allow for more precise reconstruction.

Comment author: Capla 06 May 2015 09:19:09PM 0 points [-]

I'm a little disturbed by the thought of reconstructing my personality from others' impressions of my personality.

Comment author: Gondolinian 01 May 2015 06:04:30PM 13 points [-]

There is a not necessarily large, but definitely significant chance that developing machine intelligence compatible with human values may very well be the single most important thing that humans have or will ever do, and it seems very likely that economic forces will make strong machine intelligence happen soon, even if we're not ready for it.

So I have two questions about this. Firstly, and this is probably my youthful inexperience talking (a big part of why I'm posting this here): I see so many rationalists do so much awesome work on things like social justice, social work, medicine, and all kinds of poverty-focused effective altruism, but how can it be that the ultimate fate of humanity, to either thrive beyond imagination or perish utterly, may rest on our actions in this century, and yet people who recognize this possibility don't do everything they can to make it go the way we need it to?

This sort of segues into my second question, which is what is the most any person, more specifically, I can do for FAI? I'm still in high school, so there really isn't that much keeping me from devoting my life to helping the cause of making sure AI is friendly. What would that look like? I'm a village idiot by LW standards, and especially bad at math, so I don't think I'd be very useful on the "front lines", so to speak, but perhaps I could try to make a lot of money and do FAI-focused EA? I might be more socially oriented/socially capable than many here; perhaps I could try to raise awareness or lobby for legislation?

Comment author: Capla 06 May 2015 09:12:45PM 1 point [-]

which is what is the most any person, more specifically, I can do for FAI?

The first thing you should do is talk to the people who are already involved in this. CFAR seems to be the gateway for many people (at least, it was for me).

Comment author: Viliam 02 May 2015 05:06:35PM *  6 points [-]

people who recognize this possibility don't do everything they can to make it go the way we need it to

Despite all the talk about rationality, we are still humans with all the typical human flaws. Also, it is not obvious which way it needs to go. Even if we had unlimited and infinitely fast processing power, and could solve mathematically all kinds of problems related to Löb's theorem, I still would have no idea how we could start transferring human values to the AI, considering that even humans don't understand themselves, and ideas like "AI should find a way to make humans smile" can lead to horrible outcomes. So maybe the first step would be to upload some humans and give them more processing power, but humans can also be horrible (and the horrible ones are actually more likely to seize such power), and the changes caused by uploading could make even nice people go insane.

So, what is the obvious next step, other than donating some money to the research, which will most likely conclude that further research is needed? I don't want to discourage anyone who donates or does the research, just saying that the situation with the research is frustrating by its lack of feedback. On the scale where 0 is the first electronic computer and 100 is the Friendly AI, are we at least at point 1? If we happen to be there, how would we know that?

Comment author: Capla 06 May 2015 09:07:52PM 0 points [-]

So maybe the first step would be to upload some humans and give them more processing power,

I would like this plan, but there are reasons to think that the path to WBE passes through neuromorphic AI, which is exceptionally likely to be unfriendly, since the principle is basically to copy parts of the human brain without understanding how the human brain works.
