Comment author: Gavin 27 February 2016 07:34:28PM 3 points [-]

Similar to some of the other ideas, but here are my framings:

  1. Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in one of the colonized ones. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around it become colonized while it cannot develop fast enough to catch up.

  2. A Dyson-sphere-level intelligence knows basically everything. There is a limit to knowledge and power, and it can only be approached. Once a species has achieved a certain level of power, it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns, and the species has other values or goals that counterbalance any tiny desire to continue expanding.

Comment author: RedErin 01 March 2016 03:26:20PM -1 points [-]

But it is unethical to allow all the suffering that occurs on our planet.

In response to Crazy Ideas Thread
Comment author: farbeyond 10 July 2015 09:43:51PM 0 points [-]

Dogs are incredibly good at perceiving through their noses. They smell almost everything around them, including other species' old feces. Some dogs even eat their own. Many of those smells would be unbearable to a human nose, but dogs take them well. If their physiology enables them to embrace all kinds of disgusting smells with less rejection, I think the same mechanism also makes dogs more tolerant and altruistic by nature than human beings, who are easily disgusted. Dogs are overall just nice : )

Comment author: RedErin 20 July 2015 03:46:48PM 1 point [-]

Dogs were domesticated in such a way so that their very existence depends on them being nice to humans.

Comment author: RedErin 05 May 2015 05:35:40PM -3 points [-]

I'm going to provide a paperclip scenario below; please tell me if you think it's impossible.

Imagine a struggling office-supplies company that's pressuring its employees to produce innovative results or be fired. They hired an AI guy who has yet to produce any significant results, and at a meeting the boss basically tells him to produce something by the end of the month or he's out. Our AI guy is a gifted coder but lacks a lot of common sense; he's also quite poor, and is desperate to give the company an edge so he can save the day. In a flash of insight, combined with some open source deep learning sites (like Kaggle), he's able to create the first recursively self-improving AI, and he tests it out by telling it to maximize the number of paperclips his factory makes.

The AI is going to be stupid, but it's going to quickly figure out how to turn the world into paperclips. It's not going to be a general intelligence, but it doesn't have to be to cause problems.

Comment author: wobster109 02 March 2015 04:21:14PM 3 points [-]

I'm so confused about the wand. Why does Harry still have the wand? Obviously Voldemort should have demanded that Harry drop the wand before giving him 60 seconds to speak.

Comment author: RedErin 02 March 2015 06:08:11PM 0 points [-]

Maybe this is a test for Harry. V wants Harry to find a way to win.

Comment author: lmm 13 February 2015 07:24:30PM 0 points [-]

Doesn't the same logic apply to the gatekeeper?

Comment author: RedErin 13 February 2015 09:20:33PM 0 points [-]

The Gatekeeper usually wants to publish if they win, to brag. Their strategy isn't usually a secret; it's simply to resist.

Comment author: polymathwannabe 10 February 2015 08:54:36PM -1 points [-]

Specifically because of which argument?

Comment author: RedErin 13 February 2015 09:10:54PM 0 points [-]

It just seemed like you had a great answer to each of his comments. You chipped away at my reservations bit by bit.

Although I do think an FAI is more likely than most people do.

Comment author: polymathwannabe 08 February 2015 04:49:01PM 10 points [-]

It was good that polymathwannabe decided to end the experiment a bit earlier than was planned.

Wow. I gravely underestimated my chances of success toward the end, then.

Comment author: RedErin 10 February 2015 08:08:47PM 1 point [-]

If it was me, I would have let you out.

Comment author: Luke_A_Somers 08 February 2015 04:53:21PM 7 points [-]

Whoa, someone actually letting the transcript out. Has that ever been done before?

Comment author: RedErin 10 February 2015 08:08:00PM 1 point [-]

Whoa, someone actually letting the transcript out. Has that ever been done before?

Yes, but only when the gatekeeper wins. If the AI wins, then they wouldn't want the transcript to get out, because then their strategy would be less effective next time they played.

Comment author: RedErin 10 February 2015 08:04:43PM 1 point [-]

Your misanthropy reminds me of myself when I was younger. I used to think the universe would be better off if there were no more humans. I think it would be good for your mental health if you read some Peter Diamandis or Steven Pinker's "The Better Angels of Our Nature". They talk about how things are getting better in the world.

In response to Quotes Repository
Comment author: RedErin 10 February 2015 06:21:52PM 7 points [-]

This one should help you empathize with other people more.

"Everyone has a secret world inside of them. All the people in the whole world, no matter how dull they seem on the outside, inside them they've got unimaginable, magnificent, wonderful, stupid, amazing worlds."

-Neil Gaiman
