Dogs are incredibly good at perceiving the world through their noses. They smell almost everything around them, including other species' old feces; some dogs even eat their own. Many of those smells would be unbearable to a human nose, but dogs take them in stride. If their physiology lets them embrace all kinds of disgusting smells with little rejection, I think the same mechanism also makes dogs more tolerant and altruistic by nature than human beings, who are easily disgusted. Dogs are overall just nice : )
Dogs were domesticated in such a way that their very existence depends on their being nice to humans.
I'm going to provide a paperclip scenario below; please tell me if you think it's impossible.
Imagine a struggling office supplies company that's pressuring its employees to produce innovative results or be fired. They hired an AI guy who has yet to produce any significant results. In a meeting, the boss basically tells the AI guy to produce something by the end of the month or he's out. Our AI guy is a gifted coder, but lacks a lot of common sense; he's also quite poor, and is desperate to give the company an edge so he can save the day. In a flash of insight, combined with some open-source deep learning resources (like Kaggle), he's able to create the first recursively self-improving AI, and he tests it out by telling it to maximize the number of paperclips his factory makes.
The AI is going to be stupid, but it's going to quickly figure out how to turn the world into paperclips. It's not going to be a general intelligence, but it doesn't have to be to cause problems.
I'm so confused about the wand. Why does Harry still have the wand? Obviously Voldemort should have demanded that Harry drop the wand before giving him 60 seconds to speak.
Maybe this is a test for Harry. V wants Harry to find a way to win.
Doesn't the same logic apply to the gatekeeper?
The Gatekeeper usually wants to publish if they win, to brag. Their strategy isn't usually a secret; it's simply to resist.
Specifically because of which argument?
It just seemed like you had a great answer to each of his comments. You chipped away at my reservations bit by bit.
Although I do think an FAI is more likely than most people do.
It was good that polymathwannabe decided to end the experiment a bit earlier than planned.
Wow. I gravely underestimated my chances of success toward the end, then.
If it were me, I would have let you out.
Whoa, someone actually letting the transcript out. Has that ever been done before?
Yes, but only when the Gatekeeper wins. If the AI wins, they wouldn't want the transcript to get out, because then their strategy would be less effective the next time they played.
Your misanthropy reminds me of myself when I was younger. I used to think the universe would be better off if there were no more humans. I think it would be good for your mental health if you read some Peter Diamandis, or Steven Pinker's "The Better Angels of Our Nature". They talk about how things are getting better in the world.
This one should help you empathize with other people more.
"Everyone has a secret world inside of them. All the people in the whole world, no matter how dull they seem on the outside, inside them they've got unimaginable, magnificent, wonderful, stupid, amazing worlds."
-Neil Gaiman
Similar to some of the other ideas, but here are my framings:
Virtually all of the space in the universe has been taken over by superintelligences. We find ourselves observing the universe from one of the rare uncolonized areas because it would be impossible for us to exist in a colonized one. Thus, it shouldn't be too surprising that our little area of non-colonization is just now popping out a new superintelligence. The most likely outcome for an intelligent species is to watch the area around it become colonized while it cannot develop fast enough to catch up.
A Dyson-sphere-level intelligence knows basically everything: there is a limit to knowledge and power, and it can be approached. Once a species has achieved a certain level of power, it simply doesn't need to continue expanding in order to guarantee its safety and the fulfillment of its values. Continued expansion has diminishing returns, and the species has other values or goals that counterbalance any tiny desire to keep expanding.
But it is unethical to allow all the suffering that occurs on our planet.