All of RedErin's Comments + Replies

RedErin-10

But it is unethical to allow all the suffering that occurs on our planet.

2Coacher
Compared to what alternative?
1Lumifer
That depends on your ethical system, doesn't it?
RedErin20

Dogs were domesticated in such a way that their very existence depends on their being nice to humans.

1Donald Hobson
The point of a paperclip maximiser thought experiment is that most arbitrary real-world goals are bad news for humanity. Your hopeless engineer would likely create an AI that makes something that has the same relation to paperclips as chewing gum has to fruit, in the sense that evolution gave us "fruit detectors" in our taste buds but chewing gum triggers them even more. But you could be excessively conservative, insist that all paperclips must be molecularly identical to this particular paperclip, and get results.
5[anonymous]
You're kidding, right? Deep neural nets are very good at learning hierarchies of features, but they are still basically doing correlative statistical inference rather than causal inference. They are going to be much too slow, with respect to actual computation speed and sample complexity, to function dangerously well in realistically complex environments (i.e., not Atari games).
6[anonymous]
You answered your own question when you said: Incorrect. It very much DOES have to be a general intelligence, and far from stupid, if it is going to be smart enough to evade the efforts of humanity to squelch it. That really is the whole point behind all of these scenarios. It has to be an existential threat, or it will just be a matter of someone walking up to it and pulling the power cord when it is distracted by a nice juicy batch of paper-clip steel that someone tempts it with. Or, as Rick Deckard might have said: "If it's an idiot, it's not my problem"
RedErin00

Maybe this is a test for Harry. V wants Harry to find a way to win.

RedErin00

The Gatekeeper usually wants to publish if they win, to brag. Their strategy isn't usually a secret, it's simply to resist.

RedErin00

It just seemed like you had a great answer to each of his comments. You chipped away at my reservations bit by bit.

Although I do think an FAI is more likely than most people do.

2polymathwannabe
Specifically because of which argument?
RedErin10

Whoa, someone actually letting the transcript out. Has that ever been done before?

Yes, but only when the gatekeeper wins. If the AI wins, then they wouldn't want the transcript to get out, because then their strategy would be less effective next time they played.

0Jiro
I would imagine that if we ever actually build such an AI, we would conduct some AI-box experiments to determine some AI strategies and figure out how to counter them. Humans who become the gatekeeper for the actual AI would be given the transcripts of AI-box experiment sessions to study as part of their gatekeeper training. Letting out the transcript, then, would be a good thing. It would make the AI player's job harder, since the human player in the next experiment will be aware of those strategies, but that is the point: when facing an actual AI, the human will be aware of those strategies too.
0lmm
Doesn't the same logic apply to the gatekeeper?
RedErin20

Your misanthropy reminds me of myself when I was younger. I used to think the universe would be better off if there were no more humans. I think it would be good for your mental health if you read some Peter Diamandis or Steven Pinker's "The Better Angels of Our Nature". They talk about how things are getting better in the world.

RedErin110

This one should help you empathize with other people more.

"Everyone has a secret world inside of them. All the people in the whole world, no matter how dull they seem on the outside, inside them they've got unimaginable, magnificent, wonderful, stupid, amazing worlds."

-Neil Gaiman

5Gondolinian
I agree with Jiro that the typical mind fallacy is likely a large factor here, but I only see it as affecting the quantity/sophistication of the worlds inside people's heads, not the basic fact that most people have them (I won't go as far as the quote's claim that all people have them, though). I still agree with the sentiment that it's important to remember that people's inner lives are often much more complex and subjectively rational than we may see from the outside.
Jiro11-2

This seems like typical mind fallacy. Especially since the quote comes from a writer, who is used to having lots of worlds in his head and may be especially prone to making unwarranted assumptions that his mind is thus typical.

RedErin30

Interesting idea. I have a strong fear of death, and despite my best efforts, I am prone to procrastination.

But my procrastination diverts my attention from the things I really want to be doing. So it wastes my time more than anything.

RedErin30

I've liked all of Tim Urban's articles. They're very thorough and in-depth.

RedErin20

Leadership?

It's a rare quality. I didn't like his book, but I did like him in interviews he's done. People have a tendency to rally behind anyone who leads.

1advancedatheist
Istvan claims he was born in L.A., grew up there and then went to Columbia University. But something about his accent doesn't sound right to me. I lived in Southern California during the years 1991-2004, so I've had plenty of exposure to how people in the Southland talk.
RedErin00

I didn't see Ray Kurzweil's name on there. I guess he wants AI ASAP, and figures it's worth the risk.

RedErin50

I watched the Joe Rogan interview with him where he disavowed his book's political leanings. I'm a left-liberal who used to hate him because of his book, but after watching that interview I like him.

https://www.youtube.com/watch?v=9grWo5ZofmA

RedErin20

So if an AI were created that had consciousness and sentience, like in the new Chappie movie, would they advocate killing it?

4Stuart_Armstrong
If the AI were superintelligent and would otherwise kill everyone else on Earth - yes. Otherwise, no. The difficult question is when the uncertainties are high and difficult to quantify.
RedErin10

I used to have severe social anxiety. A lot of factors helped me get over it. But talking to people was definitely up there. I'm not scared of people today, but my social skills are still a bit lacking.

RedErin00

I wouldn't say pouring money into the developing world is a tiny drop.

Bill Gates's 2014 Annual Letter gives evidence that it's a very good investment.

http://annualletter.gatesfoundation.org/