Comment author: Gunnar_Zarncke 28 December 2014 07:47:09PM 6 points [-]

Maybe before you read the article you'd like to do the following test (mentioned in the article):

Jack is looking at Anne, but Anne is looking at George. Jack is married, but George is not. Is a married person looking at an unmarried person?

Comment author: FeepingCreature 29 December 2014 06:13:43AM 3 points [-]

Haha. The second I read the first sentence of that bit in the article I knew my mistake.
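(For reference, once you've tried the puzzle yourself, it can be settled by brute force over the one unknown, Anne's marital status. A minimal Python sketch; the names come from the puzzle itself:)

```python
# Jack (married) looks at Anne; Anne (unknown) looks at George (unmarried).
# Question: is some married person looking at an unmarried person?
def married_looks_at_unmarried(anne_married):
    # each entry: (looker, looker_married, target, target_married)
    looks = [("Jack", True, "Anne", anne_married),
             ("Anne", anne_married, "George", False)]
    return any(src_married and not dst_married
               for _, src_married, _, dst_married in looks)

# Check both possible cases for Anne:
results = [married_looks_at_unmarried(m) for m in (True, False)]
print(results)  # → [True, True]: the answer is "yes" either way
```

Whether Anne is married or not, some married person is looking at an unmarried one, which is the step most people miss.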

Comment author: chaosmage 08 December 2014 12:11:18PM 0 points [-]

You're welcome - but I'll have to check my registrations sheet, I think we're running out of mattresses and beds. If you're planning to sleep in a ho(s)tel, no problem, otherwise bring a sleeping bag and earplugs!

Comment author: FeepingCreature 10 December 2014 01:42:48PM 0 points [-]

(Count me under "sleeping bag"!)

Comment author: FeepingCreature 06 December 2014 01:29:00PM *  1 point [-]

Hi, I'd like to come as well if you still have places!

By the way, if there are still open spots, consider posting this again now that we're a bit closer to the actual date. It was posted somewhat early, which might mean people saw it, wanted to attend, but didn't want to commit yet and then forgot about it.

Also maybe message a moderator to get it listed as a meetup.

Comment author: passive_fist 29 October 2014 05:38:34AM 2 points [-]

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

But hook it up to some cameras and microphones, and then you have the potential for something that could wind up being dangerous.

So I'd say there's no reason to speculate about 1000x computing power. Just stick it in a virtual world with no human communication, let it run for a while, and see if it shows signs of the kind of intelligence that would be worrying.

(The AI Box argument does not apply here)

The challenge, of course, is coming up with a virtual world that is complex enough to be able to discern high intelligence while being different enough from the real world that it could not apply knowledge gained in the simulation to the real world.

Comment author: FeepingCreature 29 October 2014 02:18:23PM *  3 points [-]

As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.

Note: given really, really large computational resources, an AI can always "break out by breaking in": generate candidate sets of physical laws ordered by complexity, see what sort of intelligent life arises in those cosmologies, craft an attack that works against it on the assumption that it's running the AI in a box, and repeat for the hundred simplest cosmologies. This potentially needs a lot of computing power, but it might take very little, depending on how strongly our physics determines our minds.
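(The Occam-style ordering in that strategy can at least be illustrated, even though everything else is wildly beyond implementation. This toy sketch models "cosmologies" as binary rule strings enumerated by description length; the simulation and attack steps are, of course, entirely hypothetical:)

```python
from itertools import count, product

def candidate_cosmologies():
    """Enumerate candidate rule sets in order of description length
    (an Occam-style ordering), here modeled as binary strings."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Take the hundred simplest "cosmologies", as the comment suggests.
simplest_100 = [c for _, c in zip(range(100), candidate_cosmologies())]
print(simplest_100[:4])  # → ['0', '1', '00', '01']
```

The point is only that "the hundred simplest" is well-defined once you fix a description language; whether any of this is tractable is exactly the open question.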

Comment author: TheAncientGeek 22 September 2014 12:46:22PM 1 point [-]

You haven't dealt with the case where the safety goals are the primary ones.

These kinds of primary safety goals were raised by Isaac Asimov, for example.

Comment author: FeepingCreature 22 September 2014 04:13:52PM *  1 point [-]

The question of "what are the right safety goals" is what FAI research is all about.

Comment author: Azathoth123 07 September 2014 09:17:08PM 6 points [-]

Could you spell out the connection, I don't see it.

Eliezer's essay looks at humanism, looks at the reasons for it, and then argues that those reasons apply to transhumanism. The article you linked to starts with a model of marriage that has already abstracted away all the reasons for its existence in the first place, and goes from there.

Comment author: FeepingCreature 11 September 2014 01:13:22AM *  -1 points [-]

Eliezer's essay looks at humanism, looks at the reasons for it, and then argues that those reasons apply to transhumanism.

Eliezer's essay then makes the case that transhumanism is preferable because it lacks special rules.

By analogy: "Love is good. Isolation is bad. If two people are in love, they can marry. It's that simple. You don't have to look at anybody's gender."

Elegant program designs imply elegant (occam!) rules.
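(To make the analogy concrete: the "simplified" rule has no special cases, so the code that expresses it is shorter. A toy sketch; the Person class and names are hypothetical:)

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    loves: set = field(default_factory=set)  # names of people this person loves

def can_marry(a: Person, b: Person) -> bool:
    # "If two people are in love, they can marry." No gender field,
    # no special-case branch -- the elegant (Occam) rule.
    return b.name in a.loves and a.name in b.loves

alice = Person("Alice", {"Bo"})
bo = Person("Bo", {"Alice"})
print(can_marry(alice, bo))  # → True
```

The gendered version would need an extra attribute and an extra conjunct doing no work the love-check doesn't already do, which is the sense in which the universal rule is simpler.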

Comment author: David_Gerard 07 September 2014 10:38:35AM 1 point [-]

The cut'n'paste not merely of the opinions but of the phrasing is the tell that this is undigested. Possibly this could be explained by complete correctness combined with literary brilliance, but we're talking about one-draft daily blog posts here.

Comment author: FeepingCreature 07 September 2014 10:42:43AM *  -1 points [-]

Charitably, another explanation would be that it's simply a better phrasing than people come up with on their own.

but we're talking about one-draft daily blog posts here.

So? Fast doesn't imply bad. Quite the opposite: fast work with a short feedback cycle is one of the best ways to get really good.

Comment author: Azathoth123 06 September 2014 05:44:04PM 3 points [-]

If you think of marriage as merely a database entry or XML tag with no connection to how the participants act or should act in the real world, yes.

Comment author: FeepingCreature 06 September 2014 08:56:37PM 0 points [-]

I was trying to draw a comparison to "Transhumanism as Simplified Humanism": universal marriage as simplified hetero marriage.

Comment author: Azathoth123 05 September 2014 02:30:24AM 4 points [-]

American progressives are more likely to have some conflicting sentimental attachments to religious ideas of objective value, or to ideas of "human rights" as a pseudo-objective value. I say "pseudo-objective" because, unless they are arguing from religion, the only basis they really have for asserting that such-and-such is an objective "human right" is their own moral intuition (in other words, what makes them feel good or icky), which is back to subjectivism even if they don't realize it. Like I said, they don't always follow their thoughts to the logical conclusion.

In particular, modern progressives are perfectly willing to invent new human rights and declare them "objectively" good (e.g. gay marriage), or to take rights that have been considered human rights for centuries and demote them (e.g. free speech).

Comment author: FeepingCreature 06 September 2014 10:59:23AM -1 points [-]

Comment author: skeptical_lurker 04 September 2014 06:37:27PM 2 points [-]

An existence proof is very different from a constructive proof! Nature did not happen upon this design on the first try; the brain has evolved over billions of generations. Of course, intelligence can work faster than the blind idiot god, and humanity, if it survives long enough, will do better. The question is, will this take decades or centuries?

Comment author: FeepingCreature 05 September 2014 09:17:55PM 1 point [-]

An existence proof is very different from a constructive proof!

Quite so. However, it does give reason to hope.

The question is, will this take decades or centuries?

If you look at Moore's Law coming to a close in silicon around 2020, while we're still so far away from a human-brain-equivalent computer, it's easy to get disheartened. I think it's important to remember that it's at least possible, and if nature could happen upon it...
