You're welcome - but I'll have to check my registrations sheet, I think we're running out of mattresses and beds. If you're planning to sleep in a ho(s)tel, no problem, otherwise bring a sleeping bag and earplugs!
(Count me under "sleeping bag"!)
Hi, I'd like to come as well if you still have places!
By the way, if you still have spots, you should maybe post this again now that we're a bit closer to the actual date. I think it was posted somewhat early, which might mean people saw it, wanted to attend but didn't want to commit yet, and then forgot about it.
Also maybe message a moderator to get it listed as a meetup.
As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.
But hook it up to some cameras and microphones, and then you have the potential for something that could wind up being dangerous.
So I'd say there's no reason to speculate about 1000x computing power. Just stick it in a virtual world with no human communication, let it run for a while, and see whether it shows signs of the kind of intelligence that would be worrying.
(The AI Box argument does not apply here)
The challenge, of course, is coming up with a virtual world that is complex enough to be able to discern high intelligence, while being different enough from the real world that the AI could not apply knowledge gained in the simulation to the real world.
As long as the computer is in its own simulated world, with no input from the outside world, we're almost certainly safe. It cannot model the real world.
Note: given really, really large computational resources, an AI can always "break out by breaking in": generate candidate physical laws ordered by complexity, see what sort of intelligent life arises in each of those cosmologies, craft an attack that works against that life on the assumption that it's the one running the AI in a box, and repeat for the hundred simplest cosmologies. This potentially needs a lot of computing power, but it might take very little, depending on how strongly our minds are determined by our physics.
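In loop form, a toy sketch only; every function name here is a hypothetical stand-in, and each one hides an astronomically expensive computation:

```python
# Toy sketch of the "break out by breaking in" loop; all the heavy
# lifting is stubbed out, so only the control flow is real.

def enumerate_cosmologies(n):
    """Stand-in: yield the n simplest candidate rule-sets for physics,
    ordered by description length (a Solomonoff-style ordering)."""
    return [f"cosmology-{i}" for i in range(n)]

def simulate(laws):
    """Stand-in for running a candidate universe forward."""
    return {"laws": laws, "life": []}   # no life ever arises in this stub

def craft_attack(minds):
    """Stand-in: tailor an escape attempt to the simulators' psychology."""
    return {"targets": minds}

def break_out(n_cosmologies=100):
    attacks = []
    for laws in enumerate_cosmologies(n_cosmologies):
        world = simulate(laws)
        minds = world["life"]           # any intelligent life that arose
        if minds:
            # Assume those minds are the ones running the AI in a box,
            # and prepare an attack that would work against them.
            attacks.append(craft_attack(minds))
    return attacks
```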
You haven't dealt with the case where the safety goals are the primary ones.
These kinds of primary goals were raised by Isaac Asimov (the Three Laws of Robotics, for instance).
The question of "what are the right safety goals" is what FAI research is all about.
Could you spell out the connection? I don't see it.
Eliezer's essay looks at humanism, looks at the reasons for it, and then argues that those reasons apply to transhumanism. The article you linked to starts with a model of marriage that has already abstracted away all the reasons for its existing in the first place, and goes from there.
Eliezer's essay looks at humanism, looks at the reasons for it, and then argues that those reasons apply to transhumanism.
Eliezer's essay then makes the case that transhumanism is preferable because it lacks special rules.
By analogy: "Love is good. Isolation is bad. If two people are in love, they can marry. It's that simple. You don't have to look at anybody's gender."
Elegant program designs imply elegant (Occam!) rules.
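To make that concrete, here's a toy sketch of the two rule-sets; all the names and predicates are invented for illustration, not real policy code:

```python
# Toy illustration of the "elegant rules" point; every name is invented.
from dataclasses import dataclass

@dataclass
class Person:
    gender: str
    adult: bool = True

def in_love(a, b):                 # stand-in predicate
    return True

def consenting_adults(a, b):
    return a.adult and b.adult

# Special-cased rule: carries an extra condition that does no
# explanatory work.
def can_marry_special(a, b):
    return in_love(a, b) and consenting_adults(a, b) and a.gender != b.gender

# Simplified (Occam) rule: keeps only the reasons the institution
# exists in the first place.
def can_marry_simple(a, b):
    return in_love(a, b) and consenting_adults(a, b)
```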
The cut'n'paste not merely of the opinions, but of the phrasing, is the tell that this is undigested. Possibly this could be explained by complete correctness combined with literary brilliance, but we're talking about one-draft daily blog posts here.
I feel like, charitably, another explanation would be that it's simply a better phrasing than people come up with on their own.
but we're talking about one-draft daily blog posts here.
So? Fast doesn't imply bad. Quite the opposite: fast work with a short feedback cycle is one of the best ways to get really good.
If you think of marriage as merely a database entry or XML tag, with no connection to how the participants act or should act in the real world, yes.
I was trying to draw a comparison to "Transhumanism as Simplified Humanism": universal marriage as simplified hetero marriage.
American progressives are more likely to have some conflicting sentimental attachments to religious ideas of objective value, or to the idea of "human rights" as a pseudo-objective value. I say "pseudo-objective" because, unless they are arguing from religion, the only basis they really have for asserting that such-and-such is an objective "human right" is their own moral intuition: what makes them feel good or icky, which is back to subjectivism even if they don't realize it. Like I said, they don't always follow their thoughts to the logical conclusion.
In particular, modern progressives are perfectly willing to invent new human rights and declare them "objectively" good (e.g. gay marriage) or take rights that have been considered human rights for centuries and demote them (e.g. free speech).
An existence proof is very different from a constructive proof! Nature did not happen upon this design on the first try; the brain has evolved over billions of generations. Of course, intelligence can work faster than the blind idiot god, and humanity, if it survives long enough, will do better. The question is, will this take decades or centuries?
An existence proof is very different from a constructive proof!
Quite so. However, it does give reason to hope.
The question is, will this take decades or centuries?
If you look at Moore's Law coming to a close in silicon around 2020, while we're still so far away from a human-brain-equivalent computer, it's easy to get disheartened. I think it's important to remember that it's at least possible, and if nature could happen upon it, so can we.
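For what it's worth, a back-of-the-envelope sketch; both throughput figures below are rough, contested assumptions (published brain-equivalence estimates span many orders of magnitude), not anything established above:

```python
# Back-of-the-envelope only: both throughput numbers are assumptions.
import math

brain_ops = 1e16      # assumed brain-equivalent throughput, ops/sec
current_ops = 1e13    # assumed affordable machine today, ops/sec
doubling_years = 1.5  # classic Moore's-law doubling time

doublings = math.log2(brain_ops / current_ops)
years = doublings * doubling_years
print(f"{doublings:.1f} doublings, about {years:.0f} years")
# ~10 doublings, about 15 years; that is, if the trend held, which
# the comment above doubts it will past ~2020.
```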
Haha. The second I read the first sentence of that bit in the article, I knew my mistake.