Risky Machines: Artificial Intelligence as a Danger to Mankind
I like your non-fiction style a lot (don't know your fictional stuff). I often get the impression you're in total control of the material. Very thorough yet original, witty and humble. The exemplary research paper. Definitely more Luke than Yvain/Eliezer.
Navigating the LW rules is not intended to require precognition.
Well, it was required when (negative) karma for Main articles increased tenfold.
I'll be there!
Do you still want to do this?
To be more specific:
I live in Germany, so my time zone is GMT+1. My preferred time would be a workday sometime after 8 pm (my time). Since I'm a native German speaker, and the AI has the harder job anyway, I offer: 50 dollars for you if you win, 10 dollars for me if I do.
I agree in large parts, but it seems likely that value drift plays a role, too.
Well, I'm somewhat sure (80%?) that no human could do it, but...let's find out! Original terms are fine.
I'd bet up to fifty dollars!?
Ok, so who's the other one living in Berlin?
If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.
In that case, I'd like to participate as gatekeeper. I'm ready to put some money on the line.
BTW, I wonder if Clippy would want to play a human, too. I
...Some have argued that a machine cannot reach human-level general intelligence, for example see Lucas (1961); Dreyfus (1972); Penrose (1994); Searle (1980); Block (1981). But Chalmers (2010) points out that their arguments are irrelevant: To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but h
It is standard in rational discourse to include and address opposing arguments, provided your audience includes anyone other than existing supporters. At a minimum, one should state an objection and cite a discussion of it.
This is not a rational discourse but part of an FAQ, providing explanations/definitions. Counterarguments would be misplaced.
For those who read German or can infer the meaning: Philosopher Christoph Fehige shows a way to embrace utilitarianism and dust specks.
"Literalness" is explained in sufficient detail to get a first idea of the connection to FAI, but "Superpower" is not.
going back to the 1956 Dartmouth conference on AI
maybe better (if this is good English): going back to the seminal 1956 Dartmouth conference on AI
There are many types of digital intelligence. To name just four:
Readers might like to know what the others are and why you chose those four.
Die Forscher kombinieren Daten aus Informatik und psychologischen Studien. Ihr Ziel: Eine Not-to-do-Liste, die jedes Unternehmen bekommt, das an künstlicher Intelligenz arbeitet.
Rough translation:
The researchers combine data from computer science and psychological studies. Their goal: a not-to-do list, given to every company working on artificial intelligence.
I don't see a special problem...evaluate the arguments, try to correct for biases. Business as usual. Or do you suspect there is a new type of bias at work here?
I took it. Thanks for this, I'm excited about the results.
Typo in the title!
Good translation! I'm through the whole text now; I did proofreading and changed quite a bit, though some terminological questions remain. After re-reading the original in the process, I think the English FAQ needs some work (unbalanced sections, winding sentences, etc.). But as a non-native speaker, I don't dare.
At least 29 and 32 are process advice, too.
31: Anything can be done in dialogue (cf. Plato), but probably shouldn't be.
22: Reader of blogs or of papers? What's the target audience?
Further points:
First approximation: Make your writing similar to a blockbuster movie.
"Really nice cleavage and explosions" doesn't look nearly as impressive on paper.
...Since the Universe’s computational accuracy appears to be infinite, in order for the mind to be omniscient about a human brain it must be running the human brain’s quark-level computations within its own mind; any approximate computation would yield imperfect predictions. In the act of running this computation, the brain’s qualia are generated, if (as we have assumed) the brain in question experiences qualia. Therefore the omniscient mind is fully aware of all of the qualia that are experienced within the volume of the Universe about which it has perfect…
I promised to give you feedback on your wikibook. Some quick thoughts:
There is a ton of false or at least controversial stuff (e.g., "Disappointment is always something positive"; "instrumental rationality" = "instrumentale Rationalität", whereas it's "instrumentelle Rationalität") or stuff that cannot be understood without further knowledge (what is your average reader to make of the words "The Litany of Gendlin"?).
The preface lacks footnotes, links, or an outline.
You're obviously just getting started on this project, so maybe you should wait for EY's book(s) on rationality and try a translation of those instead?
Great! Me too.
Let's meet September 10th in Munich (see http://www.doodle.com/u49xxi6z4zqbihqa). Maybe we can attract a few more people with a definite time and setting.
I would definitely attend, but not on the first two weekends in August. The 5th is a Friday, which may also be problematic, at least for some. I propose a Saturday/Sunday meetup later in August. The 20th maybe? End of July would also be possible.
I live in Berlin, but Munich would be fine. Not in June though.
No. He seems to talk about the species, and not its members.
I'm sitting in Berlin.
Well, if you stipulate that "abstract truth-seeking" has nothing whatsoever to do with my getting along in the world, then you're right I guess.
"There is no causation."
Seems to me you're conflating different concepts: "being the reason for" and "being the cause of":
Compare what an opponent of determinism could say: "We have no reason to listen to you if your theory is false, and no reason to listen if it's true either." Now what?
Now we're getting to the heart of it. Upvoted. What does it mean to live in a Hume world? For example, we would have to accept the existence of non-reducible mental states (everybody here granted the consistency of the theory until now) and take everything on faith. But indeed we cannot take anything on faith, since we cannot think, if thinking is a causal notion!?
Suppose for the sake of argument we're not living in a Hume world, but have massive, perhaps infinite computing power. We could simulate so many Hume worlds that there are some with order and inh...
First, none of us are being as rude to you as you are to us in this comment alone. If you can't stand the abuse you're getting here, then quit commenting on this post.
Oh, I can take the abuse, I'm just wondering.
Second, we've given this well more than a few minutes' discussion, and you've given us no reason to believe that we misunderstand your theory.
At least at first, I was given nothing but accusations and incredulous stares.
if a theory doesn't help us get what we actually want, it really is of no use to us
If you want the truth, you have to consider being wrong even about your darlings, say, prediction.
Why does everybody assume I'm a die-hard believer in this theory?
From your first comment on my post you were really aggressive. Arguments are fine, but why always the personal attacks? I'll tell you what might be going on here: you saw the post, couldn't make sense of it after a quick glance, and decided it was junk and an easy way to gain reputation and boost your ego by bashing. And you are not alone. There are lots of haters, and nobody who just said: OK, I don't believe it, but let's discuss it, and stop hitting the guy over the head.
The theory is highly counterintuitive, I said as much, but it is worth at least a f...
Downvoted again. Phew. Maybe you could just tell me where I said or implied it?
Thanks but no thanks. I do know this really, really basic stuff - I just don't agree. Instead of just postulating that all explanations have to be tied to prediction, why don't you try to rebut the argument? Again: inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible. So explanation should be conceived independently of prediction. Not every explanation needs to be tied to prediction.
Ok, it seems that if you're right to choose density over cardinality then it's a blow to my proposal. I'm still trying to figure it out. Suppose the universe is an infinite Hume world. So is it true that even though there are just as many ordered regions, the likelihood that I live in one is almost zero?
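A toy sketch of the distinction I'm worried about (natural density vs. cardinality; the multiples of a million are just an arbitrary example, nothing specific to Hume worlds): within the naturals, the multiples of $10^{6}$ and the non-multiples are both countably infinite, yet the fraction of numbers up to $n$ that are multiples tends to one in a million:

$$d(A) = \lim_{n\to\infty} \frac{|A \cap \{1,\dots,n\}|}{n}, \qquad d\bigl(\{10^{6}k : k \in \mathbb{N}\}\bigr) = 10^{-6}, \qquad \bigl|\{10^{6}k : k \in \mathbb{N}\}\bigr| = \aleph_{0}.$$

If ordered and chaotic regions relate like that, then "just as many" in the cardinality sense is compatible with the probability of finding myself in an ordered region being effectively zero.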
No. We talked about evidential support, not predictive power. Inhabitants of a Hume world are obviously right to explain flying pigs et al. by a Hume-world theory, even if they cannot predict anything.
So now I have skimmed the "Dust theory FAQ" to which Z_M_Davis linked (thanks again!).
To the question
Q5: How seriously do you take the Dust Theory yourself?
Egan replies:
...A5: Not very seriously, although I have yet to hear a convincing refutation of it on purely logical grounds. For example, some people have suggested that a sequence of states could only experience consciousness if there was a genuine causal relationship between them. The whole point of the Dust Theory, though, is that there is nothing more to causality than the correlations between states…
Where do I say or imply that? Did you read it at all?
Why don't you apply the principle of charity for once?
Anyway, compare:
So in 2. I have now prolonged the mystery. Is it less mysterious?
If the universe is completely non-deterministic with infinite random events happening, shouldn't the odds of my living in the specific sub-universe that appears fully deterministic be almost indistinguishable from zero?
As I said, I want to argue that the sizes of ordered and chaotic regions are of the same cardinality.
I guess we're calling it the Humeiform theory - isn't supported by any conceivable block of evidence, including that which actually holds true
Just untrue. If pigs start to fly, etc., you'd better remember this theory. Besides, I repeat that in my opinion, the (controverted, granted, but this is definitely not a closed case) existence of qualia, mental causation and indeterministic processes already gives support.
Using early IA techniques is probably risky in most cases. Committed altruists might have a general advantage here.