All of spuckblase's Comments + Replies

(2) looks awfully hard, unless we can find a powerful IA technique that also, say, gives you a 10% chance of cancer. Then some EAs devoted to building FAI might just use the technique, and maybe the AI community in general doesn’t.

Using early IA techniques is probably risky in most cases. Committed altruists might have a general advantage here.

Risky Machines: Artificial Intelligence as a Danger to Mankind

I like your non-fiction style a lot (I don't know your fictional stuff). I often get the impression you're in total control of the material. Very thorough yet original, witty and humble. The exemplary research paper. Definitely more Luke than Yvain/Eliezer.

Navigating the LW rules is not intended to require precognition.

Well, it was required when (negative) karma for Main articles increased tenfold.

1thomblake
Yes, or when downvotes were limited without warning.

Do you still want to do this?

To be more specific:

I live in Germany, so the timezone is GMT+1. My preferred time would be on a workday sometime after 8 pm (my time). Since I'm a German native speaker, and the AI has the harder job anyway, I offer: 50 dollars for you if you win, 10 dollars for me if I do.

I agree in large parts, but it seems likely that value drift plays a role, too.

Well, I'm somewhat sure (80%?) that no human could do it, but...let's find out! Original terms are fine.

I'd bet up to fifty dollars!?

2lessdazed
What I want to know is whether you are one of those who think no superintelligence could talk its way out in two hours, or just no human. If not with a probability of literally zero (or perhaps one for the ability of a superintelligence to talk its way out), approximately what probability? Regardless, let's do this some time this month. As far as betting is concerned, something similar to the original seems reasonable to me.

Ok, so who's the other one living in Berlin?

0yttrium
Me too!
0blob
That's me. I found my way here through HP:MoR fairly recently and am currently reading through the sequences. I'm also a mathematician who, surprisingly, never attended a lecture on probability theory. Eliezer insistently praises Jaynes' "Probability Theory", so I also got that to fill in the blanks. If you're interested, I'd like to meet and chat. I'll send you a PM.

If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.

In that case, I'd like to participate as gatekeeper. I'm ready to put some money on the line.

BTW, I wonder if Clippy would want to play a human, too.

Some have argued that a machine cannot reach human-level general intelligence, for example see Lucas (1961); Dreyfus (1972); Penrose (1994); Searle (1980); Block (1981). But Chalmers (2010) points out that their arguments are irrelevant: To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but h

... (read more)

it is standard in a rational discourse to include and address opposing arguments, provided your audience includes anyone other than supporters already. At a minimum, one should state an objection and cite a discussion of it.

This is not a rational discourse but part of an FAQ, providing explanations/definitions. Counterarguments would be misplaced.

For those who read German or can infer the meaning: philosopher Christoph Fehige shows a way to embrace utilitarianism and dust specks.

"Literalness" is explained in sufficient detail to get a first idea of the connection to FAI, but "Superpower" is not.

going back to the 1956 Dartmouth conference on AI

maybe better (if this is good English): going back to the seminal 1956 Dartmouth conference on AI

There are many types of digital intelligence. To name just four:

Readers might like to know what the others are and why you chose those four.

Relevant? (A fake ad by renowned artist Katerina Jebb)

Die Forscher kombinieren Daten aus Informatik und psychologischen Studien. Ihr Ziel: Eine Not-to-do-Liste, die jedes Unternehmen bekommt, das an künstlicher Intelligenz arbeitet.

Rough translation:

The researchers combine data from computer science and psychological studies. Their goal: a not-to-do list, given to every organization working on artificial intelligence.

I don't see a special problem...evaluate the arguments, try to correct for biases. Business as usual. Or do you suspect there is a new type of bias at work here?

I took it. Thanks for this, I'm excited about the results.

Good translation! I'm through the whole text now, did proofreading and changed quite a bit; some terminological questions remain. After re-reading the original in the process, I think the English FAQ needs some work (unbalanced sections, winding sentences, etc.). But as a non-native speaker, I don't dare.

1Simon Fischer
Same here. All in all, great job everybody!

At least 29 and 32 are process advice, too.

31: Anything can be done in dialogue (cf. Plato), but probably shouldn't be.

22: Reader of blogs or of papers? What's the target audience?

Further points:

  • Avoid formulas
  • Use key words, catch phrases, highlighting.
  • Use a Summary and/or Conclusion where possible.

First approximation: Make your writing similar to a blockbuster movie.

140wedrifid

First approximation: Make your writing similar to a blockbuster movie.

"Really nice cleavage and explosions" doesn't look nearly as impressive on paper.

Since the Universe’s computational accuracy appears to be infinite, in order for the mind to be omniscient about a human brain it must be running the human brain’s quark-level computations within its own mind; any approximate computation would yield imperfect predictions. In the act of running this computation, the brain’s qualia are generated, if (as we have assumed) the brain in question experiences qualia. Therefore the omniscient mind is fully aware of all of the qualia that are experienced within the volume of the Universe about which it has perfec

... (read more)
3[anonymous]
I don’t see any difference. If the superintelligence is just watching the dots move around, then it isn’t predicting anything. If it knows exactly what the simulation is going to do, then it must be performing the same computations that the computer running the cellular automaton is doing. Amongst these computations must be the computations that generate the qualia that the entity in Game of Life experiences. Therefore the superintelligence also experiences the qualia.

I promised to give you feedback on your wikibook. Some quick thoughts:

There is a ton of false or at least controversial stuff (e.g., "Disappointment is always something positive"; "instrumental rationality" = "instrumentale Rationalität", whereas it's "instrumentelle Rationalität") or stuff that cannot be understood without further knowledge (what is your average reader to make of the words "The Litany of Gendlin"?).

The preface lacks footnotes, links, or an outline.

You're obviously just getting started on this project, so maybe you should wait for EY's book(s) on rationality and translate those instead?

Let's meet September 10th in Munich (see http://www.doodle.com/u49xxi6z4zqbihqa). Maybe we can attract a few more people with a definite time and setting.

I would definitely attend, but not on the first two weekends in August. The 5th is a Friday, which may be problematic - at least for some - too. I propose a Saturday/Sunday meetup later in August. The 20th maybe? End of July would also be possible.

0David Althaus
Ah, the problem is this guy is away from August 7th to September 7th. So maybe a weekend meetup some day in September? Maybe we should start a Google group to discuss these things...

I live in Berlin, but Munich would be fine. Not in June though.

No. He seems to talk about the species, and not its members.

0timtyler
I wasn't talking about individuals. Homo erectus didn't launch a civilisation. Nor did Homo habilis, Homo cepranensis, Homo antecessor, Homo heidelbergensis, Homo rhodesiensis, Homo neanderthalensis, etc. That is relevant data.

Well, if you stipulate that "abstract truth-seeking" has nothing whatsoever to do with my getting along in the world, then you're right I guess.

0[anonymous]
You're not going to believe this, but I actually broke my sense of causality. @_@

Seems to me you're conflating different concepts: "being the reason for" and "being the cause of":

compare what an enemy of determinism could say: "we have no reason to listen to you if your theory is false and no reason to listen if it's true either". Now what?

0cousin_it
Let's drop abstract truth-seeking for a moment and talk about instrumental values instead. Believing in causality is useful in a causal world and neutral in an acausal one. Disbelieving in causality is harmful in a causal world and likewise neutral in an acausal one. So, if you assign nonzero credence to the existence of causality (as you implied in a comment above: "why does everybody assume I'm a die-hard believer?"), you'd do better by increasing this credence to 100%, because doing so has positive utility in the causal world (to which you have assigned nonzero credence) and doesn't matter in the acausal one.
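In expected-utility form, the dominance argument above can be sketched as follows (the utility labels u > 0 for believing in a causal world and h < 0 for disbelieving are assumed for illustration, not taken from the comment):

$$EU(\text{believe}) = p \cdot u + (1-p) \cdot 0 = p\,u, \qquad EU(\text{disbelieve}) = p \cdot h + (1-p) \cdot 0 = p\,h.$$

Since u > 0 > h, we get EU(believe) > EU(disbelieve) for every credence p > 0 in a causal world: believing weakly dominates, because it can only help in the causal case and costs nothing in the acausal one.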
0[anonymous]
I can't parse your comment. Are you saying that, conditioned on your theory being true, our beliefs "should" somehow causally update in response to your arguments? That's obviously false.

Now we're getting to the heart of it. Upvoted. What does it mean to live in a Hume world? For example, we would have to accept the existence of non-reducible mental states (everybody here granted the consistency of the theory until now) and take everything on faith. But indeed we cannot take anything on faith, since we cannot think, if thinking is a causal notion!?

Suppose for the sake of argument we're not living in a Hume world, but had massive, perhaps infinite computing power. We could simulate so many Hume worlds that there are some with order and inh... (read more)

First, none of us are being as rude to you as you are to us in this comment alone. If you can't stand the abuse you're getting here, then quit commenting on this post.

Oh, I can take the abuse, I'm just wondering.

Second, we've given this well more than a few minutes' discussion, and you've given us no reason to believe that we misunderstand your theory

At least at first, all I got was accusations and incredulous stares.

if a theory doesn't help us get what we actually want, it really is of no use to us

If you want the truth, you have to consider being wrong even about your darlings, say, prediction.

2RobinZ
Do you actually believe this theory that you have proposed? Because we aren't arguing that it's logically impossible, we're explaining why we don't believe it.

Why does everybody assume I'm a die-hard believer in this theory?

2cousin_it
No such assumption required. For example, if you have 10% credence in your theory, the same 10% says you're defending it by accident. Viewed another way, we have no reason to listen to you if your theory is false and no reason to listen if it's true either. Please apply this logic to your beliefs and update.
0[anonymous]
We don't need to assume that. If you have 10% credence for your theory, my reasoning applies for that 10%.

From your first comment on my post, you were really aggressive. Arguments are fine, but why always the personal attacks? I tell you what might be going on here: you saw the post, couldn't make sense of it after a quick glance, and decided it was junk and an easy way to gain reputation and boost your ego by bashing. And you are not alone. There are lots of haters, and nobody who just said, "OK, I don't believe it, but let's discuss it," and stopped hitting the guy over the head.

The theory is highly counterintuitive, I said as much, but it is worth at least a f... (read more)

2RobinZ
Spuckblase, two things.

First, none of us are being as rude to you as you are to us in this comment alone. If you can't stand the abuse you're getting here, then quit commenting on this post.

Second, we've given this well more than a few minutes' discussion, and you've given us no reason to believe that we misunderstand your theory - you just object to our categorical dismissal of it. I am perfectly willing to believe that the philosophers you discussed this with gave you credit for making an interesting argument - philosophers are generous like that - and for all its faults, your theory is consistent. But around here, interesting is a matter of writing style, and consistent is a sub-minimal requirement: we demand useful. None of us are rationalists just for the lulz - if a theory doesn't help us get what we actually want, it really is of no use to us. And by that standard, any skeptical hypothesis is a waste of time, including your proposed Humeiform worldview, when other hypotheses actually work.

Edit circa 2014: the Slacktivist blog moved (mostly) to a new website - this is the new link to the "sub-minimal requirement" post.
0cousin_it
Your theory says you can't cause our beliefs to change and you shouldn't be surprised about it. It also implies that you defend it by accident, not because it's true. The good news is that you have an obvious upgrade right ahead. Not all of us are so lucky.

Downvoted again. Phew. Maybe you just tell me where I said or implied it?

0RobinZ
No, you're right - you didn't say that. Your theory maintains that all prediction is impossible, but it doesn't maintain that all statements are false or unknowable.

Thanks but no thanks. I do know this really, really basic stuff - I just don't agree. Instead of just postulating that all explanations have to be tied to prediction, why don't you try to rebut the argument? Again: inhabitants of a Hume world are right to explain their world with this Hume-world theory. They just happen to live in a world where no prediction is possible. So explanation should be conceived independently of prediction. Not every explanation needs to be tied to prediction.

5Sideways
Just because what you believe happens to be true doesn't mean you're right to believe it. If I walk up to a roulette wheel, certain that the ball will land on black, and it does--then I still wasn't right to believe it would.

Hypothetical Hume-worlders, like us, do not have the luxury of access to reality's "source code": they have not been informed that they exist in a hypothetical Hume-world, any more than we can know the "true nature" of our world. Their Hume-world theory, like yours, cannot be based on reading reality's source code; the only way to justify Hume-world theory is by demonstrating that it makes accurate predictions.

Arguably, it does make at least one prediction: that any causal model of reality will eventually break down. This prediction, to put it mildly, does not hold up well to our investigation of our universe.

Alternatively, you could assert that if all possibilities are randomly realized, we might (with infinitesimal probability) be living in a world that just happened to exactly resemble a causal world. But without evidence to support such a belief, you would not be right to believe it, even if it turns out to be true. Not to mention that, as others have mentioned in this thread, unfalsifiable theories are a waste of valuable mental real estate.

Ok, it seems that if you're right to choose density over cardinality then it's a blow to my proposal. I'm still trying to figure it out. Suppose the universe is an infinite Hume world. So is it true that even though there are just as many ordered regions, the likelihood that I live in one is almost zero?

No. We talked about evidential support, not predictive power. Inhabitants of a Hume world are obviously right to explain flying pigs et al. by a Hume-world theory, even if they cannot predict anything.

1RobinZ
Err, wrong.

  1. Evidential support is directly tied to predictive power. That's what it means to be supported by the evidence - that it predicted the evidence over the alternatives.
  2. Explanations are directly tied to predictive power. That's what it means to explain things - that those things are predicted to occur instead of the alternatives.

This is really, really basic stuff - dating back at least to Karl Popper's falsifiability, if not further. If you don't know it, you have a long way to go before you can reasonably consider trying to calculate the fundamental nature of the universe.

So now I scanned over the "Dust theory FAQ" to which Z_M_Davis linked (thanks again!)

To

Q5: How seriously do you take the Dust Theory yourself?

Egan replies:

A5: Not very seriously, although I have yet to hear a convincing refutation of it on purely logical grounds. For example, some people have suggested that a sequence of states could only experience consciousness if there was a genuine causal relationship between them. The whole point of the Dust Theory, though, is that there is nothing more to causality than the correlations between state

... (read more)

Where do I say or imply that? Did you read it at all?

0spuckblase
Downvoted again. Phew. Maybe you just tell me where I said or implied it?

Why don't you apply the principle of charity for once?

Anyway, compare:

  1. The universe was created in the big bang.
  2. God created the big bang.

So in 2 I have now prolonged the mystery. Is it less mysterious?

2Alicorn
I employ the principle of charity when someone's writing is unclear and they could be saying any of several things, some of which would make sense and some of which wouldn't. Then the principle of charity suggests that I interpret the unclarity as the possibility that makes sense. Are you saying that I misunderstand you, or do you just want to throw up "charity" as a defense force field for when people who do not agree with you express that disagreement?

As for your comparison: The move to God is unmotivated, unlike the mystery-postponing moves we make based on evidence and logical inference. Also, God is one big, conspicuous, intractable mystery, not lots of little ones, which is exactly what I complained about in your theory of causation. So it is a comparison that is extremely unfavorable to what you seem to be defending.

If the universe is completely non-deterministic with infinite random events happening, shouldn't the odds of my living in the specific sub-universe that appears fully deterministic be almost indistinguishable from zero?

As I said, I want to argue that the sizes of ordered and chaotic regions are of the same cardinality.

2Psychohistorian
I'm not quite sure what it means that you "want to argue ... the same cardinality." Argue it or don't. As near as I can tell, you didn't, or at least you didn't argue how this prevents our universe from being overwhelmingly strong evidence against this theory.

Still, identical cardinality wouldn't get you out of this one. The set of reals with 0 < x < 1 has the same cardinality as the set of reals with 0 < x < infinity. This does not mean that if I pick a number at random out of the latter, I am just as likely to pick in the 0-1 range as I am to pick outside of it. Please correct me if this analogy is somehow inappropriate.

If I understand the gist of the theory, saying that our universe is acausal is saying that any random causally unexplainable event could occur at any time. If this theory is true, I should expect with extraordinarily high probability to see at least one acausal event (and, for that matter, I should expect with high probability for the universe to spontaneously convert to "static," which would unmake me). Since an acausal event wouldn't necessarily destroy me, this theory can't even cheat by using the anthropic principle.

Events that are predicted with overwhelming probability never happening is about the most damning evidence against a theory that exists. Events that are predicted with unbelievably low probability happening not only often but invariably is also about the most damning evidence against a theory that exists.

The theory is admittedly undisprovable, so you can take some comfort in never being proven wrong, but you really, really shouldn't. Non-disprovability is generally a very undesirable attribute, at least if you care about finding the truth.
0RobinZ
That's irrelevant. The density of ordered points within the region of possibilities is what is relevant, and that density is almost zero.
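To make the density point concrete, here is a hedged worked example (the uniform distribution is an assumed stand-in for "picking a region at random"; nobody in the thread specified a measure):

$$|(0,1)| = |(0,N)| = 2^{\aleph_0}, \qquad P\big(X \in (0,1)\big) = \frac{1}{N} \to 0 \ \text{as } N \to \infty, \quad X \sim \mathrm{Unif}(0,N).$$

The two intervals have the same cardinality for every N, yet the probability of a random draw landing in the smaller one shrinks toward zero; equal cardinality of ordered and chaotic regions is likewise compatible with the chance of finding oneself in an ordered region being almost zero.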

I guess we're calling it the Humeiform theory - isn't supported by any conceivable block of evidence, including that which actually holds true

Just untrue. If pigs start to fly, etc., you'd better remember this theory. Besides, I repeat that in my opinion, the (controverted, granted, but this is definitely not a closed case) existence of qualia, mental causation and indeterministic processes already gives support.

3RobinZ
If pigs start to fly, that doesn't support the Humeiform theory - it just undermines (some of) its competitors. Being as the Humeiform theory predicts absolutely nothing, it can't possibly be a better predictor than any theory which predicts anything at all correctly. The only way it can win is if no theory can do so - in which case it, being the simplest, wins by default.