Comment author: simplicio 01 October 2014 06:55:55PM 4 points

completely ignoring the actual outcome seems iffy to me

That's because we live in a world where people's inner states are not apparent, perhaps not even to themselves. So we fall back on (a) what a reasonable person would believe, and (b) what actually happened. The latter is unfortunate in that it condemns many who are merely morally unlucky and acquits many who are merely morally lucky, but that's life. The actual bad outcomes serve as "blameable moments". What can I say - it's not great, but better than speculating on other people's psychological states.

In a world where mental states could be subpoenaed, Clifford would have both a correct and an actionable theory of the ethics of belief; as it is, I think it correct but not entirely actionable.

I don't know what a "genuine extrapolated prior" is.

That which would be arrived at by a reasonable person (not necessarily a Bayesian calculator, but somebody not actually self-deceptive) updating on the same evidence.

A related issue is sincerity; Clifford says the shipowner is sincere in his beliefs, but I tend to think in such cases there is usually a belief/alief mismatch.

I love this passage from Clifford and I can't believe it wasn't posted here before. By the way, William James mounted a critique of Clifford's views in an address you can read here; I encourage you to do so, as James presents some cases that are interesting to think about if you (like me) largely agree with Clifford.

Comment author: Cyan 01 October 2014 07:41:26PM 0 points

That's because we live in a world where... it's not great, but better than speculating on other people's psychological states.

I wanted to put something like this idea into my own response to Lumifer, but I couldn't find the words. Thanks for expressing the idea so clearly and concisely.

Comment author: KatjaGrace 01 October 2014 07:06:35PM 2 points

I'm not sure I followed that correctly, but I take it you are saying that making brain emulations involves biological intelligence (the emulation makers) acting on biological intelligence (the emulations). Which is quite right, but it seems like intelligence acting on intelligence should only (as far as I know) produce faster progress if there is some kind of feedback - if the latter intelligence goes on to make more intelligence etc. Which may happen in the emulation case, but after the period in which we might expect particularly fast growth from copying technology from nature. Apologies if I misunderstand you.

Comment author: Cyan 01 October 2014 07:26:42PM *  1 point

I wasn't talking about faster progress as such, just about a predictable single large discontinuity in our capabilities at the point in time when the em approach first bears fruit. It's not a continual feedback, just an application of intelligence to the problem of making biological computations (including those that implement intelligence) run on simulated physics instead of the real thing.

Comment author: Lumifer 01 October 2014 04:51:24PM -1 points

The key to the shipowner's... blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.

Pretty much everyone does that almost all the time. So, is everyone blameworthy?

Of course, if everyone is blameworthy then no one is.

Comment author: Cyan 01 October 2014 07:08:17PM *  2 points

I would say that I don't do that, but then I'd pretty obviously be allowing the way I desire the world to be to influence my assessment of the actual state of the world. I'll make a weaker claim -- when I'm making a conscious effort to figure out how the world is and I notice myself doing it, I try to stop. Less Wrong, not Absolute Perfection.

Pretty much everyone does that almost all the time. So, is everyone blameworthy? Of course, if everyone is blameworthy then no one is.

That's a pretty good example of the Fallacy of Gray right there.

Comment author: KatjaGrace 01 October 2014 04:21:01PM 2 points

In the emulation case, how does intelligence acting on itself come into the picture? (I agree it might do after there are emulations, but I'm talking about the jump from capabilities prior to the first good emulation to those of emulations).

Comment author: Cyan 01 October 2014 05:29:57PM *  1 point

Hmm... let me think...

The materialist thesis implies that a biological computation can be split into two parts: (i) a specification of a brain-state; (ii) a set of rules for brain-state time evolution, i.e., physics. When biological computations run in base reality, brain-state maps to program state and physics is the interpreter, pushing brain-states through the abstract computation. Creating an em then becomes analogous to using Futamura's first projection to build in the static part of the computation -- physics -- thereby making the resulting program substrate-independent. The entire process of creating a viable emulation strategy happens when we humans run a biological computation that (i) tells us what is necessary to create a substrate-independent brain-state spec and (ii) solves a lot of practical physics simulation problems, so that to generate an em, the brain-state spec is all we need. This is somewhat analogous to Futamura's second projection: we take the ordered pair (biological computation, physics), run a particular biological computation on it, and get a brain-state-to-em compiler.
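
To make that two-part decomposition concrete, here is a minimal sketch. It's my own illustrative code, not anything from actual emulation research: `BrainState`, `physics`, and `run` are stand-in names, and the dynamics are a toy placeholder.

```python
from dataclasses import dataclass

@dataclass
class BrainState:
    # Part (i): the brain-state spec is pure data
    # (a stand-in for a connectome plus dynamical variables).
    potentials: list

def physics(state: BrainState) -> BrainState:
    """Part (ii): a fixed rule for one tick of time evolution (toy dynamics)."""
    return BrainState([0.9 * v for v in state.potentials])

def run(state: BrainState, ticks: int) -> BrainState:
    """Base reality as 'interpreter': physics pushes the state
    through the abstract computation, one tick at a time."""
    for _ in range(ticks):
        state = physics(state)
    return state

print(run(BrainState([1.0, 2.0]), 3).potentials)  # [0.729..., 1.458...] up to float rounding
```

In this framing, an em is what you get once `physics` has been baked into the program itself, so that the resulting substrate-independent function needs only the brain-state spec as input.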

So intelligence is acting on itself indirectly through the fact that an "interpreter", physics, is how reality manifests intelligence. We aim to specialize physics out of the process of running the biological computations that implement intelligence, and by necessity, we're using a biological computation that implements intelligence to accomplish that goal.

Comment author: V_V 28 September 2014 04:07:07PM 1 point

What would the source code of an Omega able to predict an AIXI look like?

Comment author: Cyan 01 October 2014 12:21:02PM 1 point

It won't have source code per se, but one can posit the existence of a halting oracle without generating an inconsistency.
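
To make the distinction concrete, here's a small sketch (my own construction, purely illustrative): a halting oracle can be posited as an interface we reason about, even though no program can correctly implement it.

```python
from abc import ABC, abstractmethod

class HaltingOracle(ABC):
    """Axiomatically specified: halts(p, i) is True iff program p halts
    on input i. Consistent to assume, but impossible to implement --
    any implementation could be fed the usual diagonal program that
    halts exactly when the oracle predicts it won't."""

    @abstractmethod
    def halts(self, program: str, inp: str) -> bool: ...
```

An Omega able to predict an AIXI lives at this level: it's defined by what it computes relative to an oracle, not by any source code we could write down.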

Comment author: KatjaGrace 30 September 2014 03:20:59AM 10 points

In my understanding, technological progress almost always proceeds relatively smoothly (see algorithmic progress, the performance curves database, and this brief investigation). Brain emulations seem to represent an unusual possibility for an abrupt jump in technological capability, because we would basically be ‘stealing’ the technology rather than designing it from scratch. Similarly, if an advanced civilization kept their nanotechnology locked up nearby, then our incremental progress in lock-picking tools might suddenly give rise to a huge leap in nanotechnology from our perspective, whereas earlier lock-picking progress wouldn’t have given us any noticeable nanotechnology progress. If this is an unusual situation, however, it seems strange that the other most salient route to superintelligence - artificial intelligence designed by humans - is also often expected to involve a discontinuous jump in capability, but for entirely different reasons. Is there some unifying reason to expect jumps in both routes to superintelligence, or is it just coincidence? Or do I overstate the ubiquity of incremental progress?

Comment author: Cyan 01 October 2014 03:52:56AM *  2 points

My intuition -- and it's a Good one -- is that the discontinuity is produced by intelligence acting to increase itself. It's built into the structure of the thing acted upon that it will feed back to the thing doing the acting. (Not that unique an insight around these parts, eh?)

Okay, here's a metaphor(?) to put some meat on the bones of this comment. Suppose you have an interpreter for some computer language and you have a program written in that language that implements partial evaluation. With just these tools, you can make the partial evaluator (i) act as a compiler, by running it on an interpreter and a program; (ii) build a compiler, by running it on itself and an interpreter; (iii) build a generic interpreter-to-compiler converter, by running it on itself and itself. So one piece of technology "telescopes" by acting on itself. These are the Three Projections of Doctor Futamura.
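
For the curious, here's a toy, self-contained sketch of the machinery (my own code, not from Futamura's papers): a tiny interpreter plus a crude partial evaluator whose output is residual straight-line code with the interpretive dispatch removed.

```python
def interpret(program, data):
    """Interpreter: run a list of (op, arg) instructions on `data`."""
    x = data
    for op, arg in program:
        x = x + arg if op == "add" else x * arg
    return x

def specialize(program):
    """Partial evaluator for the static `program` argument of `interpret`:
    emits residual code in which the dispatch loop has been unrolled away."""
    lines = ["def residual(data):", "    x = data"]
    for op, arg in program:
        lines.append(f"    x = x {'+' if op == 'add' else '*'} {arg}")
    lines.append("    return x")
    namespace = {}
    exec("\n".join(lines), namespace)
    return namespace["residual"]

# First projection in miniature: specializing the interpreter with respect
# to a fixed program yields a directly executable ("compiled") function.
double_then_inc = [("mul", 2), ("add", 1)]
compiled = specialize(double_then_inc)
assert interpret(double_then_inc, 10) == compiled(10) == 21
```

The second and third projections would require `specialize` to be self-applicable -- written so it can be run on its own text -- which this toy version is not; the telescoping structure is the same, though.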

Comment author: [deleted] 30 September 2014 10:55:17PM 3 points

This is just curiosity, but what community has brought "funge" to have this meaning? The only definition of "funge" I can find is archaic references to either fungus or simpletons.

In response to comment by [deleted] on The Puzzle of Faith and Belief
Comment author: Cyan 30 September 2014 11:33:34PM *  3 points

Fungible. The term is still current within economics, I believe. If something is fungible, it stands to reason that one can funge it, nu?

Comment author: satt 30 September 2014 10:17:18PM *  2 points

Nice capsule summary of LW. One minor suggestion about a personal hobby-horse:

topics like goal factoring/funging

Might a simple but less jargon-y word/phrase replace "funging" here? (I'm actually not 100% sure what it means here since I'm used to always seeing "against" after "funge"...)

[Edited to delete an extra "to".]

Comment author: Cyan 30 September 2014 11:04:17PM *  2 points

As Vaniver mentioned, it relates to exploring trade-offs among the various goals one has / things one values. A certain amount of it arises naturally in the planning of any complex project, but the deliberate practice of introspecting on how one's goals decompose into subgoals, and on how those subgoals might be traded off against one another to achieve a more satisfactory state of things, seems novel, distinct, and conceptually intricate enough to deserve its own label.

Comment author: fubarobfusco 30 September 2014 07:49:43PM *  2 points

Most of them are geeks/nerds in general, or at least have seen themselves as such at some point in their lives.

Comment author: Cyan 30 September 2014 09:36:32PM *  3 points

Yeesh. These people shouldn't let feelings or appearances influence their opinions of EY's trustworthiness -- or "morally repulsive" ideas like justifications for genocide. That's why I feel it's perfectly rational to dismiss their criticisms -- that and the fact that there's no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don't really deserve the insights that LW provides.

There, that's 8 out of 10 bullet points. I couldn't get the "manipulation" one in because "something sinister" is underspecified; as to the "censorship" one, well, I didn't want to mention the... thing... (ooh, meta! Gonna give myself partial credit for that one.)

Ab, V qba'g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg'f whfg n wbxr.

Comment author: Lumifer 30 September 2014 07:23:19PM 2 points

An interesting quote. It essentially puts forward the "reasonable person" legal theory. But that's not what's interesting about it.

The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the wreck of the ship did arrive unscathed at its destination, would the shipowner still be guilty?

Comment author: Cyan 30 September 2014 09:09:43PM *  1 point

He had doubts, he extinguished them, and that's what makes him guilty.

This is not the whole story. In the quote

He had acquired his belief not by honestly earning it in patient investigation, but by stifling his doubts.

you're paying too much heed to the final clause and not enough to the clause that precedes it. The shipowner had doubts that, we are to understand, were reasonable on the available information. The key to the shipowner's... I prefer not to use the word "guilt", with its connotations of legal or celestial judgment -- let us say, blameworthiness, is that he allowed the way he desired the world to be to influence his assessment of the actual state of the world.

In your "optimistic fellow" scenario, the shipowner would be as blameworthy, but in that case, the blame would attach to his failure to give serious consideration to the doubts that had been expressed to him.

And going beyond what is in the passage, in my view, he would be equally blameworthy if the ship had survived the voyage! Shitty decision-making is shitty decision-making, regardless of outcome. (This is part of why I avoided the word "guilt" -- too outcome-dependent.)
