Comment author: MendelSchmiedekamp 04 December 2009 04:35:33PM 1 point [-]

Or more succinctly and broadly, learn to:

  • pay attention

  • correct bias

  • anticipate bias

  • estimate well

With a single specific enumeration of means to accomplish these competencies, you risk ignoring other possible curricula, and you encourage the same blind spots across the entire community of aspiring rationalists so educated.

Comment author: PhilGoetz 01 December 2009 03:17:10AM 0 points [-]

No write-up. The idea is that you can decide between two situations by choosing the one with greater information or complexity. The trickiness is in deciding how to measure information or complexity, and in deciding what to measure the complexity of. You probably don't want to conclude that, in a closed system, the ethically best thing to do is nothing because doing anything increases entropy. (Perhaps using a measure of computation performed, instead of a static measure of entropy, would address that.)
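
A minimal sketch of such a decision rule, using compressed length as a crude stand-in for complexity (the choice of measure, and of what to feed it, is exactly the tricky part noted above; the byte-string encoding of a "situation" is an assumption of the sketch):

    import zlib

    def complexity(state_bytes):
        """Crude complexity proxy: length of the compressed description.
        (Kolmogorov complexity itself is uncomputable.)"""
        return len(zlib.compress(state_bytes, 9))

    def choose(state_a, state_b):
        """Pick the situation whose description has greater complexity."""
        return state_a if complexity(state_a) >= complexity(state_b) else state_b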

This immediately gives you a lot of ethical principles that are otherwise difficult to justify, such as valuing evolution, knowledge, diversity, and the environment, and condemning (non-selective) destruction and censorship. Also, whereas most ethical systems tend toward extreme points of view, the development of complexity is greatest when control parameters take on intermediate values. Conservatives value stasis; progressives value change; those who wish to increase complexity aim for a balance between the two.

(The equation in my comment is not specific to that idea, so it may be distracting you.)

Comment author: MendelSchmiedekamp 01 December 2009 06:05:02PM 1 point [-]

This parallels some of the work I'm doing with fun-theoretic utility, at least in terms of using information theory. One big concern is which measure of complexity to use, as you certainly don't want a classical information measure - otherwise Kolmogorov-random outcomes will be preferred to all others.
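
That failure mode is easy to exhibit with a compression-based proxy like the sketch above: incompressible noise maximizes the measure, so a naive complexity maximizer prefers static to structure. A toy demonstration:

    import os, zlib

    structured = b"abab" * 2048             # highly regular 8 KiB
    noise = os.urandom(8192)                # effectively incompressible 8 KiB

    print(len(zlib.compress(structured)))   # small: the regularity compresses away
    print(len(zlib.compress(noise)))        # ~8 KiB: noise doesn't compress, so it "wins"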

In response to Rational lies
Comment author: MendelSchmiedekamp 24 November 2009 07:23:33PM 0 points [-]

Lies, truth, and radical honesty all get in the way of understanding what is going on here.

You are communicating with someone; several of the many constantly changing layers of this communication (in addition to status signaling, empathy broadcasting, and performatives) are the transfer of information from you to that someone. The effectiveness of this communication and the accuracy of the information when received are things we can talk about fairly easily in terms of both instrumental (effectiveness) and epistemic (accuracy) rationality.

To classify that communication as a lie, as truth, or as honest (from your own perspective) involves unpacking social signals and conscious and unconscious intent, and is entirely irrelevant to any rational goal.

Considering that our societies place value on the signals these terms carry, it may matter how our signals are received. This is an instrumental-rationality question about increasing the likelihood of being seen as honest or as telling a lie.

It is essential not to confuse these two very different things. One of the first clues is to realize that when we talk about truth in rationality we mean something closely related to accuracy; in communication it may be the same word, but it means something entirely different. This means that we should ban ourselves from using the word until we are quite sure we know what we mean by it.

Comment author: pre 20 November 2009 09:24:53PM 2 points [-]

Thanks folks - sounds like the entire point of quantum computing is to avoid the kind of differences in interpretation that Copenhagen/MWI are concerned with, so my suspicion that a many-worlds computational image would help is mistaken. Which is good; I've read around some quantum algorithms a bit and have a better grasp of how they actually work than the terrible "explore all possibilities and pick the best" line that seems to come up so much.

Still leaves me a bit at a loss with these quantum effects in photosynthesis though:

"We have obtained the first direct evidence that remarkably long-lived wavelike electronic quantum coherence plays an important part in energy transfer processes during photosynthesis," said Graham Fleming, the principal investigator for the study. "This wavelike characteristic can explain the extreme efficiency of the energy transfer because it enables the system to simultaneously sample all the potential energy pathways and choose the most efficient one."

Seems likely that line about "simultaneously sampling all the potential energy pathways and choosing the most efficient one" is just as misleading as the similar line in explaining how to quantumly factor a number.

Humm. Oh well. Can't expect to clear up all my confusion in one day. It's Friday Night, I should go find something fun to do.

Comment author: MendelSchmiedekamp 20 November 2009 09:58:23PM 0 points [-]

My post does describe a model based on a Many Worlds interpretation where the probabilities are computed differently depending on whether entanglement occurs - i.e., whether the universes influence each other. It is distinct from the typical model of decoherence.

As for photosynthesis, it ought to behave in much the same way: as a network of states propagating through entangled universes, with the interactions of the states in those branches causing the highest probabilities to be assigned to the branches with the lowest energy barriers.

Of note, there are other, more esoteric models based on even more unusual interpretations of quantum mechanics, but I suspect that's not something we need to get into here.

Comment author: MendelSchmiedekamp 20 November 2009 09:41:45PM 10 points [-]

It's as though no one here has ever heard of the bystander effect. The deadline is January 15th. Setting up a wiki page and saying "anyone's free to edit" is the equivalent of killing this thing.

Also, this is a philosophy, psychology, and technology journal, which means that despite the list of references for Singularity research, you will also need to link this with the philosophical and/or public policy issues that the journal wants you to address (take a look at the two guest editors).

Another worry of mine is that in all the back issues of this journal I looked over, the papers were almost always monographs (and, barring that, had two authors). I suspect that having many authors might kill the chances for this paper.

Comment author: MendelSchmiedekamp 20 November 2009 08:03:42PM 5 points [-]

First of all, consider that a computer is incomplete without a program, so let's just think of a programmed computer - whether the program is in hardware or software doesn't matter for our purposes.

This gives us a system that goes from some known start state to some outcome state through a series of intermediate steps. If each of these steps is deterministic, then the entire system reaches the same outcome in all universes where it had the same starting point.

If those steps are stochastic, perhaps because there is a chance of memory corruption in our computer or because of a random guess, then in some universes the system arrives at a different outcome, based on the probability of that branch of the intermediate states. This can produce many branches, but because each of these branches cannot affect the others, the result is a tree of intermediate states, leading to the outcomes of our computer and its program.
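
A toy version of that tree (the one-bit machine with a small per-step corruption probability is an invented example, not any real hardware):

    def branch(worlds, p_flip):
        """One stochastic step: every world splits into an uncorrupted
        branch and a bit-flipped branch, weighted by probability."""
        out = []
        for state, prob in worlds:
            out.append((state, prob * (1 - p_flip)))   # step runs as intended
            out.append((state ^ 1, prob * p_flip))     # memory corrupted
        return out

    worlds = [(0, 1.0)]           # one start state, probability 1
    for _ in range(3):            # three stochastic steps -> 8 leaves
        worlds = branch(worlds, p_flip=0.01)

    # The branches never interact, so the probability of an outcome is
    # simply the sum of the probabilities of the leaves that reach it.
    print(sum(p for s, p in worlds if s == 0))   # ~0.9706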

Now, both of these are classical computers, but it helps to know what a classical computer looks like in a many worlds interpretation before mapping a quantum computer there. This is because all computers, classical or quantum, share a property: they are computers. This means there must be a path from the starting state to the outcome state. We can influence that path in many ways, but the path is part of how we define and build a computer and a program.

In quantum mechanics, there is a phenomenon called entanglement, which loosely means that the events in very similar worlds can affect the probabilities of events in all of those worlds. You can think of this as the boundaries between the many worlds smoothing out as you get to a small scale.

This means that unlike our stochastic tree of states, the quantum computer can have a more complex structure. It is even possible for two branches to converge back into one, and for branches to cancel out.* In practice, these are more approximate than precise, so you will find a dominant combination of two branches or a near cancellation. Using this interaction, a skilled quantum algorithm designer can use a variety of tricks to make correct answers more likely, by canceling wrong answers and by increasing the probability of correct ones.
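
A minimal sketch of that convergence and cancellation, using the standard Hadamard step on amplitudes (real numbers here) - an illustration of interference in general, not of any particular algorithm:

    import math

    def hadamard(amps):
        """One interfering step: each basis state splits into both states,
        with a sign flip on the 1 -> 1 branch (the standard Hadamard gate)."""
        a0, a1 = amps
        s = 1 / math.sqrt(2)
        return (s * (a0 + a1), s * (a0 - a1))

    state = (1.0, 0.0)        # start in state 0
    state = hadamard(state)   # two branches: (0.707..., 0.707...)
    state = hadamard(state)   # branches recombine: (1.0, 0.0)

    # The two paths into state 1 arrive with opposite signs and cancel;
    # the two paths into state 0 reinforce.  Outcome probabilities are
    # squared amplitudes, not sums of branch probabilities:
    print(state[0] ** 2, state[1] ** 2)   # ~1.0, ~0.0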

There is no uniform solution to this problem. For example, the best-known quantum algorithm, Shor's algorithm for factoring, exploits the frequency (period) of modular exponentiation using number theory. This works well on quantum computers because that frequency is also a critical value in determining the probability of the combination of two quantum variables.
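
The number-theoretic core can be shown classically. Once the period r of a^x mod N is known, gcd(a^(r/2) ± 1, N) usually yields a factor; the quantum speedup is confined to finding r, where the brute-force loop below is replaced by a quantum Fourier transform. A sketch for N = 15:

    from math import gcd

    def period(a, N):
        """Brute-force period of a^x mod N -- the one step that Shor's
        algorithm replaces with a quantum Fourier transform.
        (Assumes gcd(a, N) == 1, so the loop terminates.)"""
        x, val = 1, a % N
        while val != 1:
            val = (val * a) % N
            x += 1
        return x

    N, a = 15, 7
    r = period(a, N)                   # r = 4
    if r % 2 == 0:
        f = gcd(a ** (r // 2) - 1, N)  # gcd(48, 15) = 3
        print(f, N // f)               # prints: 3 5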

In each case, a computer and its program produce a path of execution, but by exploiting the features of, or needing to deal with the problems of, non-determinism and quantum mechanics, the nature of that computation becomes more complex and difficult to see. Any one world's view is not sufficient, especially in the case of quantum computing, where the probabilities which govern a world's execution are not derived solely from within that world.

* I'm fairly sure on this, but I'm a little rusty, so I could be wrong.

Comment author: Douglas_Knight 16 November 2009 11:53:49PM 0 points [-]

So what did you mean by

Note, setting the limit to "no preference" does not resolve the discontinuity. But by intermediate value, there will exist at least one such point in any continuous approximation of the discontinuous function.

Comment author: MendelSchmiedekamp 17 November 2009 04:52:36PM 0 points [-]

I meant that setting the limit to no preference for a given C doesn't yield a globally continuous function, but that when you adjust your preference function to approximate the discontinuous function by a continuous one, the result will contain at least one no-preference point between any two A < B.

Now, perhaps there is a result which says that if you take the limit as you set all discontinuous C to no preference, the resulting function is complete, consistent, transitive, and continuous, but I wouldn't take that to be automatic.

Consider, for example, a step discontinuity, where an entire swath of pA + (1-p)B is stuck on the same set of < and = mappings and then there is a sharp jump to a very large set of < and = mappings at a critical p'. If you map the ordinals to the real line, this is analogous to a y-coordinate jump. To remove this discontinuity you would need to do more than split the preferences at p' around no preference, because all that does is add a single point to the mix. To fully resolve it, you need to add an entire continuous curve, which means a process of selecting new A, B, and C, and showing that the transfinite limit always converges to a valid result.

Comment author: Douglas_Knight 13 November 2009 01:36:04PM 0 points [-]

The function whose continuity is at issue is the function from real numbers to lotteries that mixes A and B. C is being used to build open sets in the space of lotteries of the form of all lotteries better (or worse) than C, whose preimage in the real numbers must be open, rather than half-open.
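
In symbols (a minimal formalization of the above; notation mine):

    \[ f : [0,1] \to \mathcal{L}, \qquad f(p) = pA + (1-p)B \]
    \[ U_C^{\succ} = \{ L \in \mathcal{L} : L \succ C \}, \qquad
       U_C^{\prec} = \{ L \in \mathcal{L} : L \prec C \} \]
    \[ f \ \text{continuous} \iff f^{-1}\big(U_C^{\succ}\big) \ \text{and} \
       f^{-1}\big(U_C^{\prec}\big) \ \text{are open in} \ [0,1] \ \text{for every } C \]

So the preimages must be open intervals in [0,1], never half-open at an interior boundary.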

Comment author: MendelSchmiedekamp 16 November 2009 03:58:59PM 0 points [-]

We are talking about the same thing here, just at different levels of generality. The function you describe is the same as the one I'm describing, except on a much narrower domain (only a single binary lottery between A and B). Then you project the range to just a question about C.

In the specific function you are talking about, you must hold that this is true for all A, B, and C to get continuity. In the function I describe, the A, B, and C are generalized out, so the continuity property is equivalent to the continuity of the function.

Comment author: Douglas_Knight 11 November 2009 04:48:43AM 0 points [-]

Note, setting the limit to "no preference" does not resolve the discontinuity. But by intermediate value, there will exist at least one such point in any continuous approximation of the discontinuous function.

What is the discontinuous function? The function that assigns a preference to a dilemma (particularly, mixed dilemmas parameterized by probabilities)? With a discrete range, that can never be continuous.

I think you are complaining about the name "continuity axiom"; I am not the right target of that complaint! I don't know why it's called that, but I suspect you have jumped from the name to false beliefs about the axiom system.

There is another continuous function, which is the assignment of utilities to lotteries. But I think this is continuous (to the extent that it can be defined) without invoking the continuity axiom. It is more the inverse map, from utilities to indifference-classes of lotteries, that risks not being continuous. I would complain more that this map is not well-defined, but there may be a way of arranging something like indifference-classes to have a finer topology than the order topology (e.g., the left-limit topology or the discrete topology).
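
For reference, the standard definitions behind that last parenthetical (not specific to this thread): the order topology is generated by the open rays

    \[ \{ x : x < a \} \quad \text{and} \quad \{ x : x > a \}, \]

while the left-limit (lower-limit) topology is generated by the half-open intervals \( [a, b) \) and is strictly finer on the real line; the discrete topology, in which every set is open, is finer still.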

Comment author: MendelSchmiedekamp 11 November 2009 06:09:26PM 0 points [-]

I was talking about utility functions, but I can see your point about generalizing the result to the mapping from arbitrary dilemmas to preferences. Realize, though, that preference space isn't discrete.

You can describe it as the function from a mixed dilemma to the joint relation space for < and =, which you can treat as a somewhat more complex version of the ordinals (certainly you can construct a map to a dense version of the ordinals if you have at least 2 dilemmas and a dense probability space). That gives you a notion of the preference space where a calculus concept of continuity does apply (as the continuity axiom is a variation on the intermediate value theorem for this space, which implies typical continuity).

From this perspective, the point I'm making about continuous approximations should make more sense.

Comment author: RobinZ 10 November 2009 04:54:21PM 1 point [-]

Transitivity and Continuity are unnecessary, however.

Comment author: MendelSchmiedekamp 10 November 2009 05:24:09PM 0 points [-]

That is my reading of it too. I know Stuart is putting forward analytic results here; I was concerned that this one was not correctly represented.
