Comment author: [deleted] 04 June 2012 11:01:29PM 0 points [-]

It's my handle @ sonic.net. Thanks in advance!

In response to comment by [deleted] on Suggestion: Less Wrong Writing Circle?
Comment author: APMason 04 June 2012 11:22:14PM 1 point [-]

Sent.

Comment author: [deleted] 03 June 2012 09:42:35PM 0 points [-]

Thanks for the note -- and I'd be very happy to see any other comments you might have.

In response to comment by [deleted] on Suggestion: Less Wrong Writing Circle?
Comment author: APMason 04 June 2012 10:46:47PM 1 point [-]

Okay, I wrote up my thoughts, but it's pretty long and I'm not sure it's fair to post it here (also it's too long for a PM). Do you have an email I can send it to?

Comment author: APMason 04 June 2012 12:49:34PM 4 points [-]

What happens if you're using this method and you're offered a gamble where you have a 49% chance of gaining 1000000utils and a 51% chance of losing 5utils (if you don't take the deal you gain and lose nothing). Isn't the "typical outcome" here a loss, even though we might really really want to take the gamble? Or have I misunderstood what you propose?
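A quick sketch makes the tension explicit. The helper functions here are hypothetical, and "typical outcome" is read as the median of the outcome distribution:

```python
# Sketch of the gamble above: 49% chance of +1000000 utils,
# 51% chance of -5 utils. Compare expected value to the "typical"
# (median) outcome.

def expected_value(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def median_outcome(outcomes):
    """Utility at the 50th percentile of the outcome distribution."""
    cumulative = 0.0
    for p, u in sorted(outcomes, key=lambda pu: pu[1]):
        cumulative += p
        if cumulative >= 0.5:
            return u

gamble = [(0.49, 1_000_000), (0.51, -5)]

print(expected_value(gamble))   # ≈ 489997.45 -- hugely positive
print(median_outcome(gamble))   # -5 -- the "typical outcome" is a small loss
```

So a rule that maximises the typical outcome refuses a gamble with an enormous positive expectation, which is the worry being raised.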

Comment author: [deleted] 02 June 2012 05:43:50PM *  2 points [-]

Okay!


The Dragons of Mars, Chapter One, Part One

Jhalasi, crown prince of Mars, was publicly auditioning a mistress. It was, of course, a highly ceremonial occasion, excruciatingly ritualized, and he was bored. Nonetheless it provided a welcome spectacle for his people. They filled the streets of Arsia Mons, drummers and masked dancers entertaining those who were too far away from the central stage to view the royal ritual. Here and there bonfires flared. On the red planet, burning oxygen was a signal luxury, allowed only during times of special license.

Twelve miles above them, a translucent dome sealed the caldera of Arsia Mons -- the ancient volcano that had spewed forth its last fire when Archaeopteryx fluttered among Terran dinosaurs. The first colonists of Mars carved out their settlements in the flanks of Arsia Mons, building out from a natural cavern system that provided some degree of protection from deadly cosmic radiation. Over time, those settlements had been linked, first via a network of tunnels, and finally, triumphantly, when the great Worldhouse dome was erected. Martian citizens could now walk in the open expanse of the vast caldera, free of pressure-suits or breathing tanks. In the wavering red lights of their fires, they drummed, and they danced.

Jhalasi waited to be introduced to his mistress. The Sleepers had not yet revealed to him a wife, though he was nearly sixteen years of age: twenty-nine, in Terran standard years. Such was not uncommon for one of his descent. There had been, of course, liaisons -- each one carefully vetted by the royal genealogists before it began, and monitored by the Oracles until the affair had run its course. But Martian politics being what they were, the time had come for a more formal arrangement.

The girl was of House Rao. Jhalasi had never met her, but had been assured at great length of her beauty and virtue. Her name was Siannamar. She modeled advances in filtration systems, and rescued orphan mice.

Siannamar Rao was already seated across the stage from him, though of course he could make out very little of her beneath her layers of veils and sateens. She -- and he -- would be expected to dance, later, for the crowd.

First, though, came the endless patriotic affirmations. The singing: first a hymn of loss, for Earth; then a hymn of triumph, for the Void; and last, the hymn of thanks, for Mars, the mother planet. Then the three-fold salute to the Sleepers, honoring their blessings of air, soil, and ice; and finally the affirmation of fire and aether, for the transformative intelligence that humanity carried within itself. This last was the only one that Jhalasi found personally moving. Something about the high, wailing notes of the chant -- the way one's hands were lifted, twining, to the heavens, only to fall back again -- it reminded him of DNA, and the way that information was carried forward in human bodies, through human love. In its blind, brute way, evolution had created a more durable data storage system than anything scientists had yet devised. That part did, truly, give him chills.

After the singing and the salutes came the re-enactment of the story of the Martian Founding. Jhalasi rigorously stifled a yawn as the actors came on stage. To amuse himself as they stepped through a recital he'd seen several dozen times before, he tried guessing at the identity of the mummers beneath the masks. Each actor representing a Founder would be drawn from that Founder's noble House. Maculin and Sen, he hadn't a clue. Rao -- that would probably be Siannamar's brother, he'd met the man at the signing ceremony for the love-contract. Sharp eyes and few words: Jhalasi had liked him. Could probably remember his name, if he really had to.

Sinclair: Jhalasi actually smiled, fractionally. Nothing that the cams would pick up. Still, he'd been hoping for this, and he recognized that sway of hips beneath the awkward, concealing pressure-suit. No, that wasn't it. He just knew her, knew her nearness. He hadn't seen his sister Vihanyasa since she'd been moved out of the line of succession and married off to House Sinclair. But once -- as children, romping together, pranking the Oracles and giving their minders heart-attacks -- he and Vihanyasa had shared one mind.

The last actor to come on stage represented Rajendra, Founder of the royal House. Jhalasi spared him not even a glance. He knew exactly who the actor would be: his own elder brother, Khamsarajan, rendered ineligible for the throne via an accident of birth. Khamsarajan had never joined in the easy hive-mind of the other noble children. Outside, outcast, his bitterness had smouldered, and sought to burn any who came too close.

Jhalasi leaned back on his carved, three-legged stool. It was an artifact of old Earth -- "teak," they called it. It came from a "tree." He'd seen pictures. It was priceless. But not very comfortable.

The actors were now going through the discovery of the Sleepers. Four of them mimed terror, shrinking back as the drums beat fast. One -- Khamsarajan/Rajendra -- stepped forward. The chorus ran onstage from the wings, a dozen dancers each bearing up a piece of the great puppet that represented a Sleeper. Its sinuous, serpentine form glittered with fractal light; its face bore round eyes, a wide smile, and the suggestion of mouse-like fur. It was a beautiful, friendly sight, and one echoed a thousand times in the crowd that surrounded them. The "dragons of Mars" were painted on masks, woven in banners, worked into the very architecture of Arsia Mons. They were part of the royal seal and the central emblem of the Martian flag. Many in the crowd probably even believed that's what the Sleepers actually looked like.

The dancers swirled around Khamsarajan and the others, weaving the sinuous form of the dragon around and among them. In choreographed unison, the Founders all slumped down, miming unconsciousness. Jhalasi let his eyes focus slightly over them, and over the dancers of the chorus, as they quickly and precisely changed out the stage scenery. It would be better if he could close his eyes, but that, the cams would pick up.

Vihanyasa, he thought, with all his specialized intent. How are you? How are you?

The thought would take some time to pass to her. And if she caught it at all, it wouldn't be received as words. The message had to be transmitted by the microorganisms in his own body, coordinating with the microorganisms in hers. They would pass a complex chemical signal that, when taken up by her cerebral cortex, would trigger relevant emotions and memories in her brain. He couldn't predict which ones, exactly, but she'd probably get a memory of herself and him, playing together as children. She'd know he was thinking of her.

That is, if he'd succeeded in activating the transmission mechanism, and if the meaning could pass at all. Touch, between royal siblings, was a fairly reliable method of communication. A gap of meters, as existed between them at the moment, could kill any message. Especially with the crowd pressed so close, and other noble Houses on the stage.

Jhalasi waited. It would take time, in any case, for the communication to work. He watched the show, as the terraforming of Mars continued. "Rajendra" was the first to wake. Khamsarajan pulled off his helmet, and Jhalasi schooled his own features to impassivity at his brother's absurd mimicry of surprise and joy.

It wasn't true, of course, that House Rajendra could survive on the Martian surface without pressure-suits or breathing tanks. It was only propaganda. Or as the tutors had carefully put it, after Jhalasi and Vihanyasa had been caught trying to execute a very dangerous experiment -- it was myth, a kind of truth that uneducated people took literally, but princes and princesses should understand in a more sophisticated way. They should not try to escape the Worldhouse dome without adult supervision: not that it was possible, but they should not try. The story taught that House Rajendra was uniquely bound to Mars, steeped in its biosphere. They heard the will of the Sleepers more clearly than any other lineage. That's what the "Rajendra breathed the air of Mars" part of the story meant -- it didn't mean that they, his descendants, should try it.

(They hadn't been planning to, of course. They weren't that stupid. They had suits and tanks. They had both spat into a pressurized container, and they were going to unseal the container on the surface, to see if the liquid in their spit boiled away at once, even at the bitterly cold surface temperatures. That would've told them all they needed to know. Still, they were both whipped for the disobedience, and Khamsarajan laughed at them when they couldn't sit down after.)

In response to comment by [deleted] on Suggestion: Less Wrong Writing Circle?
Comment author: APMason 03 June 2012 08:57:44PM 3 points [-]

I might be interested in giving a fuller critique of this at some point (but then who the hell am I), but for now I'll confine myself to just one point:

It was, of course, a highly ceremonial occasion...

The reader knows that the narrator knows more about this world than they do. The reader is okay with that. Trying to impart information by pretending that the reader already knows it seems clumsy and distracting to me. Compare with:

It was a highly ceremonial occasion, excruciatingly ritualized, and he was bored.

I think this is fine. No need to pretend you're the reader's chum.

Comment author: drnickbone 26 May 2012 08:33:56PM *  0 points [-]

Here are the variants which make no explicit mention of TDT anywhere in the problem statement. It seems a real strain to describe either of them as unfair to TDT. Yet TDT will be outperformed on them by CDT, unless it resolves never to allow itself to be outperformed on any problem (in TDT über alles fashion).

Problem 1: Omega (who experience has shown is always truthful) presents the usual two boxes A and B and announces the following. "Before you entered the room, I selected an agent at random from the following distribution over all full source-codes for decision theory agents (insert distribution). I then simulated the result of presenting this exact problem to that agent. I won't tell you what the agent decided, but I will tell you that if the agent two-boxed then I put nothing in Box B, whereas if the agent one-boxed then I put big Value-B in Box B. Regardless of how the simulated agent decided, I put small Value-A in Box A. Now please choose your box or boxes."

Problem 2: Our ever-reliable Omega now presents ten boxes, numbered from 1 to 10, and announces the following. "Exactly one of these boxes contains $1 million; the others contain nothing. You must take exactly one box to win the money; if you try to take more than one, then you won't be allowed to keep any winnings. Before you entered the room, I ran multiple simulations of this problem as presented to different agents, sampled uniformly from different possible future universes according to their relative numbers, with the universes themselves sampled from my best projections of the future. I determined the box which the agents were least likely to take. If there were several such boxes tied for equal-lowest probability, then I just selected one of them, the one labelled with the smallest number. I then placed $1 million in the selected box. Please choose your box."
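Omega's box-selection rule in Problem 2 (least-taken box, smallest number on ties) is mechanical enough to sketch. The probability numbers below are made up purely for illustration:

```python
# Omega's rule from Problem 2: place the $1M in the box that sampled
# agents were least likely to take; break ties by the smallest box number.

def select_box(take_probabilities):
    """take_probabilities: dict mapping box number -> probability that a
    sampled agent takes that box. Returns the box Omega fills."""
    lowest = min(take_probabilities.values())
    tied = [box for box, p in take_probabilities.items() if p == lowest]
    return min(tied)  # smallest-numbered box among those tied for lowest

# Hypothetical distribution over boxes 1-10:
probs = {1: 0.05, 2: 0.05, 3: 0.20, 4: 0.10, 5: 0.10,
         6: 0.10, 7: 0.10, 8: 0.10, 9: 0.10, 10: 0.10}
print(select_box(probs))  # 1 -- boxes 1 and 2 tie at 0.05; 1 wins the tie
```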

Comment author: APMason 27 May 2012 02:13:56PM *  2 points [-]

I think the clearest and simplest version of Problem 1 is where Omega chooses to simulate a CDT agent with .5 probability and a TDT agent with .5 probability. Let's say that Value-B is $1000000, as is traditional, and Value-A is $1000. TDT will one-box for an expected value of $500000 (as opposed to $1000 if it two-boxes), and CDT will always two-box, and receive an expected $501000. Both TDT and CDT have an equal chance of playing against each other in this version, and an equal chance of playing against themselves, and yet CDT still outperforms. It seems TDT suffers for CDT's irrationality, and CDT benefits from TDT's rationality. Very troubling.

EDIT: (I will note, though, that a TDT agent still can't do any better by two-boxing - only make CDT do worse).
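Spelling out the expected values in this coin-flip variant (note that under these assumptions one-boxing TDT nets $500000, since it gets nothing when the sim happens to be CDT):

```python
# Expected payoffs in the coin-flip variant of Problem 1:
# Omega simulates a CDT agent or a TDT agent with probability 0.5 each.
# TDT one-boxes, CDT two-boxes; the sim's choice fills or empties Box B.

VALUE_A, VALUE_B = 1_000, 1_000_000

def payoff(my_choice, sim_choice):
    """my_choice/sim_choice: 'one' or 'two'. Box B is full iff the
    simulated agent one-boxed; Box A always holds VALUE_A."""
    box_b = VALUE_B if sim_choice == "one" else 0
    return box_b if my_choice == "one" else box_b + VALUE_A

# The sim is TDT (one-boxes) or CDT (two-boxes), each with probability 0.5:
tdt_ev = 0.5 * payoff("one", "one") + 0.5 * payoff("one", "two")
cdt_ev = 0.5 * payoff("two", "one") + 0.5 * payoff("two", "two")

print(tdt_ev)  # 500000.0
print(cdt_ev)  # 501000.0 -- CDT comes out ahead, as claimed
```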

Comment author: lukeprog 27 May 2012 01:01:31AM *  1 point [-]

Peterson's way of formally representing a decision problem also seems more helpful to me than the ways proposed by Jeffrey and Savage. Peterson (2008) explains:

I shall conceive of a formal representation as an ordered quadruple π = <A, S, P, U>. The intended interpretation of its elements is as follows.

A = {a1, a2, . . .} is a non-empty set of acts.
S = {s1, s2, . . .} is a non-empty set of states.
P = {p1 : A×S → [0,1], p2 : A×S → [0,1], . . .} is a set of probability functions.
U = {u1 : A×S → ℝ, u2 : A×S → ℝ, . . .} is a set of utility functions.

An act is, intuitively speaking, an uncertain prospect. I shall use the term ‘uncertain prospect’ when I speak of alternatives in a loose sense. The term ‘act’ is used when more precision is required.

By introducing the quadruple <A,S,P,U>, a substantial assumption is made. The assumption implies that everything that matters in rational decision making can be represented by the four sets A,S,P,U. Hence, nothing except acts, states, and numerical representations of partial beliefs and desires is allowed to be of any relevance. This is a standard assumption in decision theory.

A formal decision problem under risk is a quadruple π = <A, S, P, U> in which each set P and U has exactly one element. A formal decision problem under uncertainty is a quadruple π = <A, S, P, U> in which P = ∅ and U has exactly one element. Since P and U are sets of functions rather than single functions, agents are allowed to consider several alternative probability and utility measures in a formal decision problem. This set-up can thus model what is sometimes referred to as ‘epistemic risks’, i.e. cases in which there are several alternative probability and utility functions that describe a given situation. Note that the concept of an ‘outcome’ or ‘consequence’ of an act is not explicitly employed in a formal decision problem. Instead, utilities are assigned to ordered pairs of acts and states. Also note that it is not taken for granted that the elements in A and S (which may be either finite or countably infinite sets) must be jointly exhaustive and mutually exclusive. To determine whether such requirements ought to be levied or not is a normative issue that will be analysed in greater detail in subsequent chapters.

It is easy to jump to the conclusion that the framework employed here presupposes that the probability and utility functions have been derived ex ante (i.e. before any preferences over alternatives have been stated), rather than ex post (i.e. after the agent has stated his preferences over the available acts). However, the sets P and U are allowed to be empty at the beginning of a representation process, and can be successively expanded with functions obtained at a later stage.

The formal set-up outlined above is one of many alternatives. In Savage’s theory, the fundamental elements of a formal decision problem are taken to be a set S of states of the world and a set F of consequences. Acts are then defined as functions from S to F. No probability or utility functions are included in the formal decision problem. These are instead derived ‘within’ the theory by using the agent’s preferences over uncertain prospects (that is, risky acts).

Jeffrey’s theory is, in contrast to Savage’s, homogenous in the sense that all elements of a formal decision problem—e.g. acts, outcomes, probabilities, and utilities—are defined on the same set of entities, namely a set of propositions. For instance, ‘[a]n act is . . . a proposition which it is within the agent’s power to make true if he pleases’, and to hold it probable that it will rain tomorrow is ‘to have a particular attitude toward the proposition that it will rain tomorrow’. In line with this, the conjunction B∧C of the propositions B and C is interpreted as the set-theoretic intersection of the possible worlds in which B and C are true, and so on. Note that Jeffrey’s way of conceiving an act implies that all consequences of an act in a decision problem under certainty are acts themselves, since the agent can make those propositions true if he pleases. Therefore, the distinction between acts on the one hand and consequences on the other cannot be upheld in his terminology, which appears to be a drawback.

However, irrespective of this, the homogenous character of Jeffrey’s set-up is no decisive reason for preferring it to Savage’s, since the latter can easily be reconstructed as a homogenous theory by widening the concept of a state. The consequence of having a six-egg omelette can, for example, be conceived as a state in which the agent enjoys a six-egg omelette; acts can then be defined as functions from states to states. A similar manoeuvre can, mutatis mutandis, be carried out for the quadruple <A,S,P,U>.

I think it is more reasonable to take states of the world, rather than propositions, to be the basic building blocks of formal decision problems. States are what ultimately matter for agents, and states are less opaque from a metaphysical point of view. Propositions have no spatio-temporal location, and one cannot get into direct acquaintance with them. Arguably, the main reason for preferring Jeffrey’s set-up would be that things then become more convenient from a technical point of view, since it is easy to perform logical operations on propositions. In my humble opinion, however, technical convenience is not the right kind of reason for making metaphysical choices.

An additional reason for conceiving of a formal decision problem as a quadruple <A,S,P,U>, rather than in the way proposed by Savage or Jeffrey, is that it is neutral with regard to the controversy over causal and evidential decision theory. To take a stand on that issue would be beyond the scope of the present work. In Savage’s theory, which has been claimed to be ‘the leading example of a causal decision theory’, it is explicitly assumed that states are probabilistically independent of acts, since acts are conceived of as functions from states to consequences. This requirement makes sense only if one thinks that agents should take beliefs about causal relations into account: Acts and states are, in this type of theory, two independent entities that together cause outcomes. In an evidential theory such as Jeffrey’s, the probability of a consequence is allowed to be affected by what act is chosen; the agent’s beliefs about causal relations play no role. Evidential and causal decision theories come to different conclusions in Newcomb-style problems. For a realistic example, consider the smoking-caused-by-genetic-defect problem: Suppose that there is some genetic defect that is known to cause both lung cancer and the drive to smoke. In this case, the fact that 80 percent of all smokers suffer from lung cancer should not prevent a causal decision theorist from starting to smoke, since (i) one either has that genetic defect or not, and (ii) there is a small enjoyment associated with smoking, and (iii) the probability of lung cancer is not affected by one’s choice. An evidential decision theorist would, on the contrary, conclude (incorrectly) that if you start to smoke, there is an 80 percent risk that you will contract lung cancer.

It seems obvious that causal decision theory, but not its evidential rival (in its most naïve version), comes to the right conclusion in the smoking-caused-by-genetic-defect problem. Some authors have proposed that for precisely this reason, evidential decision theory should be interpreted as a theory of valuation rather than as a theory of decision. A theory of valuation ranks a set of alternative acts with regard to how good or bad they are in some relevant sense, but it does not prescribe any acts. Valuation should, arguably, be ultimately linked to decision, but there is no conceptual mistake involved in separating the two questions.

A significant advantage of e.g. Jeffrey’s evidential theory, both when interpreted as a theory of decision and as a theory of valuation, is that it does not require that we understand what causality is. The concept of causality plays no role in this theory.

Since there are significant pros and cons for either the causal or evidential approach, it seems reasonable to opt for a set-up that allows for both kinds of theories, until the dispute has been resolved...
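Peterson's quadruple translates fairly directly into code. The following minimal sketch (the act/state names and numbers are mine, purely illustrative, not Peterson's) represents a decision problem under risk, where P and U each have exactly one element:

```python
# A decision problem "under risk" in Peterson's sense: the quadruple
# <A, S, P, U> with exactly one probability and one utility function.

acts = ["take_umbrella", "leave_umbrella"]          # A
states = ["rain", "no_rain"]                        # S

# P: a single probability function p(a, s) -> [0, 1]
# (here chosen to be independent of the act)
def p(act, state):
    return {"rain": 0.3, "no_rain": 0.7}[state]

# U: a single utility function u(a, s) -> reals,
# assigned to ordered (act, state) pairs, not to "outcomes"
utilities = {
    ("take_umbrella", "rain"): 5, ("take_umbrella", "no_rain"): 3,
    ("leave_umbrella", "rain"): -10, ("leave_umbrella", "no_rain"): 4,
}
def u(act, state):
    return utilities[(act, state)]

def expected_utility(act):
    return sum(p(act, s) * u(act, s) for s in states)

best = max(acts, key=expected_utility)
print({a: round(expected_utility(a), 2) for a in acts})
print(best)  # take_umbrella
```

Note that, as in the quoted passage, utilities attach to (act, state) pairs directly, with no separate set of consequences.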

Comment author: APMason 27 May 2012 01:33:11AM 1 point [-]

Hmm, if I've understood this correctly, it's the way I've always thought about decision theory for as long as I've had a concept of expected utility maximisation. Which makes me think I must have missed some important aspect of the ex post version.

Comment author: TimS 24 May 2012 05:54:27PM *  1 point [-]

For instance, every culture has a belief in the supernatural.

Every culture has a different set of things it believes in and calls supernatural. That doesn't prove there really is a category of things that actually are supernatural. By analogy, belief by Himalayan people that the Yeti is real is not evidence that Bigfoot (in the northwestern United States) is real. Likewise, a Hindu's fervent belief is not evidence of the resurrection of Jesus.

In short, the shortfalls in human understanding completely explain why primitive cultures believed "supernatural" was a real and useful label, even though that belief is false.

Comment author: APMason 24 May 2012 06:00:12PM 2 points [-]

I'm not sure whether it is the case that primitive cultures have a category of things they think of as "supernatural" - pagan gods were certainly quite literal: they lived on Olympus, they mated with humans, they were birthed. I wonder whether the distinction between "natural" and "supernatural" only comes about when it becomes clear that gods don't belong in the former category.

Comment author: Jakinbandw 24 May 2012 05:31:11PM 1 point [-]

Is "god exists, has the properties I believe it to have, and wants to stay hidden" really the only reason you can think of for the observable universe being as we observe it to be?

My own belief is closer to: "Something very powerful and supernatural* exists, doesn't seem to be hostile, and doesn't mind that I call it the Christian God." And while I would answer 'no' to that question, the amount of evidence that there is something supernatural* is far greater than the amount of evidence that there are millions of people lying about their experiences.

For instance, every culture has a belief in the supernatural. Now I would expect that social evolution would trend away from such beliefs. If you say you can dance and make it rain, and then you fail, you get laughed at (if you don't believe me, gather a bunch of your closest friends and try it). The only reasons for people to believe such a claim are that the claimant had proof to back it up, or that they already had reason to believe. Humans aren't stupid, and I don't think we've become radically more intelligent in the last couple thousand years. Why then is belief in the supernatural* everywhere? Is it something in our makeup, how we think? I have heard such a thing discounted by both sides. So there must be some cause, some reason for people to have started believing.

And that's without even getting into my experiences, or those close to me. As was suggested, misremembering and group hallucination are possible, but if that is the case then I should probably check myself and some people I know into a medical clinic, because I would be forced to consider myself insane. Seeing things that aren't there would be a sign of something being very wrong with me, but I do not have any other symptoms of insanity, so I strongly doubt this is the case.

I suppose when I get right down to it, either I and some others are insane with an unknown form of insanity, or there is something out there.

*(outside of the realm of what human science commonly accepts)

Comment author: APMason 24 May 2012 05:42:42PM 3 points [-]

And that's without even getting into my experiences, or those close to me.

Well, don't be coy. There's no point in withholding your strongest piece of evidence. Please, get into it.

Comment author: Jack 23 May 2012 10:06:14PM 4 points [-]

He sets out an idea of a "fair" test, which evaluates only what you do and what you are predicted to do, not what you are.

Two questions: First, how is this distinction justified? A decision theory is a strategy for responding to decision tasks, and simulating agents performing the right decision tasks tells you what kind of decision theory they're using. Why does it matter whether that's done implicitly (as in Newcomb's discrimination against CDT) or explicitly? And second, why should we care about it? Why is it important for a decision theory to pass fair tests but not unfair tests?

Comment author: APMason 24 May 2012 10:47:29AM 7 points [-]

Why is it important for a decision theory to pass fair tests but not unfair tests?

Well, on unfair tests a decision theory still needs to do as well as possible. If we had a version of the original Newcomb's problem, with the one difference that a CDT agent gets $1 billion just for showing up, it's still incumbent upon a TDT agent to walk away with $1000000 rather than $1000. The "unfair" class of problems is that class where "winning as much as possible" is distinct from "winning the most out of all possible agents".
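To make this concrete, here is a small sketch of the modified problem (payoffs as described, with Omega assumed to be a perfect predictor, so the predicted choice equals the actual choice):

```python
# Modified Newcomb: standard payoffs, plus CDT agents get a $1bn
# show-up bonus. Omega's prediction is assumed perfectly accurate,
# so the predicted choice equals the actual choice.

BOX_A, BOX_B, BONUS = 1_000, 1_000_000, 1_000_000_000

def payoff(agent, choice):
    """Omega fills Box B iff it predicts one-boxing; prediction = choice."""
    box_b = BOX_B if choice == "one" else 0
    base = box_b if choice == "one" else box_b + BOX_A
    return base + (BONUS if agent == "CDT" else 0)

print(payoff("TDT", "one"))  # 1000000 -- TDT's best available outcome
print(payoff("TDT", "two"))  # 1000
print(payoff("CDT", "two"))  # 1000001000 -- out of reach for TDT regardless
```

One-boxing still dominates for TDT even though CDT's total is unreachable on this "unfair" problem, which is the sense in which TDT still does as well as possible.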

Comment author: drnickbone 23 May 2012 03:34:34PM 2 points [-]

I also had thoughts along these lines - variants of TDT could logically separate themselves, so that T-0 one-boxes when it is simulated, but T-1 has proven that T-0 will one-box, and hence T-1 two-boxes when T-0 is the sim.

But a couple of difficulties arise. The first is that if TDT variants can logically separate from each other (i.e. can prove that their decisions aren't linked) then they won't co-operate with each other in Prisoner's Dilemma. We could end up with a bunch of CliqueBots that only co-operate with their exact clones, which is not ideal.

The second difficulty is that for each specific TDT variant, one with algorithm T' say, there will be a specific problematic problem on which T' will do worse than CDT (and indeed worse than all the other variants of TDT) - this is the problem with T' being the exact algorithm running in the sim. So we still don't get the - desirable - property that there is some sensible decision theory called TDT that is optimal across fair problems.

The best suggestion I've heard so far is that we try to adjust the definition of "fairness", so that these problematic problems also count as "unfair". I'm open to proposals on that one...

Comment author: APMason 23 May 2012 04:22:55PM *  0 points [-]

Well, I've had a think about it, and I've concluded that it would matter how great the difference between TDT and TDT-prime is. If TDT-prime is almost the same as TDT, but has an extra stage in its algorithm in which it converts all dollar amounts to yen, it should still be able to prove that it is isomorphic to Omega's simulation, and therefore will not be able to take advantage of "logical separation".

But if TDT-prime is different in a way that makes it non-isomorphic, i.e. it sometimes gives a different output given the same inputs, that may still not be enough to "separate" them. If TDT-prime acts the same as TDT, except when there is a walrus in the vicinity, in which case it tries to train the walrus to fight crime, it is still the case in this walrus-free problem that it makes exactly the same choice as the simulation (?). It's as if you need the ability to prove that two agents necessarily give the same output for the particular problem you're faced with, without proving what output those agents actually give, and that sure looks crazy-hard.

EDIT: I mean crazy-hard for the general case, but much, much easier for all the cases where the two agents are actually the same.

EDIT 2: On the subject of fairness, my first thoughts: A fair problem is one in which if you had arrived at your decision by a coin flip (which is as transparently predictable as your actual decision process - i.e. Omega can predict whether it's going to come down heads or tails with perfect accuracy), you would be rewarded or punished no more or less than you would be using your actual decision algorithm (and this applies to every available option).

EDIT 3: Sorry to go on like this, but I've just realised that won't work in situations where some other agent bases their decision on whether you're predicting what their decision will be, i.e. Prisoner's Dilemma.
