Comment author: lukeprog 13 March 2015 09:55:55PM 1 point [-]

Fair enough. I've edited my original comment.

(For posterity: the text for my original comment's first hyperlink originally read "0 and 1 are not probabilities".)

Comment author: owencb 13 March 2015 10:16:04PM 0 points [-]

Perfect, thanks!

Comment author: owencb 13 March 2015 08:21:32PM 7 points [-]

Thanks for providing this!

I have a worry about using trivia questions for calibration: there's a substantial selection effect in the construction of trivia questions, so you're much more likely to get an obscure question pointing to a well-known answer than happens by chance. The effect may be to calibrate people for trivia questions in a way that transfers poorly to other questions.

Comment author: lukeprog 13 March 2015 05:59:17PM *  0 points [-]

I'd prefer not to allow 0 and 1 as available credences. But if 0 remained as an option I would just interpret it as "very close to 0" and keep using the app. Though if a future version of the app showed me my Bayes score, then the difference between what the app allows me to choose (0%) and what I'm interpreting 0 to mean ("very close to 0") could matter.
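(To illustrate why that gap could matter, here is a minimal sketch of a logarithmic (Bayes) score. The function name and the particular credences are just for illustration; the point is that reporting exactly 0 for an event that then occurs scores negative infinity, while "very close to 0" gives a large but finite penalty.)

```python
import math

def log_score(credence, outcome):
    """Logarithmic (Bayes) score: the log of the probability you
    assigned to the outcome that actually occurred. Higher is better."""
    p = credence if outcome else 1 - credence
    return math.log(p) if p > 0 else float("-inf")

# Reporting exactly 0 for an event that then happens is infinitely bad...
print(log_score(0.0, True))    # -inf
# ...while "very close to 0" is heavily but finitely penalized.
print(log_score(0.001, True))  # about -6.91
```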

Comment author: owencb 13 March 2015 07:56:56PM 4 points [-]

I think it's misleading to just drop in the statement that 0 and 1 are not probabilities.

There is a reasonable and arguably better definition of probabilities which excludes them, but it's not the standard one, and it also has costs -- for example probabilities are a useful tool in building models, and it is sometimes useful to use probabilities 0 and 1 in models.

(aside: it works as a kind of 'clickbait' in the original article title, and Eliezer doesn't actually make such a controversial statement in the post, so I'm not complaining about that)

Comment author: Zvi 13 March 2015 02:48:16PM 2 points [-]

This is a great formalization of a three-resource model - time, money and mental energy - which clearly gives much better answers than a two-resource model of only time and money in cases where mental energy is a relevant resource, which is often true.

Despite that, it still feels woefully incomplete/simplified, given how important it is to get something like this right. One issue is that there are lots of resources (N is large) and trading between them is not costless; you have to draw the line somewhere, however, and can draw it as needed by a given exercise. More importantly, I think, it comes down to the fact that while money is fungible and savable, time and mental energy (and many other key resources) aren't. Resources that are use-it-or-lose-it, but vital to pretty much everything, like time, have highly variable marginal value, which makes the calculations very different from those described in the paper. I'm going to try to expand/formalize this concept more.

Comment author: owencb 13 March 2015 07:31:38PM 1 point [-]

Thanks, I'd love to see what you come up with.

I agree that it is a big simplification, but I don't know how much of a practical problem that is, given that a lot of people get things wrong in ways that would be fixable even by the two-resource model. Still, I fully support having a range of models of different complexities!

Comment author: owencb 12 March 2015 11:50:36AM 3 points [-]

Perhaps small tariffs in the short term, but: (i) I don't think people are engaging in that much trade at the moment, (ii) while it's an update, I don't think this will change most people's estimates very much.

In the medium term I don't think so, because the network is greased by deeper understanding of what others might do. I think your patch is a step in that direction, and may increase acausal trade by accelerating full understanding. It could decrease it in the medium term, though.

In the longer term, I guess everyone figures everything out, and this has no effect.

Comment author: Larks 12 March 2015 12:46:37AM 2 points [-]

Could you post the actual article? I think far fewer people will read it if they have to suffer the trivial inconvenience of clicking the link.

Comment author: owencb 12 March 2015 09:49:55AM 3 points [-]

I wonder about this. I agree that fewer people will read it, but it's not clear that that's bad -- they will presumably tend to be the people who were less interested in it. In general there's a lot of good content on the internet, and I view the scenario where everyone tries to maximise readership of their content as a defecting strategy. I'd rather give the best information so that people can decide whether to read it.

I'm really not sure about this, though -- maybe enough of those who pass would benefit from it that it's worth trying to maximise readership at least among people here.

Another reason not to post it is that it's 14 pages.

Comment author: Dias 10 March 2015 11:48:16PM 1 point [-]

I basically agree with everything you said.

With regard to the race and socio-economic background issue, I agree; I'd only note that this is similarly an issue for job applications and other financial products. Reality is not race-blind; at some point you have to deal with it, and this is not a special case.

Perhaps it would be easier to do in England (or some other non-US country) for this reason.

Comment author: owencb 11 March 2015 10:45:39PM 0 points [-]

I somewhat agree with that point, but this would bring it out into the open as an explicit effect, which might be more controversial.

Of course anti-discrimination legislation might mean that the contracts on offer were only allowed to depend on certain parameters.

Comment author: owencb 11 March 2015 07:18:12PM 4 points [-]

I think your definition is really a definition of powerful things (which is of course extremely relevant!).

I'd had some incomplete thoughts in this direction. I'd taken a slightly different tack to you. I'll paste the relevant notes in.

Descriptions (or models) of systems vary in their complexity and their accuracy. Descriptions which are simpler and more accurate are preferred. Good descriptions are those which are unusually accurate for their level of complexity. For example ‘spherical’ is a good description of the shape of the earth, because it’s much more accurate than other descriptions of that length.

Often we want to describe subsystems. To work out how good such descriptions are we can ask how good the implied description of the whole system is, if we add a perfect description of the rest of the system.

Definition: The agency of a subsystem is the degree to which good models of that system predict its behaviour in terms of high-level effects on the world around.

Note this definition is not that precise: it replaces a difficult notion (agency) with several other imprecise notions (degree to which; good models of that system; high-level effects). My suggestion is that, while still awkward, these are more tractable than 'agent'. I shy away from giving explicit forms -- I think this should generally be possible, and indeed I could give guesses in several cases, but at the moment questions about precise functional forms seem a distraction from the architecture. Also note that this definition is continuous rather than binary.

Proposition: Very simple systems cannot have high degrees of agency. This is because if the system in its entirety admits a short description, you can’t do much better by appealing to motivation.

Observation: Some subsystems may have high agency just with respect to a limited set of environments. A chess-playing program has high agency when attached to a game of chess (and we care about the outcome), and low agency otherwise. A subsystem of an AI may have high agency when properly embedded in the AI, and low agency if cut off from its tools and levers.

Comment: this definition picks out agents, but also picks out powerful agents. Giving someone an army increases their agency. I'm not sure whether this is a desirable feature. If we wanted to abstract away from that, we could do something like:

Define power: the degree to which a subsystem has large effects on systems it is embedded in.

Define relative agency = agency - power

Comment author: Lumifer 10 March 2015 02:50:57PM 1 point [-]

I am sorry, this makes no sense to me at all.

Playing games inside your own mind has nothing to do with trades with other real entities, acausal or not.

Comment author: owencb 10 March 2015 06:28:54PM 0 points [-]

The first version isn't inside your own mind.

If you think there is a large multiverse, then there are many worlds including people very much like you in a variety of situations (this is a sense of 'counterfactual' which isn't all in the mind). Suppose that you care about people who are very similar to you. Then you would like to trade with real entities in these branches, when they are able to affect something you care about. Of course any trade with them will be acausal.

In general it's very hard to predict the relative likelihoods of different worlds, and the likelihood of agents in them predicting the existence of your world. This provides a barrier to acausal trade. Salient counterfactuals (in the 'in the mind' sense) give you a relatively easy way of reasoning about a slice of worlds you care about, including the fact that your putative trade partner also has a relatively easy way of reasoning about your world. This helps to enable trade between these branches.

In response to comment by evand on Counterfactual trade
Comment author: Lumifer 09 March 2015 04:10:11PM 1 point [-]

Who are "they"?

Comment author: owencb 10 March 2015 12:08:19PM 0 points [-]

The direct interpretation is that "they" are people elsewhere in a large multiverse. That they could be pictured as a figment of imagination gives the agent evidence about their existence.

The instrumental interpretation is that one acts as though trading with the figment of one's imagination, as a method of trade with other real people (who also act this way), because it is computationally tractable and tends to produce better outcomes all-round.
