Comment author: FrankAdamek 06 May 2012 09:26:45PM 0 points [-]

I am primarily referring to the unconscious drives underlying our actions, not our verbal goals. No matter what term I used to describe it, when I imagined myself doing very well in general relative to other people, spending every moment in focused and topical optimization, I was excited and driven to pursue the things I expected to make me like that. If I anticipated outcomes that did NOT involve me being that kind of person, there was far less unconscious drive to act.

Being hyper-competent was not a subgoal of programming or business, and if it were I would have your same critique. Being hyper-competent was a subgoal of having social success, having riches, being safe, a general assessment of "able to succeed even in difficult situations." Programming and business were rather what seemed consciously to be the best specific routes for achieving these things, but they involved not being that sort of hyper-competent person, and because I unconsciously desired that so much I was not, in practice, driven to pursue programming or business.

Similarly, unless you have already developed high competence in many concrete tasks, how would you recognise a mind that was a perfectly honed instrument for realizing your goals?

The term "perfectly honed instrument" is meant to convey an intuitive sense, not a technical description. But you would recognize such a person by them constantly engaging in what actually seemed to have the greatest marginal return on time, and probably by quickly developing unusually large amounts of skill.

Taboo "perfectly honed instrument", "hyper-competent" etc., and the goal dissolves.

Those terms refer to particular patterns of reality and not others - Bourne, rational!Quirrell, arguably rational!Dumbledore are all extensions of this intension. The average person is not.

Going up the pyramid of goals in such a context is an active hindrance, because the higher goals are harder to make operational.

By "going up the pyramid of goals" I'm referring to understanding more precisely the rules generating the particular, concrete situations we desire, and following a rule higher up on that pyramid. In other words, are there concrete things such that, once we think of them, we realize the real reason we had been motivated by something else was that we unconsciously anticipated it leading to the first thing? This is something for each person to discover on their own, but it is something to discover.

"Beware affective death spirals on far-mode (sub)goals" or "Taboo specific terms in your goals to make them operationally useful" or possibly even "Check that your stated goals are not semantic stop-signs"

Are you doing those things already? Do they leave something left for you to desire in your rationality? These are all descriptions of much more surface-level techniques than what's being discussed here. This technique is about finding concrete things that make you think "hey, that's awesome, how can I get that?"

Comment author: Jonathan_Lee 07 May 2012 01:13:47AM 0 points [-]

I want to note that I may be confused: I have multiple hypotheses fitting some fraction of the data presented.

  • Supergoals and goals known, but unconscious affective death spirals or difficulties in acting on a far goal are interfering with the supergoals.
  • Supergoals and goals known, goal is suboptimal.
  • Supergoals not known consciously, subgoal known but suboptimal given knowledge of supergoals.

The first is what seems to be in the example. The second is what the strategy handles. The third is what I get when I try to interpret:

This technique is about finding concrete things that make you think "hey, that's awesome, how can I get that?"

The third is a call for more luminosity; the second is bad goal choice. The first is more awkward to handle. You need to operationally notice which goals are not useful and which are. That means noticing surface level features of your apparent goals that are not optimal.

As I see it, speaking of an "intuitive notion" of "perfectly honed instrument for realizing your goals", or merely stopping at "particular patterns of reality" is the warning signal of this failure mode. Taboo these terms, make them operationally defined. If you have a sequence of definite concrete statements about what the world would look like if you were this kind of entity, then you have a functional definition of what you want from the goal.

Of course, the imprecise goal may shatter into a large number of actionable goals. It may be the case that the skills needed to achieve these subgoals contain a larger-scale skill to learn. Functionally, if that high level skill can't be stated with sufficient precision to go out and know success when it's seen, then more data is needed about this possible high-level skill before we can be confident it's there in a form matching the imprecise goal. So note it, do the concrete things now, and look again when there is a better sense of the potential high level problem to solve.

The bit of the post that I find most awesome is the couple of days taken to audit your goals and notice that their achievement was being hindered by this urge. I am aware that when I noticed how badly broken my goal structures were, I had to call "halt and catch fire" and keep a diary for a couple of months. Being able to perform an audit in a few days would be incredibly useful.

Comment author: Jonathan_Lee 06 May 2012 08:55:05PM *  0 points [-]

So, it seems to me that what you describe here is not moving up a hierarchy of goals, unless there are serious issues with the mechanisms used to generate subgoals. It seems like slogans more appropriate to avoiding the demonstrated failure mode are:

"Beware affective death spirals on far-mode (sub)goals" or "Taboo specific terms in your goals to make them operationally useful" or possibly even "Check that your stated goals are not semantic stop-signs"

As presented, you are claiming that:

I wanted to be a perfectly honed instrument for realizing my goals, similar to the hyper-competent characters in my favorite fictions

was generated as a subgoal of specific concrete goals (you mention programming and business). This seems to be a massive failure of planning. I would compare it to stating you would develop calculus to solve a constant speed distance-time problem, having never solved any of the latter sort of question. There is no shape to such a goal; to such an individual "calculus" is a term without content. Similarly, unless you have already developed high competence in many concrete tasks, how would you recognise a mind that was a perfectly honed instrument for realizing your goals? Taboo "perfectly honed instrument", "hyper-competent" etc., and the goal dissolves.

On the other hand, going up the pyramid of goals seems more likely to induce this error. Generally my high level goals are in far mode and less concrete. Certainly "acquire awesome skills" is not something that I have generated as a subgoal of other goals; I have it as a generalisation of past methods of success, in the (inductive) belief that acquiring such skills will be useful in general. As subgoals to that I attempt general self improvement, for example learning to code in new languages or pushing other skillsets. Going up the pyramid of goals in such a context is an active hindrance, because the higher goals are harder to make operational.

Comment author: Clarity1992 06 May 2012 04:55:15PM *  0 points [-]

Great post. The meet sounds awesome and touched on many things I'm interested in. I'd love to be on the same continent to attend these more regularly than once a year.

One possible typo: "This was followed by multiple passes for people to affiliated with any proposed topic".

Comment author: Jonathan_Lee 06 May 2012 04:59:55PM 1 point [-]

Thanks. Definite typo, Fixed.

Comment author: Jonathan_Lee 30 April 2012 10:27:34AM *  2 points [-]

Better directions to the JCR (with images) are here.

ETA: Also fixed the list of meetups to link there.

Comment author: Jonathan_Lee 05 July 2011 10:09:37AM 1 point [-]

The foundational problem in your thesis is that you have grounded "rationality" as a normative "ought" on beliefs or actions. I dispute that assertion.

Rationality is more reasonably grounded as selecting actions so as to satisfy your explicit or implicit desires. There is no normative force to statements of the form "action X is not rational", unpacked as "If your values fall into {large set of human-like values}, then action X is not optimal, choosing for all similar situations where the algorithm you use is run".

There may or may not be general facts about what it is "rational" for "people" to do; it depends rather crucially on how consistent terminal values are across the set of "people". Neglecting trade with Clippy, it is (probably) not rational for humans to convert Jupiter to paperclips. Clippy might disagree.

It should be clear that rational actions are predicated on terminal values, and do not carry normative connotations. Given terminal values, your means of selecting actions may be rational or otherwise. Again, this is not normative; it may be suboptimal.

Comment author: lionhearted 23 October 2010 03:22:18PM *  0 points [-]

The thrust of your argument appears to be that: 1) Trolley problems are idealised 2) Idealisation can be a dark art rhetorical technique in discussion of the real world. 3) Boo trolley problems!

This is strange; this is the second comment that summarizes an argument I'm not actually making, and then argues against the made-up summary.

My argument isn't against idealization - which would be an argument against any sort of generalized hypothetical and against the majority of fiction ever made.

No, my argument is that trolley problems do not map to reality very well, and thus, time spent on them is potentially conducive to sloppy thinking. The four problems I listed were perfect foresight, ignoring secondary effects, ignoring human nature, and constraining decisions to two options - these all lead to a lower quality of thinking than a better constructed question would.

There's a host of real world, realistic dilemmas you could use in place of a (flawed) trolley problem. Layoffs/redundancies to try to make a company more profitable or keep the ship running as is (like Jack Welch at GE), military problems like fighting a retreating defensive action, policing problems like profiling, what burden of proof in a courtroom, a doctor getting asked for performance enhancing drugs with potentially fatal consequences... there's plenty of real world, reality-based situations to use for dilemmas, and we would be better off for using them.

Comment author: Jonathan_Lee 24 October 2010 09:13:16AM 2 points [-]

From your own summary:

I think that trolley problems contain perfect information about outcomes in advance of them happening, ignore secondary effects, ignore human nature, and give artificially false constraints.

Which is to say they are idealised problems; they are trued dilemmas. Your remaining argument is fully general against any idealisation or truing of a problem that can also be used rhetorically. This is (I think) what Tordmor's summary is getting at; mine is doing the same.

Now, I think that's bad. Agree/disagree there?

So, I clearly disagree, and further you fail to actually establish this "badness". It is not problematic to think about simplified problems. The trolley problems demonstrate that instinctual ethics are sensitive to whether you have to "act" in some sense. I consider that a bug. The problem is that finding these bugs is harder in "real world" situations; people can avoid the actual point of the dilemma by appealing for more options.

In the examples you give, there is no similar pair of problems. The point isn't the utilitarianism in a single trolley problem; it's that when two tracks are replaced by a (canonically larger) person on the bridge and 5 workers further down, people change their answers.

Okay, finally, I think this kind of thinking seeps over into politics, and it's likewise bad there. Agree/disagree?

You don't establish this claim (I disagree). It is worth observing that the standard third "trolley" problem is 5 organ recipients and one healthy potential donor for all. The point is to establish that real world situations have more complexity -- your four problems.

The point of the trolley problems is to draw attention to the fact that the H.Sap inbuilt ethics is distinctly suboptimal in some circumstances. Your putative "better" dilemmas don't make that clear. Failing to note and account for these bugs is precisely "sloppy thinking". Being inconsistent in action on the basis of the varying descriptions of identical situations seems to be "sloppy thinking". Failing on Newcomb's problem is "sloppy thinking". Taking an "Activists" hypothetical as a true description of the world is "sloppy thinking". Knowing that the hardware you use is buggy? Not so much.

Comment author: Jonathan_Lee 23 October 2010 08:31:47AM 8 points [-]

The thrust of your argument appears to be that: 1) Trolley problems are idealised 2) Idealisation can be a dark art rhetorical technique in discussion of the real world. 3) Boo trolley problems!

There are a number of issues.

First and foremost, reversed stupidity is not intelligence. Even if you are granted the substance of your criticisms of the activists position, this does not argue per se against trolley problems as dilemmas. The fact that they share features with a "Bad Thing" does not inherently make them bad.

Secondly, the whole point of considering trolley problems is to elucidate human nature and give some measure of training in cognition in stressful edge cases. The observation that humans freeze or behave inconsistently is important. This is why the trolley problems have to be trued in the sense that you object to - if they are not, many humans will avoid thinking about the ethical question being posed. In essence "I don't like your options, give me a more palatable one" is a fully general and utterly useless answer; it must be excluded.

Thirdly, your argument turns on the claim that merely admitting trolley problems as objects of thought somehow makes people more likely to accept dichotomies that "justify tyranny and oppression". This is risible. Even if the dichotomy is a false one, you surely should find one or the other branch preferable. It is perfectly admissible to say:

"I prefer this option (implicitly you presume that will be the taxation), but that if this argument is to be the basis for policy, then there are better alternatives foo, bar, etc., and that various important real world effects have been neglected."

Those familiar with the trolley problems and general philosophical dilemmas are more likely to be aware of the idealisations and voice these concerns cogently if idealisations are used in rhetoric or politics.

Fourthly, in terms of data, I would challenge you to find evidence suggesting that study of trolley problems leads to acceptance of tyranny. I would note (anecdotally) that communities where one can say "trolley problem" without needing to explain further seem to have a higher density of libertarians and anarchists than the general population.

So in rough summary: 1) Your conclusion does not follow from the argument. 2) Trolley problems are idealised because if they aren't humans evade rather than engage. 3) Noting and calling out dark arts rhetoric is roughly orthogonal to thinking about trolley problems (conditional on thinking). 4) Citation needed wrt. increased tyranny in those who consider trolley problems.

Comment author: [deleted] 31 May 2010 07:10:00AM 0 points [-]

Let's take the stock market as an example. The stock market prices are in principle predictable, only not from the data itself but from additional data taken from the newspapers or other sources. How does the CRM apply if the data does not in itself contain the necessary information?

Let's say I have a theory that cutting production costs will increase stock prices in relation to the amount of cost cut and the prominence of the company and the level of fear of a crash on the stock market and the level of a "bad news indicator" that is a weighted sum of bad press for the company in the past. How would I test my theory with CRM?

In response to comment by [deleted] on Significance of Compression Rate Method
Comment author: Jonathan_Lee 31 May 2010 07:48:26AM *  0 points [-]

In the wider sense, MML still works on the dataset {stock prices, newspapers, market fear}. Regardless of what work has presently been done to compress newspapers and market fear, if your hypothesis is efficient then you can produce the stock price data for a very low marginal message length cost.

You'd write up the hypothesis as a compressor-of-data; the simplest way being to produce a distribution over stock prices and apply arithmetic coding, though in practice you'd tweak whatever state of the art compressors for stock prices exist.

Of course the side effect of this is that your code references more data, and will likely need longer internal identifiers on it, so if you just split the cost of code across the datasets being compressed, you'd punish the compressors of newspapers and market fear. I would suggest that the solution is to apply the Shapley value, with the value of a coalition being the number of bits saved overall by a single compressor working on all the data sets in a given pool of cooperation.
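As a sketch of that Shapley-value split (the dataset names and savings numbers below are invented purely for illustration; nothing in the thread specifies them), the idea is to credit each dataset with its marginal contribution to the pooled compression savings, averaged over all orders in which datasets could join the pool:

```python
from itertools import permutations
from fractions import Fraction

# Hypothetical joint-compression savings (in bits) for each pool of datasets.
# These numbers are made up purely for illustration.
savings = {
    frozenset(): 0,
    frozenset({"prices"}): 0,
    frozenset({"news"}): 100,
    frozenset({"fear"}): 50,
    frozenset({"prices", "news"}): 400,
    frozenset({"prices", "fear"}): 150,
    frozenset({"news", "fear"}): 160,
    frozenset({"prices", "news", "fear"}): 500,
}

players = ["prices", "news", "fear"]

def shapley(player):
    # Average the player's marginal contribution over all join orders.
    total = Fraction(0)
    orders = list(permutations(players))
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += savings[before | {player}] - savings[before]
    return total / len(orders)

for p in players:
    print(p, shapley(p))
# prices 180, news 235, fear 85
```

By construction the three shares sum to the 500 bits saved by the full pool, which is the efficiency property that makes the Shapley value a natural way to divide the credit.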

Comment author: neq1 13 May 2010 10:03:39AM 0 points [-]

Credence isn't constrained to be in [0,1]???

It seems to me that you are working very hard to justify your solution. It's a solution by argument/intuition. Why don't you just do the math?

The experimenters fix 2 unique constants, k1, k2, each in {1,2,...,20}, sedate you, roll a D20 and flip a coin. If the coin comes up tails, they will wake you on days k1 and k2. If the coin comes up heads and the D20 roll is in {k1,k2}, they will wake you on day 1.

I just used Bayes rule. W is an awakening. We want to know P(H|W), because the question is about her subjective probability when (if) she is woken up.

To get P(H|W), we need the following:

P(W|H)=2/20 (if heads, wake up if D20 landed on k1 or k2)

P(H)=1/2 (fair coin)

P(W|T)=1 (if tails, woken up regardless of the result of the D20 roll)

P(T)=1/2 (fair coin)

Using Bayes rule, we get:

P(H|W) = (2/20)·(1/2) / [(2/20)·(1/2) + (1)·(1/2)] = 1/11
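That arithmetic can be checked exactly (a minimal sketch using Python's `fractions` module; the variable names are mine):

```python
from fractions import Fraction

p_H = Fraction(1, 2)           # fair coin
p_W_given_H = Fraction(2, 20)  # heads: woken only if the D20 lands on k1 or k2
p_W_given_T = Fraction(1)      # tails: woken regardless of the D20

# Bayes' rule: P(H|W) = P(W|H) P(H) / [P(W|H) P(H) + P(W|T) P(T)]
p_H_given_W = (p_W_given_H * p_H) / (p_W_given_H * p_H + p_W_given_T * (1 - p_H))
print(p_H_given_W)  # 1/11
```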

With your approach, you avoid directly applying Bayes' theorem, and you argue that it's ok for credence to be outside of [0,1]. This suggests to me that you are trying to derive a solution that matches your intuition. My suggestion is to let the math speak, and then to figure out why your intuition is wrong.

Comment author: Jonathan_Lee 13 May 2010 12:22:26PM 0 points [-]

You and I both agree on Bayes implying 1/21 in the single constant case. Consider the 2 constant game as 2 single constant games in series, with uncertainty over which one (k1 and k2 here denoting the mutually exclusive events "this is the k1 game" and "this is the k2 game"):

P(H|W) = P(H ∩ k1|W) + P(H ∩ k2|W) = P(H|k1 ∩ W)·P(k1|W) + P(H|k2 ∩ W)·P(k2|W) = (1/21)·(1/2) + (1/21)·(1/2) = 1/21

This is the logic that to me drives PSB to SB and the 1/3 solution. I worked it through in SB by conditioning on the day (slightly different but not substantially).

I have had a realisation. You work directly with W; I work with subsets of W that occur at most once in each branch and apply total probability.

Formally, I think what is going on is this (working with simple SB): we have a sample space S = {H,T}.

"You have been woken" is not an event, in the sense of being a set of experimental outcomes. "You will be woken at least once" is, but these are not the same thing.

"You will be woken at least once" is a nice straightforward event, in the sense of being a set of experimental outcomes {H,T}. "You have been woken" should be considered formally as the multiset {H,T,T}. Formally, working through with multisets wherever sets are used as events in probability theory, we recover all of the standard theorems (including Bayes) without issue.

What changes is that since P(S) = 1, and there are multisets X such that X contains S, P(X) > 1.

Hence P({H,T,T}) = 3/2; P({H}|{H,T,T}) = 1/3.

In the 2 constant PSB setup you suggest, we have:

S = {H,T} × {1,..,20}

W = {(H,k1),(H,k2),(T,1),(T,1),(T,2),(T,2),...,(T,20),(T,20)}

And P(H|W) = 1/21 without issue.
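The multiset calculation can be made concrete (a minimal sketch; the choice k1=1, k2=2 is arbitrary, since by symmetry the answer doesn't depend on it):

```python
from fractions import Fraction

# Each atomic outcome (coin, die) in S = {H,T} x {1,..,20} has probability 1/40.
p_atom = Fraction(1, 40)
k1, k2 = 1, 2  # arbitrary choice of the two constants, for illustration

# Multiplicity of each outcome in the multiset W of awakenings.
multiplicity = {}
for die in range(1, 21):
    multiplicity[("H", die)] = 1 if die in (k1, k2) else 0  # heads: one waking, only if die hit k1 or k2
    multiplicity[("T", die)] = 2                            # tails: woken on both day k1 and day k2

# Multiset "probability" is the multiplicity-weighted measure; it can exceed 1.
p_W = sum(m * p_atom for m in multiplicity.values())                                  # 21/20
p_H_and_W = sum(m * p_atom for (coin, _), m in multiplicity.items() if coin == "H")   # 1/20
print(p_H_and_W / p_W)  # 1/21
```

Note that p_W = 21/20 > 1, exactly the feature of multiset events being discussed: P(W|T) counts the expected number of wakings rather than being capped at 1, and conditioning still gives the 1/21 answer.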

My statement is that this more accurately represents the experimental setup; when you wake, conditioned on all background information, you don't know how many times you've been woken before, but this changes the conditional probabilities of H and T. If you merely use background knowledge of "You have been woken at least once", and squash all of the events "You are woken for the nth time" into a single event by using union on the events, then you discard information.

This is closely related to my earlier (intuition) that the problem was something to do with linearity.

In sets, union and intersection are only linear when working on some collection of atomic sets, but are generally linear in multisets. [e.g. (A ∪ B) \ B ≠ A in general for sets]

Observe that the approach I take of splitting "events" down to disjoint things that occur at most once is precisely taking a multiset event apart into well behaved events and then applying probability theory.

What was concerning me is that the true claim that P({H,T}|T) = 1 seemed to discard pertinent information (ie the potential for waking on the second day). With W as the multiset {H,T,T}, P(W|T) = 2. You can regard this as expectation number of times you see Tails, or the extension of probability to multisets.

The difference in approach is that you have to put the double counting of waking given tails in as a boost to payoffs given Tails, which seems odd as from the point of view of you having just been woken you are being offered immediate take-it-or-leave-it odds. This is made clearer by looking at the twins scenario; each person is offered at most one bet.

Comment author: neq1 12 May 2010 08:08:26PM 0 points [-]

Continuity problem is that the 1/2 answer is independent of the ratio of expected number of wakings in the two branches of the experiment

Why is this a problem? I'm perfectly comfortable with that property. Since you really just have one random variable in each arm. You can call them different days of the week, but with no new information they are all just the same thing

By D do you mean W?

What's happened is closer to E(H|D) = E(D|H) E(H) / E(D), over one run of the experiment, and this yields 1/3 immediately.

Is this how you came up with the 1/3 solution? If so, I think it requires more explanation. Such as what D is precisely.

Comment author: Jonathan_Lee 13 May 2010 08:29:48AM 0 points [-]

Continuity problem is that the 1/2 answer is independent of the ratio of expected number of wakings in the two branches of the experiment

Why is this a problem?

The next clause of the sentence is the problem

unless the ratio is 0 (or infinite) at which point special case logic is invoked to prevent the trivially absurd claim that credence of Heads is 1/2 when you are never woken under Heads.

The problem is special casing out the absurdity, and thus getting credences that are discontinuous in the ratio. On the other hand, you seem to take 1/21 in PSB (i.e. you do let it depend on the ratio) but deviate from 1/21 when multiple runs of PSB aggregate, which is not what I had expected...

D was used in the comment I was replying to as an "event" that was studiously avoiding being W.

http://lesswrong.com/lw/28u/conditioning_on_observers/201l shows multiple ways I get the 1/3 solution; alternatively betting odds taken on awakening or the long run frequentist probability, they all cohere, and yield 1/3.

The problem as I see it with W is that it's not a set of outcomes, it's really a multiset. That's fine in its way, but it gets confusing because it no longer bounds probabilities to [0,1]. Your approach is to quash multiple membership to get a set back.
