Comment author: [deleted] 23 May 2015 06:50:03PM *  -1 points [-]

Selecting a likely hypothesis for consideration does not alter that hypothesis' likelihood. Do we agree on that?

Comment author: Valentine 23 May 2015 08:17:04PM 4 points [-]

Hmm. Maybe. It depends on what you mean by "likelihood", and by "selecting".

Trivially, noticing a hypothesis and judging it likely enough to justify testing absolutely makes it subjectively more likely than it was before. I consider that tautological.

If someone is looking at n hypotheses and then decides to pick the kth one to test (maybe at random, or maybe because they all need to be tested at some point so why not start with the kth one), then I quite agree, that doesn't change the likelihood of hypothesis #k.

But in my mind, it's vividly clear that the process of plucking a likely hypothesis out of hypothesis space depends critically on moving probability mass around in said space. Any process that doesn't do that is literally picking a hypothesis at random. (Frankly, I'm not sure a human mind even can do that.)
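The distinction can be sketched with hypothetical numbers: merely picking hypothesis k to test moves no probability mass, while any evidence-driven selection is itself a Bayes update.

```python
# Prior over three hypothetical hypotheses.
prior = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# Merely selecting hypothesis k to test (e.g. at random) moves no mass:
selected = "H2"
assert prior[selected] == 0.3  # unchanged by the act of selection

# But noticing evidence that favors H2 IS an update (Bayes' rule):
likelihood = {"H1": 0.1, "H2": 0.6, "H3": 0.3}  # P(observation | H), assumed
unnorm = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnorm.values())
posterior = {h: p / z for h, p in unnorm.items()}
print(posterior["H2"])  # mass has moved toward H2
```

All numbers here are made up for illustration; the point is only that the second operation changes the distribution and the first does not.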

The core problem here is that most default human ways of moving probability mass around in hypothesis space (e.g. clever arguments) violate the laws of probability, whereas empirical tests aren't nearly as prone to that.

So, if you mean to suggest that figuring out which hypothesis is worthy of testing does not involve altering our subjective likelihood that said hypothesis will turn out to be true, then I quite strongly disagree.

But if you mean that clever arguments can't change what's true even by a little bit, then of course I agree with you.

Perhaps you're using a Frequentist definition of "likelihood" whereas I'm using a Bayesian one?

Comment author: Valentine 23 May 2015 04:48:45PM *  7 points [-]

Thank you for this.

I see you as highlighting a virtue that the current Art gestures toward but doesn't yet embody. And I agree with you, a mature version of the Art definitely would.

In his Lectures on Physics, Feynman provides a clever argument to show that when the only energy being considered in a system is gravitational potential energy, then the energy is conserved. At the end of that, he adds the following:

It is a very beautiful line of reasoning. The only problem is that perhaps it is not true. (After all, nature does not have to go along with our reasoning.) For example, perhaps perpetual motion is, in fact, possible. Some of the assumptions may be wrong, or we may have made a mistake in reasoning, so it is always necessary to check. It turns out experimentally, in fact, to be true.

This is such a lovely mental movement. Feynman deeply cared about knowing how the world really actually works, and it looks like this led him to a mental reflex where even in cases of enormous cultural confidence he still responds to clever arguments by asking "What does nature have to say?"

In my opinion, people in this community update too much on clever arguments. I include myself in that. I disagree with your claim that people shouldn't update at all on clever arguments, but I very much agree that there would be much more strength in the Art if it were to emphasize an active hunger for asking nature its opinion.

I think there's a flavor of mistake that comes from overemphasizing the direction I see you pointing at the expense of other virtues. I've known quite a number of scientists who think the way I see you suggesting, and who feel like they can't have any opinions or thoughts about things they haven't seen empirical tests of. I think they're in part trying to protect themselves against what Eliezer calls "privileging the hypothesis", but they also make themselves unnecessarily stupid in some ways. The most common and blatant example I recall is their getting routinely blindsided by predictable social expectations and drama.

But I think Feynman gets it right.

And I think we ought to, too.

So again, thank you for bringing this up. It clarified something that had been nagging me, and now I think I see how to fix it.

Comment author: [deleted] 23 May 2015 01:33:08PM *  5 points [-]

it seems you've been unable to escape a certain amount of reasoning by analogy in your post. You state that experimental investigation of asteroid impacts was useful, so by analogy, experimental investigation of AI risks should be useful.

It seems I should have picked a different phrase to convey my intended target of ire. The problem isn't concept formation by means of comparing similar reference classes, but rather using thought experiments as evidence and updating on them.

To be sure, thought experiments are useful for noticing when you are confused. They can also be semi-dark art in providing intuition pumps. Einstein did well in introducing special relativity by means of a series of thought experiments, getting the reader to notice their confusion over classical electromagnetism in moving reference frames, then providing an intuition pump for how his own relativity worked in contrast. It makes his paper one of the most beautiful works in all of physics. However, it was the experimental evidence which proved Einstein right, not the gedankenexperimenten.

If a thought experiment shows something to not feel right, that should raise your uncertainty about whether your model of what is going on is correct (notice your confusion), to wit: the correct response should be “how can I test my beliefs here?” Do NOT update on thought experiments, as thought experiments are not evidence. The thought experiment triggers an actual experiment—even if that experiment is simply looking up data that is already collected—and the actual experimental results are what update beliefs.

My impression is that MIRI thinks most possible AGI architectures wouldn't meet its standards for safety.

MIRI has not to my knowledge released any review of existing AGI architectures. If that is their belief, the onus is on them to support it.

but note that Eliezer Yudkowsky from MIRI was the one who invented the AI box experiment

He invented the AI box game. If it's an experiment, I don't know what it is testing. It is a setup totally divorced from any sane reality for how AGI might actually develop and what sort of controls might be in place, with built-in rules that favor the AI.

Yet nevertheless, time and time again people such as yourself point me to the AI box games as if it demonstrated anything of note, anything which should cause me to update my beliefs.

It is, I think, the examples of the sequences and the character of many of the philosophical discussions which happen here that drive people to feel justified in making such ungrounded inferences. And it is that tendency which possibly makes the sequences and/or less wrong a memetic hazard.

Comment author: Valentine 23 May 2015 04:19:22PM 9 points [-]

If a thought experiment shows something to not feel right, that should raise your uncertainty about whether your model of what is going on is correct (notice your confusion), to wit: the correct response should be “how can I test my beliefs here?”

I have such very strong agreement with you here.

The problem isn't concept formation by means of comparing similar reference classes, but rather using thought experiments as evidence and updating on them.

…but I disagree with you here.

Thought experiments and reasoning by analogy and the like are ways to explore hypothesis space. Elevating hypotheses for consideration is updating. Someone with excellent Bayesian calibration would update much much less on thought experiments etc. than on empirical tests, but you run into really serious problems of reasoning if you pretend that the type of updating is fundamentally different in the two cases.
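In odds form the point is easy to see: both a clever argument and an empirical test feed through the same update rule, and only the size of the likelihood ratio differs. The specific numbers below are invented for illustration.

```python
def update(prior_odds, likelihood_ratio):
    """One Bayesian update expressed in odds form: posterior = prior * LR."""
    return prior_odds * likelihood_ratio

prior_odds = 0.25  # 1:4 against some hypothesis (hypothetical)

# A clever argument might rationally carry only a small likelihood ratio...
after_argument = update(prior_odds, 1.5)

# ...while a well-designed empirical test can carry a large one.
after_experiment = update(prior_odds, 20.0)

print(after_argument, after_experiment)  # 0.375 vs 5.0
```

Same operation in both cases; the type of updating is not fundamentally different, only its magnitude.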

I want to emphasize that I think you're highlighting a strength this community would do well to honor and internalize. I strongly agree with a core point I see you making.

But I think you might be condemning screwdrivers because you've noticed that hammers are really super-important.

Comment author: ChristianKl 22 May 2015 09:05:09PM 1 point [-]

And obviously there's a ton of money going into cancer research in general, albeit I wouldn't be surprised if most of it was dedicated to solving specific cancers rather than all cancer at once.

I think the consensus in the field is at the moment that cancer isn't a single thing. Therefore "solve all cancer at once" unfortunately doesn't make a good goal.

Comment author: Valentine 22 May 2015 10:17:42PM 2 points [-]

That's my vague impression too. But if I remember correctly, the original idea of OncoSENS (the part of SENS addressing cancer) was something that in theory would address all cancer regardless of type.

I also seem to recall that most experimental biologists thought that many of Aubrey's ideas about SENS, including OncoSENS, were impractical and that they betrayed a lack of familiarity with working in a lab. (Although I should note, I don't really know what they're talking about. I, too, lack familiarity with working in a lab!)

Comment author: [deleted] 21 May 2015 11:48:25PM *  7 points [-]

The things that SENS is working on right now are not ready for investment. I'm going to call you out on this one: please name something SENS is or has researched which is or was the subject of private industry or taxpayer research at the time that SENS was working on it. I think you'll find that such examples, if they exist at all, are isolated. It is nevertheless the goal of SENS to create a vibrant rejuvenation industry with the private sector eventually taking the reins. But until then, there is a real need for a non-profit to fund research that is too speculative and/or too far from clinical trials to achieve return-on-investment on a typical funding horizon.

Comment author: Valentine 22 May 2015 06:03:07PM 3 points [-]

I generally quite agree with you here. I really enormously appreciate the effort SENS is putting into addressing this horror, and there does seem to be a hyperbolic discounting style problem with most of the serious anti-aging tech that SENS is trying to address.

But I think you might be stating your case too strongly:

please name something SENS is or has researched which is or was the subject of private industry or taxpayer research at the time that SENS was working on it. I think you'll find that such examples, if they exist at all, are isolated.

If I recall correctly, one of Aubrey's Seven Deadly Things is cancer, and correspondingly one of the seven main branches of SENS is an effort to eliminate cancer via an idea Aubrey came up with via inspiration. (I honestly don't remember the strategy anymore. It has been about six years since I've read Ending Aging.)

If you want to claim that no one else was working on Aubrey's approach to ending all cancers or that anyone else doing it was isolated, I think that's fair, but kind of silly. And obviously there's a ton of money going into cancer research in general, albeit I wouldn't be surprised if most of it was dedicated to solving specific cancers rather than all cancer at once.

But I want to emphasize that this is more of a nitpick on the strength of your claim. I agree with the spirit of it.

Comment author: [deleted] 21 May 2015 10:05:12PM *  0 points [-]

So I know less about CFAR than I do the other two sponsors of the site. There is in my mind some unfortunate guilt by association given partially shared leadership & advisor structure, but I would not want to unfairly prejudice an organization for that reason alone.

However, there are some worrying signs, which is why I feel justified in saying something at least in a comment, with the hope that someone might prove me wrong. CFAR used donated funds to pay for Yudkowsky's time in writing HPMoR. It is an enjoyable piece of fiction, and I do not object to the reasoning they gave for funding his writing. But it is a piece of fiction whose main illustrative character suffers from exactly the flaw that I talked about above, in spades. It is my understanding also that Yudkowsky is working on a rationality textbook for release by CFAR (not the sequences, which were released by MIRI). I have not seen any draft of this work, but Yudkowsky is currently 0 for 2 on this issue, so I'm not holding my breath. And given that further donations to CFAR are likely to pay for the completion of this work, which has a cringe-inducing Bayesian prior, I would be hesitant to endorse them. That and, as you said, publications have been sparse or non-existent.

But I know very little other than that about CFAR and I remain open to having my mind changed.

Comment author: Valentine 22 May 2015 05:48:23PM *  17 points [-]

As Chief Financial Officer for CFAR, I can say all the following with some authority:

CFAR used donated funds to pay for Yudkowsky's time in writing HPMoR.

Absolutely false. To my knowledge we have never paid Eliezer anything. Our records indicate that he has never been an employee or contractor for us, and that matches my memory. I don't know for sure how he earned a living while writing HPMOR, but at a guess it was as an employed researcher for MIRI.

It is my understanding also that Yudkowsky is working on a rationality textbook for release by CFAR (not the sequences which was released by MIRI).

I'm not aware of whether Eliezer is writing a rationality textbook. If he is, it's definitely not with any agreement on CFAR's part to release it, and we're definitely not paying him right now whether he's working on a textbook or not.

And given that further donations to CFAR are likely to pay for the completion of this work…

Not a single penny of CFAR donations go into paying Eliezer.

I cannot with authority promise that will never happen. I want to be clear that I'm making no such promise on CFAR's behalf.

But we have no plans to pay him for anything to the best of my knowledge as the person in charge of CFAR's books and financial matters.

Comment author: elharo 10 March 2013 11:14:00PM 4 points [-]

I wouldn't update a lot or revise too much based on this report. The simple fact is that there was so much packed into 4 days that there was just no way anyone could remember it all. I suspect different attendees understood, remembered, implemented, and forgot different subsets of the material.

I will note that it is extremely helpful to have the spiral bound notebook with detailed notes from the sessions. I've been skimming it every couple of weeks just to jog my memory, and give me ideas about what I should be working on. Usually I just toss these handouts after a conference or workshop, but this one's been really helpful.

Comment author: Valentine 11 March 2013 04:54:43PM 3 points [-]

I wouldn't update a lot or revise too much based on this report. The simple fact is that there was so much packed into 4 days that there was just no way anyone could remember it all. I suspect different attendees understood, remembered, implemented, and forgot different subsets of the material.

Noted, thanks. At the same time, the Planning Kata is a commonly forgotten one, and it's where we introduce the outside view. So it seems ripe for updating!

I will note that it is extremely helpful to have the spiral bound notebook with detailed notes from the sessions.

Good to know!

I forgot to mention one thing in reply to your top post, by the way: several references for Turbocharging Training should be in that booklet. The one part that the references don't particularly support is the addition of imagery as a means of creating intensity. It clearly works for many people, but the evidence is mostly anecdotal. Quite a few people walked away with the impression that the imagery was the point of Turbocharging (which it wasn't), so we've removed that and I now emphasize material that's more directly connected to the literature.

Comment author: Dr_Manhattan 08 March 2013 12:27:07PM 0 points [-]

Without making concrete plans, I'd like to offer some ideas off-line, PM me?

Comment author: Valentine 10 March 2013 08:09:57PM 0 points [-]

Better yet, could you email me? I'd love to talk about this, but I find I check Less Wrong infrequently enough that PMs here are not a reliable way for me to keep up with communications.

My email is valentine at appliedrationality dot org.

Thanks!

Comment author: Valentine 10 March 2013 08:07:44PM *  6 points [-]

This is incredibly helpful. Thank you! This part stood out to me:

Similarly I completely did not understand the concepts of inside view vs. outside view at the workshop; and worse yet I don't think that I even realized that I didn't understand these. However now that I've read Thinking Fast and Slow, the lightbulb has gone on. Inside view is simply me deciding how likely I (or my team) am to accomplish something based on my judgement of the problem and our capabilities. Outside view is a statistical question about how people and teams like us have done when confronted with similar problems in the past. As long as there are similar teams and similar problems to compare with, the outside view is likely to be much more accurate.

This and the similar comment about having forgotten the Planning Kata makes for really useful feedback. Yours is not the only report we've recently collected like this, but I think this is the most detailed. We'll revise this and test some revisions. Thank you!
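The inside/outside view contrast quoted above can be sketched in a few lines; the project data here is entirely made up.

```python
# Hypothetical data: completion times (weeks) for similar past projects.
past_projects = [6, 9, 14, 8, 22, 11, 7, 30, 10, 13]

# Inside view: my own judgement of this project.
inside_estimate = 5  # "we're fast, it'll take five weeks"

# Outside view: what did teams like us actually take? Use the median
# of the reference class rather than my own gut feel.
outside_estimate = sorted(past_projects)[len(past_projects) // 2]
print(inside_estimate, outside_estimate)  # 5 vs 11
```

The typical planning-fallacy result is exactly this gap: the inside estimate undershoots what the reference class actually experienced.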

Comment author: Giles 05 February 2013 02:34:52PM 1 point [-]

Not that I know of

Any advice on how to set one up? In particular how to add entries to it retrospectively - I was thinking about searching the comments database for things like "I intend to", "guard against", "publication bias" etc. and manually find the relevant ones. This is somewhat laborious, but the effect I want to avoid is "oh I've just finished my write-up (or am just about to), now I'll go and add the original comment to the anti-publication bias registry".

On the other hand it seems like anyone can safely add anyone else's comment to the registry as long as it's close enough in time to when the comment was written.

Any advice? (I figured if you're involved at CFAR you might know a bit about this stuff).

Comment author: Valentine 07 February 2013 05:20:54PM 0 points [-]

Any advice? (I figured if you're involved at CFAR you might know a bit about this stuff).

A reasonable assumption, but alas, false! I don't think I have anything useful to add to this. Sorry!
