Comment author: alex_zag_al 01 May 2016 01:45:41AM 0 points

The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

Giving up on debating the probability of a particular proposition, and shifting to debating the merits of a particular rule, is, I feel, one of the ideas behind frequentist statistics. Like, I'm not going to say anything about whether the true mean is in my confidence interval in this particular case. But note that using this confidence interval formula works pretty well on average.
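
As a toy illustration of that frequentist stance (my own sketch in Python, not from the original comment; the normal-theory interval and all the numbers are assumptions for the example), simulating the long-run coverage of a 95% confidence interval formula:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, n, trials = 3.0, 50, 10_000
z = 1.96  # ~95% normal interval

covered = 0
for _ in range(trials):
    sample = rng.normal(true_mean, 2.0, size=n)
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    # In any single trial we say nothing about whether the true mean
    # is inside [m - z*se, m + z*se]; we only track the long-run rate.
    if m - z * se <= true_mean <= m + z * se:
        covered += 1

print(f"coverage over {trials} intervals: {covered / trials:.3f}")  # ~0.95
```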

Comment author: alex_zag_al 23 February 2016 09:56:36PM 0 points

I don't know about the role of this assumption in AI, which is what you seem to care most about. But I think I can answer about its role in philosophy.

One thing I want from epistemology is a model of ideally rational reasoning, under uncertainty. One way to eliminate a lot of candidates for such a model is to show that they make some kind of obvious mistake. In this case, the mistake is judging something as a good bet when really it is guaranteed to lose money.
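
A minimal sketch of such an "obvious mistake" (my own illustration in Python, not from the original comment): an agent whose credences for A and not-A sum to more than 1, and which treats each credence as a fair price for a $1 bet, will happily buy a pair of bets that together lose money in every possible world -- a Dutch book.

```python
# An agent that treats a credence p in X as a fair price for a ticket
# paying $1 if X occurs. If its credences in A and not-A sum to more
# than 1, buying both tickets looks like a good bet by its own lights,
# yet it is guaranteed to lose money.

def guaranteed_loss(p_A: float, p_not_A: float) -> float:
    """Net outcome of buying both tickets; same in every possible world."""
    cost = p_A + p_not_A  # price paid for the two tickets
    payout = 1.0          # exactly one of A, not-A occurs
    return payout - cost

# Incoherent credences: P(A) + P(not-A) = 1.2 > 1
print(guaranteed_loss(0.6, 0.6))  # -0.2, whichever way A turns out
```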

Comment author: alex_zag_al 14 February 2016 02:45:26PM 1 point

Inquiring after the falsifiability of a theory?

Not perfect but very good, and pretty popular.

Comment author: alex_zag_al 14 November 2015 11:13:31AM * 0 points

After a few years in grad school, I think the principles of science are different from the ones you've picked up from your own sources.

In particular, this stands out to me as incorrect:

(1) I had carefully followed everything I'd been told was Traditionally Rational, in the course of going astray. For example, I'd been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.

My training in writing grant applications contradicts this depiction of science. A grant has an introduction that reviews the facts of the field. It is followed by your hypothesis, and the mark of a promising grant is that the hypothesis looks obvious given your depiction of the facts. In fact, it is best if your introduction causes the reader to think of the hypothesis themselves, and anticipate its introduction.

This key feature of a good hypothesis is totally separate from its falsifiability (important later in the application). And remember, the hypothesis has to appear obvious in the eyes of a senior in the field, since that's who judges your proposal. Can you say this for your stupid theory?

(2) Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, "Oh, well, I guess my theory was wrong."

Given the above, the social practice of science would not have funded you to work for ten years on this theory. And this reflects the social practice's implementation of the ideals of Science. The ideals say your hypothesis, while testable, is stupid.

I think you have a misconception about how science handles stupid testable ideas. However, I can't think of a way that this undermines this sequence, which is about how science handles rational untestable ideas.

EDIT: It seems poke said all this years ago.

In response to The Power of Noise
Comment author: johnswentworth 20 June 2014 05:49:05AM 1 point

The randomized controlled trial is a great example where a superintelligence actually could do better by using a non-random strategy. Ideally, an AI could take its whole prior into account and do a value-of-information calculation. Even if it had no useful prior, that would just mean that any method of choosing is equally "random" under the AI's knowledge.

Comment author: alex_zag_al 14 November 2015 09:39:19AM * 2 points

Bayesian adaptive clinical trial designs place subjects in treatment groups based on a posterior distribution. (Clinical trials accrue patients gradually, so you don't have to assign the patients using the prior: you assign new patients using the posterior conditioned on observations of the current patients.)

These adaptive trials are, as you conjecture, much more efficient than traditional randomized trials.

Example: I-SPY 2. Assigns patients to treatments based on their "biomarkers" (biological measurements made on the patients) and the posterior derived from previous patients.

When I heard one of the authors explain adaptive trials in a talk, he said they were based on multi-armed bandit theory, with a utility function that combines accuracy of results with welfare of the patients in the trial.

However, unlike in conventional multi-armed bandit theory, the trial design still makes random decisions! The trials are still sort of randomized: "adaptively randomized," with patients having a higher chance of being assigned to certain groups than others, based on the current posterior distribution.
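
A toy sketch of adaptive randomization in that spirit (my own, in Python, using Thompson sampling on a two-arm Bernoulli bandit; this is not the actual I-SPY 2 design, which also conditions on biomarkers and uses a richer utility function):

```python
import numpy as np

rng = np.random.default_rng(0)
true_response = [0.3, 0.5]             # unknown response rates of two treatments
alpha, beta = np.ones(2), np.ones(2)   # Beta(1,1) prior on each arm

for patient in range(200):
    # Thompson sampling: draw a response rate for each arm from its
    # current posterior and assign the patient to the arm whose draw
    # is highest. The assignment is still random, but tilted toward
    # the arm the posterior favors -- "adaptive randomization".
    draws = rng.beta(alpha, beta)
    arm = int(np.argmax(draws))
    outcome = rng.random() < true_response[arm]
    # Posterior update on the assigned arm only.
    alpha[arm] += outcome
    beta[arm] += 1 - outcome

print("posterior means:", alpha / (alpha + beta))
print("patients per arm:", alpha + beta - 2)  # most end up on the better arm
```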

Comment author: alex_zag_al 14 November 2015 09:14:48AM * 0 points

"Here are some things that shouldn't happen, on my analysis: An ad-hoc self-modifying AI as in (1) undergoes a cycle of self-improvement, starting from stupidity, that carries it up to the level of a very smart human - and then stops, unable to progress any further."

I'm sure this has been discussed elsewhere, but to me it seems possible that progress may stop when the mind becomes too complex to make working changes to.

I used to think that a self-improving AI would foom because as it gets smarter, it gets easier for it to improve itself. But it may get harder for it to improve itself, because as it self-improves it may turn itself into more and more of an unmaintainable mess.

What if creating unmaintainable messes is the only way that intelligences up to very-smart-human-level know how to create intelligences up to very-smart-human level? That would make that level a hard upper limit on a self-improving AI.

Comment author: alex_zag_al 01 November 2015 06:06:07PM 0 points

As I understand the post, its idea is that a rationalist should never "start with a bottom line and then fill out the arguments".

I disagree. The idea, rather, is that your beliefs are as good as the algorithm that fills out the bottom line. Doesn't mean you shouldn't start by filling out the bottom line; just that you shouldn't do it by thinking of what feels good or what will win you an argument or by any other algorithm only weakly correlated with truth.

Also, note that if what you write above the bottom line can change the bottom line, that's part of the algorithm too. So, actually, I do agree that a rationalist should not write the bottom line, look for a chain of reasoning that supports it, and refuse to change the bottom line if the reasoning doesn't.

Comment author: alex_zag_al 30 July 2015 01:36:34PM 0 points

By trusting Eliezer on MWI, aren't you trusting both his epistemology and his mathematical intuition?

Eliezer believes that MWI allows you to derive the predictions of quantum physics without any additional complexity-adding hypotheses, such as collapse or the laws of motion for Bohm's particles. But according to the article on the Born probabilities, this belief rests on mathematical intuition: nobody knows how to derive the observations without additional hypotheses, though many people, Eliezer among them, conjecture that it's possible. Right?

I feel like this point must have been made many times before, as Eliezer's quantum sequence has been widely discussed, so maybe instead of a response I need a link to a previous conversation or a summary of previous conclusions.

But relating it to the point of your article... If Eliezer is wrong about quantum mechanics, should that lower my probability that his other epistemological views are correct? This is important because it affects whether or not I bother learning those views. The answer is "yes but not extremely", because I think if there's an error, it may be in the mathematical intuition.

To generalize a bit, it's hard to find pure tests of a single ability (though your example of stopping rules is actually a pretty good one for whether someone understands the meaning of all the probability symbols). Usually we should not be especially confused when someone with expertise is wrong about a single thing, since that single thing is probably not a pure test of that expertise. However, we should be confused if on average they are wrong as often as people without the expertise -- then we have to doubt either the expertise or our own judgments of the right and wrong answers.
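
(To make the stopping-rule test concrete -- this is the standard textbook example with my own numbers, not something from the original comment: the same data, five heads then a tail, yields different p-values under different stopping rules, while the likelihood function, and hence any Bayesian update, is identical.)

```python
from math import comb

p0 = 0.5  # null hypothesis: fair coin

# Data: 5 heads, then 1 tail, under two stopping rules.

# Rule A: flip exactly 6 times. One-sided p-value: P(at least 5 heads in 6).
p_fixed_n = sum(comb(6, k) * p0**k * (1 - p0)**(6 - k) for k in (5, 6))

# Rule B: flip until the first tail. p-value: P(first tail on toss 6 or later)
# = P(first five tosses are all heads).
p_stop_at_tail = p0**5

print(p_fixed_n)       # 0.109375 -> not "significant" at 0.05
print(p_stop_at_tail)  # 0.03125  -> "significant" at 0.05

# The likelihood function is proportional to p^5 * (1-p) either way, so a
# Bayesian update does not depend on the stopping rule -- which is exactly
# why stopping rules test what each probability symbol ranges over.
```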

Comment author: alex_zag_al 08 January 2015 11:13:10PM * 0 points

This reminds me of hitting Ctrl+C, but on a thought process or object of focus instead of a program. After reading, I do it when I suspect I'm about to voluntarily do something I'm going to regret.

EDIT: At least, I think I'm doing it... I haven't trained for anywhere near as long as the training in your post takes.

Comment author: christopherj 09 October 2013 03:45:20AM 2 points

This article gives me a better perspective as to why scientific journals like to publish surprising results rather than unsurprising ones -- they are more informative, for the very reason they are likelier to be wrong.

Comment author: alex_zag_al 22 November 2014 07:37:52PM 0 points

Yes... if a theory adds to the surprisal of an experimental result, then the experimental result adds precisely the same amount to the surprisal of the theory. That's interesting.
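
(To spell that out, writing surprisal as negative log probability and using Bayes' theorem, $P(T \mid E)\,P(E) = P(E \mid T)\,P(T)$, for the middle step:)

$$
-\log P(E \mid T) \,-\, \bigl(-\log P(E)\bigr)
\;=\; \log\frac{P(E)}{P(E \mid T)}
\;=\; \log\frac{P(T)}{P(T \mid E)}
\;=\; -\log P(T \mid E) \,-\, \bigl(-\log P(T)\bigr)
$$

The left-hand side is the surprisal the theory adds to the result; the right-hand side is the surprisal the result adds to the theory.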
