Comment author: gucciCharles 26 September 2016 05:01:11AM 2 points [-]

She gives a pattern of feedback that makes the students practice well? In the sense that she gives positive feedback, she functions more as a motivator than as a teacher. Her skill is teaching; it's only happenstance that she teaches music. Had she taught shoe polishing or finger painting, she would have produced the best shoe polishers and the most skilled finger painters.

Perhaps she doesn't have many complex skills but has strong fundamentals (think Tim Duncan of the NBA's San Antonio Spurs). She might make her students practice the fundamentals, which will allow them to do more complex work as they get older.

Finally, she might have knowledge more advanced than her skill. She might not have the hand-eye coordination or the processing speed to play sophisticated music, but she might know how it's done. Imagine a 5-foot-tall Jewish guy who loves basketball. He's not gonna make the NBA. It's simply not gonna happen. However, he might understand the game better than many NBA players. He might even be the best basketball coach in the world, even though his athleticism (and hence his basketball-playing skill) is less than that of NBA players. Likewise, the teacher might have a strong theoretical understanding without the ability to put that knowledge into practice.

Comment author: Furcas 24 September 2016 03:39:19PM 18 points [-]

Donated $500!

Comment author: Romashka 23 September 2016 11:41:10AM 0 points [-]

Okay, VoI aside, how would you bet in the following setup:

There are three 5-copeck coins, randomly chosen. Each one is dropped 20 times (A0, B0, C0). Then a piece of gum is attached to the heads side of Coin A and it is dropped 20 times (AGH); likewise to the tails side of Coin A (AGT); to the heads side (BGH) or tails side (BGT) of Coin B; and to the tails side of Coin C (CGT). Then Coin C is dropped three times, the gum is attached to the side that came up in two of those three, and Coin C is dropped twenty more times (CGX). The numbers are as follows: A0: heads 14/20, AGT: heads 10/20, AGH: heads 7/20. B0: heads 8/20, BGT: heads 8/20, BGH: heads 8/20 (I guess I need to hoard this one). C0: heads 10/20, CGT: heads 11/20, CGX: heads 14/20. To which side of Coin C was the gum applied in CGX?
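One hedged way to attack this is a likelihood comparison, borrowing Coin A's observed shifts as a stand-in for the gum's effect. That is a loud assumption: nothing guarantees the effect transfers across coins, and Coin C's own CGT run (heads went up, not down) points the other way. A minimal sketch under that assumption:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of k heads in n flips with heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Observed head counts out of 20 flips each (from the setup above).
C0_heads, CGX_heads = 10, 14

# Assumed effect sizes, taken from Coin A:
# gum on tails shifted A's heads rate from 14/20 to 10/20 (-0.20);
# gum on heads shifted it from 14/20 to 7/20 (-0.35).
shift_tails = 10 / 20 - 14 / 20
shift_heads = 7 / 20 - 14 / 20

base = C0_heads / 20  # Coin C's baseline heads rate, 0.50

# Likelihood of the CGX data under each hypothesis.
L_gum_on_tails = binom_pmf(CGX_heads, 20, base + shift_tails)
L_gum_on_heads = binom_pmf(CGX_heads, 20, base + shift_heads)

print(L_gum_on_tails / L_gum_on_heads)  # >1 favors "gum on tails"
```

With these assumed shifts the likelihood ratio strongly favors "gum on tails", but the answer is only as good as the assumption that Coin A's shifts transfer.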

Comment author: Nick5a1 22 September 2016 05:15:07PM 0 points [-]

It seems to me that the paraphrasing in parentheses is also preying on the Conjunction Bias, by adding additional detail.

Comment author: omalleyt 22 September 2016 04:56:22AM 0 points [-]

Most things humans like are super-colorful. Colorful things were probably a good sign of fertile land or some other desirable thing. As to the stars, don't you think the guy who looks up every night and likes what he sees is gonna have a better, more productive life than the guy who looks up and grimaces?

Comment author: PhilGoetz 20 September 2016 03:01:10PM 0 points [-]

I wrote a paragraph on that in the post. I predicted a publication bias in favor of positive results, assuming the community is not biased on the particular issue of vaccines & autism. This prediction is probably wrong, but that hypothesis (lack of bias) is what I was testing.

Comment author: roland 20 September 2016 02:29:38PM 1 point [-]

Let E stand for the observation of sabotage

Didn't you mean "the observation of no sabotage"?

In response to comment by CCC on Say Not "Complexity"
Comment author: stack 20 September 2016 01:12:25PM 0 points [-]

Oh I see: for that specific instance of the task.

I'd like to see someone make this AI; I want to know how it could be done.

In response to comment by stack on Say Not "Complexity"
Comment author: CCC 20 September 2016 10:24:09AM 0 points [-]

(Wow, this was from a while back)

I wasn't suggesting that the AI might try to calculate the reverse sequence of moves. I was suggesting that, if the cube-shuffling program is running on the same computer, then the AI might learn to cheat by, in effect, looking over the shoulder of the cube-shuffler and simply writing down all the moves in a list; then it can 'solve' the cube by simply running the list backwards.

In response to comment by CCC on Say Not "Complexity"
Comment author: stack 19 September 2016 11:02:17PM 0 points [-]

The problem with this is that the state space is so large that it cannot explore every transition, so it can't follow transitions backwards in a straightforward manner as you've proposed. It needs some kind of intuition to minimize the search space, to generalize it.

Unfortunately I'm not sure what that would look like. :(

Comment author: Vaniver 18 September 2016 10:44:31PM 1 point [-]

This tends to be very context dependent; I don't know enough about biology to estimate. The main caution here is that people tend to forget about regression to the mean (if you have a local measurement X that's only partly related to Y, you should not just port your estimate from X over to Y, but move it closer to what you would have expected from Y beforehand).
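The regression-to-the-mean adjustment can be sketched as a toy shrinkage formula (all numbers here are hypothetical; the "reliability" weight stands in for how strongly the local measurement X predicts Y):

```python
def shrink_toward_prior(x, prior_mean, reliability):
    """Regressed estimate: weight the noisy measurement by its
    reliability (how predictive X is of Y), and the remainder by
    what you expected for Y beforehand."""
    return reliability * x + (1 - reliability) * prior_mean

# Hypothetical: a local measurement of 90 where you'd have expected 60,
# with X only half-predictive of Y.
print(shrink_toward_prior(90, 60, 0.5))  # 75.0
```

The point is just that the ported estimate lands between the raw measurement and the prior expectation, not at the measurement itself.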

Comment author: Vaniver 18 September 2016 10:27:06PM 1 point [-]

You should play if the expected value is positive, and not if it's negative. If the test run results in heads, then the posterior probability is 2/3rds and 24*2/3-12=4, which is positive. If the test run results in tails, then the posterior probability is 1/3rd and 24*1/3-12=-4, which is negative.

(Why is the posterior probability 2/3 or 1/3? Check out footnote 3, or Laplace's Rule of Succession.)
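The arithmetic checks out in a short sketch (the 24-unit payoff and 12-unit cost are the figures used in the comment above; `laplace` is just Laplace's rule of succession):

```python
from fractions import Fraction

def laplace(successes, trials):
    """Laplace's rule of succession: (successes + 1) / (trials + 2)."""
    return Fraction(successes + 1, trials + 2)

p_after_heads = laplace(1, 1)  # test flip came up heads -> 2/3
p_after_tails = laplace(0, 1)  # test flip came up tails -> 1/3

ev_heads = 24 * p_after_heads - 12  # +4: positive, so play
ev_tails = 24 * p_after_tails - 12  # -4: negative, so don't
print(ev_heads, ev_tails)
```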

Comment author: omalleyt 18 September 2016 06:27:25PM 0 points [-]

Eliezer is jousting with Immanuel Kant here, who believed that our rationality would lead us to a supreme categorical imperative, i.e. a bunch of "ought" statements with which everyone with a sufficiently advanced ability to reason would agree.

Kant is of course less than compelling. His "treat people as ends, not just means" is cryptic enough to sound cool while remaining meaningless. If interpreted to mean that one should weigh the desires of all rational minds equally (the end result of contemplating both passive and active actions as influencing the fulfillment of the desires of others), then it dissolves into utilitarianism.

In response to Fake Selfishness
Comment author: omalleyt 18 September 2016 01:31:56AM 0 points [-]

When we weigh options in our mind, we pick the one that yields the cocktail of chemicals/neurotransmitters that induces the strongest positive response in our reward center. Or rather, the cocktail of chemicals/neurotransmitters that elicits the strongest positive response is able to pass its signals through to the motor neurons.

A desire to be moral, a desire to avoid pain, a desire to protect kin, all release chemicals.

Seen in this light, the phrase "everything one does is selfish" appears to reduce to "all choices are weighed through one's own neural algorithm." Which is so obvious as to be trivial. The only way to get around this would be to detach your motor neurons from your reward center, and hook them up to a committee of, say, ten other people's reward centers, with the action that receives the highest average response being performed. And the detachment is crucial. You can't just willingly abide by the committee's decision, because your choice to obey would still be passing through your own neural algorithm.

Is this what people mean when they boldly assert that everything a person does is selfish? I don't think so. I think, when looked at like this, the question dissolves.

Comment author: Romashka 17 September 2016 09:03:29PM 0 points [-]

Where do you get the exact "half-chance of nothing because you don't play"? How do you decide to play or not, given a favorable outcome of the test run?

Comment author: Romashka 17 September 2016 08:43:03PM 0 points [-]

But what if your friend offers to let you stick the gum to any other coin and see which way it lands, to get a feel for how the gum "might" affect the result*, and then offers you this deal? How would you calculate VoI then?

  • I ask because I often run into the difference between "physiological" and "ecological" approaches. In the first instance, you might study (for example) "Plant X with/without Fungus Y0 and/or Bacteria Z0" microscope slides, where you carefully inoculate X. In the second, you make slides from X collected in the wild, with who-knows-what growing in it, and have to say if it has Y1 or Z1 or anything at all. I mean, having a previous "physiological" study at hand sure helps, but...are there any quantitative estimates on how much?
Comment author: siIver 17 September 2016 11:56:02AM *  0 points [-]

To me it is immediately obvious that torture is preferable. Judging by the comments, I'm in the minority.

Comment author: Houshalter 16 September 2016 11:32:00AM 0 points [-]

But remove human agency and imagine the torturer isn't a person. Say you can remove a dust speck from your eye, but the procedure has a 1/3^^^3 chance of failing and giving you injuries equivalent to being tortured for 50 years.

Now imagine 3^^^3 people make a similar choice. One of them will likely fail the procedure and get tortured.
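The expected-harm arithmetic here is just linearity of expectation; a sketch with a large stand-in for 3^^^3 (which is far too big to represent directly):

```python
from fractions import Fraction

def expected_failures(n_people, p_fail):
    """Expected number of bad outcomes when each of n people
    independently risks p_fail (linearity of expectation)."""
    return n_people * p_fail

n = 10**100  # stand-in for 3^^^3
print(expected_failures(n, Fraction(1, n)))  # 1
```

However many people make the choice, n people each risking 1/n yields exactly one expected torture.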

Comment author: Wes_W 14 September 2016 11:29:34PM *  2 points [-]

But in the single-shot scenario, after it comes down tails, what motivation does an ideal game theorist have to stick to the decision theory?

That's what the problem is asking!

This is a decision-theoretical problem. Nobody cares about it for immediate practical purpose. "Stick to your decision theory, except when you non-rigorously decide not to" isn't a resolution to the problem, any more than "ignore the calculations since they're wrong" was a resolution to the ultraviolet catastrophe.

Again, the point of this experiment is that we want a rigorous, formal explanation of exactly how, when, and why you should or should not stick to your precommitment. The original motivation is almost certainly in the context of AI design, where you don't HAVE a human homunculus implementing a decision theory, the agent just is its decision theory.

Comment author: Raemon 14 September 2016 10:17:48PM 1 point [-]

I did not end up using it, although I periodically stumble upon this again and still think it's a neat way of thinking.
