
Comment author: hairyfigment 29 September 2016 10:15:19PM 0 points [-]

...As I pointed out recently in another context, humans have existed for tens of thousands of years or more. Even civilization existed for millennia before obvious freak Isaac Newton started modern science. Your position is a contender for the nuttiest I've read today.

Possibly it could be made better by dropping this talk of worlds and focusing on possible observers, given the rise in population. But that just reminds me that we likely don't understand anthropics well enough to make any definite pronouncements.

Comment author: curtd59 29 September 2016 06:03:43PM 0 points [-]

WHERE PHILOSOPHY (rational instrumentalism) MEETS SCIENCE (physical instrumentalism)?

Philosophy and Science are identical processes until we attempt to use one of them without the other.

That point of demarcation is determined by the limits beyond which we cannot construct either (a) logical, or (b) physical, instruments with which to eliminate error, bias, wishful thinking, suggestion, loading and framing, obscurantism, propaganda, pseudorationalism, pseudoscience, and outright deceit.

Comment author: Good_Burning_Plastic 29 September 2016 08:03:41AM 2 points [-]

Computing can't harm the environment in any way

Well...

Comment author: wafflepudding 29 September 2016 02:28:39AM 0 points [-]

I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?

Comment author: SilasBarta 29 September 2016 01:52:03AM 1 point [-]

My favorite one: burning wood for heat. Better than fossil fuels for the GW problem, but really bad for local air quality.

Comment author: TheAncientGeek 28 September 2016 02:02:22PM 0 points [-]

Given that qualia are what they appear to be, are you denying that qualia can appear simple, or that they are just appearances?

Comment author: Vaniver 27 September 2016 09:38:12PM 1 point [-]

There shouldn't be any conflicts between VoI and Bayesian reasoning; I thought of all of my examples as Bayesian.

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:

I don't think that example describes the situation you're talking about. Remember that VoI is computed in a forward-looking fashion; when one has a Beta(1,1) distribution over the arm, one thinks the true propensity of the arm is as likely to be above .5 as below .5.

The VoI comes into that framework by being the piece that agitates for exploration. If you've pulled arm1 seven times and gotten four heads and three tails, and haven't pulled arm2 yet, the expected value of pulling arm1 is higher than pulling arm2, but there's a fairly substantial chance that arm2 has a higher propensity than arm1. Heuristics that say to do something like pull the lever with the higher 95th-percentile propensity bake in the VoI from pulling arms with lower means but higher variances.
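As a rough numerical sketch of this (my own illustration, assuming Beta(1,1) priors and standard-library Python):

    import random

    # Arm1 after four heads and three tails on a Beta(1,1) prior: posterior Beta(5, 4).
    # Arm2 has not been pulled yet, so it is still at its Beta(1,1) prior.
    N = 100_000
    arm2_better = sum(
        random.betavariate(1, 1) > random.betavariate(5, 4)
        for _ in range(N)
    )
    print("E[arm1 propensity] =", 5 / 9)        # about 0.56
    print("P(arm2 > arm1) =", arm2_better / N)  # about 0.44

So arm1 has the higher posterior mean, but there is still a sizable chance that arm2 is actually better; that residual chance is what the VoI term rewards exploring.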


If, from a forward-looking perspective, one would decrease one's subjective value of the decision situation by gaining information, then one shouldn't gain that information. That is, it's a bad idea to pay for a test if you don't expect the additional value to cover the cost of the test. (Maybe you'll continue to pull arm1 regardless of the results of pulling arm2, as in the case where arm1 has delivered heads 7 times in a row. Then switching means taking a hit for nothing.)

One thing that's important to remember here is conservation of expected evidence--if I believe now that running an experiment will lead me to believe that arm1 has a propensity of .1 and arm2 has a propensity of .2, then I should already believe those are the propensities of those arms, and so there's no subjective loss of well-being.
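A minimal sketch of conservation of expected evidence in the Beta-Bernoulli setting (my own illustration, not part of the original comment):

    # Prior over an arm's propensity: Beta(a, b), with prior mean a / (a + b).
    a, b = 1, 1
    prior_mean = a / (a + b)

    # The predictive probability of heads on the next pull equals the prior mean.
    p_heads = prior_mean

    # Posterior means after observing heads or tails:
    mean_if_heads = (a + 1) / (a + b + 1)
    mean_if_tails = a / (a + b + 1)

    # Averaging the posterior mean over the predictive distribution
    # recovers the prior mean exactly: you cannot expect to update.
    expected_posterior_mean = p_heads * mean_if_heads + (1 - p_heads) * mean_if_tails
    print(prior_mean, expected_posterior_mean)  # 0.5 0.5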

Comment author: capybaralet 26 September 2016 10:48:41PM *  1 point [-]

Does anyone have any insight into how VoI plays with Bayesian reasoning?

At a glance, it looks like the VoI is usually not considered from a Bayesian viewpoint, as it is here. For instance, Wikipedia says:

""" A special case is when the decision-maker is risk neutral where VoC can be simply computed as; VoC = "value of decision situation with perfect information" - "value of current decision situation" """

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a Bernoulli 2-armed bandit:

If the agent's prior over each arm is uniform over [0,1], its current subjective value is .5 (playing arm1). After many observations, it learns (with high confidence) that arm1 has reward .1 and arm2 has reward .2. It should be glad to know this (so it can change to the optimal policy of playing arm2), BUT the subjective value of this decision situation is less than when it was ignorant, because .2 < .5.
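A small sketch of the numbers here (my own illustration, with hypothetical observation counts under a Beta-Bernoulli model):

    # Before learning: uniform (Beta(1,1)) priors, so each arm's subjective
    # expected reward is 0.5, and the value of the decision situation is 0.5.
    value_ignorant = max(0.5, 0.5)

    # After many observations: say arm1 paid out 10 times in 100 pulls and
    # arm2 paid out 20 times in 100 pulls (made-up counts).
    post1 = (1 + 10) / (2 + 100)   # posterior mean, about 0.108
    post2 = (1 + 20) / (2 + 100)   # posterior mean, about 0.206
    value_informed = max(post1, post2)

    print(value_ignorant, value_informed)  # 0.5 vs about 0.21

The informed agent really is better off (it now plays the better arm), but its subjective value of the decision situation has dropped, which is the apparent tension described above.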

Comment author: So8res 26 September 2016 06:39:53PM 1 point [-]

Thanks!

Comment author: gucciCharles 26 September 2016 05:02:56AM 0 points [-]

Isn't teaching itself a skill? So what if she was a bad musician; she was obviously a first-rate teacher (independent of the subject that she taught).

Comment author: gucciCharles 26 September 2016 05:01:11AM 1 point [-]

She gives a pattern of feedback that makes the students practice well? In the sense that she gives positive feedback, she functions more as a motivator than as a teacher. Her skill is teaching; it's only happenstance that she teaches music. Had she taught shoe polishing or finger painting, she would have produced the best shoe polishers and the most skilled finger painters.

Perhaps she doesn't have many complex skills but has strong fundamentals (think Tim Duncan of the San Antonio Spurs). She might make her students practice the fundamentals, which will allow them to do more complex work as they get older.

Finally, she might have knowledge more advanced than her skill. She might not have the hand-eye coordination or the processing speed to play sophisticated music, but she might know how it's done. Imagine a 5-foot-tall Jewish guy who loves basketball. He's not gonna make the NBA. It's simply not gonna happen. However, he might understand the game better than many NBA players, and he might be the best basketball coach in the world even though his athleticism (and hence his basketball-playing skill) is less than that of NBA players. Likewise, the teacher might have a strong theoretical understanding without the ability to put it into practice.

Comment author: Furcas 24 September 2016 03:39:19PM 15 points [-]

Donated $500!

Comment author: Romashka 23 September 2016 11:41:10AM 0 points [-]

Okay, VoI aside, how would you bet in the following setup:

There are three randomly chosen 5-kopeck coins. Each one is dropped 20 times (A0, B0, C0). Then a piece of gum is attached to the heads side of Coin A and it is dropped 20 times (AGH); likewise with gum on the tails side of Coin A (AGT), on the heads (BGH) and tails (BGT) sides of Coin B, and on the tails side of Coin C (CGT). Finally, Coin C is dropped three times, the gum is attached to whichever side came up in two of those three drops, and Coin C is dropped twenty more times (CGX). The numbers are as follows:

A0: heads 14/20; AGT: heads 10/20; AGH: heads 7/20.
B0: heads 8/20; BGT: heads 8/20; BGH: heads 8/20. (I guess I need to hoard this one.)
C0: heads 10/20; CGT: heads 11/20; CGX: heads 14/20.

To which side of Coin C was the gum applied in CGX?
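One crude way to bet (my own sketch, treating each run of 20 drops as a binomial draw; since Coin C has no gum-on-heads run, its gum-on-heads propensity below is a loose guess borrowed from Coin A):

    from math import comb

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # Hypothesis "gum on tails": use Coin C's own gum-on-tails run, CGT = 11/20.
    p_tails = 11 / 20
    # Hypothesis "gum on heads": borrow Coin A's relative effect
    # (A0 14/20 -> AGH 7/20, i.e. heads roughly halved) and apply it
    # to C0 = 10/20. This is a strong, questionable assumption.
    p_heads = (10 / 20) * (7 / 14)

    k, n = 14, 20  # CGX: 14 heads out of 20
    lr = binom_pmf(k, n, p_tails) / binom_pmf(k, n, p_heads)
    print("likelihood ratio, tails : heads =", lr)  # large, on the order of 10^3

On this admittedly shaky model the data favor gum on the tails side, so that's how I'd bet; with only 20-drop samples, though, none of these propensity estimates deserve much confidence.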

Comment author: Nick5a1 22 September 2016 05:15:07PM 0 points [-]

It seems to me that the paraphrasing in parentheses is also preying on the Conjunction Bias, by adding additional detail.

Comment author: omalleyt 22 September 2016 04:56:22AM 0 points [-]

Most things humans like are super-colorful. Colorful things were probably a good sign of fertile land or some other desirable thing. As to the stars, don't you think the guy who looks up every night and likes what he sees is gonna have a better, more productive life than the guy who looks up and grimaces?

Comment author: PhilGoetz 20 September 2016 03:01:10PM 0 points [-]

I wrote a paragraph on that in the post. I predicted a publication bias in favor of positive results, assuming the community is not biased on the particular issue of vaccines & autism. This prediction is probably wrong, but that hypothesis (lack of bias) is what I was testing.

Comment author: roland 20 September 2016 02:29:38PM 1 point [-]

Let E stand for the observation of sabotage

Didn't you mean "the observation of no sabotage"?

In response to comment by CCC on Say Not "Complexity"
Comment author: stack 20 September 2016 01:12:25PM 0 points [-]

Oh I see: for that specific instance of the task.

I'd like to see someone make this AI, I want to know how it could be done.

In response to comment by stack on Say Not "Complexity"
Comment author: CCC 20 September 2016 10:24:09AM 0 points [-]

(Wow, this was from a while back)

I wasn't suggesting that the AI might try to calculate the reverse sequence of moves. I was suggesting that, if the cube-shuffling program is running on the same computer, then the AI might learn to cheat by, in effect, looking over the shoulder of the cube-shuffler and simply writing down all the moves in a list; then it can 'solve' the cube by simply running the list backwards.
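A minimal sketch of that cheat (my own illustration, with a hypothetical face-turn encoding):

    # Each move is a face turn such as "R" (clockwise), "R'" (counterclockwise),
    # or "R2" (half turn).
    def invert(move):
        if move.endswith("2"):    # half turns are their own inverse
            return move
        return move[:-1] if move.endswith("'") else move + "'"

    def cheat_solution(recorded_scramble):
        # Undo the scramble by applying the inverse moves in reverse order.
        return [invert(m) for m in reversed(recorded_scramble)]

    print(cheat_solution(["R", "U'", "F2"]))  # ["F2", "U", "R'"]

No search or learning is needed, which is why it counts as cheating rather than solving.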

In response to comment by CCC on Say Not "Complexity"
Comment author: stack 19 September 2016 11:02:17PM 0 points [-]

The problem with this is that the state space is so large that the AI cannot explore every transition, so it can't follow transitions backwards in a straightforward manner as you've proposed. It needs some kind of intuition to minimize the search space, to generalize it.

Unfortunately I'm not sure what that would look like. :(
