All of adamtpack's Comments + Replies

But why should that be bad if you could justify any experiment? Let's say you had enough readership and enough 'active' readership that quite a few people did the same thing you did.

Then 1. You're doing a lot of good, and that sounds like a really cool blog and pursuit actually. And 2. You will need to raise your $/hour threshold in the VoI calculation in order to pick and choose only the very highest-returning experiments. Both interesting outcomes.

3gwern
I don't think that follows. Suppose I'm considering two experiments: A, with an estimated return of $100, and B, with an estimated return of $200; I muse that I should probably do the $200 experiment B first and only then A (if ever). I then reflect that I have 10 readers who will follow the results, and logically I ought to multiply the returns by 10, so A is actually worth $1,000 and B is actually worth $2,000. I then muse that I should probably do... experiment B. Choices between experiments aren't affected by a constant factor applied equally to all experiments: the highest marginal return remains the highest marginal return. (If experiment B was the best one to do with no audience, then it's still the best one to do with any audience.) Where the audience would matter is if experiments interact with the audience: maybe no one cares about vitamin D but people are keenly interested in modafinil. Then the highest return could change based on how you use audience numbers.
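A minimal Python sketch of the arithmetic in that reply (the dollar figures and the per-experiment interest numbers are illustrative, not from the original): multiplying every return by the same audience factor leaves the ranking unchanged, while an audience factor that differs per experiment can change it.

```python
# Estimated solo returns of the two hypothetical experiments, in dollars.
returns = {"A": 100, "B": 200}
audience_factor = 10  # 10 readers replicate whatever result I publish

best_solo = max(returns, key=returns.get)
scaled = {name: value * audience_factor for name, value in returns.items()}
best_scaled = max(scaled, key=scaled.get)
# A constant factor applied to every experiment preserves the argmax.
assert best_solo == best_scaled == "B"

# The ranking can only change if audience interest differs per experiment,
# e.g. readers ignore one topic but follow the other closely (numbers hypothetical).
interest = {"A": 50, "B": 1}
weighted = {name: returns[name] * interest[name] for name in returns}
print(max(weighted, key=weighted.get))  # now "A" wins: audience interest flips the choice
```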

Thanks for clarifying that. I should note that I am very interested in techniques for self-improvement, too. I am currently learning how to read. (Apparently, I never knew :( ) I'm also getting everything organized, GTD-style. (It seems a far less daunting prospect now than when I first heard of the idea, because I'm pseudo-minimalist.)

I'm still surprised at the average LWer's reaction here, probably because the nature of 'volition on the level of people' isn't clear to me. Not something I expect you to answer; clarifying the distinction was helpful enough.

But your environment includes people, dude.

This shouldn't be a puzzle. Reinforcement happens, consciously or subconsciously. Why in the name of FSM would you choose to relinquish the power to actually control what would otherwise happen just subconsciously?

How is that not, on the face of it, a paragon, a prototype of optimization? Isn't that what optimizing is: more or less consciously changing what is otherwise unconscious?

I'm confused, not only by the beginning of this comment, but by several others as well.

I thought being a LessWronger meant you no longer thought in terms of free will. That it's a naive theory of human behavior, somewhat like naive physics.

I thought so, anyway. I guess I was wrong? (This comment still upvoted for amazing analysis.)

4Vaniver
Autonomy and philosophical free will are different things. Philosophical free will is the question "well, if physical laws govern how my body acts, and my brain is a component of my body, then don't physical laws govern what choices I make?", to which the answer is mu. One does not need volition on the level of atoms to have volition on the level of people, and volition on the level of people is autonomy. (You will note that LW is very interested in techniques to increase one's will, take more control over one's goals, and so on. Those would be senseless goals for a fatalist.)

.... And here begins the debate.

What do we do? What do we think about this piece of freaking powerful magic-science?

I vote we keep it a secret. Some secrets are too dangerous and powerful to be shared.

5beoShaffer
I think the cat is out of the bag on this one.

.... And what about helping other people without knowing you helped them? /sly look/

[This comment is no longer endorsed by its author]
2TheOtherDave
Similarly, if helping people is OK, it's OK whether I know I'm doing it or not, and if it's not OK, it's not OK whether I know I'm doing it or not.