
Comment author: Mark_Eichenlaub 04 January 2013 05:18:44AM *  4 points [-]

A bit of an aside, but for me the reference to "If" is a turn-off. I read it as promoting a fairly arbitrary code of stoicism rather than effectiveness. The main message I get is: keep cool, don't complain, don't show that you're affected by the world, and now you've achieved your goal, which apparently was to live up to Imperial Britain's ideal of masculinity.

I also see it as a recipe for disaster - don't learn how to guide and train your elephant; just push it around through brute force and your indefatigable will to hold on. It does have a message of continuing to work effectively even in bad circumstances, but for me that feels incidental to the poem's emotional content. I.e., Kipling probably thought that suffering and failure are innately good things. On that view, someone who takes suffering and failure well but never meets their goals is more of a man than someone who consistently meets goals without tragic hardship, or meets them despite expressing their despair during setbacks.

Note: I heard the poem first a long time ago, but I didn't originally read it this way. I saw it differently after reading this: http://www.quora.com/Poems/What-is-your-view-on-the-Poem-IF-by-Rudyard-Kipling/answer/Marcus-Geduld

Comment author: Marcello 23 January 2013 11:19:25PM 7 points [-]

A bit of an aside, but for me the reference to "If" is a turn-off. I read it as promoting a fairly arbitrary code of stoicism rather than effectiveness. The main message I get is: keep cool, don't complain, don't show that you're affected by the world, and now you've achieved your goal,

I agree that the poem is about stoicism, but have a very different take on what stoicism is. Real stoicism is about training the elephant to be less afraid and more stable, and thereby accomplish more. For example, the standard stoic meditation technique of thinking about the worst and scariest possible outcomes you could face will gradually chip away at instinctive fear responses and allow one to think in a more level-headed way. Similarly, taking cold showers deconditions the flinch response (which to some extent also allows one not to flinch away from thoughts).

Of course, all of these real stoic training techniques are challengingly unpleasant. It's much easier to be a poser-stoic who explicitly optimizes for how stoic a face they put forward - keeping cool, not complaining, and not emoting - rather than putting in all the hard work required to train the elephant and become a real stoic. This is, as you say, a recipe for disaster if pushed too hard. Most people out there who call themselves stoics are poser-stoics, just as Sturgeon's Law would demand. After reading the article you linked to, I now have the same opinion of the kind of stoicism the Victorian school system demanded.

Comment author: Marcello 18 August 2012 04:31:05AM 5 points [-]

Short version: Make an Ekman-style micro-expression reader in a wearable computer.

Fleshed-out version: You have a wearable computer (perhaps something like Google Glass) which sends video from its camera (or two cameras, if one is not enough) over to a high-powered CPU, which processes the images, locates the faces, and then identifies micro-expressions by matching and comparing the current image (or 3D model) to previous frames to infer which bits of the face have moved in which directions. If a strong enough micro-expression happens, the user is informed by a tone or other notification. Alternatively, one could go the more pedagogical route by showing them a still frame of the person doing the micro-expression some milliseconds prior, with the relevant bits of the face highlighted.

Feasibility: Computers are already good at finding faces in images and at creating 3D models from multiple camera perspectives, and I'm pretty sure small cameras are good enough by now. We need the beefy CPU and/or GPU as a separate device for now, because it's going to be a while before wearables are good enough to do this kind of heavy-duty processing on their own, but wifi is good enough to transmit very high resolution video. The foggiest bit in my model is whether current image processing techniques are up to the challenge. Would anyone with expertise in machine vision care to comment on this?
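To make the pipeline concrete, here's a minimal sketch in Python, assuming OpenCV and its bundled Haar-cascade face detector; the frame-differencing "movement score" is only a placeholder for a real micro-expression classifier, and the threshold is invented:

```python
# Toy sketch: detect faces, then flag rapid frame-to-frame facial change.
# A real Ekman-style reader would need a trained micro-expression model;
# the absdiff score below is only a stand-in heuristic.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # stand-in for the wearable's video stream
prev_face = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces[:1]:  # track only the first face, for simplicity
        face = gray[y:y + h, x:x + w]
        if prev_face is not None:
            # Mean absolute pixel change as a crude "facial movement" score.
            score = cv2.absdiff(face, cv2.resize(prev_face, (w, h))).mean()
            if score > 12:  # arbitrary threshold for the notification tone
                print("possible micro-expression, score: %.1f" % score)
        prev_face = face

cap.release()
```

In the fleshed-out version this loop would run on the remote CPU/GPU, with only the video capture and the notification living on the wearable.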

Possible positive consequences: Group collaboration easily succumbs to politics and scheming unless a certain (large) level of trust and empathy has been established. (For example, I've seen plenty of Hacker News comments confirming that having a strong friendship with one's startup cofounder is important.) A technology such as this would allow for much more rapid (and justified) trust-building between potential collaborators. This might also allow for the creation of larger groups of smart people who all trust each other. (Which would be invaluable for any project which produces information that shouldn't be leaked, because it would allow such projects to be larger.) Relatedly, this might also allow one to train really excellent therapist-empaths.

Possible negative consequence: Police states where the police are now better at reading people's minds.

Comment author: jhuffman 29 September 2011 08:44:22PM 2 points [-]

Is this code for "burnt out from 80 hour weeks" ?

Comment author: Marcello 29 September 2011 10:02:22PM *  3 points [-]

I didn't leave due to burn-out.

Comment author: cousin_it 28 September 2011 08:22:25PM 3 points [-]

This sounds extremely cool. I won't be leaving my current job anytime soon (hopefully), but the list of people involved is impressive! Just curious, why did Marcello and Paul leave your company?

Comment author: Marcello 28 September 2011 09:54:17PM 3 points [-]

Quixey is a great place to work, and I learned a lot working there. My main reason for leaving was that I wanted to be able to devote more time and mental energy to some of my own thoughts and projects.

Comment author: Peter_de_Blanc 02 January 2011 08:06:55AM 9 points [-]

This sounds reasonable. What sort of thought would you recommend responding with after noticing oneself procrastinating? I'm leaning towards "what would I like to do?"

Comment author: Marcello 02 January 2011 08:42:49AM 15 points [-]

Offhand, I'm guessing the very first response ought to be "Huzzah! I caught myself procrastinating!" in order to get the reverse version of the effect I mentioned. Then go on to "what would I like to do?"

Comment author: Marcello 02 January 2011 08:01:07AM 26 points [-]

Here's a theory about one of the things that causes procrastination to be so hard to beat. I'm curious what people think of it.

  1. Hypothesis: Many parts of the mind are influenced by something like reinforcement learning, where the emotional valences of our thoughts function as a gross reward signal that conditions their behaviors.

  2. Reinforcement learning seems to have a far more powerful effect when feedback is instant.

  3. We think of procrastinating as a bad thing, and tend to internally punish ourselves when we catch ourselves doing it.

  4. Therefore, the negative feedback signal might end up exerting a much more powerful training effect on the "catcher" system (i.e. whatever is activating the frontal override) than on whatever triggered the procrastination in the first place.

  5. This results in a simple, counter-intuitive piece of advice: when you catch yourself procrastinating, it might be a very bad idea to internally berate yourself about it; thoughts of the form "%#&%! I'm procrastinating again! I really shouldn't do that!" might actually cause more procrastination in the long run. If I had to guess, things like meditation would be helpful for building up the skill required to catch the procrastination-berating subsystem in the act and get it to do something else.

TL;DR: It would probably be hugely helpful to try to train oneself to make the "flinch" less unpleasant.
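A toy simulation of the hypothesis, with entirely invented numbers: two subsystems whose firing propensities drift with the valence of the thought that immediately follows them. Because the "catch" is what immediately precedes the berating, the punishment lands mostly on the catcher:

```python
# Toy model (all numbers invented): each subsystem's propensity is nudged
# by the valence of the thought that immediately follows its action, with
# delayed feedback heavily discounted (point 2 of the hypothesis).
import random

LR = 0.05  # learning rate for the instant valence signal

def run(berate, trials=2000, seed=0):
    rng = random.Random(seed)
    impulse = 1.0  # propensity to start procrastinating
    catcher = 1.0  # propensity to notice it ("frontal override")
    for _ in range(trials):
        if rng.random() < impulse / (impulse + 1):      # impulse fires
            if rng.random() < catcher / (catcher + 1):  # catcher fires
                valence = -1.0 if berate else 0.5  # "%#&%!" vs. "Huzzah!"
                # Instant feedback conditions the most recent actor: the catcher.
                catcher = max(0.01, catcher + LR * valence)
            # Delayed, diffuse negative feedback barely reaches the impulse.
            impulse = max(0.01, impulse - 0.01 * LR)
    return impulse, catcher

for berate in (True, False):
    impulse, catcher = run(berate)
    print("berate" if berate else "praise",
          "-> impulse %.2f, catcher %.2f" % (impulse, catcher))
```

Under these made-up dynamics the berating regime drives the catcher to its floor while the impulse declines only modestly, whereas the praising regime strengthens the catcher - the failure mode point 4 describes.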

Comment author: Marcello 23 December 2009 07:45:33PM 1 point [-]

I am going to be there.

Positive-affect-day-Schelling-point-mas Meetup

4 Marcello 23 December 2009 07:41PM

There will be a LessWrong Meetup on Friday, December 25th (the day after tomorrow). We're meeting at 6:00 PM at the SIAI House in Santa Clara, CA (no longer Pan Tao Restaurant at 1686 South Wolfe Road, Sunnyvale, CA) for pizza or whatever else we can figure out how to cook. Consider it an available refuge if you haven't other plans.

Please comment if you plan to show up!

Comment author: pengvado 08 September 2009 09:09:06PM 11 points [-]

Well, we don't want to build conscious AIs, so of course we don't want them to use anthropic reasoning.

Why is anthropic reasoning related to consciousness at all? Couldn't any kind of Bayesian reasoning system update on the observation of its own existence (assuming such updates are a good idea in the first place)?

Comment author: Marcello 09 September 2009 01:32:42PM 3 points [-]

Why do I think anthropic reasoning and consciousness are related?

In a nutshell, I think subjective anticipation requires subjectivity. We humans feel dissatisfied with a description like "well, one system running a continuation of the computation in your brain ends up in a red room and two such systems end up in green rooms" because we feel that there's this extra "me" thing, whose future we need to account for. We bother to ask how the "me" gets split up, what "I" should anticipate, because we feel that there's "something it's like to be me", and that (unless we die) there will be in future "something it will be like to be me". I suspect that the things I said in the previous sentence are at best confused and at worst nonsense. But the question of why people intuit crazy things like that is the philosophical question we label "consciousness".

However, the feeling that there will be in future "something it will be like to be me", and in particular that there will be one "something it will be like to be me"<1>, if taken seriously, forces us to have subjective anticipation - that is, to write a probability distribution, summing to one, over which copy we end up as. Once you do that, if you wake up in a green room in Eliezer's example, you are forced to update to 90% probability that the coin came up heads (provided you distributed your subjective anticipation evenly between all twenty copies in both the heads and tails scenarios, which really seems like the only sane thing to do).

<1> Or, at least, the same amount of "something it is like to be me"-ness as we started with, in some ill-defined sense.
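A quick Bayes check of that 90% figure, assuming the setup in Eliezer's post (on heads, 18 of the 20 copies wake in green rooms; on tails, only 2 do):

```latex
% Anticipation spread evenly over the 20 copies in each branch gives
% P(green | heads) = 18/20 and P(green | tails) = 2/20, so:
P(\mathrm{heads} \mid \mathrm{green})
  = \frac{P(\mathrm{green} \mid \mathrm{heads})\, P(\mathrm{heads})}
         {P(\mathrm{green} \mid \mathrm{heads})\, P(\mathrm{heads})
          + P(\mathrm{green} \mid \mathrm{tails})\, P(\mathrm{tails})}
  = \frac{0.9 \cdot 0.5}{0.9 \cdot 0.5 + 0.1 \cdot 0.5}
  = 0.9
```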

On the other hand, if you do not feel that there is any fact of the matter as to which copy you become, then you just want all your copies to execute whatever strategy is most likely to get all of them the most money from your initial perspective of ignorance of the coinflip.

Incidentally, the optimal strategy looks like a policy selected by updateless decision theory and not like any probability of the coin having been heads or tails. PlaidX (http://lesswrong.com/lw/17c/outlawing_anthropics_an_updateless_dilemma/13d7) beat me to the counter-example for p=50%. Counter-examples like PlaidX's will work for any p<90%, and counter-examples like Eliezer's will work for any p>50%, so that pretty much covers it. So, unless we want to include ugly hacks like responsibility, or unless we let the copies reason Goldenly (using Eliezer's original TDT) about each other's actions as transposed versions of their own actions (which does correctly handle PlaidX's counter-example, but might break in more complicated cases where no isomorphism is apparent), there simply isn't a probability-of-heads that represents the right thing for the copies to do no matter the deal offered to them.

Comment author: jimmy 23 July 2009 06:19:22PM 7 points [-]

The first thing that comes to mind is to have an "efficient charity" box that you drop money in every time you turn down a beggar or in any way invoke this excuse. If you can stick to it, it should have the effect of getting the money to the right people, and may even help you purchase "warm fuzzies" if you can convince the warm fuzzy factory that "I really did more than help that beggar, which would have felt warm and fuzzy".

Comment author: Marcello 24 July 2009 02:57:50AM 7 points [-]

The most effective version of this would probably be an iPhone (or similar mobile device) application that gives a dollar to charity when you push a button. If it's going to work reliably, it has to be something that can be used when the beggar/cause invocation is in sight: for most people, I'm guessing that akrasia would prevent a physical box or paper ledger from working properly.
