Comment author: takora 15 August 2014 07:21:10PM *  8 points [-]

Hi LW, my name's Olivier. I'm a 37-year-old Canadian currently living in Ottawa. My background is varied: I have a BA in Communication Studies and an MPhil in Japanese Studies, but also a DEC (a Quebec diploma roughly equivalent to the last year of high school plus the first year of university in the rest of Canada and the USA) in Natural Sciences. I've owned a business, worked in cultural media, and am now a public servant working in immigration.

I've been interested in AI, existential risk, the intelligence explosion, and related topics for a number of years, probably since finding Bostrom's paper on the Simulation Argument.

I'm not 100% sure how I found LW, but it probably was while browsing for one of the topics above.

I've considered myself a rationalist for as long as I can remember, though I long called it (rather naively?) being a "realist". As an existentialist as well, I try to put these convictions into practice in my work and in how I raise my children (we'll see how that turns out!).

Through browsing here, I'm glad to have found a community that sits somewhere between rigid academia and sensationalist media.

Anyhow, I'll most likely lurk a lot more than I post. Having three young kids leaves me with little time, and a sleep-addled, rather incoherent brain.

Thanks for reading!

Comment author: Benquo 15 August 2014 10:12:59PM 2 points [-]

Welcome! Just in case you haven't noticed yet, there's a Less Wrong meetup in Ottawa.

Sequences rec seconded, they're what formed the initial kernel of the Less Wrong community. There are many of them, so take them at a comfortable pace.

Comment author: Friendly-HI 29 July 2014 08:39:33PM 3 points [-]

I get it. Makes sense, actually now that you point it out I think I've also seen this phrase employed as a "pseudo-compliment". Rest assured that it wasn't intended that way.

Comment author: Benquo 30 July 2014 01:05:28PM 1 point [-]

I figured it wasn't.

Comment author: Friendly-HI 29 July 2014 06:06:39PM 1 point [-]

I feel almost ashamed for asking this, partly because it's quite impolite and inappropriate to ask a question like this (at least outside of LW), and maybe also because it betrays some kind of deeply rooted egghead elitism that I still can't quite manage to shake off. But I simply can't resist the attempt to satisfy my raging curiosity: why does someone as smart as you choose to become a nurse?

Also: Do you think of your perfectionism as largely useful, largely a hindrance, or kind-of-a-mixed-bag?

Comment author: Benquo 29 July 2014 07:46:22PM *  4 points [-]

It seems like you're trying to ask this nicely, which is good, and since I don't know how Swimmer963 feels about it, I'm not upset on her behalf. But in general I find this sort of comment less insulting when it doesn't use a phrase like "someone as smart as you".

Comment author: NoSuchPlace 27 July 2014 05:52:07PM *  7 points [-]

Quirrell doesn't have a very large window in which to drink the blood.

According to this he should have plenty of time:

"Is it possible to Transfigure a living subject into a target that is static, such as a coin - no, excuse me, I'm terribly sorry, let's just say a steel ball."

Professor McGonagall shook her head. "Mr. Potter, even inanimate objects undergo small internal changes over time. There would be no visible changes to your body afterwards, and for the first minute, you would notice nothing wrong. But in an hour you would be sick, and in a day you would be dead."

I could see the drinker getting sick, though.

From the transfiguration rules:

"I will never Transfigure anything that looks like food or anything else that goes inside a human body."

This presumably means "don't Transfigure anything into food." However, it could also be read as "don't Transfigure food into anything." I am somewhat disappointed in McGonagall for not catching that ambiguity.

Also Quirrell is not a recognized transfiguration authority:

"If I am not sure whether a Transfiguration is safe, I will not try it until I have asked Professor McGonagall or Professor Flitwick or Professor Snape or the Headmaster, who are the only recognised authorities on Transfiguration at Hogwarts. Asking another student is not acceptable, even if they say that they remember asking the same question."

"Even if the current Defence Professor at Hogwarts tells me that a Transfiguration is safe, and even if I see the Defence Professor do it and nothing bad seems to happen, I will not try it myself."

However, since Quirrell's past is unknown (as far as Hogwarts is concerned), he could be one of the best Transfigurers in the world and still not be recognized as an authority. Also, I don't see Quirrell neglecting something as useful and versatile as Transfiguration, so I would expect him to know how dangerous eating formerly Transfigured food is.

Comment author: Benquo 27 July 2014 11:06:32PM 8 points [-]

I think McGonagall doubts Quirrell's goodness more than his knowledge.

Meetup : Rationality Practice - Be Specific

1 Benquo 26 June 2014 02:14PM

Discussion article for the meetup : Rationality Practice - Be Specific

WHEN: 29 June 2014 03:00:00PM (-0400)

WHERE: Kogod Courtyard, National Portrait Gallery, 8th and F Sts NW, Washington, DC 20001

Being specific can help you notice when you don't know what you're talking about, and avoid unnecessary miscommunication and arguments over definitions.

Let's come up with some ways to teach ourselves the habit of being specific, and giving and thinking through concrete examples. Related: http://lesswrong.com/lw/bc3/sotw_be_specific/


Comment author: [deleted] 16 June 2014 10:54:38AM *  18 points [-]

I am going to write the same warning I have written to rationalist friends in relation to the Great Filter Hypothesis and almost everything on Overcoming Bias: BEWARE OF MODELS WITH NO CAUSAL COMPONENTS! I repeat: BEWARE NONCAUSAL MODELS! In fact, beware of nonconstructive mental models as well, while we're at it! Beware classical logic, for it is nonconstructive! Beware noncausal statistics, for it is both noncausal and nonconstructive! All these models, even when they contain true information and move that information from belief to belief in strict accordance with the actual laws of statistical inference, often fail to contain coherent propositions to which belief-values can be assigned, and fail to correspond to the real world.

Now apply the above warning to virtue ethics.

Now let's dissolve the above warning about virtue ethics and figure out what it really means anyway, since almost all of us real human beings use some amount of it.

It's not enough to say that human beings are not perfectly rational optimizers moving from terminal goals to subgoals to plans to realized actions and back to terminal goals. We must also acknowledge that we are creatures of muscle and neural net, and that the lower portions (i.e., almost all) of our minds work via reinforcement, repetition, and habit, just as our muscles are built via repeated strain.

Keep in mind that anything you consciously espouse as a "terminal goal" is in fact a subgoal: people were not designed to complete a terminal goal and shut off.

Practicing virtue just means that I recognize the causal connection between my present self and future self, and optimize my future self for the broad set of goals I want to be able to accomplish, while also recognizing the correlations between myself and other people, and optimizing my present and future self to exploit those correlations for my own goals.

Because my true utility function is vast and complex and only semi-known to me, I have quite a lot of logical uncertainty over what subgoals it might generate for me in the future. However, I do know some actions I can take to make my future self better able to address a broad range of subgoals I believe my true utility function might generate, perhaps even any possible subgoal. The qualities created in my future self by those actions are virtues, and inculcating them in accordance with the design of my mind and body is virtue ethics.

As an example, I helped a friend move his heavy furniture from one apartment to another because I want to maintain the habit of loyalty and helpfulness to my friends (cue House Hufflepuff) for the sake of present and future friends, despite this particular friend being a total mooching douchebag. My present decision will change the distribution of my future decisions, so I need to choose for myself now and my potential future selves.

Not really that complicated, when you get past the philosophy-major stuff and look at yourself as a... let's call it, a naturalized human being, a body and soul together that are really just one thing.

In response to comment by [deleted] on On Terminal Goals and Virtue Ethics
Comment author: Benquo 16 June 2014 04:12:37PM *  2 points [-]

It sounds like you're thinking of the "true utility function's" preferences as a serious attempt to model the future consequences of present actions, including their effect on future brain-states.

I don't think that's always how the brain works, even if you can tell a nice story that way.

Comment author: Benquo 30 May 2014 04:14:39AM 2 points [-]

I should disclaim that this is my interpretation and not a complete account or likely to be quite the way they taught it or will teach it in the future.

I still expect it to be high-value for anyone who actually wants to practice the art of rationality, and not just talk about it.

Comment author: robertskmiles 07 May 2014 02:49:35PM *  3 points [-]

Losing your keys creates two problems. The first is that you can't open the lock; the second is that there's now a chance someone else can, if a nefarious person finds your keys. It reminds me of Type 1 and Type 2 errors: having more copies of a key reduces the risk that an authorised person can't open the lock, at the cost of increasing the risk that an unauthorised person can.

Consider this trade-off carefully.
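To make the trade-off concrete, here's a toy probability model. All numbers are illustrative assumptions, not data: `p_lose` is the chance any one copy is lost, `p_abuse` is the chance a lost copy is found and used nefariously, and copies are assumed independent.

```python
# Toy model of the key trade-off. Lockout requires losing every copy;
# compromise requires at least one lost copy being found and abused.

def p_locked_out(copies, p_lose=0.05):
    """Probability that every copy is lost, locking out the owner."""
    return p_lose ** copies

def p_compromised(copies, p_lose=0.05, p_abuse=0.01):
    """Probability that at least one lost copy is found and abused."""
    return 1 - (1 - p_lose * p_abuse) ** copies

for n in (1, 2, 5):
    print(f"{n} copies: lockout={p_locked_out(n):.6f}, "
          f"compromise={p_compromised(n):.6f}")
```

Under these (made-up) numbers, lockout risk shrinks geometrically with extra copies while compromise risk grows roughly linearly, which is the Type 1 / Type 2 tension in miniature.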

Comment author: Benquo 07 May 2014 08:51:25PM 1 point [-]

A finder needs two pieces of information to open any lock. The first is the physical key itself. The second is knowing which of the billions of locks in the world that key opens, and how to find it.

I think if I lose an unmarked physical key, I'm still okay.

Comment author: Zack_M_Davis 05 May 2014 12:44:56AM 4 points [-]

that they are awake (I'll be up and donating for all 24 hours!) [...] While North America sleeps, you'll be awake

What does being awake have to do with anything? Aren't you people supposed to know something about computers?

Comment author: Benquo 05 May 2014 05:31:55AM *  4 points [-]

Thanks for the links. I have never used Selenium before but may play with it for this. I expect it will be useful for future things too.

Update: Nope. Selenium is tricky and I'll have to figure out how to use Cron some other time. I'm not losing more sleep over this right now.
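For anyone attempting the same thing: the cron half is the easy part. A minimal sketch of a crontab entry (edited via `crontab -e`), with an entirely hypothetical script path, might look like:

```shell
# Run a (hypothetical) donation script at the top of every hour;
# append stdout and stderr to a log. Times use the machine's local timezone.
0 * * * * /usr/bin/python3 /home/user/donate.py >> /home/user/donate.log 2>&1
```

The machine does have to be awake for cron to fire, which is presumably where the "aren't you supposed to know about computers" point above comes in.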

Comment author: RichardKennaway 02 May 2014 03:52:59PM -2 points [-]

And, in lesswrongology, basilisks.

Comment author: Benquo 02 May 2014 06:24:22PM 2 points [-]

A basilisk is somewhat different, I think - it's supposed to be a strong informational hazard, not just an unhelpful thought pattern.
