In response to Feeling Rational
Comment author: Benquo 17 August 2016 06:54:11AM 0 points [-]

"Since the days of Socrates at least, and probably long before, the way to appear cultured and sophisticated has been to never let anyone see you care strongly about anything."

I would strongly encourage anyone who wants a good counterexample to read Plato's Symposium, where the desire for wisdom is specifically linked with erotic desire.

Comment author: Benquo 31 July 2016 09:08:56PM 0 points [-]

Might want to correct "mortage" to "mortgage".

Comment author: CarlShulman 19 October 2007 02:22:32AM 0 points [-]
Comment author: Benquo 28 March 2016 07:28:39PM 0 points [-]

Maybe I should follow the same heuristic and find some magicians to listen to.

Comment author: shminux 18 August 2012 09:31:14PM *  3 points [-]

I'm wondering if there is any selfish reason to want procryostinators to sign up, other than hoping that more participants would improve the odds of your favorite cryo outfit surviving until the time revival becomes feasible, or that more research would go into it?

Comment author: Benquo 21 August 2015 12:19:43AM 3 points [-]

Some of them are my friends.

Comment author: JohnGreer 06 July 2015 06:24:45AM 8 points [-]

Hello!

I’ve lived in Berkeley for about six years. My girlfriend is going to medical school so we’re going to be moving to Boca Raton, Florida (most likely) or Columbus, Ohio in less than a month. I’m sad to be leaving the Bay Area but thrilled to be with my girlfriend when she starts such an exciting chapter of her life. I’m also very fortunate that I can handle nearly all my business online.

I co-founded a startup devoted to making a web game with an old buddy of mine. This same guy introduced me to LW.

Critical thinking and debate have been interests of mine since I was quite young, so LW fit right in. I’m very interested in instrumental/practical applications of rationality. I’ve been lurking for many years and finally decided to make an account to get over my fear of online embarrassment, given my unfamiliarity with much of the lexicon and protocol on LW.

Some passions of mine are movies, seeking out novel experiences (examples are shooting an AK-47, judging a singing competition, and visiting Pixar), and martial arts.

I’m also interested in effective altruism and AI research but still have a lot of learning to do, especially in the latter.

Comment author: Benquo 06 July 2015 01:47:45PM 1 point [-]

Welcome!

You may want to check out some of AnnaSalamon's old posts for some things to try as far as applied rationality goes, if you haven't already.

Have you connected, or are you interested in connecting, with the Bay Area Rationalist or EA community while you're still here?

Comment author: Benquo 20 June 2015 06:53:45PM 1 point [-]

This is the conclusion I've recently drawn about my own social skills. I used to think I didn't have the native architecture for some things, but more recently I've noticed that it's pretty easy to internalize a social skill once I decide it's interesting to learn. Related is the kitten vertical-vision experiment, in which cats that from birth saw only through vertical slits couldn't track horizontal motion once the filters were taken off. (They recovered.)

Comment author: pnrjulius 27 May 2012 03:19:55AM 3 points [-]

I agree. In real life, when the trolley is about to run over the five children, you stop the trolley. You don't flip a switch that moves the trolley over to one other child, much less toss a fat man off a footbridge. And if you can't stop the trolley, you try to find a way; you don't give up and pick the slightly-less-bad option.

Comment author: Benquo 16 April 2015 10:01:01PM 4 points [-]

First you flip the switch, then you make an extraordinary effort to stop the trolley.

Comment author: Jost 13 March 2015 01:15:41PM 1 point [-]

Although in canon, Lucius (and the Malfoy family) falling into disgrace with Voldemort was caused by several events which did not happen in HPMoR, including giving away one of Voldemort’s Horcruxes (the diary in book 2), failing to steal the prophecy from a handful of teenagers (book 5), and Draco’s failure to kill Dumbledore (book 6).

In HPMoR, Lucius did not fail Voldemort that often.

Comment author: Benquo 13 March 2015 10:01:44PM *  0 points [-]

He hung Bellatrix out to dry:

"In retrospect," said Harry's voice, which seemed to be operating entirely on automatic, "you should have been suspicious when you managed to get that one Death Eater hauled off to Azkaban without a trial."

"We thought Malfoy was distracted," whispered the old witch. "That he was only trying to save himself. There were other Death Eaters we managed to get then, like Bellatrix -"

Harry nodded, feeling like his neck and head were moving on puppet strings. "The Dark Lord's most fanatic and devoted servant, a natural nucleus of opposition for anyone who contested Lucius's control of the Death Eaters. You thought Lucius was distracted."

Comment author: TobyBartels 13 March 2015 04:50:31AM 4 points [-]

Yes, Harry killed a half hour's worth of Draco without really giving it a second thought. Not much in the grand scheme of things, but I expected Harry to notice that this is a little bit of evil. (Just because Wizards do it all the time doesn't make it not evil, as Harry well knows.) But maybe we'll get Harry's perspective later.

Comment author: Benquo 13 March 2015 07:04:44AM 2 points [-]

No, they just put the memories in a Pensieve for later (if Draco becomes an occlumens and turns out to be trustworthy).

Comment author: Kindly 11 March 2015 09:49:49PM *  1 point [-]

I figured that the two were equivalent. If we find something that magic does not consider information (but we do), we can use it to receive information from an arbitrarily far future.

For example, suppose that magic does not consider merely "Does a timeturned person show up at time T at this location or not" to be information, and we want to know if the world ends in the next 10 years.

Four people with time-turners agree to the following scheme (using Unbreakable Vows if necessary):

  • If, at 12:01 AM on March 14th, 2025, the world has not ended, Alice goes back 6 hours using a Time-Turner.
  • If, at 6:01 PM on March 13th, 2025, Bob sees Alice appear out of nowhere, he goes back 6 hours using a Time-Turner.
  • If, at 12:01 PM on March 13th, 2025, Carol sees Bob appear out of nowhere, she goes back 6 hours using a Time-Turner.
  • If, at 6:01 AM on March 13th, 2025, Dan sees Carol appear out of nowhere, he goes back 6 hours using a Time-Turner.
  • If, at 12:01 AM on March 13th, 2025, Alice sees Dan appear out of nowhere, she goes back 6 hours using a Time-Turner.
  • ...

Also, they make sure that in case of anything not world-ending, they will have a substitute available so that the chain will not break.

After agreeing to this, if Alice sees Dan appear out of nowhere at 12:01 AM tomorrow, then they know that the world has not ended. If not, then something sufficiently bad to cause the chain to break must have happened in the next 10 years. (Either way, they must continue to implement the plan for as long as possible.)

Using further chains of Time-Turners, we can use our favorite unbounded binary search scheme to narrow down a more precise date for the world ending.

(Edit: actually, we can probably improve the resolution simply by using the Time-Turners at a different specified time depending on circumstances.)
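The chain above amounts to a one-bit backwards channel plus a binary search over chain endpoints, which can be sketched in a few lines of Python. This is a toy model under the comment's own assumption (that a traveller's mere arrival carries no "information" in the magical sense); `WORLD_END`, the 6-hour step granularity, and the bounded search range are all hypothetical illustration, not anything from canon.

```python
# Toy simulation of the Time-Turner relay scheme described above.
# Time is measured in 6-hour steps from the present.

WORLD_END = 7919  # hidden: the step at which the world ends (assumed for the demo)

def signal_reaches_present(chain_end):
    """A time-turned traveller arrives in the present iff the world
    survives to the chain's starting step; every intermediate hop
    happens earlier, so no later event can break an inner link."""
    return WORLD_END > chain_end

def find_world_end(max_step):
    """Binary search over chain endpoints: the earliest endpoint whose
    relay fails to deliver a traveller is the step the world ends."""
    lo, hi = 0, max_step
    while lo < hi:
        mid = (lo + hi) // 2
        if signal_reaches_present(mid):
            lo = mid + 1  # world still intact at step mid; end is later
        else:
            hi = mid      # chain broke; end is at or before mid
    return lo
```

A bounded search stands in for the "unbounded binary search scheme" the comment mentions; doubling the endpoint until the signal first fails would remove the bound.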

Comment author: Benquo 11 March 2015 10:16:57PM 3 points [-]

I think this is taking the "6 hour limit" far too literally. By far the simplest explanation is that Time-Turners can protect you against Time's tendency towards simplicity for 6 hours or so; if you try to chain that, it becomes overwhelmingly computationally simpler (and therefore more likely) for the intention to set up such a chain to result in the death of everyone involved than for the chain to work as designed.
