
Comment author: BrandonReinhart 14 December 2015 11:58:00PM 8 points [-]

Donation sent.

I've been very impressed with MIRI's output this year, to the extent that I'm able to judge it. I don't have the domain-specific ability to evaluate the papers, but material is being produced at a sustained pace. I've also read much of the thinking around VAT, the related open problems, and the definition of concepts like foreseen difficulties. The language and framework for carving up the AI safety problem have really moved forward.

In response to LessWrong 2.0
Comment author: BrandonReinhart 10 December 2015 05:02:41PM *  9 points [-]

Well, I totally missed the diaspora. I read Slate Star Codex (but not the comments) and had no idea people were posting things in other places. It surprises me that it even has a name, the "rationalist diaspora." It seemed to me that people had run out of things to say, or that the booster-rocket thing had played itself out. This is probably because I don't read Discussion, only Main, and as Main received fewer posts I stopped coming to Less Wrong. As "meet up in area X" took over the stream of content, I unsubscribed in my RSS reader. Over the past few years the feeling of a community completely evaporated for me. It's good to hear that there is something going on somewhere, but it still isn't clear where that is. So archiving LW and embracing the diaspora, to me, means so long and thanks for all the fish.

Comment author: BrandonReinhart 10 December 2015 03:48:07PM *  0 points [-]

When you’re “up,” your current strategy is often weirdly entangled with your overall sense of resolve and commitment—we sometimes have a hard time critically and objectively evaluating parts C, D, and J because flaws in C, D, and J would threaten the whole edifice.

Aside 1: I run into many developers who aren't able to separate their idea from their identity. It tends to make them worse at customer- and product-oriented thinking. In a high-bandwidth collaborative environment, it leads to an assortment of problems. They might not suggest an idea, because they think the group will shoot it down and they will be perceived as a generator of poor ideas. Or they might not relinquish an idea that the group wants to modify, or work on an alternative to, because they feel that, too, is failure. Or they might not critically evaluate their own idea to the standard they would apply to any idea that didn't come from their own mind. Over time this can lead to selective sidelining of that person in a way that takes a deliberate effort to undo.

The most effective collaborators are able to generate many ideas of varying initial quality and then work with the group to refine them or reject the ones that are problematic. They are able to do this without taking collateral damage to their egos. These collaborators see the ideas they generate as products separate from themselves, products to be improved through iteration by the group.

I've seen many cases where this entanglement of ego with idea generation gets fixed (through involvement of someone who identifies the problem and works with that person) and some cases where it doesn't get fixed (after several attempts, with bad outcomes).

I know this isn't directly related to the post, but it occurred to me when I read the quoted part above.

Aside 2: I have similar mood swings when I think about the rationalist community: "Less Wrong seems dead; there is no one to talk to," and then, "Oh look, Anna has a new post; the world is great for rationalists." I think this is different from the work-related swings, but it was also brought to mind by the post.

Comment author: BrandonReinhart 09 December 2015 12:02:17AM 2 points [-]

I've always thought that "if I were to give, I should maximize the effectiveness of that giving," but I did not give much, nor did I consider myself an EA. I had a slight tinge of "not sure if EA is a thing I should advocate or adopt." I had the impression that my set of beliefs probably didn't overlap much with EAs' and that I needed to learn more about where those gaps were and why they existed.

Recently, through Robert Wiblin's Facebook, I have encountered more interesting arguments and content about EA. I had no concrete beliefs about EA, only vague impressions (not having had much time to research it in depth). I had developed an impression that EA was about maximizing giving to a self-sacrificial degree that I found uncomfortable. I have also repeatedly bounced off the animal activism: I have a hard time separating my pleasure in eating meat from my understanding of the ethical arguments. (So I figured the average EA would consider me a lawful evil person.)

However, having now read a few more things even just today, I feel these were misplaced perceptions of the movement. Reading the 2014 summary, posted in a comment here by Tog, makes me think that:

  • EAs give in a pattern similar to how I would give. However, I personally weight the x-risk and teaching-rationality stuff a bit higher than the mean.

  • EAs give about as much as I'd be willing to give before running into egoist problems (where it becomes painful in a stupid way I need to work to correct). So 10% seems very reasonable to me. For whatever reason, I had thought that "EA" meant "works to give away most of what they earn and lives a spartan life." I think this comes from not knowing any EAs, instead reading 80,000 Hours and other resources, and not completely processing the message correctly. There was probably some selective reading going on, and I need to review how that happened.

  • The "donate to one charity" argument is so much easier for me to plan around.

Overall, I should have read the 2014 results much sooner; they helped me realize that my perspective is probably a lot closer to the average LWer's than I had thought. This makes me feel like taking further steps to learn more about EA and to make concrete plans to give some specific amount from an EA perspective. Which is weird, because I could have done all of that anyway, but I was letting myself bounce off the un-pleasurable conclusions of giving up meat or giving away a large portion of my income, neither of which I have to do in the short term to give effectively or participate in the EA community. Derp.

Comment author: [deleted] 27 February 2015 07:31:33PM 0 points [-]

Any new developments on the C. Elegans simulation in the past 3+ years?

Comment author: BrandonReinhart 22 October 2015 03:08:01AM 0 points [-]

I'm curious about the same thing as [deleted].

Comment author: Vladimir_Nesov 09 January 2013 08:14:54PM 20 points [-]

[he was assuming that] people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.

Is this actually true? My current guess is that even though for a given level of training, smarter people can get through harder texts, they will learn more if they go through easier texts first.

Comment author: BrandonReinhart 11 January 2013 03:49:49AM 1 point [-]

Furthermore, a hard-to-use text may be significantly less hard to use in a classroom, where you have peers, teachers, and other forms of guidance to help digest the material. Recommendations for specialists working at home or outside a classroom might not be the same as those you would give to someone taking a particular class at Berkeley or in some other environment where those resources are available.

A flat-out bad textbook might seem really good when it is actually something else, such as the teacher, the method, or the support, that makes the book work.

Comment author: Vaniver 07 October 2012 06:23:57PM 6 points [-]

And, if you want to get technical, optimal implies both an objective function to measure the solution by, and a proof that no solutions are superior. "Optimize your diet" seems better than "optimal diets," but even then "four proven diets" seems superior to both of those.

Comment author: BrandonReinhart 08 October 2012 11:44:48PM 1 point [-]

"A directed search of the space of diet configurations" just doesn't have the same ring to it.

Steam Greenlight

17 BrandonReinhart 10 July 2012 05:11AM

I know there is interest among this community in building rationality oriented games as teaching tools. Today we announced Steam Greenlight. We're essentially turning the game approval process over to the community. It may be possible for quality rationality games produced by the Less Wrong community to create enough gamer-community interest to get placed on Steam for distribution. 

http://steamcommunity.com/greenlight

I feel that this creates a better opportunity for rationality games as teaching tools to find broad distribution than if they had to go through the Steam product review team. Ultimately, it shifts the responsibility onto the games' creators and their community to create and drive interest in the product, and it removes our limited decision-making from the system.

I'm posting this here for awareness of this possible avenue toward reaching a broader audience.

Consider a robot vacuum.

16 BrandonReinhart 05 June 2012 08:08AM

My wife and I recently acquired a robot vacuum. It has turned out to be a really great time-saving and life-improving investment. Some simple math suggests it may be worth you also considering buying one.

Let's say you spend 20 minutes a week vacuuming. That's about 17 hours of vacuuming per year. The Neato XV-11 costs about $350 with basic shipping. For the purposes of our Fermi calculation we will say that your time spent vacuuming with the robot is zero. This is close to true; see below for exceptions.

At $350, if you value an hour of your time at more than about $20, you would be better off buying the robot than doing the vacuuming yourself in the first year (17 * $20 = $340, close enough for our Fermi estimate).

Consider also that if you currently spend 20 minutes or less a week vacuuming, you can instruct the robot to vacuum 20 minutes a week or more and raise the quality of your life by some amount by living in a better-cared-for environment. For example, you could increase the payout of the robot by having it vacuum every other day.

If you have the robot do 60 minutes of work a week, then you'd only have to value your time at about $7 an hour for the robot to be worthwhile in the first year (52 * $7 = $364).

Do the calculation to see if it makes sense for you:

b = value of your time in dollars/hour

y = hours/year you spend vacuuming

350 = estimated price of a robot

x = b*y - 350

If x > 0, the robot saves you money relative to how you value your time. If x < 0, you either don't clean often enough, or you value your time too low for the robot to pay off, and doing the work yourself makes sense. (This is a simple model; feel free to make it more complex, but the purpose of this post is to illustrate a Fermi calculation that seems to yield an easy choice.)

Given the cost of many upright vacuums, if you can avoid buying an upright and only buy the robot, the calculation shifts drastically in favor of getting only the robot (and perhaps borrowing an upright when you really need one).

If vacuuming causes you particular disutility, you could put a dollar premium on that disutility and add it to b. On the flip side, if you really enjoy vacuuming, you could discount b to reflect the extra utility you get from spending your time doing something you enjoy.
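
For concreteness, here is a minimal sketch of the calculation in Python. The function name, parameters, and the optional disutility premium are my own framing of the formula above, and the numbers are just the examples from this post; plug in your own.

    # Fermi estimate: is a robot vacuum worth it in the first year?
    ROBOT_PRICE = 350.0  # estimated price of the robot, in dollars

    def robot_value(dollars_per_hour, hours_vacuuming_per_year, disutility_premium=0.0):
        """Return x = b*y - 350.

        dollars_per_hour         -- how you value an hour of your time (b)
        hours_vacuuming_per_year -- time you currently spend vacuuming (y)
        disutility_premium       -- optional dollars/hour added to b if you hate vacuuming
        """
        b = dollars_per_hour + disutility_premium
        return b * hours_vacuuming_per_year - ROBOT_PRICE

    # 20 minutes/week is roughly 17 hours/year; at $20/hour it's essentially break-even.
    print(robot_value(20, 17))  # -10.0
    # 60 minutes/week is 52 hours/year; even at $7/hour the robot pays for itself.
    print(robot_value(7, 52))   # 14.0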

Considerations:

- The robots are claimed to be pretty good at navigating complex room layouts. Ours occasionally gets stuck in spots where it has little clearance. You can adjust the furniture layout to compensate, or lay down (ugly) magnetic strips that stop the robot. You might want to try out the robot to make sure it can navigate your layout before you commit.

- Once, our robot failed to back itself fully into its charging dock, ran out of juice, and missed a scheduled vacuuming session.

- The robot won't drive itself off cliffs (down the stairs to its doom). On the downside, it won't vacuum stairs, so you may still need an upright for those.

- You can make the robot do a lot more vacuuming than you would normally do yourself.

- They are really quiet on carpet and somewhat noisier on hardwood. Depending on your sensitivity, you may be able to run it while you sleep.

- If you shed hair, you'll need to regularly clip the hair from the brush (like a normal upright). This takes almost no time. Do it as a part of the bin-emptying ritual.

- It isn't clear to me how long the robot will last, so I don't know what the replacement period or cost is.

- This is the robot we use, but there are many models. It isn't clear to me whether the upgraded models are worth the extra money: http://www.amazon.com/Neato-XV-11-Robotic-Vacuum-System/dp/B003UBPB6E/ref=sr_1_1?ie=UTF8&qid=1338882167&sr=8-1

- I haven't investigated central vac, so I don't know what the trade-offs are. It seems like a central vac still requires time to use, and our goal was to reduce time spent on an automatable home-maintenance task.

Maybe this is a trivial post, but I hadn't realized how much cleaner our environment could be, or how much happier we could be, for such a small relative investment. Much of the benefit comes from the robot being able to vacuum far more often than we'd ever want to do ourselves.

Comment author: cousin_it 24 May 2012 09:35:39AM *  3 points [-]

Maybe I'm missing something, but the formalization looks easy enough to me...

def tdt_utility():
    if tdt(tdt_utility) == 1:
        box1 = 1000
        box2 = 1000000
    else:
        box1 = 1000
        box2 = 0
    if tdt(tdt_utility) == 1:
        return box2
    else:
        return box1 + box2

def your_utility():
    if tdt(tdt_utility) == 1:
        box1 = 1000
        box2 = 1000000
    else:
        box1 = 1000
        box2 = 0
    if you(your_utility) == 1:
        return box2
    else:
        return box1 + box2

The functions tdt() and you() accept the source code of a function as an argument, and try to maximize its return value. The implementation of tdt() could be any of our formalizations that enumerate proofs successively, which all return 1 if given the source code to tdt_utility. The implementation of you() could be simply "return 2".
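
For concreteness, here is one way to run the example above with stub implementations. These stubs are my own assumption for illustration: a tdt() that has already settled on one-boxing (standing in for the proof-enumerating formalization) and a you() that simply two-boxes.

    def tdt(utility_source):
        # Stand-in for the proof-enumerating agent; assume it settles on one-boxing.
        return 1

    def you(utility_source):
        # A simple two-boxer.
        return 2

    print(tdt_utility())   # 1000000: the TDT agent one-boxes, and box2 was filled
    print(your_utility())  # 1001000: you two-box, and box2 is still full because
                           # the predictor responds to tdt's decision, not yours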

Comment author: BrandonReinhart 04 June 2012 02:11:56AM 0 points [-]

Thanks for this. I hadn't seen someone pseudocode this out before. It helps illustrate that interesting problems lie in the scope above (callers of tdt_utility(), etc.) and below (the implementation of tdt(), etc.).

I wonder if there is a rationality exercise in "write pseudocode for problem descriptions, then explore the callers and implementations."
