Comment author: bgaesop 22 November 2013 09:31:29AM 24 points

Several of these questions are poorly phrased. For instance, the supernatural and god questions, as phrased, imply that the probability of god should be less than the probability of anything supernatural existing. However, I think (and would like to be able to express) that there is a very small (0) chance of ghosts or wizards, but only a small (1) chance of there being some sort of intelligent being which created the universe; for instance, the simulation hypothesis, which I would consider a subset of the god hypothesis.
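
The implicit logic, spelled out: if every god scenario counted as supernatural, the god hypothesis G would be an event contained in the supernatural hypothesis S, and the probability axioms would force P(G) <= P(S). A non-supernatural creator such as a simulator breaks that containment, so answering P(god) > P(supernatural) is perfectly coherent.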

Comment author: Louie 09 January 2013 02:39:20PM * 2 points

PS - I had some initial trouble formatting my table's appearance. It seems to be mostly fixed now. But if an admin wants to tweak it somehow so the text isn't justified or it's otherwise more readable, I won't complain! :)

Comment author: bgaesop 09 January 2013 03:57:38PM 2 points

Interesting list. Minor typo: "This is where you get to study computing at it's most theoretical," the "it's" should read "its".

Comment author: bgaesop 10 November 2012 12:36:10AM 0 points

I have started a board game company whose first game is on Kickstarter at the moment. I'm going to bring the no-art, largely hand-written copy that was made for playtesting.

http://www.kickstarter.com/projects/sixpencegames/the-6p-card-game-of-victorian-combat

Comment author: Will_Newsome 19 August 2011 08:01:29PM * 4 points

Incomplete response:

Does "FAI-focused" mean what I called code first?

Jah. Well, at least determining whether or not "code first" is even reasonable, yeah, which is a difficult question in itself and only partially tied in with making direct progress on FAI.

What are your thoughts on that post and its followup?

You seem to have missed Oracle AI? (Eliezer's dismissal of it isn't particularly meaningful.) I agree with your concerns. This is why the main focus, at least initially, would be determining whether or not "code first" is a plausible approach (difficulty-wise and safety-wise). The value of information on that question is incredibly high, and, as you've pointed out, it has not been sufficiently researched.

What is this new non-profit planning to do differently from SIAI and why?

Basically everything. SingInst is focused on funding a large research program and gaining the prestige necessary to influence (academic) culture and academic and political policy. They're not currently doing any research on Friendly AI, and their political situation is such that I don't expect them to be able to do so effectively for a while, if ever. I will not clarify this. (Actually their research associates are working on FAI-related things, but SingInst doesn't pay them to do that.)

What are the other things that you could be doing?

Learning, mostly. Working with an unnamed group of x-risk-cognizant people that LW hasn't heard of, in a way unrelated to their setting up a non-profit.

Comment author: bgaesop 22 August 2011 08:16:12AM 3 points

Working with an unnamed group of x-risk-cognizant people that LW hasn't heard of, in a way unrelated to their setting up a non-profit.

Could you tell us about them?

Comment author: kaz 19 August 2011 01:00:48AM * 8 points

I really don't see why I can't say "the negative utility of a dust speck is 1 over Graham's Number."

You can say anything, but Graham's number is very large; if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.
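
To see the scale (an illustrative bound, not from the original comment): take N = 10^80, roughly the number of atoms in the observable universe, as a generous cap on how many molecules could ever hit anyone. Then N * (1/G) <= 10^80 / G, which is effectively 0, since Graham's number G is at least 3^^^^3 (in the thread's own up-arrow notation), which already dwarfs 10^80. On that utility assignment, even a lethal amount of air carries essentially no disutility, which is the reductio.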

or "I am not obligated to have my utility function make sense in contexts like those involving 3^^^^3 participants, because my utility function is intended to be used in This World, and that number is a physical impossibility in This World."

If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility? Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

An addendum, two more things: the difference between a life with n dust specks hitting your eye and a life with n+1 dust specks is not worth considering, given how large n is in any real life. Furthermore, if we allow for possible immortality, n could literally be infinite, so the difference would be literally 0.

If utility is to be compared relative to lifetime utility, i.e. as (LifetimeUtility + x) / LifetimeUtility, doesn't that assign higher impact to five seconds of pain for a twenty-year-old who will die at 40 than to a twenty-year-old who will die at 120? Does that make sense?
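
A toy sketch of that objection, with all numbers invented for illustration (the one-utility-unit-per-remaining-year assumption is mine, not kaz's):

```python
def relative_impact(harm, remaining_years):
    # Assume one utility unit per remaining year of life, so lifetime
    # utility is proportional to years left; measure the harm against it.
    lifetime_utility = float(remaining_years)
    return harm / lifetime_utility

pain = -1.0  # assumed disutility of five seconds of pain, arbitrary units

print(relative_impact(pain, 20))   # dies at 40:  -0.05
print(relative_impact(pain, 100))  # dies at 120: -0.01
```

The same pain weighs five times as heavily for the person with the shorter remaining life, which is the counterintuitive consequence kaz is questioning.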

Secondly, by virtue of your asserting that there exists an action with minimal disutility, you've shown that the Field of Utility is very different from the field of, say, the Real numbers, and so I am incredulous that we can simply "multiply" in the usual sense.

Eliezer's point does not seem to me predicated on the existence of such a value; I see no need to assume multiplication has been broken.

Comment author: bgaesop 22 August 2011 07:32:51AM 2 points

if the disutility of an air molecule slamming into your eye were 1 over Graham's number, enough air pressure to kill you would have negligible disutility.

Yes, this seems like a good argument that we can't add up the disutility of things like "being bumped into by particle type X" linearly. In fact, it seems like having one molecule of air bump into me, or even however many I breathe in a day, is a good thing, and so we can't just talk about "the disutility of being bumped into by particles of a given kind".

If your utility function ceases to correspond to utility at extreme values, isn't it more of an approximation of utility than actual utility?

Yeah, of course. Why, do you know of some way to accurately access someone's actually-existing Utility Function in a way that doesn't just produce an approximation of an idealization of how ape brains work? Because me, I'm sitting over here using an ape brain to model itself, and this particular ape doesn't even really expect to leave this planet or encounter or affect more than a few billion people, much less 3^^^3. So it's totally fine using something accurate to a few significant figures, trying to minimize errors that would have noticeable effects on these scales.

Sure, you don't need a model that works at the extremes - but when a model does hold for extreme values, that's generally a good sign for the accuracy of the model.

Yes, I agree. Given that your model is failing at these extreme values and telling you to torture people instead of blink, I think that's a bad sign for your model.

doesn't that assign higher impact to five seconds of pain for a twenty-year old who will die at 40 than to a twenty-year old who will die at 120? Does that make sense?

Yeah, absolutely, I definitely agree with that.

In response to comment by bgaesop on The 5-Second Level
Comment author: shokwave 29 July 2011 05:56:54AM 4 points

This is a very silly reason to reject an idea.

Not always. Time-consuming investigations carry a disutility: if the prior for theories in this reference class, multiplied by the utility of finding this idea to be true, does not overcome that disutility, you ought not to investigate. That is a very serious reason to reject an idea. If you do not give some weight to the time costs of investigation, I have a reductio ad absurdum here that will monopolise your free time forever.
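
A minimal sketch of that test, with every number invented for illustration:

```python
def worth_investigating(p_true, value_if_true, investigation_cost):
    # Investigate only if the expected value of what you'd learn
    # (prior probability times payoff if true) beats the time cost.
    return p_true * value_if_true - investigation_cost > 0

# Hypothetical values: a fringe theory with a tiny prior and a modest
# payoff doesn't repay ten hours of watching videos.
print(worth_investigating(p_true=1e-6, value_if_true=1000.0,
                          investigation_cost=10.0))  # False
```

This ignores cheap spot-checks and partial updating, but it captures the prior-times-utility-versus-time-cost comparison shokwave describes.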

Comment author: bgaesop 09 August 2011 10:25:22PM 1 point

That's true. But that's a reason not to investigate, not to read this thread, and not to think about the subject at all; it is not a reason to reply in this thread that the idea is unlikely, much less to declare it unlikely.

If your reaction to reading about the truther idea is "the value of knowing the facts about this issue, whatever they are, is rather low, and it would be time-consuming to learn them, so I don't care," that is A-OK. If your reaction is "the value of knowing the facts about this issue, whatever they are, is rather low, and it would be time-consuming to learn them, therefore I am not going to update whatsoever on this issue, and will ignore the evidence I know is available, and yet still hold a strong, high-confidence belief about it," then that seems kind of silly to me.

Does that make sense? Do you agree, or not? This is not an issue I feel very strongly about, but value of information is something I've been thinking about more recently and so I think that hearing others' opinions on it would be useful. At the very least, worth the time to read them :) Amusing link, by the way.

In response to comment by roland on The 5-Second Level
Comment author: WrongBot 11 May 2011 02:41:06AM 5 points
  • The WTC being loaded with explosives is a much more complex explanation than the orthodox one - penalty.
  • The explosives theory involves a conspiracy - penalty.
  • The explosives theory can be and is used to score political points - penalty.
  • Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it - penalty.
  • The explosives theory doesn't make any goddamn sense - huge penalty.
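
One way to read that list is as independent penalties stacking onto a prior in log-odds space; a toy sketch, with every penalty size invented for illustration:

```python
import math

def posterior_probability(prior_log_odds, penalties):
    # Independent penalties add in log-odds space; convert back at the end.
    log_odds = prior_log_odds + sum(shift for _, shift in penalties)
    return 1 / (1 + math.exp(-log_odds))

penalties = [
    ("more complex than the orthodox account", -3.0),
    ("requires a conspiracy", -4.0),
    ("used to score political points", -1.0),
    ("evidence locked up in videos", -0.5),
    ("doesn't make any goddamn sense", -10.0),
]

print(f"{posterior_probability(0.0, penalties):.1e}")  # ~9e-09
```

Whether those penalties are really independent, or anywhere near those sizes, is exactly what the replies below dispute.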
Comment author: bgaesop 28 July 2011 05:30:41PM 0 points

The explosives theory involves a conspiracy

So does the traditional explanation.

The explosives theory can be and is used to score political points

So is the traditional explanation. War in Iraq, anyone?

Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it

This is a very silly reason to reject an idea.

Comment author: taw 26 July 2011 01:48:40AM 6 points

If, as people here like to believe (which may or may not be true), the LWers are very rational and good at picking things with very high expected value to start or donate to [...]

I didn't downvote you, but what you're saying is essentially "if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity". Which is something every single group would say, in slight variation.

I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.

Here's a results chart for various asteroid tracking efforts. The Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to the University of Arizona and have that money go to the CSS somehow. I'm not really following this too closely; I'm mostly glad that some people are doing something here.

Comment author: bgaesop 26 July 2011 08:00:04AM * 4 points

I didn't downvote you,

Thanks! I upvoted you.

but what you're saying is essentially "if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity". Which is something every single group would say, in slight variation.

Well yeah; that's why you should examine the evidence and not just do what everyone else does. So let's look at the beliefs of all the Singularitarians on LW as evidence. What would we expect to see if LW were just an arbitrary tribe that picked a random cause to glom onto? I suspect we would see that not many people in the world, and particularly not high-status people and organizations, would pay attention to the Singularity. I predict that everyone on LW would donate money to SIAI and shun people who don't donate or who belittle SIAI.

Now what would we see if LW is in fact a group of high-quality rationalists and the world in general is too blinded by various biases to think rationally about low-probability, high-impact events? Well, most people, including high-status people (but perhaps not some academics), wouldn't talk about it. People on LW would donate money to SIAI because they did the calculation and decided it had the highest expected value. And they would probably still shun the people who disagree, because they're still humans.

Those two situations look awfully similar to me. My point is, I don't think you can use LW's enthusiasm for SIAI, relative to the general public's, as a strike against either LW or SIAI.
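
In Bayesian terms, the claim is that both hypotheses assign nearly the same probability to what we actually observe, so the observation is almost no evidence either way; a toy illustration with invented numbers:

```python
# Probability of observing "LW donates to SIAI and shuns dissenters"
# under each hypothesis; both numbers are made up for illustration.
p_obs_given_tribe = 0.90     # an arbitrary tribe would look like this
p_obs_given_rational = 0.95  # so would a genuinely rational community

likelihood_ratio = p_obs_given_rational / p_obs_given_tribe
print(round(likelihood_ratio, 2))  # 1.06: barely moves the posterior
```

A likelihood ratio near 1 leaves the posterior close to the prior, which is why the enthusiasm by itself can't count for much in either direction.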

Here's a results chart for various asteroid tracking efforts. The Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to the University of Arizona and have that money go to the CSS somehow. I'm not really following this too closely; I'm mostly glad that some people are doing something here.

I'm not finding anything there indicating that they're hurting for funding, but perhaps I'm missing it.

Comment author: taw 25 July 2011 11:09:29PM 0 points

So it's just an awfully convenient coincidence that the best charity for displaying tribal affiliations to the Less Wrong crowd and the best charity for saving the world just happen to be the same one? What a one in a billion chance! Outside view says they're not anything like that, and they have zero to show for it as a counterargument.

If you absolutely, positively have to spend money on existential risk (not that I'm claiming this is a good idea, but if you have to), asteroids are known to cause mass extinctions, with a chance of roughly 1:50,000,000 in any given year. That's 1:500,000 per century, which is not really negligible. And you can make some real difference by supporting asteroid tracking programs.
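
A quick check of that arithmetic, using only the 1-in-50,000,000 annual figure stated above:

```python
p_year = 1 / 50_000_000
# Exact chance of at least one extinction-level impact in a century:
p_century = 1 - (1 - p_year) ** 100
print(p_century)     # ~2.0e-06, i.e. about 1 in 500,000
print(100 * p_year)  # linear approximation: identical at this scale
```

For probabilities this small, 100 * p_year is essentially exact, which is where the 1:500,000-per-century figure comes from.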

Comment author: bgaesop 26 July 2011 12:59:00AM 3 points

So it's just an awfully convenient coincidence that the best charity for displaying tribal affiliations to the Less Wrong crowd and the best charity for saving the world just happen to be the same one? What a one in a billion chance!

No, that's not it at all. If, as people here like to believe (which may or may not be true), the LWers are very rational and good at picking things with very high expected value to start or donate to, then it makes sense that one of them (Eliezer) would create an organization whose existence has very high expected value (SIAI) and that the rest of the people here would donate to it. If that is the case, and SIAI is the best charity to donate to in terms of expected value (which it may or may not be), then it would also be the best charity to donate to in order to display tribal affiliations (which it definitely is). So if you accept that people on LW are more rational than average, then their donating so much to SIAI should be taken as weak evidence that SIAI is a really good charity to donate to.

you can make some real difference by supporting asteroid tracking programs.

I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.

Comment author: aausch 28 May 2011 08:56:16PM * 6 points

Exercise: Dancing

Single/partnered dancing lessons. Increase body awareness and consciousness of body-language signs, both emitted and received. Practice basic skills that can lead to other benefits: confidence speaking with strangers, and hugging at meet-ups.

Comment author: bgaesop 30 May 2011 05:44:15AM * 4 points

Exercise: Improvisatory dance. In my opinion, improvising is more useful than specific styles of dance (salsa, swing, waltz). Most people do not dance specific dances in common social situations unless the event is based around that dance. If you are at a club, you can pop and lock, b-boy, robot, liquid&digits, or krump while everyone around you does something else. Also, it's easier to be better at improvisatory dance than the people around you, and more obvious when you are.

I have found that attempting to teach others to dance in literal language doesn't work as well as using metaphorical, poetic, woo-filled language. That said, as a specific exercise: feel the energy in your torso and each of your limbs. Feel your connection to the earth beneath you; actually feel the sensation of your feet touching the ground. What parts are touching? The heel, the balls, the toes: pay attention to each specifically. Direct your focus and weight either towards or away from the parts of your body you find yourself noticing. Feel the energy in your limbs again, and let some of it out to float in front of you: snap it out, or gently wave it, or pull or push or whatever your body intuits. Then move the now-floating ball of energy around, and let it move you around.

This is much easier to explain in person, when you can see me doing it. I was originally inspired to dance by this TED talk by the Legion of Extraordinary Dancers, which is also where I got some of what I wrote above (the rest I got from my own experience and from the improvisation and choreography class I just took). If you enjoy this kind of dance, you will love the LXD web show.
