Comment author: MathieuRoy 19 March 2014 03:21:16PM *  1 point [-]

I'm a semifinalist for the MA365 mission (a one-year simulation of Mars exploration). We now need to make a video answering some questions such as "Why Mars? Why do you think we should go there?" (see my answer below). The Mars Society will choose 18 finalists, who will be split into 3 teams. In August 2014, they (we?) will go to Devon Island for two weeks, and the Society will pick the best team. This team of 6 will then carry out a 365-day simulation of Mars exploration starting in July 2015. You can contribute to the project by giving money (in exchange for gifts) through the Indiegogo campaign: this will help us buy better equipment.

my answer to "Why Mars? Why do you think we should go there?" or as asked in the post "Why this project and not others?" (note: I would like to hear your answers too)

I think space exploration in general has a lot of benefits: it improves our understanding of the history of our solar system, and it pushes us to develop new technologies that often become useful on Earth. I think the next step in space exploration is to send humans to Mars, because its similarities to Earth make it probably the easiest and most interesting planet in our solar system to explore. I also think that sending humans to Mars will inspire more people to study science and technology, which will in turn improve human well-being and life expectancy. Finally, I think that if we succeed in colonizing Mars, it will be a good fallback position in case an existential catastrophe happens on Earth.

I think the first manned mission to Mars will probably happen in the next decade, so we need to prepare ourselves. For example, Inspiration Mars (www.inspirationmars.org) wants to send a couple on a flyby within 100 miles of Mars in 2018, and Mars One (www.mars-one.com) wants to start sending humans to Mars in 2024 and leave them there indefinitely. The American, Russian, and European space agencies are also planning manned missions to Mars relatively soon. So the research done by The Mars Society, such as this mission, will be really useful to all of these organisations.

Comment author: MathieuRoy 13 March 2014 08:52:46AM 0 points [-]

A gene conveying a 3% fitness advantage, spreading through a population of 100,000, would require an average of 768 generations to reach universality in the gene pool.

Generations to fixation ≈ 2 ln(N) / s = 2 ln(100,000) / 0.03 ≈ 768

I was confused at first because I had plugged in s = 1.03 (the relative fitness) instead of s = 0.03 (the fitness advantage), which gives 2 ln(100,000) / 1.03 ≈ 22.36 rather than 768.
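The 768 figure in the quoted passage follows from the approximation t ≈ 2 ln(N)/s. Here is a minimal Python sketch (the function name is just illustrative) showing that substituting s = 0.03, rather than a relative fitness of 1.03, reproduces it:

```python
import math

def generations_to_fixation(population_size: int, s: float) -> float:
    """Rough approximation for a beneficial allele: t ~ 2*ln(N) / s generations."""
    return 2 * math.log(population_size) / s

# A "3% fitness advantage" means s = 0.03, not a relative fitness of 1.03:
print(round(generations_to_fixation(100_000, 0.03)))     # → 768
print(round(generations_to_fixation(100_000, 1.03), 2))  # → 22.36
```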

Comment author: westward 02 March 2014 10:39:37PM 0 points [-]

Yes. I'm "ASL IV" level. I'm conversationally expressive, less so receptive.

Do you have any interest in collaborating on Anki decks? I'm thinking video clips with English glosses. I'd also love an ASL/English dictionary that was searchable by handshape, body location, movement, etc.

Comment author: MathieuRoy 07 March 2014 09:16:23PM *  0 points [-]

If you're interested, I've put together this document with all the best resources (IMO) that can be found on ASL.

Concerning an ASL/English dictionary, I know these two: Handspeak and ASL Jinkle.

Comment author: ArisKatsaris 01 March 2014 03:50:24PM 0 points [-]

Music Thread

Comment author: MathieuRoy 02 March 2014 01:44:51AM *  3 points [-]

I am putting together a YouTube playlist of transhumanist songs (with a particular quote from each song). Since there aren't many of these, I also include songs that are only somewhat transhumanist (frankly, I'm shocked at the ratio of transhumanist songs to love songs). So, do you have suggestions for songs that are at least somewhat related to transhumanism (and/or rationality)? They don't need to be in English.

For example, here are the ones that I have put in the playlist so far:

Turn It Around by Tim McMorris

Have you ever looked outside and didn’t like what you see

Or am I the only one who sees the things we could be

If we made more effort, then I think you’d agree

That we could make the world a better place, a place that is free

Another one is Hiro by Soprano: a song about what the singer would do if he could travel back in time. (It's in French, but with English subtitles.) (It's inspired by the TV show Heroes, which I also recommend.)

Tellement de choses que j’aurais voulu changer ou voulu vivre (So many things I would have wanted to change, or to live)

Tellement de choses que j’aurais voulu effacer ou revivre (So many things I would have wanted to erase, or to relive)

The classic Imagine by John Lennon

Imagine there's no countries

It isn't hard to do

Nothing to kill or die for

And no religion too

Imagine all the people

Living life in peace…

The Future Soon by Jonathan Coulton

Well it's gonna be the future soon

And I won't always be this way

One that I saw recommended on LW: The Singularity by Dr. Steel (it's my favorite!)

Nanotechnology transcending biology

This is how the race is won

Another that I saw on LW: Singularity by The Lisps

You'd keep all the memories and feelings that you ever want,

And now you can commence your life as an uploaded extropian.

Singularity by Steve Aoki & Angger Dimas ft. My Name is Kay

We’re gonna live, we’ll never die

I am the very model of a singularitarian

I am a Transhuman, Immortalist, Extropian

I am the very model of a Singularitarian

Another World by Doug Bard

Sensing a freedom you've never known,

no limitation, only you can decide

Transhuman by Neurotech

The mutation is in our nature

Transhuman by Amaranthe

My adrenaline feeds my desire

To become an immortal machine

E.T. by Katy Perry ft. Kanye West

You're from a whole other world

A different dimension

You open my eyes

And I'm ready to go

Lead me into the light

Space Girl by Charmax

She told me never venture out among the asteroids, yet I did.

Comment author: ygert 10 February 2014 12:41:42PM *  1 point [-]

As I mentioned to you when you asked on PredictionBook, look to the media threads. These are threads specifically intended for the purpose you want: to find/share media, including podcasts/audiobooks.

I also would like to reiterate what I said on PredictionBook: I don't think PredictionBook is really meant for this kind of question. Asking it here is fine, even good: it gives us a chance to direct you to the right place without cluttering PredictionBook with non-predictions.

Comment author: MathieuRoy 11 February 2014 02:14:40AM 0 points [-]

Thank you for the link.

Comment author: MathieuRoy 10 February 2014 04:58:14AM *  2 points [-]

Which transhumanist and/or rationalist podcasts or audiobooks do you recommend, besides HPMOR, which I just finished and really liked?

Comment author: Eliezer_Yudkowsky 05 September 2008 12:23:51AM 6 points [-]

Eliezer: the rationality of defection in these finitely repeated games has come under some fire, and there's a HUGE literature on it. Reading some of the more prominent examples may help you sort out your position on it.

My position is already sorted, I assure you. I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega.

As Paul says, this is very well trodden ground. Since it hasn't been assumed that we are sure we know how the other party reasons, we might want to invest some early rounds in probing to see how the party thinks.

As someone who rejects defection as the inevitable rational solution to both the one-shot PD and the iterated PD, I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.

True, the iteration does present the possibility of "exploiting" an "irrational" opponent whose "irrationality" you can probe and detect, if there's any doubt about it in your mind. But that doesn't resolve the fundamental issue of rationality; it's like saying that you'll one-box on Newcomb's Problem if you think there's even a slight chance that Omega is hanging around and will secretly manipulate box B after you make your choice. What if neither party to the IPD thinks there's a realistic chance that the other party is stupid - if they're both superintelligences, say? Do they automatically defect against each other for 100 rounds?

And are you really "exploiting" an "irrational" opponent, if the party "exploited" ends up better off? Wouldn't you end up wishing you were stupider, so you could be exploited - wishing to be unilaterally stupider, regardless of the other party's intelligence? Hence the phrase "regret of rationality"...

Comment author: MathieuRoy 05 February 2014 12:07:27PM *  1 point [-]

Do you mean "I cooperate with the Paperclipper if AND ONLY IF I think it will one-box on Newcomb's Problem with myself as Omega AND I think it thinks I'm Omega AND I think it thinks I think it thinks I'm Omega, etc." ? This seems to require an infinite amount of knowledge, no?

Edit: and you said "We have never interacted with the paperclip maximizer before", so do you think it would one-box?

Comment author: MathieuRoy 04 February 2014 05:20:16AM 1 point [-]

Would a (hypothetically) pure altruist have children (in our current situation)?

In response to comment by dclayh on Closet survey #1
Comment author: woodside 13 January 2013 05:35:15PM *  8 points [-]

I'm curious about your personal experiences with physical pain. What is the most painful thing you've experienced and what was the duration?

I'm sympathetic to your preference in the abstract, I just think you might be surprised at how little pain you're actually willing to endure once it's happening (not a slight against you, I think people in general overestimate what degree of physical pain they can handle as a function of the stakes involved, based largely on anecdotal and second hand experience from my time in the military).

At the risk of being overly morbid, I have high confidence (>95%) that I could have you begging for death inside of an hour if that were my goal (don't worry, it's certainly not). An unfriendly AI capable of keeping you alive for eternity just to torture you would be capable of making you experience worse pain than anyone ever has in the history of our species so far. I believe you that you might sign a piece of paper to pre-commit to an eternity of torture vice simple death. I just think you'd be very very upset about that decision. Probably less than 5 minutes into it.

In response to comment by woodside on Closet survey #1
Comment author: MathieuRoy 03 February 2014 12:34:19PM 0 points [-]

I would definitely pre-commit to immortality.

Comment author: MathieuRoy 03 February 2014 11:23:49AM 1 point [-]

So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's.

Shouldn't it be more like $15 + ($100 − $15)/2 = $57.50, so that both gain (about) the same amount of utility? Otherwise, the one who was ready to pay $100 saved ("won") $85, while the other won nothing (s/he was indifferent between doing it for $15 and not doing it at all).

Nice post, by the way. Such techniques seem useful if you trust that the other person's bid really represents the amount s/he's willing to pay.
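The split-the-difference price suggested above can be sketched as follows (a hypothetical helper, assuming B values having the errand done at $100 and D's reservation price is $15):

```python
def split_the_difference(buyer_max: float, seller_min: float) -> float:
    """Price that divides the surplus evenly between both parties."""
    return seller_min + (buyer_max - seller_min) / 2

price = split_the_difference(100.0, 15.0)
print(price)          # → 57.5
print(100.0 - price)  # B's utility gain → 42.5
print(price - 15.0)   # D's utility gain → 42.5
```

Any price strictly between $15 and $100 leaves both parties better off; this particular price gives each the same $42.50 share of the $85 surplus.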
