Comment author: Viliam_Bur 02 February 2016 10:49:15PM 1 point [-]

Sounds like this could work.

Well, it depends on how large a fraction of votes currently comes from users with karma under 250. It would be bad to reduce the total number of votes drastically. They do have a positive role in general; most people use them correctly.

Comment author: AspiringRationalist 03 February 2016 02:56:05AM 0 points [-]

Good point. I'm not sure what the right threshold would be.

How difficult would it be to look up the percentage of votes that come from different karma levels?
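If the vote records were available, the lookup asked about here would be a simple aggregation. A minimal sketch, assuming a list of vote records with the voter's karma attached (the field names and sample data are invented for illustration; the actual LessWrong schema isn't public):

```python
# Hypothetical vote records -- in reality these would come from the site's
# database; "voter_karma" is an assumed field name.
votes = [
    {"voter_karma": 10}, {"voter_karma": 120}, {"voter_karma": 300},
    {"voter_karma": 5}, {"voter_karma": 800}, {"voter_karma": 40},
]

THRESHOLD = 250  # proposed karma threshold for voting

# Count votes cast by users below the threshold.
below = sum(1 for v in votes if v["voter_karma"] < THRESHOLD)
fraction_below = below / len(votes)
print(f"{fraction_below:.0%} of votes come from users under {THRESHOLD} karma")
```

With real data, the same one-pass count would show how many votes a 250-karma threshold would eliminate.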

Comment author: AspiringRationalist 31 January 2016 03:49:32PM 10 points [-]

What if we set a significantly higher karma threshold for voting? I think a threshold of 250 or so would make Eugene's sockpuppetry and mass-downvoting shenanigans prohibitively difficult.

Comment author: leplen 25 January 2016 03:52:30PM 1 point [-]

I'm working through the Udacity deep learning course right now, and I'm always trying to learn more from the MIRI research guide. I'm in a fairly different timezone, but my schedule is pretty flexible. Maybe we can work something out?

Comment author: AspiringRationalist 28 January 2016 02:14:07AM 2 points [-]

I just finished Stanford's machine learning class on Coursera and I was thinking about starting Google's Udacity course.

I don't have much formal background in CS (2 classes in college and later a couple Coursera classes), but I've been working as a software engineer for a few years now.

I am in U.S. Eastern Time (UTC-5).

Comment author: RyanCarey 26 January 2016 06:19:16AM 1 point [-]

Machine Learning for Good is a machine learning and deep learning study group for EAs and rationalists that I'm facilitating.

It includes a study group for the current Udacity TensorFlow/Deep Learning course. I won't repost further info here; you can access it through the following group:

https://m.facebook.com/profile.php?id=1582428355359588&tsid=0.6936991019174457&source=typeahead

Comment author: AspiringRationalist 28 January 2016 02:08:55AM 0 points [-]

The Facebook group is closed. Should people here assume that they will be allowed to join?

Comment author: gwern 04 January 2016 08:12:46PM 21 points [-]

I've gotten around to doing a cost-benefit analysis for vitamin D: http://www.gwern.net/Longevity#vitamin-d

Comment author: AspiringRationalist 06 January 2016 03:04:59AM 0 points [-]

Thanks for posting that!

The key stats: expected life extension: 4 months; optimal starting age: 24.

Comment author: SilentCal 05 November 2015 11:34:23PM 3 points [-]

I was wondering about the deterrence in place against the use of nuclear weapons. I had always assumed it to be massive, but I can't tell whether there's actually any formal international treaty about the use of nuclear weapons in war.

https://en.wikipedia.org/wiki/List_of_weapons_of_mass_destruction_treaties has arms-reduction, non-proliferation, and test ban treaties, but apparently nothing about who you actually nuke. I think the Geneva Conventions say you can't target civilians with any weapon, but does anything prohibit nuking your enemy's army?

Comment author: AspiringRationalist 06 November 2015 01:03:01AM 4 points [-]

If things escalate to the point where nuclear weapons get used, that probably implies enough of a breakdown of order that it doesn't matter what any treaty says.

Comment author: eeuuah 17 January 2015 07:51:14PM 0 points [-]

My skin (particularly my hands, because soap is harsh) is prone to drying out, so a humidifier really helps with the minor issues.

Comment author: AspiringRationalist 15 October 2015 12:38:20AM 0 points [-]

So does mine. The humidifier helps, but since I spend a lot of time in environments where I can't easily install one (eg work), I also use moisturizer.

Comment author: Bound_up 13 October 2015 11:51:04PM 2 points [-]

I've been through the free will sequences a second time now, and I'm trying to figure out how to apply it to my life.

See, even that sounds weird, because applying it to my life... trying... figuring out... whether I do or not is inevitable, right?

Speaking from the naive standpoint, how does the determinist viewpoint affect your decisions? How do you think about it, incorporate it? Do you compartmentalize and pretend you're in control, or what?

Comment author: AspiringRationalist 14 October 2015 12:59:06AM 3 points [-]

Do you compartmentalize and pretend you're in control, or what?

I think the main takeaway is that you shouldn't worry too much about questions of free will. Basically, the fact that your free will is made of physics doesn't mean it makes sense to make poor choices or not take responsibility for yourself and then blame physics. Also, don't go looking for magical explanations of free will existing "outside of physics".

Comment author: DanielLC 14 October 2015 12:38:51AM 0 points [-]

Taxes would increase to pay for the universal basic income. You could fund it using the money we currently spend on welfare, but that includes things like Medicare. Either we need to keep that, or we need to give people extra money to pay for medical insurance.

Supply of labor could decrease. This is a necessary consequence of any effort to help the poor. But since we already have a welfare system, it's just a question of which causes labor to decrease less.

Comment author: AspiringRationalist 14 October 2015 12:49:23AM 0 points [-]

Supply of labor could decrease. This is a necessary consequence of any effort to help the poor. But since we already have a welfare system, it's just a question of which causes labor to decrease less.

For things like welfare (and almost certainly for UBI, though I doubt there's enough empirical evidence either way to be sure), yes.

Things like education subsidies (assuming they subsidize professionally relevant education rather than just signaling, which admittedly is a somewhat dubious assumption) and the EITC (basically a negative income tax for the working poor in the US) could very well increase the labor supply.

Comment author: AspiringRationalist 09 October 2015 12:08:22AM 2 points [-]

When we tried Paranoid Debating at the Boston meetup a few years back, we often had the problem that the deceiver didn't know enough about the question to know which direction to mislead in. I think the game would work better if the deceiver were simply trying to bias the group in a particular direction rather than make the group wrong. I think that's also a closer approximation to real life - plenty of people want to sell you their product and don't know or care which option is best. Not many just want you to buy the wrong product.
