I don't think that so high estimate for first statement is reasonable.
Also, link now leads to bicameral reasoning article.
Thanks, fixed, now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/
(Continuing the posting of select posts from Slate Star Codex for comment here, for the reasons discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.)
Scott recently wrote a post called No Time Like The Present For AI Safety Work. It makes the argument for the importance of organisations like MIRI as follows, and explores the last two premises:
1. If humanity doesn’t blow itself up, eventually we will create human-level AI.
2. If humanity creates human-level AI, technological progress will continue and eventually reach far-above-human-level AI.
3. If far-above-human-level AI comes into existence, eventually it will so overpower humanity that our existence will depend on its goals being aligned with ours.
4. It is possible to do useful research now which will improve our chances of getting the AI goal alignment problem right.
5. Given that we can start research now we probably should, since leaving it until there is a clear and present need for it is unwise.
I placed very high confidence (>95%) on each of the first three statements – they’re just saying that if trends continue moving towards a certain direction without stopping, eventually they’ll get there. I had lower confidence (around 50%) on the last two statements.
Commenters tended to agree with this assessment; nobody wanted to seriously challenge any of 1-3, but a lot of people said they just didn’t think there was any point in worrying about AI now. We ended up in an extended analogy about illegal computer hacking. It’s a big problem that we’ve never been able to fully address – but if Alan Turing had gotten it into his head to try to solve it in 1945, his ideas might have been along the lines of “Place your punch cards in a locked box where German spies can’t read them.” Wouldn’t trying to solve AI risk in 2015 end in something equally cringeworthy?
As always, it's worth reading the whole thing, but I'd be interested in the thoughts of the LessWrong community specifically.
For my part, I'm interested in the connection to GiveWell's powerful advocacy of "cluster thinking". I'll think about this some more and post thoughts if I have time.
(Continuing the posting of select posts from Slate Star Codex for comment here, as discussed in this thread, and as Scott Alexander gave me - and anyone else - permission to do with some exceptions.)
Scott recently wrote a post called Bicameral Reasoning. It touches on epistemology and scope insensitivity. Here are some excerpts, though it's worth reading the whole thing:
Delaware has only one Representative, far less than New York’s twenty-seven. But both states have an equal number of Senators, even though New York has a population of twenty million and Delaware is uninhabited except by corporations looking for tax loopholes.
[...]
I tend to think something like “Well, I agree with this guy about the Iraq war and global warming, but I agree with that guy about election paper trails and gays in the military, so it’s kind of a toss-up.”
And this way of thinking is awful.
The Iraq War probably killed somewhere between 100,000 and 1,000,000 people. If you think that it was unnecessary, and that it was possible to know beforehand how poorly it would turn out, then killing a few hundred thousand people is a really big deal. I like having paper trails in elections as much as the next person, but if one guy isn’t going to keep a very good record of election results, and the other guy is going to kill a million people, that’s not a toss-up.
[...]
I was thinking about this again back in March when I had a brief crisis caused by worrying that the moral value of the world’s chickens vastly exceeded the moral value of the world’s humans. I ended up being trivially wrong – there are only about twenty billion chickens, as opposed to the hundreds of billions I originally thought. But I was contingently wrong – in other words, I got lucky. Honestly, I didn’t know whether there were twenty billion chickens or twenty trillion.
And honestly, 99% of me doesn’t care. I do want to improve chickens, and I do think that their suffering matters. But thanks to the miracle of scope insensitivity, I don’t particularly care more about twenty trillion chickens than twenty billion chickens.
Once again, chickens seem to get two seats to my moral Senate, no matter how many of them there are. Other groups that get two seats include “starving African children”, “homeless people”, “my patients in hospital”, “my immediate family”, and “my close friends”.
[...]
I’m tempted to say “The House is just plain right and the Senate is just plain wrong”, but I’ve got to admit that would clash with my own very strong inclinations on things like the chicken problem. The Senate view seems to sort of fit with a class of solutions to the dust specks problem where after the somethingth dust speck or so you just stop caring about more of them, with the sort of environmentalist perspective where biodiversity itself is valuable, and with the Leibnizian answer to Job.
But I’m pretty sure those only kick in at the extremes. Take it too far, and you’re just saying the life of a Delawarean is worth twenty-something New Yorkers.
Thoughts?
There is a related practice where you can switch your current account. At least in the UK, there are several banks that will pay you from £100 to £150 for switching your current account to them. Like the above practice of churning, this will have a small negative impact on your credit score in the short term, but is otherwise fine. The bank will do all the switching for you, and transfer all your direct debits, standing orders, etc., from the old account. Once you have received the switching bonus (normally within a month of the switch), you can always switch back. Naturally, you should choose a no-fee current account to switch to (all of these offers allow that). Note that some of these switching bonuses require you to have at least two direct debits and pay in a certain sum of money per month, so read the fine print.
The following offers are currently available in the UK (not necessarily an exhaustive list):
What you do with the money is of course up to you, but this is compatible with effective altruism. I have taken advantage of all of these offers at one time or another, and heartily recommend it.
EDITED TO ADD: Many of these bonuses cannot be claimed if you have already received the bonus too recently. So it's not like you can make £600 a month doing this!
http://www.moneysavingexpert.com/ is the best way to learn about these.
Amazon Smile is nice too: 0.5% for a bookmark. Can't easily link from my phone.
Shop for Charity is much better - 5%+ directly to GiveWell-recommended charities, plus browser plugins people have made that apply this every time you buy from Amazon.
Did you edit your original comment? When I first read it, I thought it was saying the opposite of what it now seems to say... I actually agree with it now - "should" is not universal, it depends on your goals.
P.S. That paper you provide actually argues for hedonism, not utilitarianism :).
Did you edit your original comment?
Not that I recall
Can you give an explicit argument for why you "should" maximize utility for everyone, instead of just for yourself?
Some people offer arguments - e.g. http://philpapers.org/archive/SINTEA-3.pdf - and for some people it's a basic belief or value not based on argument.
Bob should suggest that the neighbour write down the maximum amount she's willing to pay for Alice to stop playing her music (without Alice watching), and that Alice write down the minimum amount she's willing to accept to stop playing (without the neighbour watching). If the former amount equals or exceeds the latter, the neighbour gives the arithmetic mean of the two amounts to Alice, and Alice stops playing and learns to live with it, or buys headphones, or goes to live somewhere else; otherwise Alice keeps playing and the neighbour learns to live with it, or buys earplugs, or goes to live somewhere else. (This reduces to the "politeness" thing when both write down zero.)
Why should it be the neighbour who should pay Alice to not play rather than Alice who should pay the neighbour to play? Because the rules as they exist now (and were accepted by the neighbour when she came to live here) do allow Alice to play, that's why.
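For concreteness, the sealed-bid procedure described above can be sketched in a few lines of Python. The function name and return values here are purely illustrative, not anything from the original comment; the deal goes through only when the neighbour's maximum willingness to pay covers Alice's minimum acceptable compensation, in which case the payment splits the difference.

```python
def resolve_dispute(neighbour_max_pay, alice_min_accept):
    """Sealed-bid compromise between the neighbour and Alice.

    neighbour_max_pay: most the neighbour will pay for silence.
    alice_min_accept: least Alice will accept to stop playing.
    Returns the outcome and the payment from neighbour to Alice.
    """
    if neighbour_max_pay >= alice_min_accept:
        # Deal: Alice stops, and is paid the mean of the two bids.
        payment = (neighbour_max_pay + alice_min_accept) / 2
        return "alice_stops", payment
    # No deal: Alice keeps playing, no money changes hands.
    return "alice_keeps_playing", 0.0
```

When both parties write down zero, the condition holds trivially and the payment is zero, which recovers the "politeness" case.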
This is a good solution when marginal money has roughly equal utility to Alice and Bob, but suffers otherwise.
People seem to be complaining about community fracturing, and good writers going off onto their own blogs. Why not just accept that and encourage people to post links to the good content from these places?
Hacker News is successful mainly because it encourages people to post their own blog posts there, to get a wider audience and discussion - as opposed to Reddit, where self-promotion is heavily discouraged.
LessWrong is based on Reddit's code. You could add a lesswrong.com/r/links and just tell people it's OK to post links to whatever they want there. This could be quite successful, given LessWrong already has a decent community to seed it with - as opposed to going off and starting another subreddit, where it's very hard to attract an initial user base (and you run into the self-promotion problem I mentioned).
Potentially worth actually doing - what'd be the next step in terms of making that a possibility?
Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues