If the book were targeting AI researchers, I would agree that Harris is a poor choice. On the other hand, if the goal is to reach a popular audience, you could do much worse than someone who is very well known in the mainstream media and has a proven track record of writing best-selling books.
Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?
I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty badly. But I think using LW Discussion as a sort of LW Diaspora link aggregator would be one of the best ways to "save" it.
One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.
I personally wouldn't mind a more Hacker News-like style for LW Discussion, with a heavy focus on links to outside content. Because frankly, we're not generating enough content locally anymore.
I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.
I pretty regularly post links as comments in the Open Thread.
The current norm in LW is to have few but meaty top-level posts. I think if we start to just post links, that would change the character of LW considerably, going in the Reddit/HN direction. I don't know if that would be a good thing.
CGP Grey has read Bostrom's Superintelligence.
Transcript of the relevant section:
Q: What do you consider the biggest threat to humanity?
A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don't want it to be true.
I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.
He also apparently discusses this topic on his podcast, and links to the Amazon page for the book in the description of the video.
Grey's video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn't far from realising that there were other, rather plausible implications of increasing AI capability as well, so it's cool to see that it happened.
http://lesswrong.com/lw/nfk/lesswrong_2016_survey/
Friendly weekly reminder that the survey is up and you should take it.
[LINK]
There seems to be some relevant stuff this week:
Katie Cohen, a member of the rationality community in the Bay Area, and her daughter have fallen on some hard times; they are the beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends. I don't know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.
Seems like there are lots of good links and corrections from the previous links post this week, so check it out if you found yourself...
The moon may be why Earth has a magnetic field. If it takes a big moon to have a life-protecting magnetic field, this presumably affects the Fermi paradox.
EY arguing that a UFAI threat is worth considering -- as a response to Bryan Caplan's scepticism about it. I think it's a repost from Facebook, though.
ETA: Caplan's response to EY's points. EY answers in the comments.
If you don't have a good primary care doctor or are generally looking to trade money for health, and live in the Bay Area, Phoenix, Boston, NY, Chicago, or Washington DC, I'd recommend considering signing up for One Medical Group, a service that provides members with access to a network of competent primary care doctors, as well as other benefits. They do charge patients a $150 yearly membership fee in addition to charging co-pays similar to what you'd pay at any other primary care physician's office, but in return for this, they hire more ...
I'm a One Medical member. The single biggest draw for me is that you can get appointments the same or next day with little or no waiting time -- whereas my old primary care doctor was usually booked solid for two weeks or more, by which point I'd either have naturally gotten over whatever I wanted to see him for, or have been driven to an expensive urgent care clinic full of other sick people.
They don't bother with the traditional kabuki dance where a nurse ushers you in and takes your vitals and then you wait around for fifteen minutes before the actual doctor shows, either -- you see a doctor immediately about whatever you came in for, and you're usually in and out in twenty minutes. The workflow is so much better that I'm astonished it hasn't been more widely adopted.
That said, they don't play particularly nice with my current insurance, so do your homework.
What do LessWrongers think of terror management theory? It has its roots in Freudian psychoanalysis, but it seems to be getting more and more evidence supporting it (here's a 2010 literature review).
Imagine a case of existential risk, in which humanity needs to collectively make a gamble.
We prove that at least one of 16 possible choices guarantees survival, but we don't know which one.
Question: can we acquire a quantum random number that is guaranteed to be independent from anything else?
I.e. such that the whole world is provably guaranteed to enter quantum superposition of all possible outcomes, and we provably survive in 1/16th of the worlds?
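A classical toy sketch of the branch-counting claim (Python; purely illustrative -- `random.randrange` here is only a stand-in for a genuinely independent quantum source, which is the whole open question):

```python
import random

N_CHOICES = 16
# One of the 16 choices is provably safe, but we don't know which;
# model that hidden fact with a classical RNG as a stand-in.
safe_choice = random.randrange(N_CHOICES)

# If the pick were made with a truly independent quantum random number,
# all 16 branches would (on the thought experiment's assumptions) be
# realized in superposition; exactly one branch picks the safe option.
surviving_branches = [b for b in range(N_CHOICES) if b == safe_choice]
surviving_fraction = len(surviving_branches) / N_CHOICES
print(surviving_fraction)  # 0.0625, i.e. 1/16 of the branch-worlds
```

The sketch only restates the arithmetic of the scenario; it says nothing about whether such a provably independent quantum number can actually be obtained, which is the question being asked.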
Mnemotechnics
Tim Ferriss's recent podcast with Carl Shurmann had a quote of a quote that stuck with me: 'the good shit sticks', said by a writer when asked how he remembers good thoughts while constantly blackout drunk. That kind of 'memory optimism', as I call it, seems a great way to mitigate memory doubt disorder, which I'd guess is more common among skeptics, rationalists, and other purveyors of doubt.
Innovation in education
Does your alma mater have anything resembling an academic decisions tribunal and an administrative decisions tribunal?
We should...
I'm not able to see the post Ultimate List of Irrational Nonsense on my Discussion/New/ page even though I have enabled the options to show posts that have extremely negative vote counts (-100) while signed in. I made a request in the past about not displaying those types of posts for people who are not signed in. I'm not sure if that's related to this or not.
Could there be a fanfic about how Cassandra did not tell people the future, but simply 'what not to do', and lied and schemed her way to the top and saved Troy...
Interesting. The general consensus in that thread seems to have been that the user in question was missing the point somehow, and -3 isn't really such a terribly low score for something generally thought to have been missing the point. (I guess it was actually +6 -9.)
I don't think the poor reception of "Adding up to normality" is why the user in question left LW. E.g., this post was made by the same user about 6 months later, so clearly s/he wasn't immediately driven off by the downvotes on "Adding up to normality".
Anyway. I think I agree with the general consensus in that thread (though I didn't downvote the post and still wouldn't) that the author missed the point a bit. I think Egan's law is a variant on a witticism attributed to Wittgenstein. Supposedly, he and a colleague had a conversation like this. W: Why did anyone think the sun went round the earth? C: Because it looks as if it does. W: What would it have looked like, if it had looked as if the earth went round the sun? The answer, of course, being that it would have looked just the way it actually does, because the earth does go round the sun and things look the way they do.
Similarly (and I think this is Egan's point), if you have (or the whole species has) developed some attitude to life, or some expectation about what will happen in ordinary circumstances, based on how the world looks, and if some new scientific theory predicts that the world will look that way, then either you shouldn't change that attitude or it was actually inappropriate all along.
Now, you can always take the second branch and say things like this: "This theory shows that we should all shoot ourselves, so plainly if we'd been clever enough we'd already have deduced from everyday observation that we should all shoot ourselves. But we weren't, and it took the discovery of this theory to show us that. But now, we should all shoot ourselves." So far as I can tell, appealing to Egan's law doesn't do anything to refute that. It just says that if something is known to work well in the real world, then ipso facto our best scientific theories tell us it should work well in the world they describe, even if the way they describe that world feels weird to us.
I agree with the author when s/he writes that correct versions of Egan's law don't at all rule out the possibility that some proposition we feel attached to might in fact be ruled out by our best scientific theories, provided that proposition goes beyond merely-observational statements along the lines of "it looks as if X".
So, what about the example we're actually discussing? Your proposal, AIUI, is as follows: rig things up so that in the event of the human race getting wiped out you almost certainly get instantly annihilated before you have a chance to learn what's happening; then you will almost certainly never experience the wiping-out of the human race. You describe this by saying that you "probably survive any x-risk".
This seems all wrong to me, and I can see the appeal of expressing its wrongness in terms of "Egan's law", but I don't think that's necessary. I would just say: Are you quite sure that what this buys you is really what you care about? If so, then e.g. it seems you should be indifferent to the installation of a device at your house that at 4am every day, with probability 1/2, blows up the house in a massive explosion with you in it. After all, you will almost certainly never experience being killed by the device (the explosion is big and quick enough for that, and in any case it usually happens when you're asleep). Personally, I would very much not want such a device in my house, because I value not dying as well as not experiencing death, and also because there are other people who would be (consciously) harmed if this happened. And I think it much better terminology to describe the situation as "the device will almost certainly kill me" than as "the device will almost certainly not kill me", because when computing probabilities now I want to condition on my knowledge, existence, etc., now, not after the relevant events happen.
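To make the numbers in this hypothetical concrete (the device and its parameters are mine, purely for illustration): the chance of surviving the device for n nights is (1/2)^n, so "you will almost certainly never experience being killed" coexists with near-certain death:

```python
def survival_probability(n_nights: int, p_explode: float = 0.5) -> float:
    """Chance of the house (and you) surviving n independent nights
    with the hypothetical 4am device installed."""
    return (1 - p_explode) ** n_nights

print(survival_probability(1))   # 0.5
print(survival_probability(10))  # 0.0009765625 -- under 0.1% after ten nights
```

Which is exactly why, conditioning on one's epistemic state now rather than after the fact, "the device will almost certainly kill me" is the right description.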
Am I applying "Egan's law" here? Kinda. I care about not dying because that's how my brain's built, and it was built that way by an evolutionary process formed in the actual world where a lineage isn't any better off for having its siblings in other wavefunction-branches survive; and when describing probabilities I prefer to condition only on my present epistemic state because in most contexts that leads to neater formulas and fewer mistakes; and what I'm claiming is that those things aren't invalidated by saying words like "anthropic" or "quantum". But an explicit appeal to Egan seems unnecessary. I'm just reasoning in the usual way, and waiting to be shown a specific reason why I'm wrong.
I meant that not only his post but most of his comments were downvoted, and from my personal experience, if I get a lot of downvotes I find it difficult to continue rational discussion of the topic.
Egan's law is very vague in its short formulation. It is not clear what "all" covers, what kind of law it is (epistemic, natural, legal), or what "normality" means (physics, experience, our expectations, our social agreements). So it is mostly used as a universal objection to any strange thing.
But there are lots of strange things. Nukes were not normal before they were c...
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.