If the book were targeting AI researchers, I would agree that Harris is a poor choice. On the other hand, if the goal is to reach a popular audience, you could do much worse than someone who is very well known in the mainstream media and has a proven track record of writing best-selling books.
Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?
I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as a top-level Discussion topic, I would get downvoted pretty badly. But I think using LW Discussion as a sort of LW Diaspora link aggregator would be one of the best ways to "save" it.
One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.
I personally wouldn't mind a more Hacker News-style LW Discussion, with a heavy focus on links to outside content, because frankly, we're not generating enough content locally anymore.
I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.
I pretty regularly post links as comments in the Open Thread.
The current norm in LW is to have few but meaty top-level posts. I think if we start to just post links, that would change the character of LW considerably, going in the Reddit/HN direction. I don't know if that would be a good thing.
CGP Grey has read Bostrom's Superintelligence.
Transcript of the relevant section:
Q: What do you consider the biggest threat to humanity?
A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don't want it to be true.
I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.
He also apparently discusses this topic on his podcast, and links to the Amazon page for the book in the video's description.
Grey's video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn't far from realising that increasing AI capability had other rather plausible implications as well, so it's cool to see that it happened.
http://lesswrong.com/lw/nfk/lesswrong_2016_survey/
Friendly weekly reminder that the survey is up and you should take it.
[LINK]
There seems to be some relevant stuff this week:
Katie Cohen, a member of the rationality community in the Bay Area, and her daughter have fallen on some hard times; they are the beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends. I don't know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.
Seems like there are lots of good links and corrections from the previous links post this week, so check it out if you found yourself...
The moon may be why Earth has a magnetic field. If it takes a big moon to have a life-protecting magnetic field, this presumably affects the Fermi paradox.
EY arguing that a UFAI threat is worth considering -- as a response to Bryan Caplan's scepticism about it. I think it's a repost from Facebook, though.
ETA: Caplan's response to EY's points. EY answers in the comments.
If you don't have a good primary care doctor or are generally looking to trade money for health, and live in the Bay Area, Phoenix, Boston, NY, Chicago, or Washington DC, I'd recommend considering signing up for One Medical Group, which is a service that provides members with access to a network of competent primary doctors, as well as providing other benefits. They do charge patients a $150 yearly membership fee in addition to charging co-pays similar to what you'd pay at any other primary care physician's office, but in return for this, they hire more ...
I'm a One Medical member. The single biggest draw for me is that you can get appointments the same or next day with little or no waiting time -- where my old primary care doctor was usually booked solid for two weeks or more, by which point I'd either have naturally gotten over whatever I wanted to see him for, or have been driven to an expensive urgent care clinic full of other sick people.
They don't bother with the traditional kabuki dance where a nurse ushers you in and takes your vitals and then you wait around for fifteen minutes before the actual doctor shows, either -- you see a doctor immediately about whatever you came in for, and you're usually in and out in twenty minutes. It's so much better a workflow that I'm astonished it hasn't been more widely adopted.
That said, they don't play particularly nice with my current insurance, so do your homework.
What do LessWrongers think of terror management theory? It has its roots in Freudian psychoanalysis, but it seems to be getting more and more supporting evidence (here's a 2010 literature review).
Imagine a case of existential risk, in which humanity needs to collectively make a gamble.
We prove that at least one of 16 possible choices guarantees survival, but we don't know which one.
Question: can we acquire a quantum random number that is guaranteed to be independent from anything else?
I.e. such that the whole world is provably guaranteed to enter quantum superposition of all possible outcomes, and we provably survive in 1/16th of the worlds?
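The selection step can be sketched in a few lines. Note this is only a classical stand-in: `os.urandom` draws from the OS entropy pool and says nothing about quantum independence or branching, which is precisely the part the question asks about. Assuming access to some quantum-derived bit source, the sampling logic would look the same:

```python
import os

CHOICES = 16  # one of the 16 possible gambles guarantees survival


def pick_choice() -> int:
    """Return a uniformly random choice in 0..15."""
    # 256 possible byte values map evenly onto 16 choices (256 = 16 * 16),
    # so the modulo introduces no bias.
    byte = os.urandom(1)[0]  # classical stand-in for a quantum RNG
    return byte % CHOICES
```

With a genuinely quantum source, the hope expressed above is that each of the 16 outcomes is realised in some branch, so humanity provably survives in 1/16th of the worlds; no classical RNG can deliver that guarantee.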
Mnemotechnics
Tim Ferriss's recent podcast with Carl Shurmann had a quote of a quote that stuck with me: 'the good shit sticks', said by a writer when asked how he remembers good thoughts while constantly blackout drunk. That kind of 'memory optimism', as I call it, seems a great way to mitigate memory doubt disorder, which I'd guess is more common among skeptics, rationalists, and other purveyors of doubt.
Innovation in education
Does your alma mater have anything resembling an academic decisions tribunal or an administrative decisions tribunal?
We should...
I'm not able to see the post Ultimate List of Irrational Nonsense on my Discussion/New/ page even though I have enabled the options to show posts that have extremely negative vote counts (-100) while signed in. I made a request in the past about not displaying those types of posts for people who are not signed in. I'm not sure if that's related to this or not.
Would there be a fanfic in which Cassandra did not tell people the future, but simply 'what not to do', lied and schemed her way to the top, and saved Troy...
So it is mostly used as a universal objection to any strange thing.
Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven't fallen into such sloppiness myself.
Your interpretation of Egan's law is that everything useful should already be used by evolution.
No, I didn't intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution's goals, which of course need not be ours) then that isn't generally invalidated by new discoveries about why the world is the way that's made those things evolutionarily fruitful.
(Of course it could be, given the "right" discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.)
In the case of QI it has some similarities to the anthropic principle, by the way
You could have deduced that I'd noticed that, from the fact that I wrote
what I'm claiming is that those things aren't invalidated by saying words like "anthropic" or "quantum".
but no matter.
You also suggest using Egan's law normatively: don't do strange risky things.
I didn't intend to say or imply that, either, and this one I don't see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan's law something like "If something is a terrible risk, discovering new scientific underpinnings for things doesn't stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences". Whether that applies in the present case is, I take it, one of the points under dispute.
so my best strategy should not be normal
I take it you mean might not be; it could turn out that even in this rather unusual situation "normal" is the best you can do.
even if QI doesn't work
I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblematic; the second bit -- which is what you need to say e.g. "I will survive" -- seems to me like a decision rather than a proposition, and I don't know what it would mean to say that it does or doesn't work.)
cryonics
I'm not really seeing any connection to speak of between cryonics and QI. (Except for this. Suppose you reckon that cryonics has a 5% chance of working on other people, but QI considerations lead you to say that for you it will almost certainly work. No, sorry, I see you give QI a 10% chance of working. So I mean that for you it will work with probability more like 10%. Does that mean that you'd be prepared to pay about twice as much for cryonics as you would be without bringing QI into it? (Given the presumably regrettable costs for whatever influence you might have hoped to have post mortem using the money: children, charities, etc.))
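The willingness-to-pay point can be made explicit with a small calculation. The 5% and 10% figures are from the exchange above; the dollar value placed on revival is a made-up illustrative number, and linear utility in money is my simplifying assumption:

```python
p_without_qi = 0.05  # outside-view chance that cryonics works
p_with_qi = 0.10     # chance after QI considerations, per the comment

value_of_revival = 1_000_000  # hypothetical dollar value placed on revival

# Under linear utility, the break-even price scales with p(success).
max_price_without = p_without_qi * value_of_revival
max_price_with = p_with_qi * value_of_revival

# Doubling the success probability doubles the break-even price.
assert max_price_with == 2 * max_price_without
```

This is exactly the sense in which moving from 5% to 10% should roughly double what you'd rationally pay, holding everything else (including the opportunity cost of the money) fixed.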
I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics.
Turchin may have something else in mind, but personally (since I've also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to de...
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.