If the book were targeting AI researchers I would agree that Harris is a poor choice. On the other hand, if the goal is to reach a popular audience, you could do much worse than someone who is very well known in the mainstream media and has a proven track record of writing best-selling books.
Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?
I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty badly. But I think using LW Discussion as a sort of LW Diaspora link aggregator would be one of the best ways to "save" it.
One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.
I personally wouldn't mind a more Hacker News-style format for LW Discussion, with a heavy focus on links to outside content. Because frankly, we're not generating enough content locally anymore.
I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.
I pretty regularly post links as comments in the Open Thread.
The current norm in LW is to have few but meaty top-level posts. I think if we start to just post links, that would change the character of LW considerably, going in the Reddit/HN direction. I don't know if that would be a good thing.
CGP Grey has read Bostrom's Superintelligence.
Transcript of the relevant section:
Q: What do you consider the biggest threat to humanity?
A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don't want it to be true.
I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.
He also apparently discusses this topic on his podcast, and links to the Amazon page for the book in the description of the video.
Grey's video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn't far from realising that other implications of increasing AI capability were also quite plausible, so it's cool to see that it happened.
http://lesswrong.com/lw/nfk/lesswrong_2016_survey/
Friendly weekly reminder that the survey is up and you should take it.
[LINK]
There seems like some relevant stuff this week:
Katie Cohen, a member of the rationality community in the Bay Area, and her daughter have fallen on some hard times; they are the beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends. I don't know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.
Seems like there are lots of good links and corrections from the previous links post this week, so check it out if you found yoursel
The moon may be why earth has a magnetic field. If it takes a big moon to have a life-protecting magnetic field, this presumably affects the Fermi paradox.
EY arguing that a UFAI threat is worth considering -- as a response to Bryan Caplan's scepticism about it. I think it's a repost from Facebook, though.
ETA: Caplan's response to EY's points. EY answers in the comments.
If you don't have a good primary care doctor or are generally looking to trade money for health, and live in the Bay Area, Phoenix, Boston, NY, Chicago, or Washington DC, I'd recommend considering signing up for One Medical Group, which is a service that provides members with access to a network of competent primary doctors, as well as providing other benefits. They do charge patients a $150 yearly membership fee in additional to charging co-pays similar to what you'd pay at any other primary care physician's office, but in return for this, they hire more ...
I'm a One Medical member. The single biggest draw for me is that you can get appointments the same or next day with little or no waiting time -- where my old primary care doctor was usually booked solid for two weeks or more, by which point I'd either have naturally gotten over whatever I wanted to see him for, or have been driven to an expensive urgent care clinic full of other sick people.
They don't bother with the traditional kabuki dance where a nurse ushers you in and takes your vitals and then you wait around for fifteen minutes before the actual doctor shows, either -- you see a doctor immediately about whatever you came in for, and you're usually in and out in twenty minutes. It's so much better of a workflow that I'm astonished it hasn't been more widely adopted.
That said, they don't play particularly nice with my current insurance, so do your homework.
What do LessWrongers think of terror management theory? It has its roots in Freudian psychoanalysis, but it seems to be getting more and more supporting evidence (here's a 2010 literature review).
Imagine a case of existential risk, in which humanity needs to collectively make a gamble.
We prove that at least one of 16 possible choices guarantees survival, but we don't know which one.
Question: can we acquire a quantum random number that is guaranteed to be independent from anything else?
I.e. such that the whole world is provably guaranteed to enter quantum superposition of all possible outcomes, and we provably survive in 1/16th of the worlds?
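As a toy sketch of the selection step (not an answer to the quantum-independence question), drawing one of the 16 options uniformly might look like the following; note that `secrets.randbelow` is a classical CSPRNG, so a genuine hardware quantum source would have to replace it for the branch-splitting argument to apply:

```python
import secrets

NUM_CHOICES = 16  # the 16 possible gambles, at least one of which guarantees survival

def pick_choice() -> int:
    # Uniform draw over the candidate strategies.
    # NOTE: secrets.randbelow is a classical pseudorandom source; under the
    # scenario in the comment, you would substitute a quantum RNG so that
    # (under many-worlds) each outcome is realized in some branch,
    # with survival in at least 1/16 of them.
    return secrets.randbelow(NUM_CHOICES)

choice = pick_choice()
assert 0 <= choice < NUM_CHOICES
```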
Mnemotechnics
Tim Ferriss's recent podcast with Carl Shurmann had a quote of a quote that stuck with me: "the good shit sticks", said by a writer when questioned on how he remembers good thoughts when he's constantly blackout drunk. That kind of "memory optimism", as I call it, seems a great way to mitigate memory doubt disorder, which I'd guess is more common among skeptics, rationalists and other purveyors of doubt.
Innovation in education
Does your alma mater have anything resembling an academic decisions tribunal or an administrative decisions tribunal?
We should...
I'm not able to see the post Ultimate List of Irrational Nonsense on my Discussion/New/ page even though I have enabled the options to show posts that have extremely negative vote counts (-100) while signed in. I made a request in the past about not displaying those types of posts for people who are not signed in. I'm not sure if that's related to this or not.
Could there be a fanfic about how Cassandra did not tell people the future, but simply "what not to do", lied and schemed her way to the top, and saved Troy...
I meant that not only his post but most of his comments were downvoted, and from my personal experience, if I get a lot of downvotes, I find it difficult to continue a rational discussion of the topic.
Egan's law is very vague in its short formulation. It is not clear what "it all" refers to, what kind of law it is (epistemic, natural, legal), or what "normality" means (physics, experience, our expectations, our social agreements). So it is mostly used as a universal objection to anything strange.
But there are lots of strange things. Nukes were not normal before they were created, and if one had applied Egan's law before their creation, one might have claimed they were impossible. Strong self-improving AI is also something new on Earth, but we don't use Egan's law to disprove its possibility.
Your interpretation of Egan's law is that everything useful should already be used by evolution. In the case of QI it has some similarities to the anthropic principle, by the way, so there is nothing new here from an evolutionary point of view.
You also suggest using Egan's law normatively: don't do strange, risky things.
I could suggest a more precise formulation of Egan's law: it all adds up to normality in local surroundings (and in normal circumstances).
And from this it follows that when the surroundings become large enough, everything is not normal (think of black holes, the sun becoming a red giant, or strange quantum effects at small scales).
In local surroundings, Newtonian, relativistic and quantum mechanics produce the same observations and the same visible world. Also, in normal circumstances I will not put a bomb in my house.
But, as the OP suggested, suppose I know that one of 16 outcomes will soon happen, and 15 of them will kill the Earth and me; then my best strategy should not be a normal one. In this case, going into a submarine with a diverse group of people capable of restoring civilization may be the best strategy. And here I get benefits even if QI doesn't work, so it's a positive-sum game.
I put only a 10 percent probability on QI working as intended, so I will try any other strategy that has a higher payoff (if I have one). That is why I will not put a bomb under my house in normal situations.
But there are situations where I don't risk anything by using QI, but benefit if it works. One of them is cryonics, to which I am signed up.
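The "positive-sum" reasoning above is just an expected-value comparison, which can be sketched as follows (all payoff numbers are hypothetical illustrations, not anything from the comment beyond the 10% credence in QI):

```python
# Illustrative expected-value comparison; payoffs are made-up placeholders.
p_qi = 0.10  # commenter's stated credence that QI works as intended

def expected_value(payoff_if_qi: float, payoff_if_not: float) -> float:
    # Weight each payoff by whether QI turns out to work.
    return p_qi * payoff_if_qi + (1 - p_qi) * payoff_if_not

# A strategy that only pays off if QI works (e.g. a QI-dependent gamble).
qi_only_bet = expected_value(payoff_if_qi=1.0, payoff_if_not=0.0)

# A robust strategy (e.g. the submarine) that pays off either way.
robust = expected_value(payoff_if_qi=1.0, payoff_if_not=0.8)

# The robust strategy dominates whenever its no-QI payoff is positive.
assert robust > qi_only_bet
```

This is why strategies like the submarine (or cryonics) are attractive here: they cost little if QI fails but still capture the upside if it works.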
So it is mostly used as a universal objection to anything strange.
Well, for the avoidance of doubt, I do not endorse any such use and I hope I haven't fallen into such sloppiness myself.
Your interpretation of Egan's law is that everything useful should already be used by evolution.
No, I didn't intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution's goals, which of course need not be ours) then that isn't generally invalidated by new discoveries abou...
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.