If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

 

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


Big news for visibility: Sam Harris is preparing a book co-written with Eliezer (starting at minute 51 of the podcast).

0username28y
Personally I think that Eliezer Yudkowsky should find a different co-author, since Sam Harris isn't related to AI or AGI in any way and I am not sure how much he can contribute.

If the book was targeting AI researchers I would agree that Harris is a poor choice. On the other hand, if the goal is to reach a popular audience, you could do much worse than someone who is very well known in the mainstream media and has a proven track record of writing best selling books.

2drethelin8y
The friendliness problem isn't just about AGI but also about morality, which is something Harris studies.

Would you say there's an implicit norm in LW Discussion of not posting links to private LessWrong diaspora or rationalist-adjacent blogs?

I feel like if I started posting links to every new and/or relevant SSC or Ribbonfarm post as top-level Discussion topics, I would get downvoted pretty badly. But I think using LW Discussion as a sort of LW Diaspora Link Aggregator would be one of the best ways to "save" it.

One of the lessons of the diaspora is that lots of people want to say and discuss sort-of-rationalist-y things or at least discuss mundane or political topics in a sort-of-rationalist-y way. As far as I can tell, in order to actually find what all these rationalist-adjacent people are saying, you would have to read like twenty different blogs.

I personally wouldn't mind a more Hacker News style for LW Discussion, with a heavy focus on links to outside content. Because frankly, we're not generating enough content locally anymore.

I'm essentially just floating this idea for now. If it's positively received, I might take it upon myself to start posting links.

I pretty regularly post links as comments in the Open Thread.

The current norm in LW is to have few but meaty top-level posts. I think if we start to just post links, that would change the character of LW considerably, going in the Reddit/HN direction. I don't know if that would be a good thing.

7Brillyant8y
It seems to me that, despite talk of change, LW is staying essentially the same... and thereby struggling at an accelerating rate to be a place for useful content.

My current modus operandi for LW is to use the LW favorite I have in place to (1) check SSC and the other "Rationality Blogs" on the sidebar, and then (2) peruse Discussion (and sometimes comment) if there isn't a new post at SSC et al. that commands my attention. I wonder if other LWers do the same? I wonder what percentage of LW traffic is "secondary" in a way similar to what I've described?

I like your suggestion because it is a radical change that might work. And it's bad to do nothing if what you are doing seems to be on a trajectory of death.

At some point, during a "how can we make LW better" post on here, I mentioned making LW a de facto "hub" for the rationality blogosphere, since it's increasingly not anything else. I'm now re-saying that and seconding your idea. There could still be original content... but there is nowhere close to enough original content coming in right now to justify LW as a standalone site.
3Evan_Gaensbauer8y
As a data point, this is exactly how I've been using LessWrong for at least the last year. One of the reasons I comment more frequently in open threads is that we can have idle conversations like this one as well :P
1Viliam8y
I first check the "rationality blogs", and then the "Discussion".
7NancyLebovitz8y
I suggest posting links to specific things you think are interesting with some text about what you want to discuss about them.
7Vaniver8y
I think that LW as it is set up now is not good for links; you need to click on the post, and then click again. I think that LW should have reddit-style linkposts, where there's a link and then another url for the comments. (The relevant github issue.)
5Evan_Gaensbauer8y
Rob Bensinger published The Library of Scott Alexandria, his summary/"Sequences" of the historically best posts from Scott (according to Rob, that is). Scott seems to pursue or write on topics with a common thread between them in cycles of a few months. This can be observed in the "top posts" section of his blog.

Sometimes I forget a blog exists for a few months, so I don't read it, but when I do read diaspora/rationality-adjacent blogs, I consider the reading personally valuable. I'd appreciate LessWrong users sharing pieces from their favourite blogs that they believe would also appeal to many users here.

So, making a top-level post once in a while that links to several articles from one author, sharing their best recent posts relevant to LessWrong's interests, seems reasonable. I agree that making a top-level post for any one or all links from a separate blog would be too much, and that this implicit norm should continue to exist.
4Viliam8y
I think today the norm is providing a short summary of the link target. (If I must click on the link to find out why you linked it, that's an almost guaranteed downvote.) But I could imagine having a "subreddit" consisting of links only, where the norm would be different. And of course, links to lower-quality articles can still be downvoted.
2Dagon8y
Unless there's some novel point in the post, or reason to discuss it here rather than there, I'd rather not have a link post. Let people who want to read more outside blogs do so, rather than "aggregating".
3Cariyaga8y
I would be more inclined to read outside rationality-adjacent blogs if there were some familiar-feeling form of aggregation (as opposed to a new website) than if there were none and I had to actively search them out.

CGP Grey has read Bostrom's Superintelligence.

Transcript of the relevant section:

Q: What do you consider the biggest threat to humanity?

A: Last Q&A video I mentioned opinions and how to change them. The hardest changes are the ones where you're invested in the idea, and I've been a techno-optimist 100% all of my life, but [Superintelligence: Paths, Dangers, Strategies] put a real asterisk on that in a way I didn't want. And now Artificial Intelligence is on my near term threat list in a deeply unwelcome way. But it would be self-delusional to ignore a convincing argument because I don't want it to be true.

I like how this response describes motivated cognition, the difficulty of changing your mind, and the Litany of Gendlin.

He also apparently discusses this topic on his podcast, and links to the amazon page for the book in the description of the video.

Grey's video about technological unemployment was pretty big when it came out, and it seemed to me at the time that he wasn't too far off realising that there were other plausible implications of increasing AI capability as well, so it's cool to see that it happened.

[-]Elo8y100

http://lesswrong.com/lw/nfk/lesswrong_2016_survey/

Friendly weekly reminder that the survey is up and you should take it.

7username28y
Another friendly reminder that you can take it even if you do not have a LessWrong account.

[LINK]

Slate Star Codex Open Thread

There seems like some relevant stuff this week:

  • Katie Cohen, a member of the rationality community in the Bay Area, and her daughter have fallen on some hard times and are the beneficiaries of a fundraiser anonymously hosted by one (or more) of their friends. I don't know them, but Rob Bensinger vouched on social media that he is friends with everyone involved, including the anonymous fundraiser.

  • Seems like there are lots of good links and corrections from the previous links post this week, so check it out if you found yoursel

... (read more)

The moon may be why Earth has a magnetic field. If it takes a big moon to have a life-protecting magnetic field, this presumably affects the Fermi paradox.

EY arguing that a UFAI threat is worth considering -- as a response to Bryan Caplan's scepticism about it. I think it's a repost from Facebook, though.

ETA: Caplan's response to EY's points. EY answers in the comments.

4Algon8y
But, isn't this what he's been saying for years? What's the point in posting about it?
4Vaniver8y
Caplan posted that he was skeptical; Yudkowsky responded with "which part of this argument do you disagree with?"
-4V_V8y
EY warns against extrapolating current trends into the future. Seriously?
2knb8y
Why does that surprise you? None of EY's positions seem to be dependent on trend-extrapolation.
0entirelyuseless8y
Trend extrapolation is more reasonable than invoking something that hasn't happened at all yet, and then claiming, "When this happens, it will become an unstoppable trend."
0knb8y
It would be more reasonable to use trend-extrapolation if it was a field where you would necessarily be able to discern a trend. Yudkowsky argues there could be sharp discontinuities. Personally I don't really feel qualified to have a strong opinion, and I would not be able to discern a trend even if it exists.
-3V_V8y
Other than a technological singularity with artificial intelligence explosion to a god-like level?
1knb8y
I don't believe that prediction is based on trend-extrapolation. Nothing like that has ever happened, so there's no trend to draw from.
0Lumifer8y
You are right about the singularity, but the underlying trend extrapolation is that of technical progress and, specifically, of software getting smarter. Nowadays people have become used to rapid technical progress and often consider it, um, inevitable. A look at history should disabuse one of that notion, though.
0knb8y
Yudkowsky explicitly doesn't believe in rapid technical progress. He's talked about the fact that he believes in the Great Stagnation (slowdown in science/tech/economic progress) which is possibly a good thing since it may retard the creation of AGI, giving people a better shot to work on friendliness first.
-2Lumifer8y
Links? What is "rapid"? Did he look at his phone recently? The Great Stagnation is a phenomenon on the time scale of decades. How about the time scale of centuries?
0knb8y
Here is one: https://www.facebook.com/yudkowsky/posts/10152586485749228 .
-6Lumifer8y

If you don't have a good primary care doctor or are generally looking to trade money for health, and live in the Bay Area, Phoenix, Boston, NY, Chicago, or Washington DC, I'd recommend considering signing up for One Medical Group, which is a service that provides members with access to a network of competent primary care doctors, as well as providing other benefits. They do charge patients a $150 yearly membership fee in addition to charging co-pays similar to what you'd pay at any other primary care physician's office, but in return for this, they hire more ... (read more)

I'm a One Medical member. The single biggest draw for me is that you can get appointments the same or next day with little or no waiting time -- where my old primary care doctor was usually booked solid for two weeks or more, by which point I'd either have naturally gotten over whatever I wanted to see him for, or have been driven to an expensive urgent care clinic full of other sick people.

They don't bother with the traditional kabuki dance where a nurse ushers you in and takes your vitals and then you wait around for fifteen minutes before the actual doctor shows, either -- you see a doctor immediately about whatever you came in for, and you're usually in and out in twenty minutes. It's so much better of a workflow that I'm astonished it hasn't been more widely adopted.

That said, they don't play particularly nice with my current insurance, so do your homework.

4[anonymous]8y
Be careful. They charge a lot more for services, due to how they bill.

What do LessWrongers think of terror management theory? It has its roots in Freudian psychoanalysis, but it seems to be getting more and more evidence supporting it (here's a 2010 literature review).

Imagine a case of existential risk, in which humanity needs to collectively make a gamble.

We prove that at least one of 16 possible choices guarantees survival, but we don't know which one.

Question: can we acquire a quantum random number that is guaranteed to be independent from anything else?

I.e. such that the whole world is provably guaranteed to enter quantum superposition of all possible outcomes, and we provably survive in 1/16th of the worlds?
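A minimal sketch of the arithmetic behind the 1/16 figure, assuming (as a simplification of the setup above) that exactly one of the 16 options is safe and the choice is made uniformly at random; the function name and numbers are illustrative only, and this says nothing about the harder question of whether the randomness source is genuinely quantum and independent:

```python
import random

def survival_fraction(n_options: int = 16, trials: int = 100_000) -> float:
    """Estimate how often a uniformly random choice hits the (unknown) safe option."""
    safe = random.randrange(n_options)  # the one safe choice, fixed but unknown to the chooser
    survived = sum(random.randrange(n_options) == safe for _ in range(trials))
    return survived / trials

print(survival_fraction())  # ~0.0625, i.e. roughly 1/16 of runs ("branches") survive
```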

1Dagon8y
Are you assuming a non-MWI universe? Doesn't every source of randomness just imply different branches in proportion to their amplitude?
1SquirrelInHell8y
I'm assuming MWI, but noticing that NOT every source of "randomness" implies different branches. Some things may, or may not, be interconnected in ways we don't see and can't detect. (E.g. it's been shown that humans can learn to flip coins predictably, etc.) My point is to split the world into different branches provably, with as high confidence as we require, as opposed to having a pretty good chance (but we don't know exactly how good) of creating different branches.
0turchin8y
You could get a really random number using cosmic rays from remote quasars, but I think that true quantum randomness is not necessary in this case. Big world immortality could work anyway - there are many other Earths in the multiverse.

Superposition may also not be necessary for QI to work. It might be useful if you want to create some kind of interaction between different outcomes, but that seems impossible for such a large system.

The main thing I would worry about, if I tried to use QI to survive x-risks, is that the death of the whole civilization should be momentary. If it is not momentary, there will be a period of time when observers know that a given risk has begun but have not died yet, and so they will be unable to "jump" to another outcome. Only false vacuum decay provides a momentary death for everybody (though not exactly simultaneous, given the Earth's size of 12,000 km and the limited speed of light).

Another option for using QI to survive x-risks is to note that the me-observer must survive any x-risk, if QI is true. So any x-risk will have at least one survivor - one wounded man on an empty planet. We could use this effect to ensure that a group of people survives, if we connect the me-observer with that group by a necessary condition of dying together. For example, we are all locked in a submarine full of explosives. In most of the worlds there are two outcomes: all the crew of the submarine dies, or everybody survives. If I am in such a submarine, and QI works, we - all the crew - probably survive any x-risk.

In short, the idea is to convert slow x-risks into a momentary catastrophe for a group of people. In the same way we may use QI personally to fight slow death from aging, if we sign up for cryonics.
5SquirrelInHell8y
Errr. Let me put it like this: it all adds up to normality. No matter how many times you say "quantum", it doesn't make it a good idea to fill submarines with bombs. If your reasoning leads you to believe otherwise, you'd better check your reasoning again.

Hint: let's say I play the lottery. Do I make the situation better for myself if I construct a device that destroys Earth and kills all possible observers in case I don't win the lottery? Seeing my point here?
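A toy illustration of the hint above, assuming a hypothetical one-in-a-million win probability and treating branch measure like ordinary probability: the device does not change the measure of worlds in which I win, it only converts "lose, still alive" worlds into "everyone dead" worlds.

```python
P_WIN = 1e-6  # hypothetical lottery win probability (illustrative number only)

# Measure over outcomes without and with the "destroy Earth if I lose" device.
no_device   = {"win": P_WIN, "lose, still alive": 1 - P_WIN, "everyone dead": 0.0}
with_device = {"win": P_WIN, "lose, still alive": 0.0,       "everyone dead": 1 - P_WIN}

for label, outcomes in (("no device", no_device), ("with device", with_device)):
    survive = outcomes["win"] + outcomes["lose, still alive"]
    print(label, outcomes)
    # Conditional on my survival I always "win" with the device,
    # but the unconditional measure of winning is unchanged.
    print("  P(win | I survive) =", outcomes["win"] / survive)
```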
1turchin8y
I don't say "quantum" - so-called QI works in the big world, which may be non-quantum, just large enough to include many copies of the Earth.

Egan's law - it is not Egan's, it is not a law, and it is not true, and someone who tried to question it was extensively downvoted and left LW.

Most submarines are already filled with bombs.

Anyway, I think that big world immortality has only a small chance of actually working (in a predictable way), so it would not be wise to use it as any first line of defence. But it may be used as a third line of defence. For example, if I build a spaceship to survive an impending catastrophe on Earth, I also use the fact that its most likely catastrophic modes will kill the whole crew in one moment. The same goes for cryonics. It is good in itself, but if big world immortality works, it will be even better.
0gjm8y
Who? The last LW kerfuffle I can remember that involved Egan's law (and some other Egan things) was around Eitan_Zohar who did indeed get a lot of downvotes and does seem to have left LW. But so far as I can tell he didn't get those downvotes for trying to question "Egan's law".
1turchin8y
The post I meant is here http://lesswrong.com/lw/dlb/adding_up_to_normality/
0gjm8y
Interesting. The general consensus in that thread seems to have been that the user in question was missing the point somehow, and -3 isn't really such a terribly low score for something generally thought to have been missing the point. (I guess it was actually +6 -9.)

I don't think the poor reception of "Adding up to normality" is why the user in question left LW. E.g., this post was made by the same user about 6 months later, so clearly s/he wasn't immediately driven off by the downvotes on "Adding up to normality".

Anyway. I think I agree with the general consensus in that thread (though I didn't downvote the post and still wouldn't) that the author missed the point a bit. I think Egan's law is a variant on a witticism attributed to Wittgenstein. Supposedly, he and a colleague had a conversation like this.

W: Why did anyone think the sun went round the earth?
C: Because it looks as if it does.
W: What would it have looked like, if it had looked as if the earth went round the sun?

The answer, of course, being that it would have looked just the way it actually does, because the earth does go round the sun and things look the way they do.

Similarly (and I think this is Egan's point), if you have (or the whole species has) developed some attitude to life, or some expectation about what will happen in ordinary circumstances, based on how the world looks, and some new scientific theory predicts that the world will look that way, then either you shouldn't change that attitude or it was actually inappropriate all along.

Now, you can always take the second branch and say things like this: "This theory shows that we should all shoot ourselves, so plainly if we'd been clever enough we'd already have deduced from everyday observation that we should all shoot ourselves. But we weren't, and it took the discovery of this theory to show us that. But now, we should all shoot ourselves." So far as I can tell, appealing to Egan's law doesn't do anything to refute that. I
0turchin8y
I meant that not only his post but most of his comments were downvoted, and from my personal experience, if I get a lot of downvotes I find it difficult to continue a rational discussion of the topic.

Egan's law is very vague in its short formulation. It is not clear what "all" is, what kind of law it is - epistemic, natural, legal - or what "normality" is - physics, experience, our expectations, our social agreements. So it is mostly used as a universal objection to any strange things. But there are a lot of strange things. Nukes were not normal before they were created, and if one had applied Egan's law before their creation, one might have claimed that they are not possible. Strongly self-improving AI is also something new on Earth, but we don't use Egan's law to disprove its possibility.

Your interpretation of Egan's law is that everything useful should already be used by evolution. In the case of QI it has some similarities to the anthropic principle, by the way, so there is nothing new here from an evolutionary point of view. You also suggest using Egan's law normatively: don't do strange risky things.

I could suggest a more correct formulation of Egan's law: it all adds up to normality in local surroundings (and in normal circumstances). And from this it follows that when the surroundings become large enough, everything is not normal (think of black holes, the Sun becoming a red giant, or strange quantum effects at small scales). In local surroundings, Newtonian, relativistic and quantum mechanics produce the same observations and the same visible world. Also, in normal circumstances I will not put a bomb in my house. But, as the OP suggested, I know that soon 1 of 16 outcomes will happen, where 15 will kill the Earth and me, so my best strategy should not be normal. In this case going into a submarine with a diverse group of people capable of restoring civilization may be the best strategy. And here I get benefits even if QI doesn't work, so it is a positive-sum game. I put only 10 per cent probability on QI working as intend
0gjm8y
Well, for the avoidance of doubt, I do not endorse any such use, and I hope I haven't fallen into such sloppiness myself.

No, I didn't intend to say or imply that at all. I do, however, say that if evolution has found some particular mode of thinking or feeling or acting useful (for evolution's goals, which of course need not be ours), then that isn't generally invalidated by new discoveries about why the world is the way that's made those things evolutionarily fruitful. (Of course it could be, given the "right" discoveries. Suppose it turns out that something about humans having sex accelerates some currently unknown process that will in a few hundred years make the earth explode. Then the urge to have sex that evolution has implanted in most people would be evolutionarily suboptimal in the long run, and we might do better to use artificial insemination until we figure out how to stop the earth-exploding process.) You could have deduced that I'd noticed that from the fact that I wrote ... but no matter.

I didn't intend to say or imply that, either, and this one I don't see how you got out of what I wrote. I apologize if I was very unclear. But I might endorse as a version of Egan's law something like "If something is a terrible risk, discovering new scientific underpinnings for things doesn't stop it being a terrible risk unless the new discoveries actually change either the probabilities or the consequences". Whether that applies in the present case is, I take it, one of the points under dispute.

I take it you mean might not be; it could turn out that even in this rather unusual situation "normal" is the best you can do.

I have never been able to understand what different predictions about the world anyone expects if "QI works" versus if "QI doesn't work", beyond the predictions already made by physics. (QI seems to me to mean: standard physics, plus a decision to condition probabilities on future rather than present epistemic state. The first bit is unproblemati
0qmotus8y
Turchin may have something else in mind, but personally (since I've also used this expression several times on LW) I mean something like this: usually people think that when they die, their experience will be irreversibly lost (unless extra measures like cryonics are taken, or they are religious), meaning that the experiences they have just prior to death will be their final ones (and death will inevitably come). If "QI works", this will not be true: there will never be final experiences, but instead there will be an eternal (or perhaps almost eternal) chain of experiences and thus no final death, from a first-person point of view. Of course, it could be that if you've accepted MWI and the basic idea of multiple future selves implied by it then this is not very radical, but it sounds like a pretty radical departure from our usual way of thinking to me.
1gjm8y
I think your last paragraph is the key point here. Forget about QI; MWI says some small fraction of your future measure will be alive very far into the future (for ever? depends on difficult cosmological questions); even objective-collapse theories say that this holds with nonzero but very small probability (which I suggest you should feel exactly the same way about); every theory, quantum or otherwise, says that at no point will you experience being dead-and-unable-to-experience things; all QI seems to me to add to this is a certain attitude.
0qmotus8y
Another interpretation is that it is a name for an implication of MWI that even many people who fully accept MWI seem to somehow miss (or deny, for some reason; just have a look at discussions in relevant Reddit subs, for example). Objective-collapse theories in a spatially or temporally infinite universe, or with eternal inflation etc., actually say that it holds with nonzero but very small probability, but they essentially give it an infinite number of chances to happen, meaning that this scenario is for all practical purposes identical to MWI. But I think what you are saying can be taken to mean something like "if the world was like the normal intuitions of most people say it is like", in which case I still think there's a world of difference between very small probability and very small measure.

I'm not entirely convinced by the usual EY/LW argument that utilitarianism can be salvaged in an MWI setting by caring about measure, but I can understand it and find it reasonable. But when this is translated to a first-person view, I find it difficult. The reason I believe that the Sun will rise tomorrow morning is not that my past observations indicate that it will happen in a majority of "branches" ("branches" or "worlds" of course not being a real thing, but a convenient shorthand), but that it seems like the most likely thing for me to experience, given past experiences. But if I'm in a submarine with turchin and x-risk is about to be realized, I don't get how I could "expect" that I will most likely blow up or be turned into a pile of paperclips like everyone else, while I will certainly (and only) experience it not happening.

If QI is an attitude, and a bad one too, I don't understand how to adopt any other attitude. Actually, I think there are at least a couple of variations of this attitude: the first one that people take upon first hearing of the idea and giving it some credibility is basically "so I'm immortal, yay; now I could play quantum russian rou
0turchin8y
QI predicts not the different variants of the world, but different variants of my future experiences. It says that I will not experience "non-existence", but will experience my most probable way of surviving. If I have a 1 in 1000 chance of surviving some situation, QI shifts the probability that I will experience survival up to 1. But it could fail in unpredictable ways: if we are in a simulation and my plane crashes, my next experience will probably be a screen with the title "game over", not the experience of being alive on the ground.

I agree with what you said in brackets about cryonics. I also think that investing in cryonics will help to promote it and all other good things, so it doesn't conflict with my regrettable costs. I think that one rational course of action is to make a will in which one gives all one's money to a cryonics company. (It also depends on the existence and well-being of children, and on other useful charities which could prevent x-risks, so it may need more complex consideration.)
1AlexLundborg8y
Whether or not momentary death is necessary for multiverse immortality depends on what view of personal identity is correct. According to empty individualism, it should not matter that you know you will die; you will still "survive" while not remembering having died, as if that memory had been erased.
1qmotus8y
I think the point is that if extinction is not immediate, then the whole civilisation can't exploit big world immortality to survive; every single member of that civilisation would still survive in their own piece of reality, but alone.
0akvadrako8y
It doesn't really matter if it's immediate, according to empty individualism. Instead, the chance of survival in the branches where you try to die must be much lower than the chance of choosing that world. You can never make a perfect doomsday device, because all kinds of things could happen to make it fail at the moment of use or during preparation, even if it operates immediately.
[-][anonymous]8y20

Mnemotechnics

Tim Ferriss's recent podcast with Carl Shurmann had a quote of a quote that stuck with me: 'the good shit sticks', said by a writer when questioned on how he remembers good thoughts when he's constantly blackout drunk. That kind of 'memory optimism', as I call it, seems a great way to mitigate memory doubt disorder, which I'd guess is more common among skeptics, rationalists and other purveyors of doubt.

Innovation in education

Does your alma mater have anything resembling an academic decisions tribunal or an administrative decisions tribunal?

We should... (read more)

4SquirrelInHell8y
How much people want to "take care of cute children but only for a few hours" might be a (very?) bad predictor of how good they are at taking care of children. ~~~ I think gmail just doesn't cut it if you want to store your information reliably while you are vitrified for many years. Also, why in the world would you protect your old e-mails, of all things?
4Elo8y
Childcare is a lucrative business already; there is probably nothing (short of administrative work) stopping an existing childcare business from taking on volunteers (talking about the easiest way to make this happen by slightly modifying the existing world). But I don't know of many (any) people willing to do that kind of thing. Volunteers are a tricky business too. As is the duty of care towards children, not just in terms of the assumed privacy and protection, but also in the direction of a positive, stimulating environment (which becomes more difficult to prove when relying on volunteers).

I'm not able to see the post Ultimate List of Irrational Nonsense on my Discussion/New/ page even though I have enabled the options to show posts that have extremely negative vote counts (-100) while signed in. I made a request in the past about not displaying those types of posts for people who are not signed in. I'm not sure if that's related to this or not.

2Douglas_Knight8y
It's not the karma: I can see Gleb's post on Brussels, with a lower score and a lower %, but not that post. (When not logged in, I can't see the Brussels post.) Probably it was "deleted." That is a state where the permalink continues to work, but the post does not appear on various indices. I think that if the author or a moderator wants to break the permalink, they have to click "ban." Deleting an account does not delete all of its posts, at least it didn't in the past.
2Elo8y
OP deleted the account used to post the article
[-][anonymous]8y10

Is there a fanfic about how Cassandra did not tell people the future, but simply 'what not to do', lied and schemed her way to the top, and saved Troy...

5Artaxerxes8y
This exists, at least.
3polymathwannabe8y
There's this powerful one-page fanfic.
0[anonymous]8y
Thank you so much! A friend of mine has a thing about Cassandra, she'll love it.

g

[This comment is no longer endorsed by its author]
2[anonymous]8y
* People with narcissistic personality disorder should be offered avenues and support for treatment, not manipulated reciprocally.
* If they gaslight and you are susceptible to it, stop fighting them and retreat. They will win.
* Gang affiliation and violent behaviour suggest you should keep safe and avoid them. That's why we have police, in case they trip up.
* Choose your friends.
0[anonymous]8y
uh, except this guy is unethical and i'm unsure what avenue to pursue to minimize the risk of future injury to other people. since i know him the best, i'm relatively sure he does not need support nor will accept it. he is not being "manipulated" he is just straightforwardly unethical. there's no way this retard would accept treatment >_>.

On the practicalities of catastrophe readiness.