If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, September 8-14, 2014

I can't count the number of times I didn't do something that would have been beneficial because my social circle thought it would be weird or stupid. Just shows how important it is to choose the people around you carefully.

19[anonymous]

Someone -- maybe on LW? -- said that their strategy was to choose their friends carefully enough that they didn't have to resist peer pressure.

That has other dangers -- e.g. living in an echo chamber or facing the peer pressure to not change.

6[anonymous]
Yes, you have to be very careful. (And live in a place where the number of such people is large enough that it's even viable as a strategy, and ignore/isolate yourself from the wider culture or still maintain resistance to it, and so on, which makes it inaccessible to a large number of people, but it seems close to ideal in the rare circumstances where it's possible.)
2Lumifer
I don't know if "careful" is the right word -- it's more an issue of finding a good balance and the optimal point isn't necessarily obvious. On the one hand, you should like your friends and not have them annoy you or push you in the directions you don't want to go. On the other hand, being surrounded by the best clones of yourself that you could find doesn't sound too appealing. It's a bit like an ecosystem -- you want a healthy amount of diversity and not monoculture, but at the same time want to avoid what will poison you or maybe just eat you X-)
7michaelkeenan
Paul Graham wrote about that in A Student's Guide To Startups:

I've always been a huge non-conformist, caring relatively little what others think. I now believe that I went too far and my advice to my younger self would be to try and fit in more.

9Lumifer
You have a couple of graduate degrees and are a professor at a liberal arts college in the Northeast... People I would describe as "huge non-conformists" would probably be tailed by campus security if they ever showed up in the area X-D
3James_Miller
See this.
3Lumifer
Oh, I know you're a conservative in academia and had tenure troubles because of that. But that makes you a conservative in a very liberal environment, not a non-conformist. Of course you can call yourself anything you want, and the label is sufficiently fuzzy that it could be defined in many ways. Still, from my perspective you're now a part of the establishment -- Smith did grant you tenure, even if kicking and screaming. I am not passing judgement on you; it just surprised me that what you mean by a "huge non-conformist" is clearly very different from what I mean by a "huge non-conformist".
3James_Miller
It's also things like: I don't like sports, music, fashion, or small talk, and in high school and college I made zero effort to pay attention to them, which cost me socially. I realize now I should have at least pretended to like them to have had a better social life.
9Lumifer
That makes you a fully-conforming geek, as you undoubtedly know. Welcome to the club :-)
4cameroncowan
I figured out when I was about 15 years old that I had to keep up on things I didn't care about to earn points socially, and it helped me a great deal and powers what I do as a writer and talk show presenter.

Such as?

In a great example of serendipity, talking to myself is a case. I was observed doing that and people thought it was weird, so I stopped.

When I was younger, some adults told me that "you only understand something when you can teach it to someone", which people in my circle disputed, as they were the kind of people who like to think of themselves as smart.

I skipped a couple of parties where I could have socialised, because people there were drinking copious amounts of alcohol and in my circle there was a stigma against getting drunk and stupid. While the not drinking certainly was a good idea, the not socialising was not.

As a child I was extremely interested in everything scientific. Then in school none of the cooler kids were and neither were the friends I actually had, so I started playing video games. Thankfully I later found people interested in scholarship so I started doing that again.

(I am starting to realise most of these are from when I was in school. Might be because I matured, or because I have more perspective with distance.)

Not that peer pressure can't have good effects; it is a tool like any other.

7Ben Pace
My go-to catchphrase when I notice this sort of situation is (spoken sarcastically): "Why be happy when you can be normal?"
3Stefan_Schubert
Though that certainly has happened to me as well, it strikes me that the opposite has happened more often: I've done things which turned out to be beneficial, and avoided doing things that would have been bad, because of the opinions of my social circles. Lots of the time, things that are seen as weird and stupid by the majority actually are weird and stupid.

If people were a great deal better at coordination, would they refuse to use news sources which are primarily supported by advertising?

That sounds like a good way to end up with more paywalls.

There would definitely be more paywalls. The question is whether it would be a net loss.

Would the quality of information be better? Advertising gets paid for one way or another-- would no-advertising news (possibly even no-advertising media in general) be a net financial loss for consumers?

6Lumifer
Look at the history of cable TV. When it appeared it was also promoted as "no advertising, better shows".
6[anonymous]
I would argue for the existence of a treadmill effect on these things.
5Eugene
Although this may not have been true at the beginning, it arguably did grow to meet that standard. Cable TV is still fairly young in the grand scheme of things, though, so I would say there isn't enough information yet to conclude whether a TV paywall improved content overall. Also, it's important to remember that TV relies on the data-weak and fairly inaccurate Nielsen ratings in order to understand its demographics and what they like (and it's even weaker and more inaccurate for pay cable). This leads to generally conservative decisions regarding programming. The internet, on the other hand, is filled with as much data as you wish to pull out regarding the people who use your site, on both a broad and granular level. This allows freedom to make more extreme changes of direction, because there's a feeling that the risk is lower. So the two groups really aren't on the same playing field, and their motivations for improving/shifting content potentially come from different directions.
1cameroncowan
Yes, I think so, because journalism is time-consuming and expensive. You also have to have the right people on the right stories so that you get the best expression of what happened. Then you can back that up with commentary and opinion, which in this day and age tends to end up all at once. I think the better option is to believe in people and their unique perspective. If you follow a writer or a journalist and you like their work, that is a better system than believing in an institution, which is more faceless. If I am covering a story on an oil spill in the Gulf on The Cameron Cowan Show and then I take an ad from BP, my viewers are going to wonder if I am going to continue to cover the spill with such tenacity, and they will flee from me if I don't. People can vote with their feet and dollars for individuals far more than companies. Ergo, I would not take that ad, to keep my loyal watchers, and would seek ads somewhere else. This logic is not used at any news outlet right now because bills have to be paid and there is far less backlash.
7ChristianKl
I don't think "refusing" news sources is helpful. Even a bad newspaper gives some perspective on some topics that you won't find elsewhere. The whole idea of "news sources" is problematic. It assumes a certain 20th century model of learning about the world. If you want to get really informed about a topic it often necessary to read primary sources. I don't get scientific news from mainstream media. I either read the papers, discussion on LW or blogs by scientists. When I see a claim that I find interesting and where I don't know it's true I head over to skeptic.stackexchange and open a question. The website is no newspaper but it also serves the purpose of staying in contact with world events. Advertising is just one biases among many. If I watch a news video at German public television that's payed for by taxpayer money, the a German public television network pays a production company for that video. Some of those production companies also produce PR for paying customers. A lot of articles in newspapers get these days written by freelance journalists who aren't payed very well and can be hired for other tasks. So even if the newspaper wouldn't make it's money by serving corporate interest the individual journalist might still serve corporate interests. Wikipedia illustrates that we are actually quite good at coordination. Much better than anyone would have expected 20 years ago. It just doesn't like like we would have expected. Cultural development isn't just more of the same.

I don't think "refusing" news sources is helpful. Even a bad newspaper gives some perspective on some topics that you won't find elsewhere.

But reading it takes time that one could spend on something else.

3ChristianKl
If you make a utility calculation, then the prime concern is whether it makes sense to learn about a topic in the first place. If you do decide to inform yourself about a topic, then you have to choose among the sources that are available. If you really care about an issue, it often makes sense to read multiple perspectives. It's quite easy to read government-funded Al Jazeera, a commercial newspaper run by a publicly traded company that makes money via advertising, and network-driven community websites like Stackexchange or Wikipedia. In a pluralistic society all those sources of information can exist beside each other. If you don't like corporatist news sources, there are a lot of alternatives these days.
2lmm
If people were a great deal better at coordination I suspect advertising wouldn't exist at all.

I was recently heartened to hear a very good discussion of effective altruism on BBC Radio 4's statistics programme, More or Less, in response to the "Ice Bucket Challenge". They speak to Neil Bowerman of the Centre for Effective Altruism and Elie Hassenfeld from GiveWell.

They even briefly raise the possibility that large drives of charitable donations to ineffective causes could be net negative as it's possible that people have a roughly fixed charity budget, which such drives would deplete. They admit there's not much hard evidence for such a claim, but to even hear such an unsentimental, rational view raised in the mainstream media is very bracing.

Available here: http://www.bbc.co.uk/podcasts/series/moreorless (click the link to "WS To Ice Or Not To Ice"), or directly here: http://downloads.bbc.co.uk/podcasts/radio4/moreorless/moreorless_20140908-1200a.mp3

By Brad Hicks

A thought that I've been carrying around in my head for a while, that I have no idea what to do with:

It seems to me that almost everybody, in relationships, wants the "I Win" button. For those of you who didn't play City of Heroes, it was a developer-team joke that they shared with the public: push one button, and get your way. It became player and developer jargon for times when people wanted to argue that their preferred way of winning wasn't unfair to others. So what's the "I Win" button for relationships?

People who are really good at non-verbal communication want all relationship boundaries, rules, and expressions of wants and needs to be based on non-verbal communication; they want their partner to "just know." People who are really good at written communication want those things to be handled via written rules and relationship contracts and user manuals. People who are really good at verbal, conversational communication want those things handled by talking them out. And all three of those groups think that the secret to happy relationships is for other people to learn to communicate their way.

I have no idea what to do with this in

... (read more)

I think I'd be more inclined to frame this sort of thing as typical mind fallacy. Modeling it in terms of an I Win button seems to violate Hanlon's Razor: we don't need an adversarial model when plain old ignorance will suffice, and I don't think preferred interaction style is a matter of conscious choice for most people.

9polymathwannabe
Alternatively, the situation can be described in terms of tell vs. guess culture.
8NancyLebovitz
I'd split the difference-- I believe the typical mind fallacy can shade into believing that other sorts of minds aren't worth respecting.
1cameroncowan
I think people want that because they don't know how to communicate effectively in any other way. You also have to consider why people choose to communicate in the way that they do. People who prefer written communication (as I do) may be passive-aggressive or afraid of verbal communication. Those who want their partner to "just know" will, I think, have the least success, because of their inability to use a mutually agreeable method of communication to express their needs and desires. I am somewhat aware of this because I do expect people to have certain ideas and execute them, and I have learned that I have to speak up about what I think should be done, because they aren't "just going to figure it out" -- most people don't think like I do. As for the "I win" button, I don't think that's what people want. People want their needs met in a pleasurable and dynamic way. Is that "winning"?
3NancyLebovitz
It seems reasonable to me that people are afraid of being forced into whatever modes of communication they think they're bad at-- it's not a specific flaw of people who prefer verbal/written communication. I wonder if the people who expect their partners to "just know" are confusing successful non-verbal communication with telepathy.
1Viliam_Bur
I would guess typical mind fallacy, or illusion of transparency. Either they believe their signals are obvious, or they believe that any (sane) person would make the same guess in that specific situation. Or a combination thereof, i.e. that any (sane) person would only see two or three possible choices in that specific situation, and the signals are sufficient to differentiate among them.

Another interesting question would be whether these people are able to see the situation from both sides. Like, they can be angry at their partner for not reading their mind successfully, but do they believe they read the partner's mind successfully? Maybe they don't even realize that there is the other side, too. Or maybe they blame the partner for communication failures in both directions. ("They should know what I think about." "They shouldn't think such crazy things.")

On the other hand, maybe the partner really is predictable. Or the partner communicates their thoughts explicitly, so one way the communication is clear, and the person simply does not realize that the clearness of communication is caused by the explicitness. (Or maybe they don't believe in symmetry. Maybe they believe that being explicit is e.g. gender-specific, so it's okay that the partner is explicit, and it's okay that they aren't. Or perhaps that you should be explicit about some things, but not about other things.)
1cameroncowan
They may be. I think successful non-verbal communication takes time and learning. There can be many difficulties along the way to success.
0blacktrance
This model assumes that relationships are adversarial, which need not be the case, and isn't the case in a good relationship.
7VAuroch
No, the model applies even if the relationship isn't adversarial. As long as you have different priorities and are not perfect at communicating, it applies.

Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light gobbling up everything in its path? Assuming that the speed of light really is the maximum, our interstellar radio messages would outpace any paperclip maximizer. Obviously any such message would complicate future alien contact events as the aliens would worry that our ambassador was just an agent for a paperclipper. The act of warning others would be a good way to self-signal the dangers of AI.

I'd have thought any extraterrestrial civilization capable of doing something useful with the information wouldn't need the explicit warning.

3James_Miller
This depends on the solution to the Fermi paradox. An advanced civilization might have decided to not build defenses against a paperclip maximizer because it figured no other civilization would be stupid/evil enough to attempt AI without a mathematical proof that its AI would be friendly. A civilization near our level of development might use the information to accelerate its AI program. If a paperclip maximizer beats everything else an advanced civilization might respond to the warning by moving away from us as fast as possible taking advantage of the expansion of the universe to hopefully get in a different Hubble volume from us.
5Lumifer
One response to such a warning would be to build a super-intelligent AI that expands at the speed of light gobbling up everything in its path first. And when the two (or more) collide, it would make a nice SF story :-)
4solipsist
This wouldn't be a horrible outcome, because the two civilizations' light-cones would never fully intersect. Neither civilization would fully destroy the other.

Are you crazy! Think of all the potential paperclips that wouldn't come into being!!

1Lumifer
The light cones might not fully intersect, but humans do not expand at close to the speed of light. It's enough to be able to destroy the populated planets.
4solipsist
I love this idea! A few thoughts:

1. What could the alien civilizations do? Suppose SETI decoded "Hi from the Andromeda Galaxy! BTW, nanobots might consume your planet in 23 years, so consider fleeing for your lives." Is there anything humans could do?

2. The costs might be high. Suppose our message saves an alien civilization one thousand light-years away, but delays a positive singularity by three days. By the time our colonizers reach the alien planet, the opportunity cost would be a three-light-day deep shell of a thousand-light-year sphere. Most of the volume of a sphere is close to the surface, so this cost is enormous. Giving the aliens an escape ark when we colonize their planet would be quintillions of times less expensive. Of course, a paperclipper would do no such thing.

3. It may be presumptuous to warn about AI. Perhaps the correct message to say is something like "If you think of a clever experiment to measure dark energy density, don't do it."
5James_Miller
1. It depends on your stage of development. You might build a defense, flee at close to the speed of light and take advantage of the universe's expansion to get into a separate Hubble volume from mankind, accelerate your AI program, or prepare for the possibility of annihilation.

2. Good point, and the resources we put into signaling could instead be used to research friendly AI.

3. The warning should be honest and give our best estimates.
4Luke_A_Somers
1. Quite.

2. The outer three days of a 1000 Ly sphere account for 0.0025% of its volume.
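(That figure is easy to verify -- a minimal sketch, using the 3-light-day shell and 1000-light-year radius from the comments above:)

```python
# Sanity check of the 0.0025% figure: the volume fraction of a shell
# 3 light-days thick at the surface of a sphere 1000 light-years in radius.
R = 1000 * 365.25          # sphere radius, in light-days
d = 3                      # shell thickness, in light-days
fraction = 1 - ((R - d) / R) ** 3
print(f"{fraction:.4%}")   # 0.0025%
```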

Can someone point me to estimates given by Luke Muehlhauser and others as to MIRI's chances for success in its quest to ensure FAI? I recall some values (of course these were subjective probability estimates with large error bars) in some lesswrong.com post.

4Peter Wildeford
You can see some discussion on "How does MIRI know it has a medium probability of success?"

Would there be any interest in an iPhone app for LessWrong? I was thinking it might be a fun side project for learning Swift, and I didn't see any search results on the App Store.

6Ixiel
I bet some folks would love you forever if you gave them reply notifications.
4ChristianKl
I think a predictionbook app, or an app version of the credence game, would be more useful than an app for LessWrong.
7A1987dM
There already is one for Android.
7ChristianKl
I wasn't aware of the Android app. On the other hand, its existence doesn't mean that a new attempt at the same problem is worthless. I think it's very valuable to have multiple people try to solve the problem. To me it seems like a much more interesting project than having another go at writing an app to parse an online forum. There are few people thinking in depth about designing apps to teach people to be calibrated.

The fact that you have a smartphone also allows additional questions. You can ask calibration questions such as:

* Did John or Joe send you more emails in the last year?
* Is the air pressure more or less than X?
* Is the temperature of the smartphone battery more or less than X?
* Does this arrow point more North or more South?
* Is the distance between your work location and where you are at the moment more or less than X?
* Is the distance between your home location and where you are at the moment more or less than X?
* Is the distance between where John lives and where you are at the moment more or less than X?
* What was the average speed at which you were traveling in the last minute (if you sit in public transportation)?
* Is the average pitch of the background noise over the last minute more or less than X?
* Is the longest email that you received in the past week more or less than X characters long?
* What's the chance that you will get a call today?
* Is the average of Beeminder value X that you tracked over the last week (month) more or less than X?

All those questions are more interesting than whether postmaster general X served before or after postmaster general Y, or the boiling temperatures of various metals. Building an app around the issue might be more complicated than simply providing a new interface for LessWrong, but the payoff for getting credence training right is also so much higher. Even if you simply focus on building a Beeminder-history credence game that might not be too complicated but real
7pshc
Just wanted to say: thanks for the ideas!
3Viliam_Bur
A personal prediction book? Simple version: You provide your own predictions, and state your credence. Later you say whether you were right or wrong. The app displays statistics of your calibration.

This is simple in essence, but there will be many design decisions, and many little details that can make the UI better. For example, I guess you should choose the credence from, say, 50%, 60%, 70%, 80%, 90%, 95%, and 99%, instead of typing your own value, because this way it will be easier to make statistics. Also, choosing one option is easier than typing two digits, although most of the work will be typing the questions. It should be possible to edit the text later (noticing a typo too late would drive me crazy). The app should also remember the date each question was entered, so it can give you statistics like: how well calibrated you were in the last 30 days (compared with the previous 30 days).

Maybe the data should be stored online, so you can edit it both from the mobile and from the PC. Although I would prefer if the application worked offline, too. These are two contradictory demands, so you have to find a solution. Perhaps each user should choose in settings whether their data should be kept in the mobile or on the web? And perhaps allow changing this setting later, with the data copied over? Or maybe even keep only the recent data in the mobile, and the full archive online? There are many decisions here.

A nice function would be to save some work typing repeated questions. For example, if I want to make a bet every morning "will I exercise today?", there should be an option to repeat one of the recent questions with the current date. (By the way, if you always display the date along with the question, you can write things like "today" or "this month" without having to always write the specific date.)

A more advanced version (don't do this as the first version; remember the planning fallacy!) would allow some kind of "multiplayer". You could add friends, an
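(A minimal sketch of the core calibration statistic described above, using the fixed credence menu from the comment; the function name and sample data are illustrative, not from any existing app:)

```python
# Sketch of a personal prediction book's calibration statistic: group
# resolved predictions by stated credence and compare stated confidence
# with the observed hit rate in each bucket.
from collections import defaultdict

BUCKETS = (0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99)  # fixed menu, per the comment

def calibration(predictions):
    """predictions: iterable of (credence, came_true) pairs."""
    tally = defaultdict(lambda: [0, 0])  # credence -> [hits, total]
    for credence, came_true in predictions:
        assert credence in BUCKETS, "credence is chosen from the menu, not typed"
        tally[credence][0] += int(came_true)
        tally[credence][1] += 1
    return {c: hits / total for c, (hits, total) in tally.items()}

# Illustrative data: a well-calibrated 0.9 bucket would show a hit rate near 0.9.
print(calibration([(0.90, True), (0.90, True), (0.90, False), (0.70, True)]))
```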
4Richard_Kennaway
What do you see it doing that the web site doesn't?

Imagining:

  • Easy to read layout (I find myself doing a lot of zooming and panning in Mobile Safari)
  • Download articles and sequences for offline reading
  • Comments that are easy to read, vote on, and reply to on mobile, similar to e.g. popular reddit apps
  • Free as in beer and speech

Welcoming other features that would draw users, too. I have to wonder if there are open source Reddit clients I could adapt, given the forked codebase...

5philh
I expect that forking a reddit client is the way to go for UI (if you don't have any in mind, I think AlienBlue and Reddit is Fun are probably worth looking into for this). For the backend, reddit exposes itself through json, which LW doesn't seem to; e.g. http://www.reddit.com/user/philh/.json works, but http://lesswrong.com/user/philh/.json (and http://lesswrong.com/user/philh/overview/.json ) don't. I expect clients to mostly use this, so you'll need to rewrite those portions of the code.
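(For illustration, here is roughly what consuming such a .json endpoint looks like against reddit -- a sketch assuming the `requests` library; the custom User-Agent string is an arbitrary example, since reddit rejects generic ones, and as noted above LessWrong's equivalent endpoints don't work:)

```python
# Sketch: reading a user's recent activity through reddit's .json interface,
# the same endpoint shape a forked client would expect from LessWrong.
import requests

resp = requests.get(
    "http://www.reddit.com/user/philh/.json",
    headers={"User-Agent": "lw-client-prototype/0.1"},  # reddit rejects default UAs
)
resp.raise_for_status()
# Reddit listings nest entries under data.children, each with a kind tag.
for child in resp.json()["data"]["children"]:
    print(child["kind"], child["data"].get("id"))
```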
3pshc
Turns out AlienBlue did release their original version as open source, but the code is four years out of date! Hmmm. Yeah, I would probably end up scraping the HTML. I filed a bug about .json being broken two years ago, but even if it were fixed, it seems that LW has quite a few customizations that the JSON output likely has not caught up to...
2lmm
I would expect most LWers to prefer Android. Certainly I do.
1gjm
Interesting. I have no particular expectation about LWers' preference. I'm an Android guy too. Let's have a poll. What do you use for your mobile devices, if any? For smartphones: [pollid:768] For tablets: [pollid:769] [EDITED to add: If the answer to either question is "Multiple different OSes", please select whichever you think is better. Or flip a coin or something.]

Does anyone have any good ideas about how to be productive while commuting? I'll be starting a program soon where I'll be spending about 2 hours a day commuting, and don't want these hours to go to waste. Note: I have interests similar to a typical LessWrong reader, and am particularly interested in startups.

My brainstorming:

  • Audio books and podcasts. This sounds like the most promising thing. However, the things I want to learn about are the hard sciences and those require pictures and diagrams to explain (you can't learn biology or math with an audioboo

... (read more)

Not really what you're looking for, but I feel obligated:

Move or get a different job. Reduce your commute by 1 or 1.5 hours. This is the best way to increase the productivity of your commute.

I read (can't remember the source) that commuting was the worst part of people's day (they were unhappy, or experienced the lowest levels of their self-assessed subjective well-being).

8Adam Zerner
I'm doing a coding bootcamp (Fullstack Academy). It's in NYC and I live with my parents in Long Island now. It's only 13 weeks so it's not that bad, especially if I could make it productive. If it was long term I'd probably agree with you though.
5Douglas_Knight
Commuting by car is terrible. Commuting by foot is great. There is not a lot of data on commuting by subway, but it does not look good.
2eeuuah
Long distance foot commuting is still pretty bad. In my experience I don't hate the world as much, but burning two plus hours a day commuting sucks no matter what. The subway is definitely much better than car commuting, but not as nice as biking or walking. I think subway commuting is vastly improved by good distractions available through a smartphone, though.
9Lumifer
Driving or public transportation? If driving, don't forget that you have a limited amount of attention available and being "productive" as a driver involves some trade-offs X-)
3Adam Zerner
I should have mentioned that, it's all public transportation (train + subway). If I get a seat on the train and it's not too crowded I could use my laptop to code or to read, but it's difficult to get a seat.
4Lumifer
You can read easily enough if you have a tablet or an e-reader.
1Adam Zerner
It'd be really tough on a NYC subway. On the train, I could read if I get a seat (because I could use my laptop). A tablet would help for the train when I don't have a seat, but I don't really think it's worth it for that one case.
1palladias
I read my Kindle pretty easily on the NYC subway by keeping it near my face/within my personal bubble. I've also read paperbacks there, turning pages one-handed in an awkward way, but I recommend the Kindle. It's also very easy to read while walking with a Kindle!
2Adam Zerner
Hm, I think you're right. 1. I came to my original conclusion too quickly and without thinking enough about it. 2. It doesn't seem that hard actually. 3. I've seen people read on the subway before (although they're definitely the minority).
4satt
A potentially cheap, easy way to get more information about the ease of using an e-reader: get someone you know to lend or give you an old one.
2CWG
Given the limitations (that you describe in other replies) I think you've got a good list. Regarding podcasts, this could be a great time to experiment with new ones & decide which you want to listen to longer term. Perhaps there are some short activities of value to you, such as Anki (assuming you have a smartphone), mentally reviewing your memory palace, or mindfulness exercises. Mindfulness exercises on public transport may seem a little odd, but the distractions may make it more effective as exercise - just be patient with yourself.

Can Bayesian inference be applied to quantum immortality?

I'm writing an odd science fiction story in which I'd like to express an idea, but I'd like to get the details correct. Another redditor suggested that I might find someone here with enough of an understanding of Bayesian theory, the Many-Worlds interpretation of quantum mechanics, and quantum suicide to give me some feedback in time:

Assuming the Many-Worlds Interpretation of quantum theory is true, buying lottery tickets can be looked at in an interesting way: it can be v... (read more)

6gjm
Just out of curiosity: How (if at all) is this related to your LW post about a year ago?

I think surely the following has to be wrong, because you can't get that kind of information about the future ("are going to be sufficient") just from the fact that you haven't died in the past.

As for the more central issue: this also seems terribly wrong to me, at least if the situation I'm supposed to imagine is that I bought a lottery ticket just for fun, or out of habit, or something like that. Because surely the possible worlds that get more likely according to your quantum-immortality argument are ones in which I bought a lottery ticket in the expectation of a disaster. Further, I don't see how winning makes this situation any more likely, at least until the disaster has actually occurred and been surmounted with the help of your winnings.

Imagine 10^12 equal-probability versions of you. 10^6 of them anticipate situations that desperately require wealth and buy lottery tickets. Another 10^9 versions of you buy lottery tickets just for fun. Then one of the 10^6, and 10^3 of the 10^9, win the lottery. OK, so now your odds (conditional on having just bought a lottery ticket) of being about to face wealth-requiring danger are only 10^3:1 instead of 10^6:1 as they were before -- but you need to conditionalize on all the relevant evidence. Let's suppose that you can predict those terrible dangers half the time when they occur; so there are another 10^6 of you facing that situation without knowing it; 10^3 of them bought lottery tickets, and 10^-3 of them won. So conditional on having just bought a lottery ticket for fun, your odds of being in danger are still 10^6:1 (10^9 out of danger, 10^3 in); conditional on having just bought a lottery ticket for fun and won, they're still 10^6:1 (10^3 out of danger, 10^-3 in).

Perhaps I'm missing something important; I've never found the idea of "quantum immortality" compelling, and I think the modes of thought that make it compellin
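(The arithmetic in the 10^12-versions paragraph checks out -- a minimal sketch using exactly the numbers given in the comment:)

```python
# gjm's count: of 1e12 equal-measure versions of you, 1e9 buy tickets just
# for fun; another 1e6 are in danger without knowing it, of whom 1e3 bought
# for fun. One ticket in 1e6 wins, in both groups -- that's the whole point.
fun_safe      = 1e9    # bought for fun, not in danger
fun_in_danger = 1e3    # bought for fun, unknowingly in danger
win_rate      = 1e-6   # identical in both groups

odds_given_fun_ticket = fun_safe / fun_in_danger
odds_given_fun_win    = (fun_safe * win_rate) / (fun_in_danger * win_rate)
print(odds_given_fun_ticket, odds_given_fun_win)  # 1e6 and 1e6: winning changes nothing
```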
1DataPacRat
Same general assumptions, taken in a somewhat different direction. (I'm just browsing messages in the middle of the night, so will have to wait to respond to the rest of your post for some hours. In the meantime, the response to my question at https://www.reddit.com/r/rational/comments/2g09xh/bstqrsthsf_factchecking_some_quantum_math/ckex8ul seems worth reading.)
3gjm
So, suppose I rig up a machine with the following behaviour. It "flips a coin" (actually, in case it matters, exploiting some source of quantum randomness so that heads and tails have more or less exactly equal quantum measure). If it comes up heads, it arranges that in ten years' time you will be very decisively killed. If we take "Pr(L)=1" (in that comment's notation) seriously, then it follows that Pr(tails)=1 too. But if there are 100 of you using these machines, then about 50 are going to see heads; and if you are confident of getting tails -- in fact, if your estimate of Pr(tails) is substantially bigger than 1/2 -- you're liable to get money-pumped.

One possible conclusion: Pr(L)=1 is the wrong way to think about quantum immortality if you believe in it. Another: the situation I described isn't really possible, because the machine can't make it certain that you will die in 10 years, and the correct conclusion is simply that if it comes up heads then the universe will find some way to keep you alive despite whatever it does. But note that that objection applies just as well to the original scenario. Any disaster that you can survive with the help of an extra $10M, you can probably survive without the $10M but with a lot of luck. Or without the $10M from the lottery but with $10M that unexpectedly reaches you by other means.
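(The money pump is easy to make concrete -- a minimal sketch; the ticket price is an illustrative assumption, not taken from the comment:)

```python
# If you price bets as though Pr(tails) ~ 1 while the quantum measure of
# tails is 0.5, a bookie can sell you tickets that look cheap to you but
# lose money on average.
p_true = 0.5       # actual measure of tails
price = 0.95       # what you'd pay for "$1 if tails", believing Pr(tails) ~ 1
ev = p_true * 1.0 - price
print(f"Expected value per ticket: {ev:+.2f}")  # -0.45: repeat to pump money
```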
1DataPacRat
Your last paragraph is leading me to consider an alternative scenario: There are two ways to survive the disaster, either pleasantly by having enough money (via winning the lottery) or unpleasantly (such as by having to amputate most of your limbs to reduce your body mass to have enough delta-vee).

I'm currently trying to use Venn-like overlapping categories to see if I can figure out any "If X then Y" conclusions. The basic parameters of the setting seem to rule out all but five combinations (using ! to mean 'not'):

* WinLotto, !Disaster, !Amputee, Live: All Good
* WinLotto, Disaster, !Amputee, Live: Buy survival
* !WinLotto, !Disaster, !Amputee, Live: Nothing happens
* !WinLotto, Disaster, Amputee, Live: Unpleasant survival
* !WinLotto, Disaster, !Amputee, !Live: Dead

At this very moment, I'm trying to figure out what happens if quantum immortality means the 'dead' line doesn't exist...

... But I'm as likely as not to miss some consequence of this. Anyone care to take a shot at how to set things up so that any Bayesian calculations on the matter have at least a shot at reflecting reality?
4NancyLebovitz
I think you're leaving out that disasters which require a lot of money to survive are fairly rare and hard to predict.
5DataPacRat
The character has come uncomfortably close to dying several times in a relatively short period, having had to use one or another rare or unusual skill or piece of equipment just to survive each time. (In other words, she's a Protagonist.)

From http://www.preposterousuniverse.com/blog/2013/08/22/the-higgs-boson-vs-boltzmann-brains/

A room full of monkeys, hitting keys randomly on a typewriter, will eventually bang out a perfect copy of Hamlet. Assuming, of course, that their typing is perfectly random, and that it keeps up for a long time. An extremely long time indeed, much longer than the current age of the universe. So this is an amusing thought experiment, not a viable proposal for creating new works of literature (or old ones).

There’s an interesting feature of what these thought-experi

... (read more)
1[anonymous]
The expansion of the universe blows up the Boltzmann Brain problem. The universe is not of uniform density over time, and far into the future things get thinner and thinner on average with more and more concentrated local knots of matter of changing atomic/etc composition. It pushes the question to why we see the universe as it is rather than something smaller in space rather than in time, which becomes a question about the properties of the event we call the big bang, which nobody really understands - was it a singular event or one of many such events and what was its/their scale?
3Ronak
Thanks for your comment. My issue is much 'earlier' in terms of logic.

When I started reading that post, the Boltzmann brain problem seemed like a non-problem; an inevitable conclusion that people were unwilling to accept for reasons of personal bias - analogous to how most LWers would view someone who insists on metaphysical free will. Even if certain facts about the universe didn't solve the issue, it seems to me that Carroll would still want to find reasons that we weren't Boltzmann brains.

Now, from my own interest in entropy and heat death, I had long ago concluded that I might, right now, be part of a random fluctuation that is gone the next moment; in fact, I had concluded that every moment of my existence turns up somewhere during heat death. That's not an issue, as far as I can see - whatever fact we see about the universe, that would just be part of this fluctuation (I don't know about this acceleration thing - my technical understanding of the issue is not good enough, but I'm willing to take your and Carroll's words for it). At this level, 'we're part of a random fluctuation' is one of those uninteresting hypotheses like maya that could very well be true but are unverifiable. (Continued adherence to ordered laws can't really be considered evidence, since we may have just popped into existence a second ago with memories as we have. It truly can predict everything.)

But then, Carroll argues that believing you're a Boltzmann brain is inconsistent, since you can't trust your own brain, which is a product of a random fluctuation. Of course, I don't believe I'm a Boltzmann brain; I just note that no experience (modulo the expanding universe) contradicts the hypothesis, and therefore I should reason without giving a shit about it. However, Carroll's argument gives me pause, and I can't really see whether I should consider it seriously.
3Viliam_Bur
It's not necessarily an either/or situation. Maybe this universe started a few billion years ago in a Boltzmann-like event, but since then it evolves, uhm, just like we think it does.

The analogy of the monkeys with typewriters is misleading. The laws of physics are local: what happens next does depend on what happens now; that's unlike the monkey with the typewriter, where the following letter is completely independent of the previous part of the book. If some random process were to create a brain, in a body, in a room, then even if the room were immediately destroyed at the speed of light, still, during those few microseconds until the destruction reaches the brain, the brain would operate logically. On the other hand, random processes creating the brain in the body in the room are much less likely than random processes creating only the brain, or only parts of the brain. So this requires some more thought, and I am too tired now to make it.

But my point is that if you are randomly created exactly in this moment, you don't have a reason to trust your reason... but if you were created a while ago, and your reason had some time to work, that's not the same situation. In the extreme situation, if the universe was created randomly billions of years ago and then we have evolved lawfully, that's business as usual: the details of the random creation of the universe long ago should not be relevant for our reasoning about our reason now.
0Ronak
I think this is a good argument. Thanks.

After some thought on why your argument sounded unsatisfactory to me, I decided that I have a much more abstract, much less precise argument, to do with things like the beginning of epistemology.

In the logical beginning, I know nothing about the territory. However, I notice that I have 'experiences.' However, I have no reason for believing that these experiences are 'real' in any useful sense. So, I decide to base my idea of truth on the usefulness of helping me predict further experiences. 'The sun rises every morning,' in this view, is actually 'it will seem to me that every time there's this morning-thing I'll see the sun rise.' All hypotheses (like maya and Boltzmann brains) that say that these experiences are not 'real,' as long as I have no reason to doubt 'reality,' form part of this inscrutable probability noise in my probability assignments. Therefore, even if I was randomed into existence a second ago, it's still rational to do everything and say 'I have no issues with being a Boltzmann brain - however, it's just part of my probability noise.'

I haven't fleshed out precisely the connection between this reasoning and not worrying about Carroll's argument - it seems as if I'm viewing myself as an implementation-independent process trying to reason about its implementation, and asking what reasoning holds up in that view.

This is a good read: http://www.newrepublic.com/article/119321/harvard-ivy-league-should-judge-students-standardized-tests

Excerpt:

It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of

... (read more)
5Lumifer
Looks like they agree that specialization is for insects :-)
3ChristianKl
"They"? The author is Steven Pinker.
12[anonymous]

"They" can be singular or plural.

-2Ixiel
It is correct in the latter case, incorrect in the former. It largely doesn't matter, but recruiters I know, for example, throw out resumes for this particular error (though one had heard some schools actually encourage the practice, to the students' disservice), and some people (myself included, until I thought better of it) think less of authors who make it. Linguistics as a discipline is descriptive, but people who are not linguists treat people differently for making errors.
3[anonymous]
It's a bit more complicated than correct or incorrect: http://en.wikipedia.org/wiki/Singular_they
0Ixiel
I agree with you as literally stated, and am not a Wikipedia naysayer, but that again is descriptive linguistics. People do say that. People also do say "y'all aints gots no Beefaronis?" (one of my favorite examples, heard by my own ears in a convenience store), and people do think differently of either than they do of what is sometimes called "blackboard grammar." I would recommend John McWhorter as a linguist who describes this better than I can. Or just say to yourself "huh, interesting opinion" and walk away; I swear I won't be offended :-)
2ShardPhoenix
That's nuts.
2Ixiel
I don't think so, but either way, if one wants a job at GE, to use a recognizable example, one might want to know.
-2Azathoth123
Why? It strikes me as a good way to sort out people who have bad attention to detail, as well as avoiding the SJW-types more interested in accusing everyone in the company of sexism than doing any actual work.
1cameroncowan
The idea of the well rounded human being strikes again! That is why we moved away from the structure of classical education and towards the free-form well rounded-ness of the liberal arts education. It allows for curiosity and testing out your own ability.
1Azathoth123
You know, sarcasm doesn't work well on the internet.
1Lumifer
Oh, but it does, it does :-D

"Whether you have herpes" is not as clearly-defined a category as it sounds. The blood test will tell you which types of HSV antibodies you have. If you're asymptomatic, it won't tell you the site of the infection, if you're communicable, or if you will ever experience an outbreak.

I had an HSV test a while ago (all clear, thankfully), and my impression from speaking to the medical staff was that given the prevalence and relative harmlessness of the disease, (compared to, say, HIV or hepatitis or something), the doubt surrounding a positive test result was enough of a psychological hazard for them to actively dissuade some people from taking it, and many sexual health clinics don't even offer it for this reason.
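(To make that residual doubt concrete, here's Bayes' rule on a hypothetical antibody test -- a minimal sketch. The ~20% base rate appears in a comment further down this thread; the sensitivity and specificity figures are illustrative assumptions, not clinical values:)

```python
# Bayes' rule for a positive result on an assumed HSV-2 antibody test.
prior = 0.20         # base rate in US adults, per sixes_and_sevens below
sensitivity = 0.95   # P(positive | infected) -- assumed for illustration
specificity = 0.95   # P(negative | not infected) -- assumed for illustration

p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_pos
print(f"P(infected | positive) = {posterior:.0%}")  # ~83%: far from certain
```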

From Poor Economics by Esther Duflo and Abhijit Banerjee

There is potentially another reason the poor may hold on to beliefs that might seem indefensible: When there is little else they can do, hope becomes essential. One of the Bengali doctors we spoke to explained the role he plays in the lives of the poor as follows: “The poor cannot really afford to get treated for anything major, because that involves expensive things like tests and hospitalization, which is why they come to me with their minor ailments, and I give them some little medicines which make them feel better.” In other words, it is important to keep doing something about your health, even if you know that you are not doing anything about the big problem. In fact, the poor are much less likely to go to the doctor for potentially life-threatening conditions like chest pains and blood in their urine than with fevers and diarrhea. The poor in Delhi spend as much on short-duration ailments as the rich, but the rich spend much more on chronic diseases.34 So it may well be that the reason chest pains are a natural candidate for being a bhopa disease (an older woman once explained to us the dual concepts of bhopa diseases and doctor diseases—bhopa diseases are caused by ghosts, she insisted, and need to be treated by traditional healers), as are strokes, is precisely that most people cannot afford to get them treated by doctors.

4chaosmage
Thank you, that was very interesting. It seems to me these people are paying in sanity what they can't pay in money - and the price they're paying is arguably higher than what the rich are paying, not even considering the physical health effects. This might be one of the ways that being poor is expensive.

Indeed, 'being poor is expensive' is related to how they frame this fact. From the end of the same chapter:

The poor seem to be trapped by the same kinds of problems that afflict the rest of us—lack of information, weak beliefs, and procrastination among them. It is true that we who are not poor are somewhat better educated and informed, but the difference is small because, in the end, we actually know very little, and almost surely less than we imagine. Our real advantage comes from the many things that we take as given. We live in houses where clean water gets piped in—we do not need to remember to add Chlorin to the water supply every morning. The sewage goes away on its own—we do not actually know how. We can (mostly) trust our doctors to do the best they can and can trust the public health system to figure out what we should and should not do. We have no choice but to get our children immunized—public schools will not take them if they aren’t—and even if we somehow manage to fail to do it, our children will probably be safe because everyone else is immunized. Our health insurers reward us for joining the gym, because they are concerned that we will not do it otherwise. And pe

... (read more)
-3cameroncowan
These are all nice ideas, but someone has to pay for them, and it won't be cheap. Second, I know of plenty of people who are living in terrible conditions right here in this country. When one is poor everything is harder, because you have to do everything yourself and pay through the nose for services that the wealthy get for far less. Whether in Africa or the US, poverty has a cost.
1Ronak
I'm interested in your calling it 'paying in sanity.' Are you referring to the insanity of believing in Bengali babus, or the fact that they're preserving their own sanity in some way by not going to a real doctor for things they know they can't afford?
2chaosmage
The former. I'm speculating this tendency to rely on hope for serious problems while relying on science for small ones creates compartmentalization, which impairs rationality and increases religiosity. The correlation between poverty and religiosity is obvious, this is just a speculative direction of causation. Irrationality would probably lead to poverty, but if poverty also led to irrationality, the two causations would reinforce each other and explain the robustness of the correlation.

Thanks to its multiple infection sites, herpes has the unusual property that two people, neither of whom have an STI, can have sex that leads to one of them having an STI. It's a spontaneous creation of stigma! And if you have an asymptomatic infection (very common), there's no way to know whether it's oral (non-stigmatized, not an STI) or genital (stigmatized, STI) since the major strains are only moderately selective.

4fubarobfusco
... and that's why you should prefer to sleep with rationalists. :)
5James_Miller
But it might be rational to not find out if you believed you would have a duty to warn potential lovers if you tested positive, or were willing to lie but believed yourself to be a bad actor.
2polymathwannabe
How is it rational to willfully keep others in ignorance of a risk they have every right to know about? The discomfort of honest disclosure is a minor inconvenience when compared to the disease.
6James_Miller
You are right for the rationalist who gives substantial weight to the welfare of his or her lovers. But being rational doesn't necessarily imply that you care much about other people.
4A1987dM
A rationalist who doesn't care about the welfare of their lovers, and yet believes they have a duty to warn them if they tested positive (but no duty to get tested in the first place, even if the cost is nonpositive)?
2polymathwannabe
Are you advocating for prisoner defection?
7James_Miller
In my game theory class I teach that rational people will defect in the prisoner's dilemma game, although I stress that you should try to change the game so it is no longer a prisoner's dilemma.
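(For readers who haven't seen the argument: in a one-shot prisoner's dilemma, defection strictly dominates, which a few lines of Python can show. The payoff numbers below follow the standard textbook ordering and are illustrative, not taken from this course:)

```python
# Why defection is dominant in a one-shot prisoner's dilemma: whatever the
# other player does, "defect" pays strictly more than "cooperate".
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

for their_move in ("C", "D"):
    best = max(("C", "D"), key=lambda me: PAYOFF[(me, their_move)])
    print(f"If they play {their_move}, my best response is {best}")
# Prints D in both cases: defection strictly dominates cooperation.
```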
9Shmi
I hope you also talk about Parfit's hitchhiker, credible precommitment and morals (e.g. honor, honesty) as one of its aspects.
1James_Miller
I spend a lot of time on credible threats and promises, but I don't do Parfit's hitchhiker, as it doesn't seem realistic.
2Toggle
Can this situation be modeled as a prisoner's dilemma in a useful way? There seem to be some important differences. For example, if both 'prisoners' have the same strain of herpes, then the utility for mutual defection is positive for both participants. That is, they get the sex they were looking for, with no further herpes.
3sixes_and_sevens
Not prisoner's dilemma, but successful coordination to which a decrease in the spread of HIV in the gay community is attributed: serosorting.
4[anonymous]
A classic example of confusing "is" with "ought"...
2sixes_and_sevens
The base rate of HSV2 in US adults is ~20%. I would argue that if you're sexually active, and don't get an HSV test between partners (which is typically not part of the standard barrage of STD tests), you're maintaining the same sort of plausible deniability strategy as those who pay to not see the results of their apropos-of-nothing tests.
1ChristianKl
If you do think you have an ethical obligation to inform others of a risk like this, when did you last test yourself for herpes?
3polymathwannabe
If you must know, I'm a virgin. I have, however, engaged in erotic practices not involving genital contact.
1ChristianKl
If that weren't the case, how often do you think you would test yourself?
3polymathwannabe
I guess a minimum should be before and after each new partner, plus additional tests if I suspect infidelity.

Peter Thiel gave an AMA at Reddit, mentioned friendly AI and such (and even neoreaction :-D).

1ChristianKl
His answer to "Peter, what's the worst investment you've ever made? What lessons did you learn from it?" is intersting. He focuses on not investing more on facebook. The shift of focus says a lot about his mindset.
1Nectanebo
One of the better AMAs I've read. Peter is an interesting guy. Is his book worth reading?
6Lumifer
I read/scanned the predecessor of that book, the transcripts of his Stanford classes where he taught one course. They were quite interesting and worth reading.

Is there still a rewards credit card that autodonates to MIRI or CfAR? I've seen them mentioned, but can't find any sign up links that are still live.

Unfortunately the program has been discontinued by Capital One :(

We have it in our queue to look into alternatives.

One thing you might want to look into is that many cards will allow you to donate your reward points etc. to charity. For many credit cards, this generates more value for the charity you choose to donate to.

4[anonymous]
I think they stopped distributing them. The last I saw, they had that entry struck out on their support page.
6[anonymous]

Has anyone ever worked for Varsity Tutors before? I'm looking at applying to them as an online tutor, but I don't know their track record from a tutor point of view. Has anyone had any experience with them?

6free_rip
Never worked for them in particular, but my experience with such online tutoring businesses hasn't been great: you generally don't get many hours, are expected to commit fully to being available at certain times every week (which in uni, with tests etc. at unexpected times, isn't too possible -- might be possible for you in your situation), and they take a fair chunk of your earnings. On one occasion I put a lot of time into signing up, getting documents etc. to verify myself, and then never got a single student.

On the other hand, signing up for services such as www.firsttutors.com has been great (not sure if this is international, I've been using the NZ site, but think it is). Basically it's a repository of tutors; people come and leave messages for you to see if you'd be a good fit and if you have times you could both make it, and then you each pay a small one-off fee (usually <$20 for the tutor) for the website providing the interface and get each other's contact details. I've set up both online and in-person tutoring through this, online being about a fifth of all requests.

The first year I used it I got about 3 or 4 students through it (each of whom I met for one or two hours a week, and who lasted on average ~6 months). Nowadays, with a few good reviews on there, I've put up my fees to double what they used to be and still get about 15 requests a year, each of which is good for about 2 hours of tutoring a week -- I don't take them all, but I could. And the fee the website charges is nothing in comparison to the hours I get out of it; usually it's less than an hour's work to make it back.
0[anonymous]
Thank you for the link. I had not heard of First Tutors before, but they seem to be a solid choice and one I'll research more. The flexibility is a very enticing quality, considering the high level of control I've seen in other service providers.
2James_Miller
Tutoring seems like a great way for lots of LW people to earn extra money. Apparently at least one high end tutor earns $1000 an hour.
4NancyLebovitz
Interesting article, but that tutor is in a fairly small niche -- test prep tutoring for the children of very rich parents. It's major that, when he tells the reporter how to solve a math problem, he starts with teaching the reporter how to lower his panic level.
1[anonymous]
I haven't worked with that specific company before, but there are a lot of mixed incentives in the tutoring market. If we can believe Glassdoor, they offer around $20/hour (I suspect the two reports at $30/hour are from grad students or some other specialization). Here are some employee reviews, but I expect two of the five-stars are faked. Judging by the reviews and my own past experience, I think you can expect to get around five hours a week of tutoring this way, and that doesn't include time spent preparing for topics and other overhead. I imagine the only way the company maintains quality control is by reducing a tutor's hours after one or two bad reviews, or after a single refund request. Office politics in the director-tutor relationship are probably going to be brutal, and there's not going to be any reward or incentive for doing an above-average job, so it seems reasonable to assume turnover is high. Since you're looking for online tutoring, I assume it's not possible to tutor in person in your local area?
0[anonymous]
That is correct. While I am looking into possibly offering Common Core math tutoring in the local area (there is an intense dislike of Common Core among parents here, and I feel a tutoring service specifically for it may relieve some burdens and be worth the expense to these families), for the moment I am looking entirely online. Thank you for the links. The information really does not surprise me. I have an expectation of $15/hour unless I prove to be extremely effective as a tutor. The overhead is the central unknown to me, as it's something I won't have clear numbers on until I actively deal with it. Using friends of mine who currently tutor (one through Varsity) as examples, I'm not too worried about the overhead.

Research about online communities with upvotes and downvotes

We find that negative feedback leads to significant changes in the author’s behavior, which are much more salient than the effects of positive feedback. These effects are detrimental to the community: authors of negatively evaluated content are encouraged to post more, and their future posts are also of lower quality. Moreover, these punished authors are more likely to later evaluate their fellow users negatively, percolating these undesired effects through the community.

I don't think things are quite that bad here.

0Viliam_Bur
Didn't read the PDF, but I suspect the main problem is that not all "communities with downvotes" are the same. Some websites have downvotes but leave downvoted comments just as visible as upvoted ones, so there is no real punishment. I'm also not sure the causality can't go the other way round: crazy people have more time to post comments. Or members are more tolerant of new crazy people and only annoyed by them later (which causes the decrease in comment karma), while the crazy people gradually write more and more comments as they become more comfortable or have more open battles to fight.

Y Combinator published a list of requests for startups.

2Azathoth123
The list makes for interesting ideas. Most of them seem good, but a few make me wonder about Paul Graham. Some of the ideas (e.g., Government) make me wonder if he's starting to drink his own Kool-Aid and it has caused him to forget everything he has learned along the way. With others (e.g., Diversity), one almost gets the impression that the SJ crowd is putting the screws on Silicon Valley and he has to at least throw them some bone (the since-deleted "Female Founders" essay reads similarly).
2Pfft
I think this list is due to Sam Altman. He has written about wanting to fund breakthrough technologies, and shortly after he became Y Combinator president they invested in a fusion energy company.
2Azathoth123
Well, that would explain why the list ignores Paul Graham's advice of investing in fields one understands.
2Lumifer
I am not terribly impressed by that list as it looks like a collection of wouldn't-it-be-nice-to-have wishes. The Government section looks fine -- the government is a big customer and does have very bad software. But yeah, the Diversity section is... weird. At least there is no Save the Environment section.
4Azathoth123
It suggests someone at Y Combinator now alieves he has magical superpowers for cutting through government procurement bureaucracy.
0Lumifer
Not quite. This is a list of requests -- the Y Combinator would like to find ways to achieve magical superpowers to cut through the government procurement bureaucracy.
0Azathoth123
Then why did the section talk about how inefficient government software was rather than cutting through procurement bureaucracy?
1Lumifer
Because you need to have what's called a "market opportunity" to start with.
0[anonymous]
A while back I worked at a startup (20-ish people) that had (UK) local government as their main customer. Large companies don't have a monopoly on providing bad software to governments, even if they have advantages.
0Izeinwinter
It seems very credible that "Write better software for the US government" is a field that is shockingly underexploited, simply because of the ideological biases and likely background of the typical american start-up entrepreneur. Do you have the faintest Idea what software to make social services more efficient ought to look like ? Because I don't and I figure very few people looking to start a coding shop do either. The only idea in this field I can think of with any chance of working is to try and run arbitrage against "not invented here" and check what tools are in use in the rest of the first world.
4Lumifer
It seems very credible that this field is "underexploited" for two main reasons. One is that business dealings with the US government are very stupid, inconvenient, and annoying: you drown in paperwork, you have to certify all kinds of silly things, etc. That's OK for a large organization with a compliance department; it's not so good for a startup. Two is that government contracts are a prime field for crony capitalism. You will be competing not only on price and quality but also on the depth of the old-boy network and the ability to provide invisible kickbacks -- again, not a strength of startups.
0Izeinwinter
Are you speaking from personal experience of selling things and services to the government here? Because if the answer is "no", you may, possibly, want to check whether you remembered to remove those ideological blinders I mentioned. The main point of the paperwork that vendors to the state have to do is to make sure that crony capitalism doesn't happen. If the process is very badly designed, that fails, but I've never worked anywhere that found it more obnoxious to do business with the government than with any other large customer; usually it is less so. The USG can't be that much worse than the Nordic countries. It's still a first-world state.
1Azathoth123
I can speak from personal experience. An executive at the contractor I work for was caught with a massive undisclosed conflict of interest. This had two main effects. First, we must now sit through an annual mandatory ethics briefing, in addition to all the briefings and paperwork inflicted on us by previous misbehavior (note that the things covered in said briefing generally have nothing to do with what the executive was caught doing). Second, the executive was summarily fired and managed to fall upward into a high-level job with the agency we contract with.
0drethelin
These are all crazy vague.

A friend of mine has started going into REM in frequent 5-minute cycles during the day, in order to boost his learning potential. He developed this via multiple acid trips. Is that safe? It seems like there should be some sort of disadvantage to this system, but so far he seems fine.

How does he know that he actually is in REM? How does he know it boosts his learning potential?

6skeptical_lurker
How does LSD help you develop an ability to get to sleep faster? LSD makes one less sleepy, so this seems like an improbable ability to ascribe to it. But if it actually works, it's a really useful ability. You might want to try asking this question in a polyphasic sleeping community, BTW.
5Douglas_Knight
What is "this"? this ability? Does he also get a full night's sleep? Eliminating other stages of sleep is almost certainly bad, but supplementing with REM seems to me unlikely to be bad. People with narcolepsy basically only have REM sleep. Narcolepsy is very bad, but many people who eventually develop it seemed to have only had REM sleep when they were functional with no ill effects. In particular, they greatly benefit from naps (both before and after developing full-blown narcolepsy).
[-][anonymous]50

Cryonics vs. Investment:

This is a question I have already made a decision on, but I would like some outside opinions while it's still fresh. My beliefs have recently changed from "cryonics is not worth the investment" to "cryonics seems to be worth the investment, but greater certainty for a decision is still wanting" (CStbWtIbGCoaDiSW for short). I've explored my options with Rudi Hoffman and found that while my primary choice of provider, Alcor, is out of my current range, my options are not unobtainable. CI with the bare basics, lowe... (read more)

2James_Miller
Consider other possible tradeoffs, such as engaging in fewer leisure activities so you can take a part-time job that will pay for cryonics, or saving money by reducing consumption.
3[anonymous]
These are worthwhile tips and ones I've explored. I've reduced consumption down to the bare minimum already. Most of my time out of work is spent in activities for work, as my position requires time spent with the community and networking, but I still look for opportunities on the side. Still, these are useful and assist with either option. Thanks.
1Larks
Have you considered term life insurance vs. whole-of-life insurance? Salesmen will try to push you towards the latter, but the former can have much lower premiums (especially if your time horizon is < 40 years).
0[anonymous]
I have considered it. The <40-year horizon is especially relevant because, while my condition is not currently life-threatening (or much to note at all), I'm still young and active in controlling it. As I get older, it may be harder for my body to avoid the adverse effects, and I could be dead by 60. My biggest concern with term is buying term life now, only to be in much worse condition when the term expires, causing a renewal to raise my payments heavily. Since it's a point brought up by Larks, I'll say here that I have no expectation of self-financing, and I don't know how much worse my condition will be when it comes time to renew.
0Paul Crowley
Surely for cryonics you want whole-of-life?
0Larks
There are various reasons you would not want this:

* You intend to save a lot of money and self-finance when able.
* You think you might change your mind.
* You think you will die in the next 40 years.
* You think you will be unusually healthy, and thus renewing will be cheaper.
* You have a higher discount rate than the market, and value paying $10/month rather than $60/month a great deal (see the toy comparison below).
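A toy illustration of that last bullet (all numbers hypothetical, and this ignores that whole-of-life coverage continues past the 40-year horizon; the point is only how a personal discount rate changes the comparison):

```python
# Present value of a stream of monthly insurance premiums,
# discounted at a personal annual rate. All numbers hypothetical.
def npv_of_premiums(monthly, annual_rate, years):
    r = annual_rate / 12   # monthly discount rate
    n = years * 12         # number of payments
    return monthly * (1 - (1 + r) ** -n) / r

# 40 years of $10/month (term) vs $60/month (whole-of-life), 5%/year:
print(round(npv_of_premiums(10, 0.05, 40)))  # ~2074
print(round(npv_of_premiums(60, 0.05, 40)))  # ~12443
```

The higher your personal discount rate, the more the cheaper term premiums dominate.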

Could someone recommend an article (at advanced pop-sci level) providing the best arguments against the multiverse approach to quantum mechanics?

What is the best textbook that explains quantum mechanics from a multiverse perspective (rather than following the Copenhagen school and then bringing in the multiverse as an alternative)? This should be a textbook, not pop-sci, but at as basic a level as possible.

4pragmatist
David Wallace's The Emergent Multiverse is an excellent introduction to the many-worlds interpretation, written by its best defender. Most of it should be accessible to a layperson, although there are technical sections. You can't use it to fully learn quantum mechanics from scratch, though. But if you learn the basic formalism from another textbook (I recommend this one; the first eight chapters should suffice) you'll be able to follow almost all of Wallace. As for criticism, this is the best non-technical article I know of. It does presume some knowledge of quantum mechanics and many-worlds, but not deep technical knowledge.

How useful would it be to have more people working on AI/FAI? Would it be a big help to have another 1,000 researchers working on it making $200,000 a year? Or does an incredibly disproportionate amount of the contribution come from big names like Eliezer?

What do we want out of AI? Is it happiness? If so, then why not just research wireheading itself and avoid the risks of an unfriendly AI?

2hairyfigment
We don't know what we want from AI, beyond obvious goals like survival. Mostly I think in terms of a perfect tutor that would bring us to its own level of intelligence before turning itself off. But quite possibly we don't want that at all. I recall some commenter here seemed to want a long-term ruler AI.
0Leonhart
I am generally in favour of a long-term ruler AI; though I don't think I'm the one you heard it from before. As you say, though, this is an area where we should have unusually low confidence that we know what we want.
-1[anonymous]
The promise of AI is irresistibly seductive because an FAI would make everything easier, including wireheading and survival.
[-][anonymous]30

If I understand correctly, people become utilitarians because they think that global suffering/well-being have such big values that all the other values don't really matter (this is what I see every time someone tries to argue for utilitarianism; please correct me if I'm wrong). I think a lot of people don't share this view, and therefore, before trying to convince them that they should choose utilitarianism as their morality, you first need to convince them of the overriding value of the pleasure-harm axis.

1mare-of-night
I think it depends? People around here use utilitarianism to mean a few different things. I imagine that's the version talked about the most because the people involved in EA tend to be those types (since it's easier to get extra value via hacking if your most important values are something very specific and somewhat measurable). I think that might also be the usual philosopher's definition. But then Eliezer (in the metaethics sequence) used "utilitarianism" to mean a general approach to ethics where you add up all the values involved and pick the best outcome, regardless of what your values are and how you weight them. So it's sometimes a little confusing to know what utilitarianism means around here. (Edited for spelling.)
6Douglas_Knight
I do not believe Eliezer makes that mistake.
1mare-of-night
I might have misremembered. Sorry about that.
2Viliam_Bur
I don't understand. One of those things is "compare the options, and choose the one with the best consequences". What are the other things?
5Lumifer
You are illustrating the issue :-) That is consequentialism, not utilitarianism.
1pragmatist
Differences arise when you try to flesh out what "best consequences" means. A lot of people on this site seem to think utilitarianism interprets "best consequences" as "best consequences according to your own utility function". This is actually not what ethicists mean when they talk about utilitarianism. They might mean something like "best consequences according to some aggregation of the utility functions of all agents" (where there is disagreement about what the right aggregation mechanism is or what counts as an agent). Or they might interpret "best consequences" as "consequences that maximize the aggregate pleasure experienced by agents" (usually treating suffering as negative pleasure). Other interpretations also exist.
1Nornagest
As far as I've read, preference utilitarianism and its variants are about the only well-known systems of utilitarianism in philosophy that try to aggregate the utility functions of agents. Trying to come up with a universally applicable utility function seems to be more common; that's what gets you hedonistic utilitarianism, prioritarianism, negative utilitarianism, and so forth. Other variants, like rule or motive utilitarianism, might take one of the above as a basis but be more concerned with implementation difficulties. I agree that the term tends to be used too broadly around here -- probably because the term sounds like it points to something along the lines of "an ethic based on evaluating a utility function against options", which is actually closer to a working definition of consequentialism. It's not a word that's especially well defined, though, even in philosophy.
1mare-of-night
"Compare the options, and choose the one that results in the greatest (pleasure - suffering)."

What's supposed to happen if an expanding FAI friendly to civilization X collides with an expanding FAI friendly to civilization Y?

6bogus
If both FAIs use TDT or a comparable decision theory, then (under plausible assumptions), they will both maximize an aggregate of both civilizations' welfare.
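One simple way to cash out "maximize an aggregate" (my formalization; the comment above doesn't commit to any particular one): both FAIs act as if maximizing a weighted sum of the two welfare functions,

$$\max_{a} \; w_X U_X(a) + w_Y U_Y(a),$$

with the weights $w_X, w_Y$ determined by the (possibly acausal) bargain between them.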
-1Lumifer
Each FAI is friendly to its creators, not necessarily to the rest of the universe. Why would a FAI be interested in the welfare of aliens?
2NancyLebovitz
You might need a coalition against less tractable aliens, and you also might need a coalition to deal with something the non-living universe is going to throw at you. If your creators include an interest in novelty in their CEV, then aliens are going to provide more variety than what your creators can make up on their own.
-1Lumifer
Heh. The situation is symmetric, so humanity is also a novelty for the aliens. And how much value does novelty have? Is it similar to having some exotic pets? X-D
1NancyLebovitz
I meant novelty in a broad sense-- not just like having an exotic pet. I'd expect different sensoria leading to somewhat different angles on the universe, and better understanding of biology and material science, at least.
5ChristianKl
It's not clear that territory that already has a FAI watching over it can be taken over by another FAI. A FAI might expand to inhabit territory by sending small probes, but I think those probes are unlikely to have any effect in territory already occupied by another FAI. I'm also not sure to what extent you can call nodes of a FAI of the same origin, with millions of light years between them, the same FAI.
3Lumifer
That's a valid point. An AI can rapidly expand across interstellar distances only by replicating and sending out clones. Assuming the speed-of-light limit, the clones would be essentially isolated from each other and likely to develop independently. So while we talk about "an AI expanding through the light cone", it's actually a large set of diverging clones that's expanding. It's an interesting question how far they could diverge from one another.
2Nectanebo
If their ideas of friendliness are incompatible with each other, perhaps a conflict? Superintelligent war? It may be the case that one will be 'stronger' than the other, and that there will be a winner-take-all(-of-the-universe?) resolution? If there is some compatibility, perhaps a merge, a la Three Worlds Collide? Or maybe they co-operate, try not to interfere with each other? This would be more unlikely if they are in competition for something or other (matter?), but more likely if they have difficulties assessing risks to not co-operating, or if there is mutually assured destruction? It's a fun question, but I mean, Vinge had that event horizon idea, about how fundamentally unpredictable things are for us mere humans when we're talking about hypothetical intelligences of this caliber, and I think he had a pretty good point on that. This question is taking a few extra steps beyond that, even.
2Lumifer
Oh, sure, it's much more of a flight-of-fancy question than a realistic one. An invitation to consider the tactical benefits of bombarding galaxies with black holes accelerated to a high fraction of c, maybe X-D But the original impetus was curiosity about the status of intelligent aliens for a FAI mathematically proven to be friendly to humans.
2[anonymous]
Neither defects?
2Lumifer
Why do you think it's going to be a prisoner's dilemma type of situation?
2A1987dM
In the intersection of their future light cones, each FAI can either try to accommodate the other (C) or try to get its own way (D). If one plays C and one plays D, the latter's values are enforced in the intersection of light cones; if both play C, they'll enforce some kind of compromise values; if they both play D, they will fight. So the payoff matrix is either PD-like or Chicken-like depending on how bloody the fight would be and how bad their values are by each other's standards. Or am I missing something?
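A minimal sketch of that classification (the payoff numbers here are invented purely for illustration; only their ordering matters):

```python
# Classify a symmetric 2x2 game from the row player's payoffs.
# cc: both compromise, cd: I compromise / other defects,
# dc: I defect / other compromises, dd: both fight.
def classify(cc, cd, dc, dd):
    if dc > cc > dd > cd:
        return "Prisoner's Dilemma"  # mutual fight beats being exploited
    if dc > cc > cd > dd:
        return "Chicken"             # being exploited beats mutual fight
    return "something else"

# If a fight between the FAIs is merely costly:
print(classify(cc=3, cd=0, dc=5, dd=1))    # Prisoner's Dilemma
# If a fight destroys most of what both value:
print(classify(cc=3, cd=1, dc=5, dd=-10))  # Chicken
```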
2Lumifer
The contact between the FAIs is not a one-shot decision to fight or share. It's a process that will take time, and each party will have to make many decisions during it. Besides, the payoff matrix is quite uncertain -- if one initially cooperates and one initially defects, does the defecting one get more? No one knows. For example, the start of hostilities between Hitler and Stalin was a case where Stalin (initially) cooperated and Hitler (initially) defected; the end result was not so good for Hitler. There are many options here -- fully cooperate (and potentially merge), fight to the death, divide spheres of influence, set up a DMZ with shared control, modify self, etc. The first interesting question is, I guess, how friendly to aliens will a FAI be? Will it perceive an alien FAI as an intolerable obstacle on its way to implementing friendliness as it understands it? More questions go along the lines of: how likely is it that one FAI will be stronger (or smarter) than the other? If they fight, what might it look like (assuming interstellar distances and speed-of-light limits)? How might an AI modify itself on meeting another AI? Etc.
1Viliam_Bur
As much as is reasonable in a given situation. If it is stronger, and conquering the other AI is a net gain, it will fight. If it is not stronger, or peace would be more efficient than war, it will try to negotiate. The costs of peace will depend on the differences between the two AIs. "Let's both self-modify to become compatible" is one way to make peace, forever; it has some cost, but it also saves some. Agreeing to split the universe into two parts, each governed by one AI, also has a cost. Depending on the specific numbers, the utility-maximizing choice could be "winner takes all" or "let's split the universe" or "let's merge into one" or maybe something else I didn't think of.
1Lumifer
The critical question is, whose utility? The Aumann theorem will not help here since the FAIs will start with different values and different priors.
1Viliam_Bur
Each AI tries to maximize their own utility, of course. When they consider merging, they make an estimate: how much of the original utility can I expect to get after we both self-modify to maximize the new utility function.
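In other words (my notation, not Viliam_Bur's): a merge onto a compromise utility function $U_m$ is acceptable to AI $i$ exactly when

$$\mathbb{E}\left[ U_i \mid \text{both maximize } U_m \right] \;\ge\; \mathbb{E}\left[ U_i \mid \text{no deal} \right],$$

and the negotiation is over which $U_m$ to pick from the set satisfying this for both parties.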
0Lumifer
Then each AI makes its own choice and the two choices might well turn out to be incompatible. There is also the issue of information exchange -- basically, it will be hard for the two AIs to trust each other.
1Eugene
Absolute strength for one, absolute intelligence for another. If one AI has superior intelligence and compromises with one that asserts its will, it might be able to fool the assertive AI into believing it got what it wanted when it actually compromised. Alternatively, two equally intelligent AIs might present themselves to each other as though both are of equal strength, but one could easily be hiding a larger military force whose presence it doesn't want to affect the interaction (if it plans to compromise and is curious whether the other one will as well). Both of those scenarios result in C out-competing D.
0Izeinwinter
... Since I am, of course, a FAI (sarcasm!), I can tell you the answer to this. They obviously split the future time-streams of the universe by each committing instant civilization-wide suicide, or not, based on a quantum lottery. Anthropic engineering in this way ensures they do not have to fight each other at all, which would entail actual risk of people getting hurt. No, seriously, you want us to take guesses at how weakly godlike entities are going to interact? Pftrttfffff,mwhahahahahaahaaa.
0Lumifer
Sure. I find such speculations fun. YMMV, of course.

So, I read textbooks "wrong".

The "standard" way of reading a textbook (a math textbook or something) is, at least I imagine, to read it in order. When you get to exercises, do them until you don't think you'd get any value out of the remaining exercises. If you come across something that you don't want to learn, skip forwards. If you come across something that's difficult to understand because you don't fully understand a previous concept, skip backwards.

I almost never read textbooks this way. I essentially read them in an arbitrary ord... (read more)

1Adele_L
This is how I read too, usually. I think it's one of those things that works better for some people but not others. I've tried reading things the standard way, and it works for some books, but for other books I just get too bored trudging through the boring parts. BTW, I've also been reading HoTT, so if you want to talk about it or something feel free to message me!
1polymathwannabe
On the one hand, it's a good sign that you have a keen sense of what you need to know, how and where to look for it, and at what pace. On the other hand, authors who know more about a subject than you do must have had their reasons to choose the order in which they present their material. I'd say keep listening to your gut on what is important to read, but at least try to get acquainted with the other topics you're choosing not to go deeply into.

What do you guys think about having an ideas/brainstorming section? I don't see too much brainstorming of ideas here. Most posts seem to be very refined thoughts. What about a place to brainstorm some of the less refined thoughts?

0pcm
What would a separate section accomplish that couldn't be done by putting tags in posts/comments?
1Adam Zerner
Making it explicit that it's available. I think people are hesitant to make such posts because they don't know whether doing so is appropriate, since it is infrequently done.
-1polymathwannabe
This seems to be LW's collected wisdom on the matter. http://wiki.lesswrong.com/wiki/Futility_of_chaos
3pcm
Brainstorming does not rely on chaos. It's a method of using System 1 which delays any censoring by System 2. Some evidence of LW beliefs about it: here and here. CFAR teaches people to brainstorm more often.
2Adam Zerner
I'm a bit confused by what is meant by "futility of chaos", so forgive me if I misinterpreted it. Let me try to be a bit clearer about what I'm proposing, and let me know if futility of chaos addresses it. I'm saying that there are ideas you think are worth brainstorming, and there are ideas you feel confident enough about to write a post on and get some feedback. Right now it seems that people don't post about the "ideas worth brainstorming", and I suspect it'd be beneficial if they did and we discussed them. Futility of chaos seems to be addressing more "chaotic and random" ideas. I don't know enough about the math to really know what that means, but I sense that it's different from ideas that smart people on LW judge to be worth brainstorming.
1polymathwannabe
Brainstorming is too unstructured and unpredictable, a form of "creative disorder" that has received more credit than it deserves.
4Adam Zerner
What about discussing ideas that you think have a decent shot at being good and important, but that you can't explain fully and still aren't that confident in?
5Lumifer
Sure, that's what the open thread is for.
0Adam Zerner
I haven't spent enough time commenting on LW to be sure of this, but it doesn't seem to be used that way. Do you think it's used that way? If not, perhaps it would be beneficial to make it clearer that it can/should be used that way. Also, maybe it'd be a good idea to break the open thread into categories. E.g.:

* Ideas you're willing to work on to implement.
* Refined ideas.
* Unrefined ideas.
* Requests for advice.
* Small practical questions.
* Links to articles.
* Friendly conversation.
* Discussions that don't leave a trail. One use case would be if you want to talk about something personal but don't want a record of it on the internet.
1Lumifer
Then start using it this way; you don't need permission. LW has periodic discussions about the need / desirability / implementation of a more granular system of organizing posts. So far these discussions have resulted in nothing, apparently because "it's too hard to do" (tm). But again, if you think it's a good idea, just do it and we'll see how that experiment plays out. For an example, you can see how the media thread works.
0Adam Zerner
True, but I think permission would be beneficial because I sense that people are hesitant to go against the norms. I don't think I'm a good enough coder yet to contribute, but I'm starting a coding bootcamp on Monday and I do hope to be able to contribute in the next few months.
1Lumifer
Heh. Who, do you think, should be the one to give you the permission? :-) And how do you feel about permission culture in general? X-D Besides, I don't think there are any strong existing norms about putting brainstorming posts into the open thread.
1Adam Zerner
Some sort of site guidelines and a UI that makes it clear. For example, if there were categories, it would be clear that the posts in the category are acceptable. I'm far from a conformist, but I think norms definitely do have a purpose. I don't really have any strong opinions on permission culture in general that I could articulate well.

Could someone please give me some good arguments for a work ethic? I tend to oppose it, but the debate seems too easy so I may be missing something.

Having a work ethic might help you accomplish more things than you would without one.

It's a good reputation boost. "A highly skilled, hard-working x" might be more flattering than "a highly skilled x."

Work ethic might be a signal/facet of conscientiousness, a desirable trait in many domains.

3Ixiel
That makes sense; I hadn't thought of that. Thanks. Perhaps a critical mass of people would need to accept laziness as a virtue before the choice reads as "this good or that good" rather than "this good or the lack of this good."
4Baughn
It'll build habits that also make it easier to do things you want when not at work? That's the big one. I have things I want to do, in far mode, and I find that diligence at work translates to diligence off work. Admittedly I also love my job, but...
2Ixiel
Thanks for the reply! My question was unclear, but I meant the other meaning. I do strongly believe in doing well whatever one does, but not in seeking to do more work in the first place. I mean the idea that there's something more noble about working 40+ hours a week than not, and that people with sufficient means shouldn't retire in their thirties. Sure, one can build habits at work, but one can do so for less than 2000 hours of one's life per year, net of compensation. Admittedly this does not apply so much if you love your job, but hypothetically, if someone values leisure more, is there a way in which choosing that leisure is less ethical?
7Richard_Kennaway
"Work" can mean different things, and so also "work ethic". The way I use it, "work" is whatever you are serious (or at least want to be) about doing, whether it's something that matters in the larger scheme of things or not, and whether or not it earns money. (But having to earn a living makes it a lot easier to be serious about it.) "Leisure" is whatever you like doing but choose not to be serious about. In that sense, I'm not much interested in leisure. Idling one's days away on a tropical island is not my idea of fun, and I do not watch television. Valuing seriousness is what I would mean by "work ethic". What one should be serious about is a separate ethical question. When other people talk about "work", they might mean service to others, and by "leisure" service to oneself. I score low on the "service to others" metric, but for EA people, that is their work ethic. To others, "work" is earning a living, and "leisure" is whatever you do when you're not doing that. The work ethic relative to that concept is that the pay you get for your work is a measure of the value you are creating for others. If you are idling then you are neglecting your duty to create value all the years that you can, for time is the most perishable of all commodities: a day unused is a day lost to our future light-cone for ever.
3Ixiel
That is an interesting use of "work" and "leisure," and one with which I was not familiar. I am very serious about my leisure (depending on how you use "serious"... I love semantic arguments for fun, but not everybody does, so I'll cut that here). The more frequent use I have heard is close to its etymology: what one is allowed to do, as opposed to what one has a duty to do. That is anecdotal to the people I know, so may not be the standard. I am much more serious about what I am allowed to do, and what others are allowed to do, than about even a self-created duty. Very interesting, and I'd be happy to continue, but to restate the original question with help from the noticed ambiguity: is there a strong argument why spending 80,000 hours in a job for the job's sake is ethically superior to selling enough time to meet one's needs and using the rest for one's own goals?
3Richard_Kennaway
To give a more direct answer, "a job for the job's sake" sounds like a lost purpose. In harder times, everyone had to work hard for as many years as they could, to support themselves, their household, and their community, and the community couldn't afford many passengers. Having broken free of the Malthusian wolves, the pressure is off, but the attitudes remain: idleness is sinful. And then again, from the transhumanist point of view, the pressure isn't off at all, it's been replaced by a different one. We now have the prospect of a whole universe to conquer. How many passengers can the human race afford in that enterprise, among those able to contribute to it?
1Richard_Kennaway
Meeting one's needs is, by definition, necessary, and one's goals are, by definition, what one pursues. Who doesn't do that, beyond people incapable of supporting themselves and people drifting through life with no particular goals?
0Ixiel
Sure, that's true for both. The former is just more constrained, and I was looking for an argument for a over b. And thanks for defining; I had thought those definitions too obvious to bear mention. My bad.
5Lumifer
The answer really depends on the underlying value system. For example, most varieties of hedonism would find nothing wrong with retiring to the life of leisure at thirty. But if you value, say, self-actualization (a la Maslow), retiring early is a bad idea. Generally speaking, the experience of the so-called trust fund kids indicates that NOT having to work for a living is bad for you. You can also compare housewives to working women.
8Viliam_Bur
If you want to self-actualize in a way that does not (reliably, or soon enough) bring money, retiring early can be useful.
3Lumifer
I think there's some lack of clarity in this thread about what it means to "retire". There are two interpretations (see e.g. this post):

1. Retire means financial independence: not having to work for a living, so that you can focus your energy on what you want to do instead of what you have to do.
2. Retire means a carefree life of leisure, where you maximize your hedonics by doing easy and pleasant things and not doing hard and stressful things.

I think these two ways of retiring are quite different and lead to different consequences.
2Ixiel
I meant to imply the former, albeit with the possibility "what you want to do" is not restricted from including leisure/hedonics/pleasure.
3Lumifer
Technically, yes, though people mostly use (1) to mean doing something purposeful, an activity after which you can point and say "I made that", while (2) is essentially trying to get as close to wireheading as you currently can :-)
0Ixiel
Fair enough :)
0[anonymous]
They aren't totally unrelated because easy and pleasant things are less likely to earn you a living than hard and stressful things for obvious supply reasons (unless you're unusual compared to the rest of the labour market with respect to which kinds of things are easy and pleasant to you).
1Ixiel
Thank you for responding. Is there a reason you think it is a bad idea beyond "Lumifer says so"? I have thought about reading up on housewives, but not asking (the women's studies experts I know are VERY sensitive in their field, but quite engaging in others, so I'm afraid to talk shop). Could you recommend a source on each side?
1Lumifer
Sorry, don't have any links handy, but you should be able to google up trust-fund kids' issues quite easily. With respect to housewives it's mostly personal observations aka anecdata. I would be wary of studies on the subject as it is a political minefield and a hard thing to research due to confounders and fuzzy definitions.
2Ixiel
Yeah, likely to get hit over the latter :). The former is very familiar to me in my circles, and if anything they are more happy/fulfilled/productive than the wage reliant, though both extremes exist in both groups.
3Lumifer
I am not saying that working for a living is necessarily better, my point is that being financially independent has its own particular failure mode the existence of which should be taken into account.
1Ixiel
That's a very good point and too often neglected. There's too much betterness in folks' thoughts, not enough differentness, and the "best" situations fail in different ways than the "worst," which can succeed spectacularly in their own right.
1banx
It's less ethical if you think that you can get more resources by working, and that those resources can be used to create an ethically superior world.
1Ixiel
We might be mutually holding the other's point equal. Sure, one can get more money working, but I meant aside from that. Did you mean aside from the best alternative use of 40 hours per week?
1banx
I just meant that working might be an opportunity to better accomplish some goal you deem ethically relevant (e.g., by earning money and donating it or by developing FAI or the cure for some disease). I'm not arguing that it is. That depends on what the goals are and what your opportunities (both "work" and "leisure" using your definitions) are.
0DanielLC
You shouldn't retire in your thirties because it limits the amount you can help others.
3Lumifer
Aren't you assuming a particular value system?
2DanielLC
Yeah. I don't know any good reason if you're an egoist.
4Lumifer
Self-actualization, for example.
1Ixiel
Are you saying the workplace is a uniquely strong opportunity to make the world better as opposed to other avenues, or just that more money means more ability? If the former, why?
2DanielLC
Division of labor. If you're not best suited to helping people, you're better off doing what you are best suited for and hiring someone else to help people.

My 30-day karma just jumped over 40 points since I checked LW this morning. Either I've said something really popular (and none of my recent comments have karma that high), or there's a bug.

5Richard_Kennaway
I got about +30 as well, and only a small amount is due to recent upvotes. And despite the jump, I'm out of the top 30-day contributors list, which I've been in and out of the bottom of for some weeks. The other names in that list are regulars there, so they must have got some upvotes also. Perhaps some systematic downvoter had all his votes reversed?
4Adele_L
My guess is that someone with a similar political ideology to you upvoted forty of your comments on the recent political post. ETA: Well I've been struck by the mysterious mass-upvoter as well! I'm pretty sure the political motivation hypothesis is wrong now.
6[anonymous]
The same thing happened to me today - within 12 hours I got at least +1 karma on every single post of mine from the last month and a half or so, which happened to be primarily on the history of life / 'great filter' threads. I don't think it's ideological. Mysterious mass-upvoter?
2skeptical_lurker
Since my political ideology in that debate was trying to steelman both sides, I doubt this is the case, unless there is a fanatical steelmanner out there.
3gjm
I've seen several unexpected increases on the order of 10 points over the last couple of weeks. (I don't remember the exact dates.) My guess was gradual undoing of prior mass-downvoting, but a Mystery Mass Upvoter is certainly another possibility. [EDITED to add ...] A possible variant of the Mystery Mass Upvoter hypothesis: we have a Mystery Small-Mass Upvoter, who is upvoting old posts in Main (maybe because s/he is new here and reading through old material). But that only works if everyone affected has old posts in Main, which I don't think is the case.
2[anonymous]
Hypothesis: we are the subjects of an experiment. I seem to recall recent instances of a mysterious mass downvoter that produced several threads of people complaining / trying to figure out what could be done. What if someone is doing the same thing, but with upvotes, to look for bias in community reactions? Or they're just trolling. Whichever.
0gjm
Interesting idea. Though I don't think it's really indicative of any bad sort of bias if people get angrier about gratuitous downvotes than about gratuitous upvotes.
3NancyLebovitz
My karma's been running higher than I expected, too. I wish there were some way to track karma diffs. So far as I know, there's no way to do it for older comments and posts.
1A1987dM
So it's not just me? I also seemed to see something like that, but I assumed I just misremembered my previous 30-day karma score or something.
0A1987dM
Just gained another ~25 karma. Huh.
-4polymathwannabe
Indeed, it sounds like a bug. It might need direct fixing. Here, have a downvote. :-D

Trans-human thought experiment:

  • Scenario 1: A human brain is converted to a virtual brain through a destructive process (as described in many science-fiction stories). In what sense is this virtual intelligence the same "person" as the original, organic person?
  • Scenario 2: A human brain is converted to a virtual brain through a non-destructive process. The original, organic person lives on as before. In what sense is this virtual intelligence the same "person" as the original, organic person – is this the same as the answer in scenario
... (read more)
2Viliam_Bur
I am not sure about this, but it seems to me that in both cases it is the same person. It's just that in scenario 2 we have two copies that start to diverge at that point; they are both continuations of the old one, but are not the same as each other. This does not have a good equivalent in our intuition, because we usually don't "branch" this way. But you can imagine a magic spell that creates two identical humans from you. Both are you, but from the moment of copying they start evolving differently, so after some time it is just like two twins with a shared memory from before that moment.
0polymathwannabe
In both cases I'd say they're different persons. I can see why a theory of consciousness that argues that you're not the atoms, but the pattern, wouldn't care whether that pattern is realized in meat or in silicon, but my subjective experience of continuity of memories is what confirms that I'm still me. Once you copy my mind with zero loss onto a digital, durable substrate, my original brain would still have strong objections to being switched off.
[-][anonymous]10

Could someone recommend me a logic textbook? I need it to cover syntax and semantics for propositional and first-order classical logic, preferably also including material on intuitionistic logic and higher-order logics. I could really use material on any existing attempts to ground semantics or proof systems in computation, too.

"Computation and Logic" is my first candidate, though I want something else to go with it. This is for trying to work on logical probability research, and also because I've always been interested in type theory as a research field (hence wanting coverage of intuitionistic logic, which might as well be called computational logic what with the Curry-Howard Isomorphism).

2[anonymous]
The first five chapters of Marker's Model Theory will satisfy much of that, and include some information about type theory in the context of model theory. I know it doesn't satisfy all of your requirements, but it is a seriously good book with an excellent learning curve. I took a semester course covering the first three chapters in undergrad. It almost convinced me to work in mathematical logic, but sadly economic incentives trumped aesthetic ones.
1pragmatist
This is the textbook we used in graduate school, and it is very good. Not sure if this is what you were referring to as "Computation and Logic". It covers second order logic, but not intuitionistic logic as far as I can remember.
0[anonymous]
That's indeed the one I was referring to.

Searching for genes that make people smart -- we still have no idea...

we still have no idea...

No, this is an unmitigated triumph. It's amazing how people take such a negative view of this.

So let me get this straight: over the past few decades we have slowly moved from a viewpoint where Gould is a saint, intelligence doesn't exist and has no predictive value since it's a racist made-up concept promoted by incompetent hacks, and it has no genetic component and definitely nothing which could possibly differ between any groups at all, to a viewpoint where the validity of intelligence tests has been shown in multiple senses, the amount of genetic contribution has been accurately estimated, the architecture nailed down as highly polygenic & additive, the likely number of variants estimated, and we've started accumulating the sample sizes to start detecting variants, and not only have we detected 60+ variants with >90% probability* (see the remarks on the Bayesian posterior probability in the supplementary material), we even have 3 which pass the usual (moronic, arbitrary, unjustified) statistical-significance thresholds - and wait, there's more, they also predict IQ out of sample and many of the implicated variants are known to relate to the central nervou... (read more)

3Lumifer
We are looking at this in different contexts and using different baselines. You are talking about how long ago we started, with the genetic component of intelligence being the malicious fantasy of evil people, and how now it's just science. Sure (though you still can't discuss it publicly). I'm talking about this particular paper and how big a step it is compared to, say, a couple of years ago. My baseline is much more narrow and technical. It is "we look at the genome of a baby and have no idea what its IQ will be when it grows up". That is still largely the case, and the paper's ability to forecast does not look impressive to me. The fact that intelligence is largely genetic and highly polygenic is already "normal" for me -- my attitude is "yeah, sure, we know this, what have you done for me lately". I appreciate the historical context, which we are not free of by any stretch of the imagination (so, no, I don't see unmitigated triumphs), but I was not commenting on progress over the last half-century. I want out-of-sample predictions of noticeable magnitude, and I think getting there will take a bit more than just engineering.
6gwern
This paper validates the approach (something a lot of people, for a lot of different reasons, were skeptical of), and even on its own merits we still get some predictive power out of it: the 3 top hits cover a range of ~1.5 points, and the 69 variants at 90% confidence predict even more. (I'm not sure how much, since they don't bother to use all their data, but if we assume the 69 effects are evenly distributed between 0 and 0.5 points, then the mean is 0.25 and the total predictive power is more than a few points.) What use is this result? Well, what use is a new-born baby? As the cryptographers say, 'attacks only get better'. And, uh, why would you think that? There's no secret sauce here. Just take a lot of samples and run a regression. I don't think they even used anything particularly complex like a lasso or elastic net.
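Spelling out the parenthetical arithmetic (under the stated uniformity assumption; note this is the summed range of the variants' effects, not per-person prediction accuracy):

$$69 \times \frac{0 + 0.5}{2} = 69 \times 0.25 \approx 17 \text{ IQ points}$$

which is indeed "more than a few points" of total additive effect.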
3Lumifer
Pretend for a second it's a nutrition study and apply your usual scepticism :-) You know quite well that "just run a regression" is, um... rarely that simple. To give one obvious example, interaction effects are an issue, including interaction between genes and the environment.

Pretend for a second it's a nutrition study and apply your usual scepticism :-) You know quite well that "just run a regression" is, um... rarely that simple.

No, that's the great thing about genetic associations! First, genes don't change over a lifetime, so every association is in effect a longitudinal study where the arrow of time immediately rules out A<-B or reverse causation in which IQ somehow causes particular variants to be overrepresented; that takes out one of the three causal pathways. Then you're left with confounding - but there's almost no way for a third variable to pick out people with particular alleles and grant them higher intelligence, no greenbeard effect, and population differences are dealt with by using relatively homogeneous samples & controlling for principal components - so you don't have to worry much about A<-C->B. So all you're left with is A->B.

To give one obvious example, interaction effects are an issue, including interaction between genes and the environment.

But they're not. They're not a large part of what's going on. And they don't affect the associations you find through a straight analysis looking for additive effects.

4Lumifer
But their expression does. How do you know?
3gwern
An expression in circumstances dictated by what genes one started with. Because if they were a large part of what was going on, the estimates would not break down so cleanly and the methods would not work so well.
2Azathoth123
Keep in mind that the outside view of biological complexity is that Or to phrase this another way:
5gwern
I don't think the outside view is relevant here. We have coming up on a century of twin studies and behavioral genetics, and very motivated people coming up with possibilities for problems, and so far the traditional estimates are looking pretty good: for example, when people go and look at genetics directly, the estimates for simple additive heritability look very similar to the traditional estimates. Just the other day there was an example of a SNP study confirming the estimates from twin studies, "Substantial SNP-based heritability estimates for working memory performance", Vogler et al 2014. If all these complexities were real and serious problems and the Outside View advised us to be skeptical, why do we keep finding that the SNP/GCTA estimates look exactly like we would have predicted?
1Azathoth123
OK, I confess I have no idea what SNP and GCTA are. As for the study Lumifer linked to, Razib Khan's analysis of it is that it suggests intelligence is a complex polygenic trait. This should not be surprising, as it is certainly an extremely complex trait in terms of phenotype.

Does LW markup have anything for text formatting that does what the HTML <sub> tag does?

2gwern
Some forms of Markdown do, but not LW's, AFAIK. If you're patient, you can use the LaTeX formatting to do subscripts.
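For reference, the relevant LaTeX syntax (generic LaTeX; not a claim about exactly how LW's LaTeX support parses it):

```latex
% single-character subscripts need no braces:
$x_1$
% multi-character subscripts do:
$x_{n+1}$, $\mathrm{CO}_{2}$
```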

I've been thinking about the Roko's Basilisk thought experiment, considering the drivers of creating a Basilisk and the next logical step such an entity might conceivably take, and the risk it presents in the temptation to protect ourselves. Namely, that we may be tempted to create an alternative FAI which would serve to protect humankind against uFAI, a protector AI, and how this distorts the Basilisk.

A protector AI would likely share, evolve, or copy from any future Basilisk or malevolent intelligence in order to protect and/or prevent us from it or its cre... (read more)