If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Open thread, Sep. 14 - Sep. 20, 2015

Could we live forever? - Hey guys, I made a film about transhumanism for BBC News. It features some people in this community and some respected figures. Let me know what you think and if I missed anything, etc.

https://www.youtube.com/watch?v=STsTUEOqP-g&feature=youtu.be

3username2
Awesome! You should repost this as a top level post.

26 Things I Learned in the Deep Learning Summer School

At the beginning of August I got the chance to attend the Deep Learning Summer School in Montreal. It consisted of 10 days of talks from some of the most well-known neural network researchers. During this time I learned a lot, way more than I could ever fit into a blog post. Instead of trying to pass on 60 hours' worth of neural network knowledge, I have made a list of small interesting nuggets of information that I was able to summarise in a paragraph.

At the time of writing, the summer school website is still online, along with all the presentation slides. All of the information and most of the illustrations come from these slides and are the work of their original authors. The talks at the summer school were filmed as well; hopefully they will also find their way onto the web.

Probably the biggest cryonics story of the year. In the print edition of The New York Times, it appeared on the front page, above the fold.

A Dying Young Woman's Hope in Cryonics and a Future, by Amy Harmon

http://www.nytimes.com/2015/09/13/us/cancer-immortality-cryogenics.html

You can also watch a short documentary about Miss Suozzi here:

http://www.nytimes.com/video/science/100000003897597/kim-suozzis-last-wishes.html

9advancedatheist
Yet som there be that by due steps aspire / To lay their just hands on that Golden Key / That ope's the Palace of Eternity. (John Milton, Comus, lines 12-14) May Kim find that Golden Key some day.
7James_Miller
I wonder if the article will increase Alcor's membership? As "Why have so few people signed up for cryonics?" is a big mystery for cryonics supporters such as myself, we should use the opportunity of the article to make predictions about its impact. I predict that the article will boost Alcor's membership over the next year by 10% above trend, which basically means membership will be 10% higher a year from now than it is currently. EDIT: I predict Alcor's membership will be 11% higher a year from now than it is today. Sorry for the poorly written comment above.
5gjm
Are those two 10% figures equal only by coincidence? To me, "boost membership by 10% above trend" means either "increase this year's signups by 10% of what they would otherwise have been" or else "increase this year's signups enough to make membership a year from now 10% higher than it otherwise would have been". The second of these is equivalent to "membership will be 10% higher a year from now" iff membership would otherwise have been exactly unaltered over the year, which would mean that signups are a negligibly small fraction of current membership. The first is equivalent to "membership will be 10% higher a year from now" iff m+1.1s = 1.1m where m,s are current membership and baseline signups for the next year, which is true iff m = 11s. Those are both rather specific conditions, and the first seems pretty unlikely. Did you actually mean either of them, or have I misunderstood?
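To make gjm's algebra concrete, here is a quick check with hypothetical numbers (the figures below are invented for illustration, not Alcor's actual membership):

    # If current membership m is exactly 11x the baseline signups s,
    # a 10% boost in signups leaves membership 10% higher a year later.
    m, s = 1100, 100            # hypothetical: m = 11s
    baseline = m + s            # membership in a year with no article effect
    boosted = m + 1.1 * s       # signups boosted 10% above trend
    print(baseline / m)         # ~1.09: trend alone gives ~9% growth
    print(boosted / m)          # 1.1: exactly 10% higher than today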
0Lumifer
I am reading the grandparent literally as "increase membership" which does imply that the current trend is flat and the membership numbers are not increasing.
8gjm
Could be. But is Alcor really doing so badly? (Or: does James_Miller think they are?) The graphs on this Alcor page seem to indicate that membership is in fact increasing by at least a few percent year on year, even if people are no longer counted as members after cryosuspension.
2Lumifer
Hm. Yes, Alcor's membership is going up nicely. I don't know what James_Miller had in mind, then.
4btrettel
I made this into a prediction on PredictionBook.
1ChristianKl
Is the relevant data publically accessible?
0btrettel
Yes, the data is online.
0entirelyuseless
My understanding is that the number of people signed up is in the thousands, which if it is correct means probably a bit less than one in a million persons. You might have meant it rhetorically, but if it is true that it is a "big mystery" to you why most people have not signed up, then your best guess for the reason for this should be that signing up for cryonics is foolish and useless, just as if a patient in a psychological ward finds himself thinking, "I wonder why so few people say they are Napoleon?", his best guess should be that the reason for this is that the people he knows, including himself, are not in fact Napoleon.

As another example, if you are at the airport and you see two lines while you are checking in, a very long one and a very short one, and you say, "It's a big mystery to me why so many people are going in that long line instead of the short one," then you'd better get in that long line, because if you get in the short one, you are going to find yourself kicked out of it. On the other hand, if you do know the reasons, you may be able to get in the short line.

In the cryonics case, this is pretty much true no matter how convincing you find your reasons, until you can understand why people do not sign up.

But the intellectual quality of some of the people who have signed up for cryonics is exceptionally high (Hanson, Thiel, Kurzweil, Eliezer). Among the set of people who thought they were Napoleon (excluding the original), I doubt you would find many who had racked up impressive achievements.

if you are at the airport and you see two lines while you are checking in, a very long one and a very short one, and you say, "It's a big mystery to me why so many people are going in that long line instead of the short one," then you'd better get in that long line, because if you get in the short one, you are going to find yourself kicked out of it.

What if you see Hanson, Thiel, Kurzweil, and Eliezer in the short line, ask them if you should get in the short line, and they say yes?

9[anonymous]
"What if you see Hanson, Thiel, Kurzweil, and Eliezer in the short line, ask them if you should get in the short line, and they say yes?" As I pointed at last time you brought this up,these people aren't just famous for being smart, they're also famous for being contrarians and futurists. Cryonics is precisely an area in which you'd expect them to make a bad bet, because it's seen as weird and it's futuristic.
2Rob Bensinger
This depends on whether you model contrarianism and futurism as a bias ('Hanson is especially untrustworthy about futurist topics, since he works in the area') v. modeling contrarianism and futurism as skills one can train or bodies of knowledge one can learn ('Hanson is especially trustworthy about futurist topics, since he works in the area').
3[anonymous]
My typical heuristic for reliable experts (taken from Thinking, Fast and Slow, I think) is that if experts have tight, reliable feedback loops, they tend to be more trustworthy. Futurism obviously fails this test. Contrarianism isn't really a "field" in itself, and I tend to think of it more as a bias... although EY would obviously disagree.
1Rob Bensinger
Then it might be that futurism is irrelevant, rather than being expertise-like or bias-like. (Unless we think 'studying X while lacking tight, reliable feedback loops' in this context is worse than 'neither studying X nor having tight, reliable feedback loops.') Thiel, Yudkowsky, Hanson, etc. use "contrarian" to mean someone who disagrees with mainstream views. Most contrarians are wrong, though correct contrarians are more impressive than correct conformists (because it's harder to be right about topics where the mainstream is wrong).
1[anonymous]
In this case futurism is two things in these people:

1. A belief in expertise about the future.

2. A tendency towards optimism about the future.

Combined, these mean that these people both think cryonics will work in the future, and are more confident in this assertion than warranted. I don't think so... it's more someone who has the tendency (in the sense of an aesthetic preference) to disagree with mainstream views. In this case, they would tend to be drawn towards cryonics because it's out of the mainstream, which should give us less confidence that they're drawn towards cryonics because it's correct.
0Rob Bensinger
One of the most common ways they use the word "contrarian" is to refer to beliefs that are rejected by the mainstream, for whatever reason; by extension, contrarian people are people who hold contrarian beliefs. (E.g., Galileo is a standard example of a "correct contrarian" whether his primary motivation was rebelling against the establishment or discovering truth.) "Aesthetic preference" contrarianism is a separate idea; I don't think it matters which definition we use for "contrarianism".
1[anonymous]
I think it matters in this context. If these people are contrarian simply because they happen to have lots of different views, then it's irrelevant that they're contrarian. If they're contrarian because they're DRAWN towards contrarian views, it means they're biased towards cryonics.
1Rob Bensinger
I agree it matters in this case, but it doesn't matter whether we use the word "contrarianism" vs. tabooing it. Also, your summary assumes one of the points under dispute: whether it's possible to be good at arriving at true non-mainstream beliefs ('correct contrarianism'), or whether people who repeatedly outperform the mainstream are just lucky. 'Incorrect contrarianism' and 'correct-by-coincidence contrarianism' aren't the only two possibilities.
1[anonymous]
Ok, so to summarize:

1. These people are futurists.
1a. If you believe futurists have more expertise on the future, then they are more likely to be correct about cryonics.
1b. If you believe expertise needs tight feedback loops, they are less likely to be correct about cryonics.
1c. If you believe futurists are drawn towards optimistic views about the future, they are less likely to be correct about cryonics.

2. These people are contrarians.
2a. If you believe they have a "correct contrarian cluster" of views, they are more likely to be correct about cryonics.
2b. If you believe that they arrived at contrarian views by chance, they are no more or less likely to be correct about cryonics.
2c. If you believe that they arrived at contrarian views because they are drawn to contrarian views, they are less likely to be correct about cryonics.

I believe 1b, 1c, and 2c. You believe 1a and 2a. Is that correct?
6Lumifer
The intellectual quality of some people who have NOT signed up for cryonics is exceptionally high as well.
2Rob Bensinger
But the average is lower, and not signing up for cryonics is a "default" action: you don't have to expend thought or effort in order to not be signed up for cryonics. A more relevant comparison might be to people who have written refutations or rejections of cryonics.
0Lumifer
I don't think the average matters; it's the right tail of the distribution that's important. Take, say, people with 130+ IQ -- that's about 2.5% of your standard white population, and the overwhelming majority of them are not signed up. In fact, in any IQ quantile only a minuscule fraction has signed up.
0Rob Bensinger
entirelyuseless made the point that low cryonics use rates in the general population are evidence against the effectiveness of cryonics. James Miller responded by citing evidence supporting cryonics: that cryonicists are disproportionately intelligent/capable/well-informed. If your response to James is just that very few people have signed up for cryonics, then that's restating entirelyuseless' point. "The intellectual quality of some people who have NOT signed up for cryonics is exceptionally high" would be true even in a world where every cryonicist were more intelligent than every non-cryonicist, just given how few cryonicists there are.
4Lumifer
No, I don't think he did. The claim that low uptake rate is evidence against the effectiveness of cryonics is nonsense on stilts. entirelyuseless' point was that if you are in a tiny minority and you don't understand why the great majority doesn't join you, your understanding of the situation is... limited. James Miller countered by implying that this problem can be solved if one assumes that it's the elite (IQ giants, possessors of secret gnostic knowledge, etc.) which signs up for cryonics and the vast majority of the population is just too stupid to take a great deal when it sees it. My counter-counter was that you can pick any measure by which to choose your elite (e.g. IQ) and still find that only a minuscule fraction of that elite chose cryonics -- which means that the "just ignore the stupid and look at the smart ones" argument does not work.
0entirelyuseless
Someone who mistakenly believes that he is Napoleon presumably thinks that he himself is impressive intellectually, and in the artificial example I was discussing, he would think that others who believe the same thing are also impressive. However, it's also true that outside observers would not admit that, and in the cryonics case many people would, so in this respect the cryonics case is much more favorable than the Napoleon example. However, as Lumifer pointed out, this is not a terribly strong positive argument, given that you will be able to find equally intelligent people who have not signed up for cryonics.

In the Hanson etc. airport situation, I would at least ask them why everyone else is in the long line, and if they had no idea then I would be pretty suspicious. In the cryonics case, in reality, I would expect that they would at least have some explanation, but whether it would be right or not is another matter. Ettinger at least thought that his proposal would become widely accepted rather quickly, and seems to have been pretty disappointed that it was not.

In any case, I wasn't necessarily saying that signing up for cryonics is a bad thing, just that it seems like a situation where you should understand why other people don't, before you do it yourself.
2Lumifer
gjm posted a link to the data: Alcor says it has about 1,000 members at the moment.
0entirelyuseless
Yes, I meant including other groups. It might be around 2,000 or so total but I didn't want to assert that it is that low because I don't know that for a fact.
0[anonymous]
But the logic that makes signing up for cryonics make sense is the same logic that humans are REALLY BAD AT applying. Following the crowd is generally a good heuristic, but you have to recognize its limitations.
2entirelyuseless
In principle this is saying that you know why most people don't sign up, so if you're right about that, then my argument doesn't apply to your case.
5Fluttershy
I'm impressed at how positively the author portrayed cryonicists. The parts which described the mishaps which occurred during/before the freezing process were especially moving.
3advancedatheist
The article discusses the Brain Preservation Foundation. The BPF has responded here: "A Courageous Story of Brain Preservation, 'Dying Young' by Amy Harmon, The New York Times." http://www.brainpreservation.org/a-courageous-story-of-brain-preservation-dying-young-by-amy-harmon-the-new-york-times/

How Grains Domesticated Us by James C. Scott. This may be of general interest as a history of how people took up farming (a more complex process than you might think), but the thing that I noticed was that there are only a handful (seven, I think) of grain species that people domesticated, and it all happened in the Neolithic Era. (I'm not sure about quinoa.) Civilized people either couldn't or wouldn't find another grain species to domesticate, and civilization presumably wouldn't have happened without the concentrated food and feasibility of social control that grain made possible.

Could domesticable grain be a rather subtle filter for technological civilization? On the one hand, we do have seven species, not just one or two. On the other, I don't know how likely the kind of biome that makes domesticable grain possible is.

I suspect that developing a highly nutritious crop that is easy to grow in large quantities is a prerequisite for technological civilization. However, I wonder if something other than grains might have sufficed (e.g. potatoes).

7NancyLebovitz
One of the points made in the video is that it's much easier to conquer and rule people who grow grains than people who grow root crops. Grains have to be harvested in a timely fashion-- the granaries can be looted, the fields can be burnt. If your soldiers have to dig up the potatoes, it just isn't worth it.
6Douglas_Knight
Yes, it's easier to loot people who grow grains than roots, but I don't think that's so relevant to taxation by a stationary bandit.
3polymathwannabe
Hmm, abundant and easily accessible food is also a requisite for the evolution of eusocial animal colonies. I guess that's what cities ultimately are.
6Lumifer
Grain is just food that happened to possess two essential features:

* Making it was sufficiently productive, that is, a group of humans could grow more grain than they themselves would need;

* It could be stored for a long time with only minor spoilage.

Having reserves of stored food to survive things like winters, droughts, and plagues of locusts is rather essential for a burgeoning civilization. Besides, without non-perishable food it's hard to have cities.
0VoiceOfRa
You left out an important property:

* Making it requires that the makers stay in the same place for a large fraction of the year.

Furthermore, if they are forced to leave for any reason, all the effort they have expended so far is wasted and they probably can't try again until next year.
0Lumifer
That's a relevant feature for figuring out the consequences of depending on grain production. I'm not sure it's a relevant feature for the purposes of deciding why growing grains became so popular.
5Vaniver
This seems somewhat unlikely to me, and we might be able to answer it by exploring "grain." The handful of non-grain staple crops around the world suggests to me that, for this filter to bind, a planet would need to have no controllable vegetation sufficient for humans to sustain themselves on (either directly, or indirectly through feed animals). Even ants got agriculture to work.
9[anonymous]
Potatoes, sweet potatoes, turnips, taro, tapioca, those weird South American tubers related to Malabar spinach, and the tubers of runner beans immediately come to mind as long-term-storable calorie crops. Of note, the consumption of flour has recently been pushed back to at the very least 32,000 years ago, probably much longer, even if field agriculture has not: http://www.npr.org/sections/thesalt/2015/09/14/440292003/paleo-people-were-making-flour-32-000-years-ago
1Lumifer
Doesn't that depend on the climate? I don't know how long you can store potatoes and such in tropical climates -- my guess is not for long. If you are in, say, Northern Europe, the situation changes considerably. Plus, the tubers you name are predominantly starch, and people relying on them as a staple would have issues with at least insufficient protein.
5[anonymous]
Climate does make a difference, for sure. But there are two things to consider. One, climates that are warmer let things rot easier but tend to have longer or continuous growing seasons. Two, climate control is a thing that people do (dig deep enough and you get cooler temperature almost anywhere on Earth) as is processing for storage via drying or chemical treatment. Forgot to mention nuts too. You are certainly right about protein. Something else must be added, be it meat or veggies of some sort or legumes.
3Lumifer
Hm, interesting. I don't know of any culture which heavily relied on nuts as a food source. I wonder why that is so. Nuts are excellent food -- fairly complete nutritionally, high caloric density, don't spoil easily, etc. Moreover, they grow on trees, so once you have a mature orchard, you don't need to do much other than collect them.

One possibility is that trees are too "inflexible" for agriculture -- if your fields got destroyed (say, an army rolled through), you'll get a new crop next year (conditional, of course, on having seed grain, labour to work the fields, etc.). But if your orchard got chopped down, well, the wait till the new crop is much longer. A counter to this line of thought is complex irrigation systems, which are certainly "inflexible" and yet were very popular. I wonder how land-efficient (calories/hectare) nut trees are.

Ah, I just figured out that coconuts are nuts and there are Polynesian cultures which heavily depend on them. But still, there is nothing comparable in temperate regions, even though a lot of nut trees and bushes grow there.
3[anonymous]
I'm aware of pre-European Californian societies whose main calorie crop was acorns, rendered edible by soaking after crushing to remove irritating tannins and then cooked, and sometimes preserved by soaking in various other substances.
1Lumifer
Yes, a good point. But weren't these American Indians mostly hunter-gatherers? I don't know if you can say that they engaged in agriculture. Some other tribes did, but those didn't rely on nuts or acorns.
6[anonymous]
Eh, to my mind the boundary between agriculture and gathering is fuzzy when your plants live a long time, grow pretty thickly, and you encourage the growth of those you like. For example, there are 11.5k-year-old seedless fig trees found in the Middle East, a thousand years before there's any evidence of grain field agriculture. Those simply don't grow unless planted by humans.
2Lumifer
All true. Still, grain very decisively won over nuts. I wonder if there's a good reason for that or it was just a historical accident. Maybe you can just make many more yummy things out of flour than out of nuts. Or maybe nuts don't actually store all that well because of fats going rancid...

AI risk going mainstream

This week on the BBC you may get the impression that the robots have taken over. Every day, under the banner Intelligent Machines, we will bring you stories online, on TV and radio about advances in artificial intelligence and robotics and what they could mean for us all.

Why now? Well at the end of last year Prof Stephen Hawking told the BBC that full artificial intelligence could spell the end for mankind. [...] That gloomy view started a public debate. Roboticists and computer scientists who specialise in the AI field rushed to reassure us that the "singularity", the moment when machines surpass humans, is so far off that it is still the stuff of science fiction.

Looks like Stephen Hawking is finally someone of high enough status that he can say this sort of thing and people will take him seriously.

4ChristianKl
That's a pretty self-serving explanation from the BBC. I think that Bostrom's book played a major role in the change we have seen in the last year. It can be read by intelligent people, and then they understand the problem. Beforehand there was no straightforward way to get a deep understanding of the issue.
0skeptical_lurker
I came across Bostrom a decade ago. I'm sure his book is great, but 'Bostrom writes a book' isn't that different from 'Bostrom has a website'. Also, Kurzweil had some best-selling books out a long time ago. Elon Musk also made similar claims lately, and so did Bill Gates. Bostrom is pretty smart, but he's not a pre-existing household name like these guys.
3ChristianKl
Yes, but with a quite different message. No, it's quite different. I don't think Bill Gates would have made those claims if it weren't for Bostrom's book. Bill Gates also promotes the book to other people. Bill Gates likely wouldn't tell important people "go read up on Bostrom's website how we should think about AGI risk" the way he does with the book. Elon Musk is a busy guy with 80-hour workweeks. Bostrom and FHI made the case to him that UFAI risk is important. Personal conversations were likely important, but reading Bostrom's book helped raise the importance of the issue in Elon's mind.
1skeptical_lurker
Oh, so Bostrom was behind these three people? Then his book is more important than I thought.
4ChristianKl
I'm not saying that Bostrom was behind Stephen Hawking's remarks, but I think he's partly responsible for Musk's and Gates's positions. When it comes to Musk, I think there was a Facebook post a while ago about FHI's efforts in drafting Musk for the cause. With Gates there's https://www.youtube.com/watch?v=6DBNKRYVY8g where Gates and Musk sit at a conference for the Chinese elite and get interviewed by Baidu's CEO. As part of that, Gates gets asked for his take on AI risk, and he says that he's concerned and that people who want to delve deeper into the issue should read Bostrom's book. As far as the timeline goes, I think it's probable that Gates's public comments on the issue came after he read the book. I don't think that a smart person suddenly starts to fear AI risk because they read in a newspaper that Stephen Hawking is afraid of it. On the other hand, a smart person who reads Bostrom's book can be convinced by the book's case that the issue is really important. That's something a book can do that newspapers usually don't. Books that express ideas in a way that convinces the smart people who read them are powerful.
2skeptical_lurker
Well, Stephen Hawking is far smarter than most people, so on most subjects with which Stephen Hawking is familiar it would be a good idea to update in the same direction as him, unless you are an expert on the subject too. Also, it raises AI risk as a possible concern, at which point people might try to find more information, such as Bostrom's book or website. So yes, people get more information from reading a book than from reading a newspaper article, but the article might be what led them to read the book in the first place.

A while back, I was having a discussion with a friend (or maybe more of a friendly acquaintance) about linguistic profiling. It was totally civil, but we disagreed. Thinking about it over lunch, I noticed that my argument felt forced, while his argument seemed very reasonable, and I decided that he was right, or at least that his position seemed better than mine. So, I changed my mind. Later that day I told him I'd changed my mind and I thought he was right. He didn't seem to know how to respond to that. I'm not sure he even thought I was being serious at first.

Have other people had similar experiences with this? Is there a way to tell someone you've changed your mind that lessens this response of incredulity?

6Strangeattractor
Sometimes saying why you changed your mind can help. In more detail than "his position seemed better than mine". But sometimes it takes doing some action that is in line with the new idea in order for other people to think you may be serious. Another thing that may help is to wait some time before telling the person. "Later that day" makes it seem like a quick turnaround. Waiting until the next day to say something like "I've had some time to think about it, and I think you were right about X" might make more sense to the other person and lessen the incredulity. Also, it depends on what your past history has been with this person, and what they have observed in your behaviour.
1Dahlen
It happened to me only with people who were extremely, unreasonably cynical about people's rationality in the first place (including their own). People who couldn't update on the belief of people being unable to update on their beliefs. There's an eerie kind of consistency about these people's beliefs, at least for that much one can give them credit... You have to engage in some extra signaling of having changed your own mind; just stating it wouldn't be as convincing.

The Fallacy of Placing Confidence in Confidence Intervals

Welcome to the website for the upcoming paper "The Fallacy of Placing Confidence in Confidence Intervals." Here you will find a number of resources connected to the paper, including the paper itself, the supplement, teaching resources and, in the future, links to discussion of the content.

The paper is accepted for publication in Psychonomic Bulletin & Review.

pdf

Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true parameter value in some known proportion of repeated samples, on average. The width of confidence intervals is thought to index the precision of an estimate; CIs are thought to be a guide to which parameter values are plausible or reasonable; and the confidence coefficient of the interval (e.g., 95%) is thought to index the plausibility that the true parameter is included in the interval. We show in a number of examples that CIs do not necessarily have any of these properties

...
0[anonymous]
I just read through this, and it sounds like they're trying to squish a frequentist interpretation on a Bayesian tool. They keep saying how the confidence intervals don't correspond with reality, but confidence intervals are supposed to be measuring degrees of belief. Am I missing something here?
7VincentYu
I briefly skimmed the paper and don't see how you are getting this impression. Confidence intervals are—if we force the dichotomy—considered a frequentist rather than Bayesian tool. They point out that others are trying to squish a Bayesian interpretation on a frequentist tool by treating confidence intervals as though they are credible intervals, and they state this quite explicitly (p.17–18, emphasis mine):
4[anonymous]
Hmmm, yes, I suppose I was making the same mistake they describe... I thought that confidence intervals were what credible intervals actually are.

I see. Looking into this, it seems that the (mis)use of the phrase "confidence interval" to mean "credible interval" is endemic on LW. A Google search for "confidence interval" on LW yields more than 200 results, of which many—perhaps most—should say "credible interval" instead. The corresponding search for "credible interval" yields fewer than 20 results.
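Since the distinction can feel abstract, here is a small simulation (my sketch, not from the paper) of the frequentist property that defines a confidence interval: a 95% CI for a normal mean covers the true value in roughly 95% of repeated samples, regardless of anyone's degrees of belief.

    import random, statistics

    true_mu, n, trials, covered = 10.0, 30, 10_000, 0
    for _ in range(trials):
        sample = [random.gauss(true_mu, 2.0) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        lo, hi = m - 1.96 * se, m + 1.96 * se  # normal approximation to the t interval
        covered += lo <= true_mu <= hi
    print(covered / trials)  # close to 0.95 (slightly under, since 1.96 ignores the t correction)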

How many hours of legitimate work do you get done per day?

Legitimate = uninterrupted, focused work. Regarding the time you spend working but not fully focused, use your judgement in scaling it. I.e., maybe an hour of semi-productive work = 0.75 hours of legitimate work.

Edit: work doesn't only include work for your employer/school. It could be self-education, side projects, etc. It doesn't include chores or things like casual pleasure reading, though. Per day = per day that you intend to put in a full day's work.

[pollid:1029]

3Vladimir_Golovin
I do about 3 hours of legit work when I'm in my usual situation (family, work), but I do way more when I'm alone, both on- and off-the-grid: 12 hours or even more (of course assuming that the problem I'm working on is workable and I don't hit any serious brick walls). My last superfocus period lasted for about two weeks, it happened when my family went on vacation, and I took a mini-vacation from work myself (though the task I was working on was pretty trivial). My longest superfocus period was about 45 days, it happened on a long off-the-grid vacation.
3Gunnar_Zarncke
In the absence of any indication whether this includes weekends, I assumed that it doesn't. On weekends my productivity is way lower.
2Adam Zerner
Good point. I intended for it to mean "on days where you intend to put in a full day's work". I'm a little crazy, so for me that's every day :) But I definitely should have clarified.
2Gunnar_Zarncke
I also don't strictly distinguish between work days and other days, but you also clarified that the time shouldn't include chores, which are work too but not usually associated with work for money or education, so I had to make some cut. If you had included any kind of productive work, the numbers would have read differently. Lots of pleasure reading, e.g. LW, can count as such; the line (or factor) could be how much it contributes to your own future development.
0Adam Zerner
This is way lower than I expected. Thoughts?
4Gunnar_Zarncke
Maybe you should have added another poll that asked for formally expected or billed hours.
3lmm
It's about where I expected. I think 6 is probably the best you can do under ideal circumstances. Legitimate, focussed work is exhausting. If you're looking for bias, this is a community where people who are less productive probably prefer to think of themselves as intelligent but akratic. Also, you've asked at the end of a long holiday, for any students here.

Should we actually expect 'big world immortality' to be true? I know the standard LW response is that what we should care about is measure, but what I'm interested in is whether it should be true that, from every situation in which we can find ourselves, we should expect a never-ending continuity of consciousness?

Max Tegmark has put forth a couple of objections: the original one (apart from simple binary situations, a consciousness often undergoes diminishment before dying and there's no way to draw continuity from it to a world in which it survives) an...

2ChristianKl
Could you define exactly what you mean with 'big world immortality'?
2qmotus
Quantum immortality is an example, but something similar would arguably also apply to, for example, a multiverse or a universe of infinite size or age. Basically the idea that an observer should perceive subjective immortality, since in a big world there is always a strand in which they continue to exist. Edit: Essentially, I'm talking about cryonics without freezers.

I have a variant on linear regression. Can anyone tell me what it's called / point me to more info about it / tell me that it's (trivially reducible to / nothing like) standard linear regression?

Standard linear regression has a known matrix X = x(i,j) and a known target vector Y = y(j), and seeks to find weights W = w(i) to best approximate X * W = Y.

In my version, instead of knowing the values of the input variables (X), I know how much each contributes to the output. So I don't know x(i,j) but I kind of know x(i,j) * w(i), except that W isn't really a th...
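(For reference, the standard setup described above, in conventional rows-are-observations form; a minimal NumPy sketch with made-up numbers:

    import numpy as np

    X = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 7.0]])                 # hypothetical known inputs
    Y = np.array([5.0, 11.0, 19.0])            # known targets
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares weights
    print(W)                                   # X @ W approximates Y

The question is what replaces this when the x(i,j) themselves are unknown and only the contributions x(i,j) * w(i) are, sort of, observed.)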

3bogus
Sounds like your problem is fitting a sparse matrix, i.e. where you want many entries to be 0. This is usually called compressed sensing, and it's non-trivial.
2Lumifer
Well, it's going to depend on some specifics and on how much data you have (with the implications for the complexity of the model that you can afford), but the most basic approach that comes to my mind doesn't involve any regression at all. Given your historical data ("I have a list of previous widgets, and how much they sold in each city") you can convert the sales per widget per city into percentages (e.g. widget A sold 27% in New York, 15% in Austin, etc.) and then look at the empirical distribution of these percentages by city. The next step would be introducing some conditionality -- e.g. checking whether the sales percentage per city depends, for example, on the number of cities where the widget was sold. Generally speaking, you want to find some structure in your percentages by city, but what kind of structure is there really depends on your particular data.
2philh
The problem - at least the one I'm currently focusing on, which might not be the one I need to focus on - is converting percentages-by-city on a collection of subsets into percentages-by-city in general. I'm currently assuming that there's no structure beyond what I specified, partly because I'm not currently able to take advantage of it if there is.

A toy example, with no randomness, would be: widget A sold 2/3 in city X and 1/3 in city Y. Widget B sold 6/7 in city X and 1/7 in city Z. Widget C sold 3/4 in city Y and 1/4 in city Z. Widget D is to be sold in cities X, Y and Z. What fraction of its sales should I expect to come from each city? The answer here is 0.6 from X, 0.3 from Y and 0.1 from Z, but I'm looking for some way to generate these in the face of randomness.

(My first thought was to take averages - e.g. city X got an average of (2/3 + 6/7)/2 = 16/21 of the sales - and then normalize those averages. But none of the AM, GM and HM (arithmetic, geometric and harmonic means) gave the correct results on the toy version, so I don't expect them to do well with high randomness. It might be that with more data they come closer to being correct, so that's something I'll look into if no one can point me to existing literature.)
6skeptical_lurker
So, there's some sort of function mapping from (cities, widgets) -> sales, plus randomness. In general, I would say use some standard machine learning technique, but if you know the function is linear you can do it directly. So:

sales = constant x cityvalue x widgetvalue + noise

d sales / d cityvalue = constant x widgetvalue

d sales / d widgetvalue = constant x cityvalue

(all vectors)

So then you pick random starting values of cityvalue and widgetvalue, calculate the error, and do gradient descent. Or just plug

Error = sum((constant x cityvalue x widgetvalue - sales)^2)

into an optimisation function, which will be slower but quicker to code.
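For concreteness, here is a minimal sketch of this suggestion fitted to the toy data above with scipy (the variable names and the log-parametrization are my own choices, not part of the original comment):

    import numpy as np
    from scipy.optimize import minimize

    # Observed per-widget sales fractions from the toy example: widget -> {city: fraction}
    observed = {
        'A': {'X': 2/3, 'Y': 1/3},
        'B': {'X': 6/7, 'Z': 1/7},
        'C': {'Y': 3/4, 'Z': 1/4},
    }
    cities = ['X', 'Y', 'Z']

    def loss(log_vals):
        # Model: a widget's share in city c is value[c] divided by the sum of
        # the values of the cities it was sold in. Optimizing log-values keeps
        # the city values positive.
        vals = dict(zip(cities, np.exp(log_vals)))
        err = 0.0
        for shares in observed.values():
            total = sum(vals[c] for c in shares)
            for c, frac in shares.items():
                err += (vals[c] / total - frac) ** 2
        return err

    res = minimize(loss, np.zeros(len(cities)))
    vals = np.exp(res.x)
    print(dict(zip(cities, vals / vals.sum())))  # ~{'X': 0.6, 'Y': 0.3, 'Z': 0.1}

Note that the model is only identified up to an overall scale (doubling every city value changes nothing), which is why the final normalization is needed; satt's comment below runs into the same under-identification in the MCMC setting.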
4philh
Thank you! This seems like the conceptual shift I needed.
0Lumifer
You need to specify what kind of randomness you are expecting. For example, the standard ordinary least-squares regression expects no noise at all in the X values and the noise in Y to be additive, iid, and zero-mean Gaussian. If you relax some of these assumptions (e.g. your noise is autocorrelated) some properties of your regression estimates hold and some do not any more. In the frequentist paradigm I expect you to need something in the maximum-likelihood framework. In the Bayesian paradigm you'll need to establish a prior and then update on your data in a fairly straightforward way. In any case you need to be able to write down a model for the process that generates your data. Once you do, you will know the parameters you need to estimate and the form of the model will dictate how the estimation will proceed.
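As a small illustration of the "no noise at all in the X values" assumption (my example, not Lumifer's): adding measurement noise to X biases the ordinary least-squares slope towards zero, a failure mode worth ruling out before picking an estimator.

    import numpy as np

    rng = np.random.default_rng(0)
    n, true_slope = 10_000, 2.0
    x = rng.normal(size=n)
    y = true_slope * x + rng.normal(scale=0.5, size=n)  # noise in Y only: OLS is fine

    print(np.polyfit(x, y, 1)[0])                       # ~2.0

    noisy_x = x + rng.normal(size=n)                    # now X is noisy too
    print(np.polyfit(noisy_x, y, 1)[0])                 # ~1.0, attenuated by var(x)/(var(x)+1)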
2philh
Sure, I'm aware that this is the sort of thing I need to think about. It's just that right now, even if I do specify exactly how I think the generating process works, I still need to work out how to do the estimation. I somewhat suspect that's outside of my weight class (I wouldn't trust myself to be able to invent linear regression, for example). Even if it's not, if someone else has already done the work, I'd prefer not to duplicate it.
6gwern
If you can implement a good simulation of the generating process, then you are already done - estimating is as simple as ABC. (Aside from the hilariously high computing demands of the naive/exact ABC, I've been pleased & impressed just how dang easy it is to use ABC. Complicated interval-censored data? No problem. Even more complicated mixture distribution / multilevel problem where data flips from garbage to highly accurate? Ne pas!)
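For readers unfamiliar with ABC (approximate Bayesian computation), a toy rejection-ABC sampler looks something like this (my illustration; the coin-flip example is invented):

    import random

    observed_heads, n_flips = 63, 100
    accepted = []
    while len(accepted) < 1000:
        theta = random.random()                # candidate bias drawn from a uniform prior
        sim = sum(random.random() < theta for _ in range(n_flips))
        if abs(sim - observed_heads) <= 2:     # accept if the simulation reproduces the data
            accepted.append(theta)

    print(sum(accepted) / len(accepted))       # posterior mean, roughly 0.63

The appeal gwern describes is that nothing here requires a likelihood in closed form: any generating process you can simulate, however messy, can be estimated the same way.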
4satt
Even if you know only the generating process and not an estimation procedure, you might be able to get away with just feeding a parametrization of the generating process into an MCMC sampler, and seeing whether the sampler converges on sensible posterior distributions for the parameters. I like Stan for this; you write a file telling Stan the data's structure, the parameters of the generating process, and how the generating process produced the data, and Stan turns it into an MCMC sampling program you can run.

If the model isn't fully identified you can get problems like the sampler bouncing around the parameter space indefinitely without ever converging on a decent posterior. This could be a problem here; to illustrate, suppose I write out my version of skeptical_lurker's formulation of the model in the obvious naive way —

sales[n] = alpha * beta[city[n]] * gamma[widget[n]] + noise

— where brackets capture city & widget-type indices, I have a β for every city and a γ for every widget type, and I assume there's no odd correlations between the different parameters. This version of the model won't have a single optimal solution! If the model finds a promising set of parameter values, it can always produce another equally good set of parameter values by halving all of the β values and doubling all of the γ values; or by halving α and the γ values while quadrupling the β values; or by... you get the idea. A sampler might end up pulling a Flying Dutchman, swooping back and forth along a hyper-hyperbola in parameter space.

I think this sort of under-identification isn't necessarily a killer in Stan if your parameter priors are unimodal and not too diffuse, because the priors end up as a lodestar for the sampler, but I'm not an expert. To be safe, I could avoid the issue by picking a specific city and a specific widget type as references, with the other cities' β and other widgets' γ effectively defined as proportional to those. Then run the sampler and back out estimates of the overall city-level sales fractions from...
3gwern
I know JAGS lets you put interval limits onto terms, which lets you specify that some variable must be non-negative (looks something like dist(x,y)[0,∞]), so maybe Stan has something similar.
3satt
It does. However... I see now I could've described the model better. In Stan I don't think you can literally write the observed data as the sum of the signal and the noise; I think the data always has to be incorporated into the model as something sampled from a probability distribution, so you'd actually translate the simplest additive model into Stan-speak as something like

    data {
      int<lower=1> N;
      int<lower=1> Ncities;
      int<lower=1> Nwidgets;
      int<lower=1> city[N];
      int<lower=1> widget[N];
      real<lower=0> sales[N];
    }
    parameters {
      real<lower=0> alpha;
      real beta[Ncities];
      real gamma[Nwidgets];
      real<lower=0> sigma;
    }
    model {
      // put code here to define explicit prior distributions for parameters
      for (n in 1:N) {
        // the tilde means the left side's sampled from the right side
        sales[n] ~ normal(alpha + beta[city[n]] + gamma[widget[n]], sigma);
      }
    }

which could give you a headache because a normal distribution puts nonzero probability density on negative sales values, so the sampler might occasionally try to give sales[n] a negative value. When this happens, Stan notices that's inconsistent with sales[n]'s zero lower bound, and generates a warning message. (The quality of the sampling probably gets hurt too, I'd guess.) And I don't know a way to tell Stan, "ah, the normal error has to be non-negative", since the error isn't explicitly broken out into a separate term on which one can set bounds; the error's folded into the procedure of sampling from a normal distribution.

The way to avoid this that clicks most with me is to bake the non-negativity into the model's heart by sampling sales[n] from a distribution with non-negative support:

    for (n in 1:N) {
      sales[n] ~ lognormal(log(alpha * beta[city[n]] * gamma[widget[n]]), sigma);
    }

Of course, bearing in mind the last time I indulged my lognormal fetish, this is likely to have trouble too, for the different reason that a lognormal excludes the possibility of exactly zero sales...
2philh
Thanks to both you and gwern. It doesn't look like this is the direction I'm going in for this problem, but it's something I'm glad to know about.

I made a rationalist Tumblr, primarily to participate in rationalist conversations there. Solid posts will still be posted to LW when I finish them.

4[anonymous]

Singer vs. Van der Vossen:

Singer asks, if it’s obligatory to save the drowning child you happen to encounter at the expense of your shoes, why isn’t it obligatory not to buy the shoes in the first place, but instead to save a child in equally dire straits?

As a profession, we are in an odd but unfortunate situation. Our best philosophers and theorists develop accounts of global justice that are disconnected from the best empirical insights about poverty and prosperity.

Reading these theories, one might think that our best prospects for alleviating poverty

...
2bogus
This is quite right - the best case for development aid in poor countries is through its positive feedback on institutions (most plausibly, civil society). Then again, most proponents of effective giving favor interventions that would plausibly have such feedbacks - for instance, it turns out that a lot of the money GiveDirectly hands out to poor folks is spent on entrepreneurship and capital acquisition, not direct consumption.
1ChristianKl
South Korea and Taiwan had no problem with malaria killing children in whom the society had invested resources. I don't understand why Van der Vossen thinks that there is clear evidence that the difference between what happened in a country like South Korea and what happened in sub-Saharan Africa has nothing to do with genes. Of course that's the politically correct belief. But standing there and saying that development economics proved it beyond all odds seems strange to me.

The rule of law does happen to be an important ingredient in producing wealth, but I don't think you get rule of law directly through buying iPhones. To the extent that you believe that the rule of law is very useful in helping third-world countries, the next question would be whether there are cost-effective interventions to increase it. That's a standard EA question.

That seems like a nice thing to say, but politically it's very hard to imagine a First World government giving up its country's ability to feed its own population without being dependent on outside forces. Politically it's easier to ship excess grain from Europe to Africa than to burn it, but the excess grain doesn't get produced with the goal of feeding Africans at all, but to have European farmers that provide Europe with a food supply that can also deliver in times of crisis.
4bogus
Well, let's see. It's quite convenient for us that there's a country right next door to South Korea, called North Korea. North Korea has the same genes as South Korea, and yet its economy is much more similar to the economy of Sub-saharan Africa than South Korea. Sure, that's just N=1, anecdotes are not data and all that, but I'd call that pretty good evidence.
2ChristianKl
The fact that the bad policies of North Korea lead to bad economic outcomes is no evidence that all bad economic outcomes are due to bad policies. It simply isn't. Nobody in the EA camp denies that the policies of countries matter or that property rights and the rule of law are important. Nor have I seen Peter Singer argue that putting embargoes on other countries to put economic pressure on them, instead of engaging in trade, is bad. Most African countries, on the other hand, don't suffer under strong embargoes. They are in the sphere of the IMF, which has preached property rights for decades and tries to get these countries to respect property rights.

I don't understand how the karma system here works. One of my posts below, about the usefulness of prostitutes for learning how to get into sexual relationships through dating regular women, dropped off for a while with -4 karma. Then I just checked, and it has +4 karma. Where did the 8 karma points come from?

This has happened to some of my posts before. Do I have some fans I don't know about who just happen to show up in a short interval to upvote my controversial posts?

8philh
I think someone is using a bunch of alts to occasionally mega-upvote posts they like.
5Lumifer
I think you do -- what you do NOT have is a good model for predicting future karma scoring of your posts :-/
2MrMind
Welcome to the world of everybody on this forum.
0Elo
I make sense of karma, and I have generally been using it to tune my efforts towards posts that are more helpful and useful to people (or at least I think I am doing that more than pandering).
1[anonymous]

Many LWers, myself particularly, write awkwardly. Did you know Word can check your writing style, not just your spelling, with a simple option change? I'm learning how to write with better style already.

4Dahlen
This is a good occasion for relying on natural rather than artificial intelligence. Here's a list of style suggestions that can be made by Word. It checks for a lot of things that can be considered bad style in some contexts but not in others, and to my knowledge it's not smart enough to differentiate between different genres. (For example, it can advise you both against passive voice – useful for writing fiction, sometimes – and against use of first-person personal pronouns, which is a no-no in professional documents. If it needs mentioning, sometimes you cannot follow both rules at once.) There's plenty of reason to doubt that a human who can't write very well can have an algorithm for a teacher in matters of writing style; we're not there yet, I think.
0[anonymous]

The importance, tractability, and neglectedness approach is the go-to heuristic for EAs.

The Open Philanthropy Project approaches it like this:

“What is the problem?” = importance

“What are possible interventions?” = tractability

“Who else is working on it?” = neglectedness

I reckon it's a simplification of the rational planning model:

Intelligence gathering — a comprehensive organization of data; potential problems and opportunities are identified, collected and analyzed.

Identifying problems — accounting for relevant factors.

Assessing the conseque

...
0[anonymous]

If a graduate student approached you to do a section of the data analyses of your research, in return for credit/authorship and to meet degree requirements, what would you give her? Note, she's specified just a "section" and is not interested in any data collection, research administration or the like; she just wants to fulfill her mini-research-project requirements.

0ChristianKl
That depends obviously on the skills of the individual. I think giving someone the Mnemosyne database to analyse for better ways to predict Spaced Repetition System learning would be useful if that person has enough skills to do genuine work. Gwern works to bring that data into a nicely downloadable format: https://archive.org/details/20140127MnemosynelogsAll.db

From the Foreword to Brave New World:

Nor does the sexual promiscuity of Brave New World seem so very distant. There are already certain American cities in which the number of divorces is equal to the number of marriages. In a few years, no doubt, marriage licenses will be sold like dog licenses, good for a period of twelve months, with no law against changing dogs or keeping more than one animal at a time. As political and economic freedom diminishes, sexual freedom tends compensatingly to increase. And the dictator (unless he needs cannon fodder and fam

...
0[anonymous]

We can remember things we don't believe and believe things we don't remember. Which source of knowledge is a better authority for our expectations and priors?

0ChristianKl
Before asking that question it's useful to ask why one wants to know priors and what one means by the term. A person with arachnophobia has, on a System 1 level, a prior about spiders being dangerous, but often doesn't have that on a System 2 level for small spiders.

I'm trying to wrap my mind around Stuart Armstrong's post on the Doomsday argument, and to do so I've undertaken the task of tabooing 'randomness' in the definitions of SIA and SSA.
My first attempt clearly doesn't work: "observers should reason giving the exact same degree of belief to any proposition of the form: 'I'm the first observer', 'I'm the second observer', etc." As has been noted many times before, by me and by others, anthropic information changes the probability distribution, and any observer has at least a modicum of that. I suspect th...

1Viliam
Trying to assign the same degree of belief to infinitely many mutually exclusive options doesn't work. The probability of being observer #1 is greater than the probability of being observer #10^10, simply because some possible universes contain more than 1 but fewer than 10^10 observers. I'm not sure how exactly the distribution should look; I'm just saying in general that larger numbers have smaller probabilities. The exact distribution would depend on your beliefs about the universe, or actually about the whole Tegmark multiverse, and I don't have many strong beliefs in that area. For example, if you believe that the universe has a limited amount of particles and a limited amount of time, that would put an (insanely generous) upper bound on the number of observers in this universe.
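To spell out why a uniform distribution over infinitely many observers fails (a standard argument, added here for clarity): if P(being observer #n) = c for every n, then the total probability summed over all n is 0 when c = 0 and infinite when c > 0; it can never equal 1, so no such distribution exists.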
0MrMind
Yeah, but the class of observers in the Doomsday argument is not infinite; usually one takes a small and a huge set, both finite. So in theory you could assign a uniform distribution. Exactly, and that's an assumption I'm always willing to make, to circumvent the problem of an infinite reference class. The problem though is not the cardinality of the set; it's rather the uniformity of the distribution, which I think is what is implied by the word 'randomness' in S(S|I)A, because I feel intuitively it shouldn't be so, due to the very definition of observer.