You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Open Thread, April 1-15, 2013

3 Post author: Vaniver 01 April 2013 03:00PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comments (254)

Comment author: gwern 03 April 2013 08:34:28PM *  24 points [-]

For kicks, and reminded by all my recent searching for digging up long-forgotten launch and shut down dates for Google properties, I've compiled a partial list of times I've posted searches & results on LW:

Can't help but get the impression that even people here aren't very good at Googling. Maybe they should be taking Google's little search classes; knowing how to search seems like the sort of skill that would pay off constantly over a lifetime.

Comment author: Douglas_Knight 04 April 2013 02:08:52AM 3 points [-]

Can't help but get the impression that even people here aren't very good at Googling. Maybe they should be taking Google's little search classes; knowing how to search seems like the sort of skill that would pay off constantly over a lifetime.

It appears to me that in half of these examples people hadn't tried to google at all. It doesn't seem particularly likely to me that the class would develop such a habit. Not that I have a better idea.

Comment author: gwern 04 April 2013 03:43:27AM 14 points [-]

My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.

Comment author: lukeprog 07 May 2013 03:36:32AM 1 point [-]

This is epic.

Comment author: Morendil 01 April 2013 03:21:58PM *  22 points [-]

Received word a few days ago that (unofficially, pending several unresolved questions) my GJP performance is on track to make me eligible for "super forecaster" status (last year these were picked from the top 2%).

ETA, May 9th: received the official invitation.

Comment author: RolfAndreassen 01 April 2013 04:51:07PM 13 points [-]

I'm glad to report that I am one of those who make this achievement possible by occupying the other 98%. Indeed I believe I am supporting the high ranking of a good 50% of the forecasters.

More seriously, congratulations. :)

Comment author: dvasya 09 May 2013 05:27:07PM 2 points [-]

Congratulations! I also received it (thanks not least to your posts). I wonder how many other LWers participate, and who else (if anybody) got their invitations.

Comment author: ITakeBets 09 May 2013 05:45:08PM 1 point [-]

I participate, and was invited during the first season to be a super-forecaster in the second. It is kind of a lot of work and I have been very busy, so I quit doing anything about it pretty early on, but mysteriously I have been invited to participate again in the third season.

Comment author: Morendil 09 May 2013 06:36:59PM 0 points [-]

We may find out a little about that; super-forecasters will form teams, so it's somewhat likely some of us will end up on the same team.

Congrats to the others too, anyway!

Comment author: gwern 09 May 2013 05:41:39PM 0 points [-]

I participate (http://www.gwern.net/Prediction%20markets#iarpa-the-good-judgment-project); and haven't been invited. (While I stopped trying in season 2, my season 1 scores were merely great & not stellar enough to make it plausible that I could have made it.)

Comment author: [deleted] 03 April 2013 02:31:21PM 12 points [-]

Iain (sometimes M.) Banks is dying of terminal gall bladder cancer.

Of more interest is the discussion thread on Hacker News regarding cryonics. There are a lot of cached responses and a lot of misinformation going around on both sides.

Comment author: Multiheaded 03 April 2013 04:17:00PM 4 points [-]

It's really, really saddening that he of all people has been an outspoken deathist and now it's depriving him of any chance whatsoever. (Well, except for hypothetical ultra-remote reconstruction by FAI or something.)

Comment author: FiftyTwo 04 April 2013 12:58:10AM 0 points [-]

Where has he been an outspoken deathist?

Comment author: gwern 04 April 2013 01:36:02AM 7 points [-]

In the Culture novels, he has all humans just sorta choosing to die after a millennium of life, despite there being absolutely no reason for humans to die since available resources are unlimited, almost all other problems solved, aging irrelevant, and clear upgrade paths available (like growing into a Mind).

Comment author: FiftyTwo 04 April 2013 12:18:37PM 2 points [-]

It's not entirely clear-cut. He has had characters from outside the Culture describe it as a "fashion" and a sign of the Culture's decadence. And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.

Either way, thinking a thousand years in the Culture is enough doesn't mean he thinks 70 years on Earth is enough. Has he ever made a direct comment about cryonics? I can't find any. So it's still possible he would be open to it given up-to-date information.

Comment author: gwern 04 April 2013 01:44:08PM 3 points [-]

And the characters we do see ending their lives are generally doing it for reasons of psychological trauma.

Stories would tend to focus on characters who are interested or involved in traumatically interesting events, so I'm not sure how much one could infer from that.

Either way, thinking a thousand years in the culture is enough doesn't mean he thinks 70 years on earth is enough.

A thousand years instead of 70 is just deathism with a slightly different n.

Comment author: ArisKatsaris 06 April 2013 11:49:50AM *  8 points [-]

A thousand years instead of 70 is just deathism with a slightly different n.

Eh, I kinda agree with you in a sense, but I'd say there's still a qualitative difference if one has successfully moved away from the deathist assumption that the current status quo for life-span durations is also roughly the optimal life-span duration.

Comment author: [deleted] 04 April 2013 02:38:55PM *  4 points [-]

A thousand years instead of 70 is just deathism with a slightly different n.

Then some form of deathism may be the truth anyway.

On the other hand, I can't remember Banks ever suggesting that organics in the Culture would want to die after a thousand years, only that if they wanted to die they would be able to. I don't think the latter is incompatible with anti-deathism -- is Lazarus Long a deathist, after all?

EDIT: On the gripping hand, there's also a substantial bit of business in the Culture about subliming.

Instead of arguing on in this vein, I know that he's made comments in the past about how he believes death is a natural part of life. I just can't find the right interview now that "Iain Banks death" and variants are nearly-meaningless search terms.

Comment author: Douglas_Knight 04 April 2013 07:40:01PM 8 points [-]

now that "Iain Banks death" and variants are nearly-meaningless search terms

If you want to search the past, go to google, search, click "Search tools," "Any time," "Custom range..." and fill in the "To" field with a date, such as "2008."
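
(For anyone who'd rather script this than click through the UI: the "Custom range..." tool just fills in Google's `tbs=cdr` URL parameter. That parameter is unofficial and undocumented, so treat this as a sketch that may break without notice.)

```python
from urllib.parse import urlencode

def date_limited_search_url(query, cd_max, cd_min=None):
    """Build a Google search URL restricted to a custom date range.

    Relies on the unofficial `tbs=cdr` parameter that the "Custom
    range..." search tool fills in; dates are M/D/YYYY strings.
    """
    tbs = "cdr:1"
    if cd_min:
        tbs += ",cd_min:" + cd_min
    tbs += ",cd_max:" + cd_max
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": tbs})

# only results indexed as being from before 2008
url = date_limited_search_url('"Iain Banks" death', cd_max="1/1/2008")
print(url)
```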

Comment author: [deleted] 04 April 2013 07:46:17PM *  2 points [-]

"A Few Notes on the Culture":

Philosophy, again; death is regarded as part of life, and nothing, including the universe, lasts forever. It is seen as bad manners to try and pretend that death is somehow not natural; instead death is seen as giving shape to life.

While burial, cremation and other - to us - conventional forms of body disposal are not unknown in the Culture, the most common form of funeral involves the deceased - usually surrounded by friends - being visited by a Displacement Drone, which - using the technique of near-instantaneous transmission of a remotely induced singularity via hyperspace - removes the corpse from its last resting place and deposits it in the core of the relevant system's sun, from where the component particles of the cadaver start a million-year migration to the star's surface, to shine - possibly - long after the Culture itself is history.

None of this, of course, is compulsory (nothing in the Culture is compulsory). Some people choose biological immortality; others have their personality transcribed into AIs and die happy feeling they continue to exist elsewhere; others again go into Storage, to be woken in more (or less) interesting times, or only every decade, or century, or aeon, or over exponentially increasing intervals, or only when it looks like something really different is happening....

I'm on the fence as to whether or not this really constitutes full-blown deathism or just a belief that sentient beings should be permitted to cause their own death.

Comment author: TheOtherDave 05 April 2013 03:10:25AM 2 points [-]

I suspect that any cultural norm inconsistent with treating the death of important life forms as an event to be eradicated from the world is at least an enabler to "deathism" as defined locally.

Comment author: Nornagest 05 April 2013 03:19:01AM *  1 point [-]

There seems to be some appeal to nature floating around in it, at the very least.

Sure, death is natural. So is Ophiocordyceps, but that doesn't mean I want parasitic mind-altering fungi in my life.

Comment author: SilasBarta 13 April 2013 02:01:43PM *  3 points [-]

Great point I saw in the discussion:

Look at it like a cryptographer: Is putting a brain in liquid nitrogen a secure erasure method against all future attacks from a determined opponent with lots of resources? Would you trust your financial data to such a method of data erasure?

Comment author: Kaj_Sotala 01 April 2013 04:04:44PM 12 points [-]

I've been writing blog articles on the potential of educational games, which may be of interest to some people here:

I'd be curious to hear any comments.

Comment author: RolfAndreassen 01 April 2013 04:58:50PM 9 points [-]

I realise it's a constructed example, but a videogame that would be even remotely accurate in modelling the causes of the fall of the Roman Empire strikes me as unrealistically ambitious. I would at any rate start out with Easter Island, which at least is a relatively small and closed system.

Another point is that, if you gave the player the same levers that the actual emperors had, it's not completely clear that the fall could be prevented; but I suppose you could give points on doing better than historically.

Comment author: latanius 01 April 2013 09:38:13PM 3 points [-]

Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?

Sure, games about physics should be able to present a reasonably accurate model so that if you understand their model, you end up knowing something about physics... but with history:

actually, what's the goal of studying history?

  • if the goal is to do well on tests, we already have a nice model for that, under the name of Anki. Of course, this doesn't make things really fun, but still.
  • if we want to make students remember what happened and approximately why (that is, "should be able to write an essay about it"), we can make up an arbitrary, dumb and scripted thing, not even close to a real model, but exhibiting some mechanics that cover the actual reasons. (e.g. if one of the causes would have been "not enough well-trained soldiers", then make "Level 8 Advanced Phalanx" the thing to build if you want to survive the next wave of attacks.)
  • if we'd like to see students discover general ideas throughout history, maybe build a game with the same mechanics across multiple levels? (and they also don't need to be really accurate or realistic.)
  • and finally, if we want to train historians who could come up with new theories, or replacement emperors to be sent back in time to fix Rome... well, for that we would need a much better model indeed. Which we are unlikely to end up with. But do we need this level in most of the cases?

TL;DR: by creating games with wildly unrealistic but textbook-accurate mechanics we are unlikely to train good emperors, but at least students would understand the textbook material much better than at the current "study, exam, forget" level.

Comment author: Vaniver 01 April 2013 10:50:55PM 6 points [-]

Do we need a realistic simulation at all? I was thinking about how educational games could devolve into, instead of "guessing the teacher's password", "guessing the model of the game"... but is this a bad thing?

If what they learned about "evolution" comes from Pokemon, then yes.

Comment author: lfghjkl 04 April 2013 09:51:49AM 0 points [-]

When did Pokemon become an educational game about evolution?

Comment author: Vaniver 05 April 2013 08:51:31PM 8 points [-]

Pokemon is an example of what an educational game which doesn't care about realism could look like. People should be expected to learn the game, not the reality, and that will especially be the case when the game diverges from reality to make it more fun/interesting/memorable. If you decide that the most interesting way to get people to play an interactive version of Charles Darwin collecting specimens is to make him be a trainer that battles those specimens, then it's likely they will remember best the battles, because those are the most interesting part.

One of the research projects I got to see up close was an educational game about the Chesapeake; if I remember correctly, children got to play as a fish that swam around and ate other fish (and all were species that actually lived in the Chesapeake). If you ate enough other fish, you changed species upwards; if you got eaten, you changed species downwards. In the testing they did afterwards, they discovered that many of the children had incorporated that into their model of how the Chesapeake worked; if a trout eats enough, it becomes a shark.

Comment author: gwern 05 April 2013 09:13:04PM 2 points [-]

I'd like to hear more about that Chesapeake result.

Comment author: Vaniver 05 April 2013 09:41:14PM *  3 points [-]

I'm seeing if I can find a copy of their thesis. I'll share it if I manage to.

The GAMER thesis is here. (Also looking for an official copy.)

The ILL thesis is here.

Comment author: RolfAndreassen 02 April 2013 12:21:24AM 4 points [-]

It's true that you don't need a model that lets you form new theories of the downfall of the Empire; but my point is that even the accepted textbook causes would be very hard to model in a way that combines fun, challenge, and even the faintest hint of realism.

Take the theory that Rome was brought down partly by climate change; what's the Emperor supposed to do about it? Impose a carbon tax on goats? Or the theory that it was plagues what did it. Again, what's the lever that the player can pull here? Or civil wars; what exactly is the player going to do to maintain the loyalty of generals in far-off provinces? At least in this case we begin to approach something you can model in a game. For example, you can have a dynastic system and make family members more loyal; then you have a tradeoff between the more limited recruiting pool of your family, which presumably has fewer military geniuses, versus the larger but less loyal pool of the general population. (I observe in passing that Crusader Kings II does have a loyalty-modelling subsystem of this sort, and it works quite well for its purposes. Actually I would propose that as a history-teaching game you could do a lot worse than CKII. Kaj, you may want to look into it.)

Again, suppose the issue was the decline of the smallholder class as a result of the vast slaveholding plantations; to even engage with this you need a whole system for modelling politics, so that you can model the resistance to reform among the upper classes who both benefit by slavery and run most of your empire. Actually this sounds like it could make a good game, but easy to code it ain't.

Comment author: Eugine_Nier 03 April 2013 05:18:08AM 1 point [-]

It gets even more complicated when these causes interact. A large part of the reason for the decline of smallholders and the rise of vast manors using serfs (slavery was in decline during that period) was the fact that farmers had to turn to the lords for protection from barbarians and roving bandits. The reason there were a lot of marauding bandits is that the armies were too busy fighting over who the next emperor was going to be to do their job of protecting the populace.

Comment author: OrphanWilde 02 April 2013 07:54:04PM 2 points [-]

Dynasty Warriors and Romance of the Three Kingdoms, while heavily stylized and quite frequently diverging from actual history, nevertheless do a pretty good job of conveying the basics of the time period and region.

Comment author: Viliam_Bur 02 April 2013 09:45:59AM 5 points [-]

A big part of education today is memorization. Perhaps that is wrong, but it is going to stay with us for a while anyway. And it is at least partially necessary; how else would one learn, e.g., the vocabulary of a foreign language?

So while it is great to invent games that teach principles instead of memorization, let's not forget that there is a ton of low-hanging fruit in making the memorization more pleasant. If we could just take all the memorization of elementary and high school, and turn it into one big cool game, it would probably make the world a much better place. How much resources (especially human resources) do we spend today on forcing the kids to learn things they try to avoid learning? Instead we could just give them a computer game, and leave teachers only with the task of explaining things. Everyone could get today's high school level education without most of the frustration.

Recently I started using Anki for memorization, and it seems to work great. But I still need some minimum willpower to start it every day. For me that is easy, because with my small amounts of data, I get usually 10-20 questions a day. But if I tried to use it in real time for high-school knowledge, that would be much more. Also, today I know exactly why I am learning, but for a small child it is an externally imposed duty, with uncertain rewards in a very far future. So some additional rewards would be nice.

It could be interesting to make a school where in the morning the students would play some gamified Anki system, and in the afternoon they would work in groups or discuss topics with teachers.
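
(A side note for the implementation-minded: Anki's scheduling descends from the SM-2 spaced-repetition algorithm. The sketch below is a simplified SM-2, not Anki's actual scheduler, which layers many refinements on top.)

```python
def sm2_update(interval_days, ease, repetition, quality):
    """One review step of simplified SM-2 (the spaced-repetition
    algorithm family Anki's scheduler descends from).
    quality is the learner's 0-5 self-rating of the answer."""
    if quality < 3:  # lapse: relearn the card from scratch
        return 1, ease, 0
    if repetition == 0:
        interval_days = 1
    elif repetition == 1:
        interval_days = 6
    else:
        interval_days = round(interval_days * ease)
    # ease factor drifts with answer quality, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval_days, ease, repetition + 1

# a new card answered perfectly three times: intervals stretch to 1, 6, 16 days
interval, ease, rep = 0, 2.5, 0
for quality in (5, 5, 5):
    interval, ease, rep = sm2_update(interval, ease, rep, quality)
print(interval)
```

The exponential growth of the intervals is what keeps the daily review count down to the "10-20 questions a day" range described above.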

Comment author: Kaj_Sotala 02 April 2013 01:23:17PM 1 point [-]

Sure, that's big too. I just didn't talk about it as much because everyone else seems to be talking about it already.

Comment author: latanius 01 April 2013 05:18:37PM 2 points [-]

Have you played the Portal games? They include lots of the things you mention... they introduce how to use the portal gun, for example, not by explaining stuff but by giving you a simplified version first... then the full feature set... and then there are all the other things with different physical properties. I can definitely imagine some Portal Advanced game where you'd actually have to use equations to calculate trajectories.

Nevertheless... I'd really like to be persuaded otherwise, but the ability to read Very Confusing Stuff, without any working model, and make sense of it can't really be avoided after a while. We can't really build a game out of every scientific paper, due to the amount of time required to write a game vs. a page of text... (even though I'd love to play games instead of reading papers. And it sounds definitely doable with CS papers. What about a conference accepting games as submissions?)

Comment author: Armok_GoB 07 April 2013 01:13:12AM 1 point [-]

These games already exist for many things, good enough that watching Let's Plays of them is probably more efficient than most deliberately educational videos. The tricky part is finding them and realizing that they qualify.

Games relevant to this discussion include Rome: Total War and Kerbal Space Program. Look them up.

Comment author: drethelin 07 April 2013 04:37:52PM 4 points [-]

I think I would really enjoy watching civ 5 lps that simultaneously discuss world history.

Comment author: [deleted] 07 April 2013 05:04:21PM 0 points [-]

That would be super cool.

Comment author: Kaj_Sotala 07 April 2013 11:11:14AM 1 point [-]

Yes, these are good examples.

Comment author: John_Maxwell_IV 07 April 2013 01:10:04AM 0 points [-]

One point is that while memorizing specific causes of the fall of the Roman Empire may not be especially useful, acquiring the self-discipline necessary to do this without a game to motivate you might be very useful.

Comment author: Kaj_Sotala 07 April 2013 11:15:48AM 1 point [-]

Perhaps, but if the task doesn't also feel interesting and worthwhile by itself, then we're effectively teaching kids that much of learning is dull, pointless and tedious, detached from anything that would have any real-world significance, and something that you only do because the people in power force you to. That's one of the most harmful attitudes that anyone can pick up. Let's associate learning with something fun and interesting first, and then channel that interest into the ability to motivate yourself even without a game later on.

Comment author: niceguyanon 05 April 2013 06:15:28PM *  11 points [-]

I mentioned that I was attending a Landmark seminar. Here is my review of their free introductory class that hopefully adds to the conversation for those who want to know:

Coaches - They are the people who lead the class, and I found them to be genuine in their belief in the benefits of taking the courses. These coaches were unpaid volunteers. Their motives for coaching seemed to be self-improvement and, to some degree, altruism. In short, it helped them, and they really want to share it.

Material - The intro course consists more of informative ideas than of exercises. These ideas are also trademarked phrases, which makes the course feel gimmicky and gives each idea more importance than it really warrants. We were not told these ideas were evidence-based. Lots of information on how to improve one's life was thrown around, but no research or empirical evidence was given. Not once were the words "cognitive science" or "rationality" used. I speculate that the value the course gives its students comes not from the informative ideas, but probably from the exercises and the motivation one gets from being actively pushed by the coaches to pursue goals.

Final thoughts - If you are rationality-minded, then this is not for you. I am no worse for going, and I do not believe that anyone who is rationality-minded and attends will be worse off either; however, I do believe that attending is most likely damaging to the rationality of someone who is naive about rationality to begin with. I have never attended CFAR, but just from browsing their website I can tell that Landmark is very far from what CFAR does. I think people in general would benefit more from attending CFAR than Landmark.

Comment author: Qiaochu_Yuan 05 April 2013 04:25:21AM *  11 points [-]

I am generally still very bad at steelmanning, but I think I am now capable of a very specific form of it. Namely, when people say something that sounds like a universal statement ("foos are bar") I have learned to at least occasionally give them the benefit of the doubt instead of assuming that they literally mean "all foos are bar" and subsequently feeling smug when I point out a single counterexample to their statement. I have seen several people do this on LW lately and I am happy to report that I am now more annoyed at the strawmanners than the strawmanned in this scenario.

Comment author: ThrustVectoring 05 April 2013 04:32:25AM 1 point [-]

It sounds to me like it has a lot in common with the noncentral fallacy. There's a general tendency to think of groups in terms of their central members and not their noncentral ones. This both makes sneaking in connotations by noncentral labels possible, and makes "all central foos are bar" feel like the same thing as "all foos are bar".

Comment author: Thomas 05 April 2013 09:57:30AM 0 points [-]

Even more so with "No foo is a bar". Such statements are most probably either very common definitions, like "no mammal is a bird", and therefore not very informative, or they are improbable, like "no man can live more than X minutes without oxygen, ever".

In the latter case, even if X is huge, we can assume that maybe it can be done under some (as yet unseen) circumstances.

In other words, don't be too hasty with universal negations!

Comment author: Vaniver 01 April 2013 03:06:08PM 30 points [-]

Remember, today is "Base Rate Neglect Day," also known as April Fool's.

Comment author: AlexSchell 01 April 2013 09:35:05PM 19 points [-]

I propose "Confusion Awareness Day".

Comment author: SilasBarta 01 April 2013 10:51:54PM 9 points [-]

Looks like Scott Adams has given Metamed a mention. (lotta m's there...)

I find it particularly interesting because a while back he himself was a great example of a patient independently discovering, against official advice, that their rare, debilitating illness could be cured -- specifically, that of losing his voice due to a neurological condition. He doesn't mention it in the blog post though.

(At least, I think this is a better example to use than the woman who found out how to regenerate her pinky.)

Comment author: NancyLebovitz 01 April 2013 11:15:33PM *  5 points [-]

I'm a little surprised he didn't try Alexander Technique, an efficient movement method which was developed by F.M. Alexander to cure his serious problems with speaking-- problems which sound a good bit like vocal dystonia.

The problem may be that F.M. Alexander was an actor, and his technique has remained best known in the theater arts community.

In other news, Too Loud, Too Bright, Too Fast, Too Tight is about people whose range of sensory comfort is mismatched to what's generally expected. It's a problem that doesn't just happen to people on the autistic spectrum.

There's some help for it-- what was in the book was putting people in a non-stressful environment and gradually introducing difficult stimuli-- but working with this problem is cleverly concealed under occupational therapy, where no one is likely to find it.

Comment author: FiftyTwo 01 April 2013 11:26:41PM 8 points [-]

[Aside] I'm not sure how I feel about Scott Adams in general. I enjoyed his work a lot when I was younger, but he seems very prone to being contrarian for its own sake and over-estimating his competence in unrelated domains.

Comment author: pragmatist 02 April 2013 04:03:10PM *  4 points [-]

I was a big Dilbert fan in my mid-teens and bought all his books. In one of them (The Dilbert Future, I think), he has this self-confessedly serious chapter about questioning received assumptions and thinking creatively. As an example, he suggests an alternate explanation for gravity, which he claims is empirically indistinguishable from the standard theory (prima facie, at least). His bold new theory: everything in the universe is just getting bigger all the time. So when we jump in the air the Earth and our bodies get bigger so they come back into contact. Seriously. Even as a fourteen-year-old, it took me only a few minutes to think of about five reasons this could not be true.

I read that book in the late 90s, and I've read very little by Scott Adams since then. In recent years, I've heard a few people cite him as a generally smart and thoughtful guy, and I have a very hard time reconciling that description with the author of that monumentally stupid chapter.

Comment author: NancyLebovitz 02 April 2013 04:08:57PM 2 points [-]

It's conceivable that he focuses down on things that are important to him, and is quite content to do more or less humorous BS the rest of the time.

Comment author: maia 02 April 2013 05:27:44PM 1 point [-]

IIRC, in that chapter, he also discussed how quantum mechanics (specifically the double slit experiment) meant that information could travel backwards in time...

Comment author: pragmatist 03 April 2013 04:35:23AM *  4 points [-]

I don't remember that specifically, but it would be one of the less crazy things he says. There are sound theoretical motivations for a retro-causal account of quantum mechanics, although a successful retro-causal model of the theory is yet to be constructed (John Cramer's transactional interpretation comes close).

However, I do remember Adams endorsing something like The Secret in the chapter, where you can change the world to your benefit merely by wanting it enough. I don't entirely recall if he sees this as a consequence of quantum retro-causality, but I think he does, and if that's the case then yeah, the quantum stuff is batshit too.

Comment author: maia 03 April 2013 12:46:31PM 1 point [-]

Yes, he does. It's not necessarily "wanting it enough," though; he specifically instructs that you have to pick a sentence that describes what you want, such as "I want to get rich in the stock market" - specific, but not too specific - and write it, by hand, in a notebook designated for this purpose, at least 10 times each night. He claims that by doing this, he did in fact make a lot of money in the stock market, and became the most popular cartoonist in the world by a metric he specified (some index, I don't remember which).

Not really connected to the quantum stuff, and possibly not as crazy. I think he mentions some possibility that all it actually does is force you to focus on your goals, which subconsciously makes you more responsive to opportunities, or something.

Comment author: SilasBarta 07 April 2013 08:35:23PM 1 point [-]

Confession: I was taken in by that section too for a while ... a long while. In fact, when Eliezer's quantum physics series started, my initial reaction was, "oh, I wonder how he's going to handle the backwards-in-time stuff!"

Comment author: SilasBarta 02 April 2013 12:12:03AM 3 points [-]

I agree in a lot of respects. But if you can cure such a major disorder when professionals, who are supposed to know this stuff, think it's impossible, and do it by your own research ... well, you have credibility on that issue.

Comment author: gwern 03 April 2013 03:05:09PM *  8 points [-]

I'm working on an analysis of Google service/product shutdowns, inspired by http://www.guardian.co.uk/technology/2013/mar/22/google-keep-services-closed

The idea is to collate as many shuttered Google services/products as possible, and still live services/products, with their start and end dates. I'm also collecting a few covariates: number of Google hits, type (program/service/physical object/other), and maybe Alexa rank of the home page & whether source code was released.

This turns out to be much more difficult than it looks, because many shutdowns are not prominently advertised, and many start dates are lost to our ongoing digital dark age (for example, when did the famous & popular Google Translate open? After an hour applying my excellent research skills, with the help of #lesswrong and no fewer than 5 people on Google+, the best we can say is that it opened some time between 02 and 08 March 2001). Regardless, I'm up to 274 entries.

The idea is to graph the data, look for trends, and do a survival analysis with the covariates to extrapolate how much longer random Google things have to live.

Does anyone have suggestions as to additional predictive variables which could be found with a reasonable amount of effort for >274 Google things?
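
(For readers who want to try this at home: a survival analysis of the sort described usually starts from a Kaplan-Meier estimate of the survival curve, treating still-live products as right-censored. A minimal sketch on invented lifetimes -- not the actual 274-entry dataset:)

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier survival curve as a list of (time, survival) points.

    observed[i] is True if item i was shut down at durations[i] (an
    event), False if it was still alive then (right-censored).
    """
    pairs = sorted(zip(durations, observed))
    n_at_risk = len(pairs)
    survival = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = at_t = 0
        while i < len(pairs) and pairs[i][0] == t:  # group ties at time t
            at_t += 1
            deaths += pairs[i][1]
            i += 1
        if deaths:  # censored-only times don't change the curve
            survival *= 1 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= at_t
    return curve

# invented lifetimes in years; False = product still running (censored)
durations = [1, 2, 2, 3, 5, 5, 7, 9]
observed = [True, True, False, True, True, False, False, False]
for t, s in kaplan_meier(durations, observed):
    print(t, round(s, 3))
```

Extrapolating how long random Google things have to live, and folding in covariates like hit counts, is then a job for a regression model on top of this (e.g. Cox proportional hazards), for which one would reach for a statistics library rather than hand-rolled code.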

Comment author: shminux 08 April 2013 11:36:20PM *  7 points [-]

You know you spend too much time on LW when someone mentioning paperclips within earshot startles you.

Comment author: [deleted] 02 April 2013 12:30:18PM 7 points [-]
Comment author: GLaDOS 04 April 2013 01:03:39PM 5 points [-]

From this day forward all speculation and armchair theorizing on LessWrong should be written in Comic Sans.

Comment author: [deleted] 04 April 2013 02:11:48PM 2 points [-]

For some reason, my mind is picturing that sentence written in Comic Sans. (Similar things often happen to me with auditory imagery, e.g. when I read a sentence about a city I sometimes imagine it spoken in that city's accent, but this is the first time I recall this happening with visual imagery.)

Comment author: SilasBarta 03 April 2013 01:56:33AM 4 points [-]

Shouldn't it? Isn't epistemic hygiene correlated with font choice in known cases? I mean, if someone posts something in Comic Sans ...

Comment author: itaibn0 02 April 2013 08:39:17PM *  3 points [-]

Eyeballing this, the effect size is tiny. Looking at their own measurements, it is statistically significant, but barely.

ADDED: Hmm... I missed the second page. Over there is more explanation of the analysis. In particular:

But this analysis gives us a way to quantify the advantage to Baskerville. It’s small, but it’s about a 1% to 2% difference — 1.5% to be exact, which may seem small but to me is rather large... Many online marketers would kill for a 2% advantage either in more clicks or more clicks leading to sales.

Point taken. This is large enough that it might be useful. However, I don't think it is a large enough bias to be important for rationalists.

Comment author: gwern 03 April 2013 03:52:10AM *  2 points [-]

Depends. It would certainly be interesting to know for, say, the LW default CSS. I think I'll A/B test this Baskerville claim on gwern.net at some point.

EDIT: in progress: http://www.gwern.net/a-b-testing#fonts

Comment author: gwern 16 June 2013 10:30:24PM 0 points [-]

My A/B test has finished: http://www.gwern.net/a-b-testing#fonts

Baskerville wasn't the top font in the end, but the differences between the fonts were all trivial even with an ungodly large sample size of n=142,983 (split over 4 fonts). I dunno if the NYT result is valid, but if there's any effect, I'm not seeing it in terms of how long people spend reading my website's pages.
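For intuition on why even n=142,983 can leave the differences "trivial": a Pearson chi-square test of homogeneity on hypothetical per-font counts (the real per-font click numbers aren't given in this comment, so these are invented to have nearly identical rates) comes out far below the significance threshold:

```python
# Pearson chi-square test of homogeneity for a 4-font A/B test, stdlib only.
# The counts are hypothetical: ~142,980 visitors split evenly across 4 fonts,
# with nearly identical "success" rates, illustrating how tiny differences
# remain statistically indistinguishable from noise even at this sample size.
def chi_square(table):
    """table: list of [successes, failures] rows, one per font.
    Returns the Pearson chi-square statistic."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical [converted, not converted] counts per font.
fonts = [[3570, 32175], [3580, 32165], [3560, 32185], [3575, 32170]]
stat = chi_square(fonts)
# df = (4-1)*(2-1) = 3; the 5% critical value for chi-square with 3 df is 7.815.
print(f"chi2 = {stat:.3f}, significant at 5%: {stat > 7.815}")
```

With rates this close, the statistic is around 0.07 against a critical value of 7.815 — nowhere near significance, consistent with the "all trivial differences" conclusion.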

Comment author: NancyLebovitz 02 April 2013 11:16:26AM 6 points [-]

Offhand, I haven't seen any LWers write about having chemical addictions, which seems a little surprising considering the number of people here. Have I missed some, or is it too embarrassing to mention, or is it just that people who are attracted to LW are very unlikely to have chemical addictions?

Comment author: wedrifid 02 April 2013 11:40:39AM 8 points [-]

or is it just that people who are attracted to LW are very unlikely to have chemical addictions?

Too busy with the internet addictions?

Comment author: NancyLebovitz 02 April 2013 11:49:20AM 1 point [-]

Could be, but it seems worth finding out.

Comment author: Viliam_Bur 02 April 2013 01:58:36PM 2 points [-]

Add a poll to your top-level comment. Suggested options: no chemical addiction, had one in the past, have one today.

Comment author: drethelin 02 April 2013 02:06:05PM 2 points [-]

Caffeine addiction is pretty popular, and I bet we have quite a few on adderall. Is that not what you mean?

Comment author: NancyLebovitz 02 April 2013 04:23:24PM 1 point [-]

Have you had a chemical addiction?

Unfortunately, the poll options don't seem to include ticky-boxes, so I don't see an elegant way to ask about which chemicals.


Comment author: gwern 02 April 2013 04:56:32PM 14 points [-]

As usual, caffeine addiction is so common that it needs to either be explicitly excluded or else its inclusion pointed out so readers know how meaningless the results may be for what they think of as 'chemical addiction'.

Comment author: NancyLebovitz 02 April 2013 05:11:10PM 2 points [-]

My original thought was to phrase it as "chemical addiction generally considered destructive", but that's problematic, too. What about sugar?

Comment author: elharo 02 April 2013 08:24:10PM 3 points [-]

Sugar is incredibly destructive. It is a major, perhaps the major, cause of diabetes, heart disease, obesity, and other diseases of civilization.

Comment author: SilasBarta 03 April 2013 01:57:33AM 7 points [-]

*wants to change answer now*

Comment author: CronoDAS 02 April 2013 08:36:32PM 4 points [-]

I get withdrawal symptoms if I miss too many antidepressant pills. Does that count?

Comment author: FiftyTwo 04 April 2013 01:00:03AM 4 points [-]

If that counts I have a serious dihydrogen monoxide problem as well...

Comment author: lsparrish 07 April 2013 03:18:29AM 5 points [-]

Yeah me too, I drink the stuff like water.

Comment author: OrphanWilde 03 April 2013 05:36:00PM 3 points [-]

Nicotine, caffeine, simple carbohydrates. (Didn't even realize the last one until I started getting hit with withdrawal - I've never been addicted to sugar before. But since I've cut it out of my diet this last time, which I've done many times before without issue, I've started getting splitting headaches that are rapidly remedied by eating an orange.)

I have alcohol cravings from time to time, but I'm not addicted, since drinking is actually infrequent for me, and not doing so doesn't cause me any issue. That's another recent development which is making me consider clearing out the liquor cabinet. (I did have alcohol cravings once before, after my grandfather died. And my grandmother just died after a few years of progressive decline - she had a form of dementia, possibly Alzheimer's - so it may be depression. I don't -feel- depressed, but I didn't feel depressed last time I was, either, and it was only obvious in retrospect.)

Comment author: evand 03 April 2013 04:09:49AM 2 points [-]

Does my caffeine addiction count? If I stop drinking coffee, I anticipate mild withdrawal symptoms. I periodically do this when I find myself drinking lots of coffee; a few days without increases the effectiveness of the caffeine later.

I take prescription adderall, and am decidedly less functional without it. I sometimes skip a day on the weekends. I anticipate no withdrawal symptoms, but would be far less willing to stop taking it than the caffeine.

One evening a number of years ago, I smoked a couple cigarettes at a party. For almost two weeks afterwards, I reacted to seeing or smelling cigarettes by wanting one. I didn't have any more, and those thoughts went away.

Which of those would you count as addictions? I can imagine plenty of obvious cases either way, but the boundary seems awkward to define, and very common in the case of things like caffeine and sucrose. (I answered yes in the poll, because of the caffeine.)

Comment author: NancyLebovitz 03 April 2013 04:41:18PM 1 point [-]

For what it's worth, what I was interested in was getting deep enough into the obvious life-wreckers that it was urgent to stop using them. Even that's vague, of course. Alcohol has short term emotional/cognitive effects which cause much more damage faster than cigarettes can.

Comment author: Desrtopa 08 April 2013 09:24:03PM 1 point [-]

It may be worth creating another poll which clarifies whether or not to count socially accepted addictions such as caffeine; some people seem to have answered on the assumption that it doesn't count, while others have answered on the assumption that it does.

Comment author: Decius 08 April 2013 09:12:00PM 1 point [-]

Caffeine here; based on serious withdrawal symptoms on quitting.

Comment author: [deleted] 03 April 2013 06:22:30PM 1 point [-]

I answered “No”, but one might quibble about whether I actually qualify as not addicted to caffeine. (I'm operationalizing “addiction” as ‘my performance when I don't use X for a couple of days is substantially worse than the baseline level before I started regularly using X in the first place, or when I stop using X for several months’. I am a bit less wakeful if I let go of caffeine for a couple of days than the level I revert to when I let go of it for months, but not terribly much so, and there are all sorts of confounds anyway.)

Comment author: Michelle_Z 02 April 2013 06:06:14AM *  5 points [-]

I started a blog about a month or two ago. I use it as a "people might read this so I better do what I'm committing to do!" tool.

Link: Am I There Yet?

Feel free to read/comment.

Comment author: CAE_Jones 04 April 2013 11:04:42PM 4 points [-]

I get the impression that there is something extremely broken in my social skills system (or lack thereof). Something subtle, since professionals have been unable to point this out to me.

I find that my interests rarely overlap with anyone else's enough to sustain (or start, really) conversation. I don't feel motivated to force myself to look at whatever everyone else is talking about in order to participate in a conversation about it.

But it feels like there's something beyond that. I was given the custom title of "the confusenator" on one forum. I was straight-up told I was boring when I interjected in a round of bickering that interrupted a debate (also on an internet forum). I find myself being ignored in many places, even those specifically narrow enough in focus to increase interest overlap. (No, I don't post enough at LW for me to count it at this point in time.)

In real life, I physically can't do the all-important eye contact thing, and I'm too self-conscious/anxious/whatever to use a great deal of volume when speaking. And I can't see lots of things that convey important information about whether someone is available for talking to / nonverbal cues / etc. So real life, I kinda understand.

But none of those apply to the internet, and I still wind up stuck in my own little world there.

Surely I'm missing something?

Comment author: ChristianKl 07 April 2013 04:49:33PM 1 point [-]

Your writing isn't very clear.

http://lesswrong.com/lw/ou/if_you_demand_magic_magic_wont_help/8o31 is a good example. To me it isn't clear what point you want to make with that post.

I get the impression that you try to list a few facts that you consider to be true instead of trying to make a point. It might help to edit your writing to remove words that don't advance the point you want to make.

When it comes to real-life conversations, lack of interest overlap is rarely the main problem. Even if you know nothing about a topic, you can have a conversation where the other person explains something about it to you.

The problem is more emotional. If you are anxious, it's hard for a conversation to flow.

*For disclosure, my own writing isn't the clearest either. It's still a lot better than it was in the past.

Comment author: NancyLebovitz 06 April 2013 02:27:33PM 1 point [-]

If you supply a sample or two of your writing in context from other forums, perhaps it will be easier for someone here to see a pattern of what you're doing.

Comment author: niceguyanon 05 April 2013 09:03:03PM 1 point [-]

Surely I'm missing something?

Perhaps more practice?

Comment author: [deleted] 02 April 2013 10:09:06PM *  4 points [-]

If I stay up ~4 hours past my normal waking period, I get into a flow state and it becomes really easy to read heavy literature. It's like the part of my brain that usually wants to shift attention to something low effort is silenced. I've had a similar, but less intense increase in concentration after sex / masturbation.

Anyone else had that experience?

Comment author: Douglas_Knight 04 April 2013 12:54:11AM 4 points [-]

A very common phenomenon is that people are inhibited from doing work because they don't like the quality of what they produce. If they are a little sleep-deprived or drunk, they can avoid this inhibition. I think you're talking about something else, though.

Comment author: RomeoStevens 05 April 2013 07:10:32AM 0 points [-]

this seems like a super important insight for creativity. Is there a way to practice caring less about initial quality? I'm thinking the obvious of just brainstorming and stream of consciousness writing with as little filter as possible.

Comment author: NancyLebovitz 06 April 2013 02:22:23PM 1 point [-]

How about meditation? Or the cognitive approach of reminding yourself that the path to excellence requires both mistakes and messing around?

Comment author: Qiaochu_Yuan 05 April 2013 07:38:26AM *  1 point [-]

Or the even more obvious of just getting drunk?

Comment author: RomeoStevens 05 April 2013 08:04:20AM 1 point [-]

Yes, I meant cultivating it in a non-impaired state.

Comment author: Qiaochu_Yuan 05 April 2013 08:54:29AM 1 point [-]

You don't think practice while drunk would transfer to non-drunk? I guess there's the issue of state-dependent memory, but I think a plausible strategy is to start your creative sessions drunk and then gradually decrease the amount of alcohol involved over time.

Comment author: Zaine 09 April 2013 06:44:25AM *  1 point [-]

Alcohol is a depressant - it binds to pre-synaptic receptors for the brain's major inhibitory neurotransmitter, gamma-aminobutyric acid (GABA). The delta-subunit-containing GABA receptor, to which ethanol has bound, allows an influx of negatively charged chloride ions into the pre-synaptic GABAergic (GABA-transmitting) cell; the cell's charge is lowered, which inhibits further action potentials. Cells that transmit GABA inhibit other cells; hyperpolarising (making the cell's net charge more negative) the inhibitory pre-synaptic GABAergic cell dis-inhibits the post-synaptic cell, which may be excitatory or inhibitory. In the general case of the post-synaptic cell being excitatory, one's brain will become less inhibited - which is not a good thing for cognitive computation.

Due to physics I confess to not presently comprehending, an entirely uninhibited brain will fire in synchrony. Synchrony of action potential frequency has been observed and mathematically measured to result in decreased cognitive performance: asynchronous brain activity is high-performance brain activity (beta waves). I understand it from a reactivity perspective - in order to respond quickly to a stimulus, one needs to inhibit one's current action and respond to that stimulus; GABAergic neurones are critical to that inhibition.

In sum, while a buzzed person may feel very happy and jumpy, their reduced cognitive ability to inhibit active firing patterns hinders cognitive performance (they are jumpy because motor neurones are being dis-inhibited, too).

With sufficient ethanol saturation voltage-gated sodium channels become less able to detect changes in the charge of their proximity; non-polar lipid-like ethanol does not conduct electricity. Impaired ability to respond to environmental changes around the cell fetters neurone firing, leading to a drunkard's depressed, or rather retarded behaviour.

From a speculative standpoint, perhaps the increased excitability and decreased potential for inhibition conduce to fewer cognitive interruptions along the lines of, "Hey, listen! To experience an instant reward go to Hyrule!" One's thoughts, literally, cannot be stopped enough to have that thought.

Comment author: Kawoomba 09 April 2013 06:56:09AM *  0 points [-]

While ethanol is a neurodepressant overall, its effects can initially mirror those of a stimulant ('biphasic').

Comment author: Zaine 09 April 2013 11:36:21PM 1 point [-]

It's still depressing neurones; the neurones it's depressing are inhibitory neurones, which dis-inhibits excitatory neurones. Your comment prompted me to do a research-check, and it turns out I was completely wrong (don't theorise beyond your nose, eh?). The above comment now reflects reality.

Comment author: OrphanWilde 03 April 2013 01:54:49AM 3 points [-]

~5 hours after I usually go to bed is an incredibly productive period of time for me. So the timing doesn't correspond, but the "part of my brain that usually wants to shift attention" does.

Comment author: Panic_Lobster 02 April 2013 06:41:27AM *  4 points [-]

Here is a blog which asserts that a global conspiracy of transhumanists controls the media and places subliminal messages in pop music such as the Black Eyed Peas music video "Imma Be" in order to persuade people to join the future hive-mind. It is remarkably lucid and articulate given the hysterical nature of the claim, and even includes a somewhat reasonable treatment of transhumanism.

http://vigilantcitizen.com/musicbusiness/transhumanism-psychological-warfare-and-b-e-p-s-imma-be/

Transhumanism is the name of a movement that claims to support the use of all forms of technology to improve human beings. It is far more than just a bunch of harmless and misguided techie nerds, dreaming of sci-fi movies and making robots. It is a highly organized and well financed movement that is extremely focused on subverting and replacing every aspect of what we are as human beings – including our physical biology, the individuality of our minds and purposes of our lives – and the replacement of all existing religious and spiritual beliefs with a new religion of their own – which is actually not new at all.

EDIT: I see this was previously posted back in 2010, but if you haven't witnessed this blog yet it is worth a look.

Comment author: fubarobfusco 02 April 2013 06:10:28PM 7 points [-]

Good to know that someone's keeping the ol' Illuminati flame burning. Pope Bob would be proud.

The thing I find most curious about the Illuminati conspiracy theory is that if you look at the doctrines of the historical Bavarian Illuminati, they are pretty unremarkable to any educated person today. The Illuminati were basically secular humanists — they wanted secular government, morality and charity founded on "the brotherhood of man" rather than on religious obedience, education for women, and so on. They were secret because these ideas were illegal in the conservative Catholic dictatorship of 18th-century Bavaria — which suppressed the group promptly when their security failed.

If CFAR becomes at all successful, conspiracists will start referring to it as an Illuminati group. They will not be entirely wrong.

Comment author: Multiheaded 03 April 2013 02:27:27AM 3 points [-]

The thing I find most curious about the Illuminati conspiracy theory is that if you look at the doctrines of the historical Bavarian Illuminati, they are pretty unremarkable to any educated person today. The Illuminati were basically secular humanists — they wanted secular government, morality and charity founded on "the brotherhood of man" rather than on religious obedience, education for women, and so on.

Might I interest you in the theories of Mencius Moldbug?

Comment author: [deleted] 03 April 2013 12:10:24PM *  3 points [-]

Please give the poor sap a link to a summary of them; even “A gentle introduction to Unqualified Reservations” made me go tl;dr a third of the way through Part 1.

(What little I know about reactionary ideas comes from this, but I don't know how accurate that is.)

Comment author: ChristianKl 07 April 2013 11:22:19AM 0 points [-]

They modeled themselves after the Freemasons and drew a lot of their membership from them. Being a member of the Illuminati required a pledge of obedience. I would be very surprised if CFAR introduced that kind of behavior. You don't need pledges of obedience to advocate secular humanism.

Like the Freemasons, the Illuminati also performed secret rituals.

They were secret because these ideas were illegal in the conservative Catholic dictatorship of 18th-century Bavaria

That's not really true. Karl Theodor, who banned them, was a proponent of the Enlightenment. He didn't want secret groups that pledge obedience to get political power. He didn't want his government to be overturned. A lot of French people died in the French Revolution.

Comment author: jamesf 01 April 2013 04:53:37PM *  4 points [-]

I'm going to Hacker School this summer, and I need a place to stay in NYC between approximately June 1 and August 23. Does anyone want an intrepid 20-year-old rationalist and aspiring hacker splitting the rent with them?

Also, applications for this batch of Hacker School are still open, if you're looking for something great to do this summer.

Comment author: Nisan 01 April 2013 10:21:40PM 4 points [-]

Consider contacting the NYC LW email list.

Comment author: SilasBarta 03 April 2013 01:18:43AM 9 points [-]

It was recently brought to my attention that Eliezer Yudkowsky regards the monetary theories of Scott Sumner (short overview) to be (what we might call) a "correct contrarian cluster", or an island of sanity when most experts (though apparently a decreasing number) believe the opposite.

I would be interested in knowing why. To me, Sumner's views are a combination of:

a) Goodhart's folly ("Historically, an economic metric [that nobody cared about until he started talking about] has been correlated with economic goodness; if we only targeted this metric with policy, we would get that goodness. Here are some plausible mechanisms why ..." -- my paraphrase, of course)

b) Belief that "hoarded" money is pure waste with no upside. (For how long? A day? A month?)

If you are likewise surprised by Eliezer's high regard for these theories, please join me in encouraging him to explain his reasoning.

Comment author: bogus 03 April 2013 08:04:34AM *  2 points [-]

To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well. They include most obviously Iceland (despite a severe financial crisis), and some less obvious instances like Australia, Poland and Israel. One could point to the UK as the clearest counterexample, but just about everyone agrees that they have severe structural problems, which NGDPLT is not intended to address. And even then, monetary easing has allowed the conservative government to implement fiscal austerity without crashing the economy - this was widely expected to happen and there was a lot of public concern (compare the situation in the US wrt the "fiscal cliff" and "sequestration" scares; here too, the Fed offset the negative fiscal effect by printing money).

As for (b), nobody argues that money hoarding is a bad thing per se. But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations. Inflation targeting is a very rough way of doing this, but it's just not good enough (see George Selgin's book Less than Zero for an argument to this effect). ISTM that this is not well understood in the mainstream ("NK") macro literature, where supply shocks are confusingly modeled as "markup shocks". I have seen cutting-edge papers pointing out that these make inflation targeting unsound (sorry for not having a ref here).

Comment author: SilasBarta 04 April 2013 07:54:28PM 1 point [-]

To address your (a) comment, some countries have implemented close approximations to NGDP level targeting after the 2008 crisis, and have done well.

None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence. Rather, they had policies which, despite not specifically intending to, were followed by rising NGDP. The purported similarity to NGDPLT is typically justified on the grounds that the policy caused something related to happen, but there is a very big difference between that and directly targeting NGDP. And hence why it can't demonstrate why targeting a metric (that, again, no one even cared about until Sumner started blogging about it) will have the causal power that is claimed of it.

As for (b), nobody argues that money hoarding is a bad thing per se.

I disagree; I have yet to see any anti-hoarders mention anything positive whatsoever about hoarding, and they take it as a given that it's bad. Landsburg says it better than I can: the very people promoting anti-hoarding policies lack any framework in which you can compare the benefits of hoarding to the hoarders against its costs, and thus know whether it's on net beneficial. The best answer he gets is essentially, "well, it's obvious that there's a shortfall that needs to be rectified" -- in other words, it's just assumed.

To find an example of anyone saying anything positive about hoarding, you have to go to fringe Austrian economists, like in this article.

But it needs to be offset, because practically all prices in the economy are expressed in terms of money, and the price system cannot take the impact without severe side effects and misallocations.

But until you've quantified (or at least acknowledged the existence of) the benefits of hoarding, you can't know if these supposed misallocations are worse than the benefits given by the hoarding. You can't even know if they are misallocations, properly understood.

For once you accept that there's a benefit to hoarding, then the changes in prices induced by it are actually vital market signals, just like any price. Which would mean that you can't eliminate the price change without also destroying information that the market uses to improve resource use. I mean, oil shocks cause widespread price changes, but any attempt to stop these price changes is going to worsen the misallocation problem.

Toy example to illustrate the benefits, and important signal sent by, hoarding: let's say we have a class of typical investors, with no special non-public knowledge about specific companies. So when they invest, they invest in the economy as a whole. (Let's say they won't even consider using this part of their money for consumption.) But! 70% of the economy's investment venues are unsustainable and are actually destroying value in a way not currently obvious. In that case, it would be much better for these potential investors to hoard, rather than further advance this malinvestment. Sure, they'll starve the good 30% of projects of funds, but they'll also pull back on the bad 70%.

So I have yet to see any actual recognition of the benefits of hoarding among this group, which puts them in a ridiculous position. If holding money is bad, then the optimal situation is for any money received to be instantly spent on something else (whether consumption or investment). But this requires that you know what you're going to spend the money on before you earn it -- which just takes us back to barter! Thus, we see the benefit of hoarding/holding money: retaining the option value when you lack certainty about what you will spend it on. It thus signals consumers' uncertainty that they will be able to enter sustainable patterns of trade, and cannot be costlessly squashed (just as another Econ school thought of interest -- that it could be zeroed without negative consequence).

Comment author: bogus 05 April 2013 07:40:22AM *  1 point [-]

None of the examples have targeted NGDP, which is what Sumner needs to be true to have supporting evidence.

I think my examples do constitute supporting evidence of some kind. Yes, it would be good to have examples of countries specifically targeting NGDP, to prevent spurious correlations or Lucas critique problems. But even so, Iceland and to a lesser extent, Poland - and, to be fair, the UK - specifically accepted a rise in inflation in order to sustain demand - it wasn't a simple case of exogenously strong RGDP growth. (I think this might also apply to Australia, actually. Their institutional framework would certainly allow for that.) This makes the evidence quite credible, although it's not perfect by any means.

Also, Sumner was not at all the first economist to care about NGDP as a possible target. He is a prominent popularizer, but James Meade and Bennett McCallum had proposed it first.

Your example of the "benefits of hoarding" doesn't address the very specific problems with hoarding the unit of account for all prices in the economy, when prices are hard to adjust. Yes, money has a real option value, so money hoarding might signal some kind of uncertainty. However, you have not made the case that this "signaling" has any positive effects, especially when the operation of the price system is clearly impaired. By analogy, if peanuts were the unit of account and medium of exchange, then widespread hoarding of peanuts might signal uncertainty about the next harvest. But it would still cause a recession, and it wouldn't actually cause the relative price of peanuts to rise (or rise much at any rate), which is what might incent additional supply.

Moreover, in practice, an uncertain agent can attain most (if not all) of the benefit of hoarding money by holding some other kind of asset, such as low-risk bonds, gold or whatever the case may be. It's not at all clear that hoarding money specifically provides any additional benefit, or that such incremental benefits could be sustained without inflicting greater costs on other agents.

Comment author: Ante 03 April 2013 05:27:02AM 1 point [-]

Yes!

The comment is from a Hacker News thread about Bitcoin hitting $100. It would be cool to have him also expand more on Bitcoin itself, which he seems to regard as destructive but not necessarily doomed to fail. Here he entertains the idea of combining NGDP level targeting (which I don't understand) with the best parts of Bitcoin. This all sounds very interesting.

Comment author: therufs 04 April 2013 04:36:08PM 3 points [-]

Is there any particular protocol on reviving previously-recurring threads that are now dormant? I had some things to put in a Group Rationality Diary entry, but there hasn't been a post since early January. I sent cata a message a few days ago; haven't heard back.

Comment author: TimS 04 April 2013 05:23:32PM 4 points [-]

Alas, no such protocol exists. So just go for it.

Comment author: lukeprog 04 April 2013 02:19:19AM *  3 points [-]

Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: "I don't know, and neither does anyone else!"

Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it's damn hard to predict those.

In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn't been solved.

Or imagine trying to predict, back in 1990, when we'd have self-driving cars. Even in 2003 it wasn't obvious we were very close. Now it's 2013 and they totally work, they're just not legal yet.

Same problem with Strong AI. We can't be confident AI will come in the next 30 years, and we can't be confident it'll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.

Comment author: gwern 04 April 2013 03:40:49AM 10 points [-]

you probably need fundamental mathematical insights, and it's damn hard to predict those.

We can still try. As it happens, a perfectly relevant paper was just released: "On the distribution of time-to-proof of mathematical conjectures"

What is the productivity of Science? Can we measure an evolution of the production of mathematicians over history? Can we predict the waiting time till the proof of a challenging conjecture such as the P-versus-NP problem? Motivated by these questions, we revisit a suggestion published recently and debated in the "New Scientist" that the historical distribution of time-to-proof's, i.e., of waiting times between formulation of a mathematical conjecture and its proof, can be quantified and gives meaningful insights in the future development of still open conjectures. We find however evidence that the mathematical process of creation is too much non-stationary, with too little data and constraints, to allow for a meaningful conclusion. In particular, the approximate unsteady exponential growth of human population, and arguably that of mathematicians, essentially hides the true distribution. Another issue is the incompleteness of the dataset available. In conclusion we cannot really reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset that we have studied, translating into an average waiting time to proof of 100 years. We hope that the presented methodology, combining the mathematics of recurrent processes, linking proved and still open conjectures, with different empirical constraints, will be useful for other similar investigations probing the productivity associated with mankind growth and creativity.

They took the 144 from the Wikipedia list of conjectures; their population covariate is just an exponential equation they borrowed from somewhere. Regardless, they turn in the result one would basically expect: a constant chance of solving a problem in each time period. (In turn, this and the correlation with population suggests to me that solving conjectures is more parallel than serial: delays are related more to how much mathematical effort is being devoted to each problem.)
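The headline number is easy to sanity-check. Under a constant-hazard ("memoryless") model, a conjecture proved at rate 0.01/year has an exponentially distributed time-to-proof with mean 1/0.01 = 100 years; a toy simulation (my own sketch, not the paper's code) recovers this:

```python
import random

# Toy constant-hazard model: each open conjecture is proved at a fixed
# rate `lam` per year, so waiting times are exponential with mean 1/lam.
def simulate_waiting_times(lam=0.01, n=100_000, seed=0):
    rng = random.Random(seed)
    return [rng.expovariate(lam) for _ in range(n)]

times = simulate_waiting_times()
mean_wait = sum(times) / len(times)
print(mean_wait)  # close to 1/0.01 = 100 years
```

Constant hazard also means the chance of a proof in the next decade is the same however long the conjecture has already been open, which matches the "constant chance of solving a problem in each time period" reading above.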

Comment author: lukeprog 04 April 2013 04:14:02AM 2 points [-]

Nice.

Comment author: jooyous 02 April 2013 10:59:28PM *  3 points [-]

I had some students complaining about test-taking anxiety! One guy came in and solved the last midterm problem 5 minutes after he had turned in the exam, so I think this is a real thing. One girl said that calling it something that's not "exam" made her perform better. However, it seems like none of them had ever really confronted the problem? They just sort of take tests and go "Oh yeah, I should have gotten that. I'm bad at taking tests."

Have any of you guys experienced this? If so, have you tried to tackle it head-on? It seems like there should be a handy tool-box of things to do when experiencing anxiety during a test. I personally don't have this problem, so I have no idea. (I get a little nervous and take a minute to breathe and I'm fine. And avoid drinking coffee on exam days!)

Comment author: Qiaochu_Yuan 03 April 2013 05:44:02AM 4 points [-]

so I think this is a real thing

Is this meant to imply that you didn't previously think this is a real thing or that you hadn't heard of it until now? It's apparently a well-studied phenomenon, I think I know people who experience it, and it's completely consistent with my current model of human psychology.

Comment author: jooyous 03 April 2013 06:36:40AM *  1 point [-]

No no, I believed it. I just didn't want people commenting "your students are just complaining to weasel a better grade out of you," because I had some people telling me that students sometimes try to befriend TAs and suck up to them. Though I guess it's not that relevant that these particular students had it. I was just surprised at how bad it was. It's almost like as soon as the test is over, you can think again? I sorta figured people would seek treatment for something that serious.

Comment author: Qiaochu_Yuan 03 April 2013 06:52:07AM *  3 points [-]

I think there's a double typical mind fallacy here. You were surprised because your mind doesn't work their way, and it doesn't occur to them to do anything about it because they just think that's what tests feel like. Also, an anxiety disorder is tantamount to a mild mental illness, and people still have a lot of hangups about seeking mental health services in general.

Comment author: jooyous 03 April 2013 07:03:04AM 1 point [-]

Yeah, I think you're right, because when people say they get nervous before tests, I think, "Oh sure, I get nervous too!" But not to the point where I spend half the time sitting there, unable to write anything down.

I'm a bit concerned that a lot of the treatment options on that page are drugs. Is it really safe to drug people before their brain is supposed to do mathy things? Is it cheating? Do any of the people you know have any handy CBT-style rituals that help calm them down? I think from now on I'm also going to persuade professors to call exams "quizzes" or something.

Comment author: wedrifid 03 April 2013 08:02:48AM *  1 point [-]

I'm a bit concerned that a lot of the treatment options on that page are drugs. Is it really safe to drug people before their brain is supposed to do mathy things?

Probably not more unsafe than drugging them at other times. As for performance... most anxiolytic substances impair mental function somewhat. It's what they are notorious for (i.e. Valium and ethanol). Still, the effects aren't strong enough that crippling anxiety wouldn't be worse. On the other hand, a few things like phenibut and aniracetam could lead to somewhat increased performance even apart from their anxiolytic effects.

Is it cheating?

No. There isn't (usually) a rule against it so it isn't cheating. (Sometimes there are laws against prescription substances, but that is different. That makes you a criminal not a cheater!)

Comment author: jooyous 04 April 2013 07:52:43PM *  2 points [-]

I guess I understand using drugs for other mental disorders (the persistent ones that interfere with more areas of life) but it weirds me out that we create this bizarre social construct called "tests" that give people crippling anxiety ... and then we solve the problem with drugs. Instead of developing alternative models for testing people. (Although there are probably correlations and people with test anxiety might get it for other things as well?)

Comment author: RomeoStevens 05 April 2013 07:13:27AM 1 point [-]

I think this has to do with the difference between work and curiosity mode. In curiosity mode solving problems is much easier, but stress reliably kills it. Once the stress is gone, the answers come pouring out.

Comment author: OrphanWilde 04 April 2013 08:23:47PM 1 point [-]

It's extremely common with certain learning disabilities, like dyslexia and to a lesser extent dyscalculia. For many people, it's the time limit, rather than the seriousness of the task itself, and eliminating the time limit to take the test permits them to finish it without issue (frequently within the time limit!).

Comment author: FiftyTwo 01 April 2013 09:56:07PM 3 points [-]

I've been looking at postgraduate programmes in the philosophy of artificial intelligence (primarily, but not necessarily, in the UK). Does anyone have any advice or suggestions?

Comment author: Jayson_Virissimo 02 April 2013 08:49:20PM *  2 points [-]

Why so narrow as to exclude good computer science, cognitive science, philosophy of mind, etc. programs from consideration?

Comment author: FiftyTwo 03 April 2013 11:09:32AM 1 point [-]

No particular reason. I am looking at general Philosophy programmes and cognitive science as well.

I ask specifically about AI programmes because it's a very specialised field and it is difficult to distinguish which programmes are worth doing (as certain institutions have started up 'AI' programmes that are little more than pre-existing modules rearranged to make money). I figure there are enough people involved in the field here that they would have relevant expertise.

Comment author: Manfred 02 April 2013 01:47:17AM *  1 point [-]

No advice from me, sorry. But I am interested in what you'd be doing. To be succinct, do you want to write for a philosophy audience, or an AI researcher audience?

Comment author: beoShaffer 03 April 2013 02:56:25AM *  6 points [-]

I’m doing a research project on attraction and various romantic strategies. I’ve made several short online courses organizing several different approaches to seduction, and am looking for men 18 and older who are interested in taking them, as well as a short pre- and post-survey designed to gauge the effectiveness of the techniques taught. If you want to sign up, or know anyone who might be interested, you can use this google form to register. If you have any questions, comment or PM me and I’ll get back to you.

ETA: Since someone mentioned publication, I thought I should clarify. This is specifically a student research project, so unlike a class project I am aiming for a peer-reviewed publication; however, the odds are much slimmer than if someone more experienced/academically higher-status were running it. Also, even if it doesn't get formally published I will follow the "Gwern model". That is to say, I'll publish my results online along with as much of my materials as I can (the courses are my own work + publicly available texts, but I only have a limited license for the measures I'm using).

Comment author: Adele_L 04 April 2013 04:18:14AM 1 point [-]

Are you also going to try to gauge how friendly to women each technique is?

Comment author: beoShaffer 04 April 2013 04:58:37AM 1 point [-]

That is not something the study is designed to measure; however, it was a major consideration in designing the curricula.

Comment author: Adele_L 04 April 2013 05:59:31PM 1 point [-]

Alright.

Comment author: ChristianKl 07 April 2013 12:51:47AM 0 points [-]

Your sign up form doesn't say anything about the amount of time/effort that you expect students to invest into the course.

Comment author: beoShaffer 07 April 2013 06:32:12PM 0 points [-]

Thanks for catching that. I’ve edited the instructions to be clearer. For reference, here is the added text: The basic lesson format is a short reading (a few pages), an assignment applying the reading to your life, and a short follow-up/written reflection. There is some variability, but the assignments tend to be short (in the vicinity of 5 minutes) and/or designed to be worked into normal social interaction. That said, the normal social interaction part does assume that you are frequently around women that you have some interest in flirting with, asking out, etc. If this is not the case, finding suitable women to interact with could take significantly more time.

Comment author: wedrifid 04 April 2013 05:14:22AM 5 points [-]

The free will page is obnoxious. There have been several times in recent months when I have needed to link to a description of the relationship between choice, determinism and prediction but the wiki still goes out of its way to obfuscate that knowledge.

One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.

That's a nice thought. But it turns out that many lesswrong participants don't try to solve it on their own. They just stay confused.

There have been some other discussions of the subject here (and countless elsewhere). Can someone suggest the best reference available that I could link to?

Comment author: RichardKennaway 04 April 2013 07:46:11AM 1 point [-]

Can someone suggest the best reference available that I could link to?

The free will (solution) page?

Comment author: [deleted] 10 April 2013 02:50:12PM *  2 points [-]

After rereading the metaethics sequence, a possible reason occurred to me why people can enjoy the artistic genre of tragedy. I think there's an argument to be made along the lines of "watching tragedy is about not feeling guilty when you can't predict the future well enough to see what right is."

Comment author: Kindly 09 April 2013 12:11:31AM 2 points [-]

Grading is the bane of my existence. Every time I have to grade homework assignments, I employ various tricks to keep myself working.

My normal approach is to grade 5 homework papers, take a short break, then grade 5 more. It occurred to me just now that this is similar to the "pomodoro" technique so many people here like, except work-based instead of time-based. Is the time-based method better? Should I switch?

Anyway, back to grading 5 more homework papers.

Comment author: Qiaochu_Yuan 09 April 2013 12:24:57AM 2 points [-]

I think using Pomodoros is more fun because you can do things like record how many assignments you grade per Pomodoro. Now you can keep track of your "high score" and try to break it. Competition is fun and worth leveraging for motivation, even if it's with your past selves.

Comment author: jooyous 09 May 2013 06:52:11PM *  1 point [-]

But doesn't that make you inclined not to read as carefully, grade as thoroughly, or leave as many comments? "Oh whatever, that was mostly right. Yay, high score!"

Comment author: Qiaochu_Yuan 09 May 2013 07:15:04PM 2 points [-]

If you're at the point where you need to employ tricks to finish the grading at all, then I think this is unfortunately a secondary concern. Once you can consistently finish the grading, then I think you can start worrying about its quality.

Comment author: jooyous 09 May 2013 08:03:20PM *  1 point [-]

See, I always worry that the easiest way to get through grading is to just give everyone A's regardless of what they turned in. So I feel like you somehow have to factor in a reward for quality or that's what your system will collapse into?

Comment author: Qiaochu_Yuan 10 May 2013 04:43:58AM 1 point [-]

I would never be tempted to do that, but that comes from a strong desire to tell people when they're wrong which is not necessarily a good thing overall.

Comment author: ciphergoth 08 April 2013 07:24:39PM 2 points [-]

I've known for a while that for every user there's an RSS feed of their comments, but for some reason it's taken me a while to get in the habit of adding interesting people in Google Reader. I'm glad I have.

(Effort in adding them now isn't wasted, since when I move from Google Reader I'll use some sort of tool to move all my subscriptions across at once to whatever I move to)

Comment author: drethelin 08 April 2013 07:42:26PM 0 points [-]

I should do this for more people than Quirrel

Comment author: DanielLC 04 April 2013 08:57:09PM 2 points [-]

In HP:MoR, Harry mentioned that breaking conservation of energy allows for faster-than-light signalling. Can someone explain how?

Comment author: [deleted] 04 April 2013 05:20:24AM 2 points [-]

How much do we know about reasoning about subjective concepts? Bayes' law tells you how probable you should consider any given black-and-white no-room-for-interpretation statement, but it doesn't tell you when you should come up with a new subjective concept, nor (I think) what to do once you've got one.

Comment author: lucidian 12 April 2013 05:33:51PM 1 point [-]

You may be interested in the literature on "concept learning", a topic in computational cognitive science. Researchers in this field have sought to formalize the notion of a concept, and to develop methods for learning these concepts from data. (The concepts learned will depend on which specific data the agent encounters, and so this captures some of the subjectivity you are looking for.)

In this literature, concepts are usually treated as probability distributions over objects in the world. If you google "concept learning" you should find some stuff.

Comment author: Qiaochu_Yuan 05 April 2013 04:33:53AM 0 points [-]

"Subjective" seems uselessly broad. Can you give a more specific example?

Comment author: [deleted] 09 April 2013 04:53:01AM 0 points [-]

Well, I guess that by "subjective concepts", I mean every concept that doesn't have a formal mathematical definition. So stuff like "simple", "similar", "beautiful", "alive", "dead", "feline", and so on through the entire dictionary.

The only theory-of-subjective-concepts I've come across is the example of bleggs and rubes. Suppose that, among a class of objects, five binary variables are strongly correlated with each other; then it is useful to postulate a latent variable stating which of two types the object is. This latent variable is the "subjective concept" in this case.
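The blegg/rube picture is easy to make concrete. In this toy sketch (numbers mine, purely illustrative), a hidden binary type drives five noisy binary features; a simple majority vote over the observed features then recovers the latent variable almost perfectly, which is what makes postulating it useful:

```python
import random

# Toy blegg/rube model: a hidden binary "type" drives five observable
# binary features, each matching the type 90% of the time. The features
# end up correlated with one another, and majority vote over them
# recovers the latent type almost perfectly.
rng = random.Random(0)

def sample():
    t = rng.randint(0, 1)  # latent type (blegg=1, rube=0, say)
    feats = [t if rng.random() < 0.9 else 1 - t for _ in range(5)]
    return t, feats

data = [sample() for _ in range(10_000)]
recovered = sum(1 for t, f in data if (sum(f) >= 3) == bool(t))
print(recovered / len(data))  # majority vote matches the latent type ~99% of the time
```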

Comment author: Qiaochu_Yuan 09 April 2013 05:15:11AM *  1 point [-]

Think of subjective concepts as heuristics that help you describe models of the world. Evaluate those models based on their predictions. (Grounding everything in terms of predictions is a great way to keep your thinking focused. Otherwise it's too easy to go on and on about beauty or whatever without ever saying anything that actually controls your anticipations.)

Have you read the rest of 37 Ways That Words Can Be Wrong?

Comment author: iconreforged 04 April 2013 03:20:24PM 2 points [-]

I watched an awesome movie, and now I'm coasting in far mode. I really like being in far mode, but is this useful? What if I don't want to lose my awesome-movie high?

Are there some things that far mode is especially good for? Should I be managing finances in this state? Reading a textbook? Is far mode instrumentally valuable in any way? Or should I make the unfortunate transition back to near mode?

Comment author: Qiaochu_Yuan 05 April 2013 04:32:31AM 1 point [-]

Based on the description at the LW wiki, it sounds like far mode is a good time to evaluate how risk-averse you've been and whether there are risky opportunities you should be taking that you previously weren't taking because of risk-aversion.

Comment author: OrphanWilde 02 April 2013 09:16:08PM 2 points [-]

Site suggestion:

When somebody attempts to post a comment with the words "why", "comment", and "downvoted", it should open a prompt directing them to an FAQ explaining the most likely reasons for their downvote, and warning them, before they actually submit the comment, that it's likely to be unproductive and just lead to more downvotes.

(Personally I think this site needs to have a little more patience with people asking these questions, as they almost always come from new users who are still getting accustomed to the community norms, but that's just me.)

Comment author: itaibn0 02 April 2013 11:12:44PM 6 points [-]

This suggestion conflicts with the advice of the Welcome Thread, which says:

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.)

Comment author: OrphanWilde 03 April 2013 01:52:40AM 5 points [-]

And yet I persistently see requests for explanations downvoted. The advice of the welcome thread does not actually correspond to downvoting behavior.

Comment author: itaibn0 03 April 2013 01:05:04PM 6 points [-]

If the advice of the welcome thread doesn't match the actual LW norms, we should change either the welcome thread or the norms.

Comment author: Kaj_Sotala 03 April 2013 08:02:46AM 1 point [-]

Do the requests remain downvoted? In my experience, they may be downvoted for a while, but then get voted back up.

Comment author: TimS 03 April 2013 01:54:04PM 2 points [-]

It depends a bit on why the original post was downvoted. Asking for an explanation when the problem is obvious, or on a forbidden topic, tends not to get back to neutral.

Comment author: OrphanWilde 03 April 2013 05:28:50PM 3 points [-]

Obvious to regular users != obvious to new user

Comment author: TimS 03 April 2013 06:06:42PM *  2 points [-]

Monkeymind knew why he was being downvoted.

Edit: But, I agree with your point that many community norms that will get one downvoted are not accessible to new members.

Comment author: Kawoomba 03 April 2013 09:03:57AM 1 point [-]

The Welcome Thread doesn't set official policy.

Comment author: Viliam_Bur 03 April 2013 08:10:11AM 6 points [-]

In my opinion, downvoting is necessary for forum moderation, and people don't downvote enough. It is rather easy to get a lot of karma by simply writing a lot, because the average karma of a comment (this is just my estimate) is around 1. I would prefer if the average was closer to 0.

Asking about downvoting is ok, per se (assuming that the person does not do it with every single damned comment which fell to -1 temporarily). But sometimes it seems to contain a connotation that "you should not downvote my comments unless you explain why". Which I completely disagree with and consider it actively harmful, so I automatically downvote any comment that feels like this. (Yes, there is a chance that I misunderstood the author's intentions. Well, I am not omniscient, and I don't want to get paralyzed by my lack of omniscience.)

Comment author: drethelin 02 April 2013 10:50:05PM 2 points [-]

I just downvote people complaining about downvotes

Comment author: NancyLebovitz 03 April 2013 12:33:03AM 4 points [-]

Do you make a distinction between complaining and asking?

Comment author: drethelin 03 April 2013 08:49:29PM 5 points [-]

A little? Most "asking about downvotes" comments are functionally indistinguishable from complaints, although phrased as questions, in the sense of "I don't understand why I'm getting downvoted (implying it doesn't make sense and you are wrong to do so)". If someone posts an in-depth post and it gets downvoted and they ask which specific parts of their giant post were bad, I give that more leeway.

Comment author: NancyLebovitz 06 April 2013 02:20:44PM -1 points [-]

If I see a question about downvotes that's below 0, I'm going to upvote it.

Comment author: NancyLebovitz 06 April 2013 03:23:14PM 4 points [-]

I don't think I need to ask why that got downvoted.

Comment author: [deleted] 02 April 2013 09:32:47PM *  1 point [-]

This and other variants of it have been tried on other forums before, with no real change in new user behavior.

Comment author: shminux 05 April 2013 10:19:49PM 2 points [-]

Trying to get a handle on the concept of agency. EY tends to mean something extreme, like "heroic responsibility", where all the non-heroic rest of us are NPCs. Luke's description is slightly less ambitious: an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs. Wikipedia defines it as a "capacity to act", which is not overly useful (do ants have agency?). The LW wiki defines it as the ability to take actions which one's beliefs indicate would lead to the accomplishment of one's goals. This is also rather vague.

Assuming that agency is not all-or-nothing, one should be able to measure the degree/amount/strength of agency. Is this different from, say, intelligence as an "ability to reason, plan, solve problems"? Are there examples of intelligent non-agents or non-intelligent agents? Assuming the two are correlated but not identical, how does one separate them? Is there a way to orthogonalize the two?

Comment author: Qiaochu_Yuan 08 April 2013 08:25:35AM *  3 points [-]

CFAR's notion of agency is roughly "the opposite of sphexishness," a concept named after the behavior of a particular kind of wasp:

Some Sphex wasps drop a paralyzed insect near the opening of the nest. Before taking provisions into the nest, the Sphex first inspects the nest, leaving the prey outside. During the inspection, an experimenter can move the prey a few inches away from the opening. When the Sphex emerges from the nest ready to drag in the prey, it finds the prey missing. The Sphex quickly locates the moved prey, but now its behavioral "program" has been reset. After dragging the prey back to the opening of the nest, once again the Sphex is compelled to inspect the nest, so the prey is again dropped and left outside during another stereotypical inspection of the nest. This iteration can be repeated again and again, with the Sphex never seeming to notice what is going on, never able to escape from its programmed sequence of behaviors. Dennett's argument quotes an account of Sphex behavior from Dean Wooldridge's Machinery of the Brain (1963). Douglas Hofstadter and Daniel Dennett have used this mechanistic behavior as an example of how seemingly thoughtful behavior can actually be quite mindless, the opposite of free will (or, as Hofstadter described it, sphexishness).

So ants don't have agency. The difference between intelligence and agency seems to me to vanish for sufficiently intelligent minds but is relevant to humans. Like ArisKatsaris I think that for humans, intelligence is the ability to solve problems but agency is the ability to prioritize which problems to solve. It seems to me to be much easier to test for intelligence than for agency; I thought for a bit, a while ago, about how to test my own agency (and in particular to see how it varies with time of day, hunger level, etc.) but didn't come up with any good ideas.

One sign of sphexishness in humans is chasing after lost purposes.

Comment author: ArisKatsaris 06 April 2013 09:55:13AM *  1 point [-]

How about "agency" as the extent by which people are moved to action by deliberate thought and by preferences they're aware of -- as opposed to by habit, instinct, social expectations or various unconscious drives.

That's pretty much similar to Luke's definition I guess.

Is this different from, say, intelligence as an "ability to reason, plan, solve problems"?

It's different in that it also chooses which problems to seek to solve, in accordance with one's own self-aware preferences.

Are there examples of intelligent non-agents or non-intelligent agents?

Lots of intelligent non-agents -- a pocket calculator for example.

Comment author: NancyLebovitz 11 April 2013 07:03:15AM 1 point [-]

Hazards of botched IT: cost overruns are nothing compared to what can go wrong when you actually use the software.

Software which can answer "is this obviously stupid?" would be a step towards FAI.

Comment author: [deleted] 09 April 2013 12:05:20PM *  1 point [-]

Toby Ord gave a Google Tech Talk on efficient charity and QALYs this March.

Comment author: John_Maxwell_IV 07 April 2013 11:20:21PM *  1 point [-]

Does anyone here have thoughts on the x-risk implications of Bitcoin? Rebalancing is a way to make money off of high-volatility investments like Bitcoin (the more volatility, the more money you make through rebalancing). If lots of people included Bitcoin in their portfolios, and started rebalancing them this way, then the price of Bitcoin would also become less volatile as a side effect. (It might even start growing in price at whatever the market rate of return for stocks/bonds/etc. is, though I'd have to think about that.)

So given that I could spread this meme on how you can get paid to decrease Bitcoin's volatility, should I do it?
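For what it's worth, the rebalancing effect is easy to illustrate with a deliberately extreme toy example (deterministic prices, no fees or taxes; real volatility harvesting is much smaller and stochastic):

```python
def simulate(returns, rebalance):
    cash, asset = 0.5, 0.5  # $1 split 50/50; cash earns nothing
    for r in returns:
        asset *= (1 + r)
        if rebalance:
            total = cash + asset
            cash = asset = total / 2  # restore the 50/50 split
    return cash + asset

# The volatile asset alternately doubles and halves, ending where it began.
cycle = [1.0, -0.5] * 10
print(simulate(cycle, rebalance=False))  # 1.0: buy-and-hold just round-trips
print(simulate(cycle, rebalance=True))   # ~3.25: each double/halve cycle gains 12.5%
```

The rebalanced portfolio sells high and buys low mechanically, which is where the volatility premium comes from; with a trending rather than mean-reverting asset the comparison is much less favorable.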

Comment author: [deleted] 07 April 2013 11:51:36PM 0 points [-]

So given that I could spread this meme on how you can get paid to decrease Bitcoin's volatility, should I do it?

Why wouldn't you?

Comment author: John_Maxwell_IV 08 April 2013 02:16:04AM *  0 points [-]

It would take time and effort, and having Bitcoin be a legitimate alternative currency might have unforeseen negative consequences, e.g.

Bitcoins weren't just created to sell a little weed on silk road.. they are used for child pornography, human trafficking, murder for hire, domestic and intl terrorism, hard drugs like heroin, and illegal arms sales... If you are for bitcoins, then you must also be OK with all of the above things...have fun supporting your pedophiles and murderers...

http://techcrunch.com/2013/04/13/beyond-the-bitcoin-bubble/

Comment author: CoffeeStain 06 April 2013 09:18:50PM 1 point [-]

So I'm running through the Quantum Mechanics sequence, and am about 2/3 of the way through. Wanted to check in here to ask a few questions, and see if there aren't some hidden gotchas from people knowledgeable about the subject who have also read the sequence.

My biggest hangup so far has been understanding when it is that different quantum configurations sum, versus when they don't. All of the experiments from the earlier posts (such as distinct configurations) seem to indicate that configurations sum when they are in the "same" time and place. Eliezer indicates at some point that this is "smeared" in some sense, perhaps due to the fact that all particles are smeared in space in time; therefore if two "particles" in different worlds don't arrive at the same place at exactly the same time, the smearing will cause the tail end of their amplitude distributions to still interact, resulting in a less perfect collision with somewhat partial results to what would have happened in the perfect experiment.

The hangup becomes an issue, barring any of my own misunderstanding (which is of course likely), when he starts talking about macroscopic other worlds. He goes so far as to say that when a quantum event is "observed," what really happens is that different versions of the experimenter become decohered with the various potential states of the particle.

Several things don't seem quite right here. First, Eliezer seems to imply here that brains only work (to the extent that they can have beliefs capable of being acted on) when they work digitally, with at least some neurons having definite on or off states. What happens to the conservation of probability volume due to Liouville's Theorem described in Classical Configuration Spaces? Or maybe I misunderstand here, and the probability volumes actually do become sharply concentrated in two positions. But then why is it not possible for probability volumes to become usually or always sharply concentrated in one position, giving us, for all practical purposes, a single world?

Backing up a bit though. What keeps different worlds from interacting? Eliezer implies in Decoherence that one important reason that decohered particles are such is a separation in space. What I fail to understand, if there is not some specified other axis, is why the claim stands that different but similar worlds (different only along that axis) fail to interact! According to his interpretation (or my interpretation of his interpretation) of quantum entanglement, your observation of a polarized particle at one end of a light-year limits the versions of your friend (who observed the entangled particle) that you are capable of meeting when you compare notes in the middle. But why do you just as easily not meet any other version of your friend? What is the invisible axis besides space and time that decoheres worlds, if we meet at the same place and time no matter what we observe?

More importantly, what keeps neurons which are at the same space and time from interacting with their other-world counterparts, as if they were as real as their this-world self?

Unless I'm completely off here, couldn't there be many fewer possible worlds than Eliezer suggests? In extremely controlled experiments, we observe decoherence on rather macroscopic levels, but isn't "controlled" precisely the point? In most normal quantum interactions, isn't there always going to be interference between worlds? And what if that interference by the nature of the fundamental laws just so happens to have some property (maybe a sort of race condition) that causes, usually, microscopic other worlds to merge? On average, if possible worlds become macroscopic enough, still-real interactions between the worlds become increasingly likely, and they are no longer "other worlds" but actually-interacting same-world, to the point where no two differently configured sets of neurons could ever observe differently.

I should stop here before I carry on any early-introduced fallacy to increasingly absurd conclusions. Would be very interested in how to resolve my confusion here.

Comment author: [deleted] 06 April 2013 10:25:47PM 1 point [-]

First, Eliezer seems to imply here that brains only work (to the extent that they can have beliefs capable of being acted on) when they work digitally, with at least some neurons having definite on or off states.

I assume you mean this section:

Your world does not split into exactly two new subprocesses on the exact occasion when you see "ABSORBED" or "TRANSMITTED" on the LCD screen of a photon sensor. We are constantly being superposed and decohered, all the time, sometimes along continuous dimensions—though brains are digital and involve whole neurons firing, and fire/not-fire would be an extremely decoherent state even of a single neuron... There would seem to be room for something unexpected to account for the Born statistics—a better understanding of the anthropic weight of observers, or a better understanding of the brain's superpositions—without new fundamentals.

He's not exactly saying that brains only work digitally -- they don't; neuron activation isn't only about electrical impulses -- he's just talking about one particular process that happens in the brain. At least, as far as I can tell.

Comment author: CoffeeStain 06 April 2013 10:56:45PM 0 points [-]

They certainly don't work only digitally, but the suggestion seems to be that for most brain states at the level of "belief" it is required that at least some neurons have definite states, if only in the sense of "neuron A is firing at some definite analog value."

Comment author: Viliam_Bur 06 April 2013 10:25:49AM *  1 point [-]

I don't know anything about quantum computing, so please tell me if this idea makes sense... if you imagine many worlds, can it help you develop better intuitions about quantum algorithms? Has anyone tried that? Any results?

I assume an analogy: in mathematics, the proper imagination can get you some results faster, even if you could get the same results by computation. For example, it is easier to imagine a "sphere" than a "set of points with distance at most D from a given center C". You can see that the intersection of a sphere and a plane is a circle faster than you can solve the corresponding equations. Even if computationally the sphere is the same as the given set of points, imagination runs much faster on the visual model.

Analogously, the Copenhagen interpretation and the many-worlds interpretation should give the same results. Yet is it possible that one of them would be more imagination-friendly? Would it be possible to immediately "see" results in one model that have to be mathematically calculated in the other? Could one of these models then be a comparative advantage for a quantum programmer?

To avoid misunderstanding: I don't suggest using imagination instead of computation. I only suggest using imagination to guess a result, and then using a proper mathematical proof to confirm it. Just as "the intersection of a sphere and a plane is either nothing, a point, or a circle" can be translated to equations and verified analytically, but is much easier to remember this way.
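The sphere/plane claim above is easy to check in the suggested spirit: guess the shape by imagination, then verify it by computation. A minimal sketch (the function and variable names are my own, for illustration only), intersecting the unit sphere with the plane z = c:

```python
import math

# Substituting z = c into x^2 + y^2 + z^2 = 1 gives x^2 + y^2 = 1 - c^2:
# a circle of radius sqrt(1 - c^2) centered at (0, 0, c), assuming |c| < 1.
def intersection_point(c, theta):
    r = math.sqrt(1 - c * c)
    return (r * math.cos(theta), r * math.sin(theta), c)

c = 0.5
for k in range(8):
    x, y, z = intersection_point(c, 2 * math.pi * k / 8)
    # Each sampled point lies on the sphere...
    assert abs(x * x + y * y + z * z - 1) < 1e-12
    # ...and at a constant distance from the axis, i.e. on a circle.
    assert abs(math.hypot(x, y) - math.sqrt(1 - c * c)) < 1e-12
```

The imagination supplies "it's a circle" instantly; the analytic check confirms it.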

Comment author: RomeoStevens 08 April 2013 07:06:30AM 2 points [-]

Are you familiar with the Quantum Bomb Tester?

Comment author: Douglas_Knight 06 April 2013 07:03:45PM *  1 point [-]

Are you aware that David Deutsch is (1) the loudest proponent of MWI and (2) the inventor* of the quantum computer? Moreover, he claimed that MWI led him there. He also predicted that quantum computers would convince everyone else of MWI. So far, that claim doesn't look very plausible.

I am skeptical of the possibility of many worlds contributing to imagination. I prefer the phrase "no collapse" to the phrase "many worlds" because there are a lot of straw men associated with the latter phrase. But phrasing it as a negative shows that it's really a subset of Copenhagen QM, and thus shouldn't require more or different imagination. You might say that the first incarnation of many worlds is Schrödinger's Cat, which everyone talks about, regardless of interpretation.

There is some discussion of the fruitfulness here; in particular Scott Aaronson says "I think Many-Worlds does a better job than its competitors...at emphasizing the aspect of QM—the exponentiality of Hilbert space—that most deserves emphasizing."

* Manin, Feynman, and maybe other people could claim that title, too, but I think they were all independent. Moreover, I think Deutsch was the first person to produce a quantum algorithm that he could prove was better than a classical algorithm; he exploited QM rather than saying it was hard. It is this exploitation that he attributes to MWI.

Deutsch discusses his predecessors, but he didn't know about Manin. I think Manin's contribution is all in the three-paragraph Appendix (p. 25).
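The provably-better algorithm mentioned in the footnote is presumably the Deutsch problem: decide whether a one-bit function f is constant or balanced using a single oracle query, which classically requires two. A minimal numpy simulation of the circuit as I understand it (function and variable names are my own, not from Deutsch's paper):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def oracle(f):
    # U_f |x>|y> = |x>|y XOR f(x)>, built as a 4x4 permutation matrix
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    # Start in |0>|1>
    state = np.kron(np.array([1, 0]), np.array([0, 1])).astype(float)
    state = np.kron(H, H) @ state   # Hadamard on both qubits
    state = oracle(f) @ state       # the single oracle query
    state = np.kron(H, I) @ state   # Hadamard on the first qubit
    # Probability that the first qubit measures as |1>
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"
```

For example, `deutsch(lambda x: x)` returns `"balanced"` and `deutsch(lambda x: 0)` returns `"constant"`: one query settles a global property of f, which is the kind of exploitation of superposition Deutsch attributes to the MWI picture.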

Comment author: Viliam_Bur 06 April 2013 08:11:13PM 0 points [-]

I didn't know about David Deutsch, thanks for the information!

it's really a subset of Copenhagen QM, and thus shouldn't require more or different imagination.

Then perhaps the only advantage is that you don't have to waste your time worrying "what if my proposed solution is already so big that the wavefunction will collapse before it computes the result". But to get this advantage, you don't really have to believe in MWI. It's enough to profess belief in collapse but ignore the consequences of that belief while designing algorithms, which is something humans excel at.

Comment author: somervta 17 April 2013 03:23:46AM 0 points [-]

Recently, for a philosophy course on (roughly) the implications of AI for society, I wrote an essay on whether we should take fears about AI risks seriously, and I had the thought that it might be worth posting to LW discussion. Is there/would there be interest in such a thing? TBH, there's not a great deal of original content, but I'd still be interested in the comments of anyone who is interested.

Comment author: NancyLebovitz 14 April 2013 03:30:22AM 0 points [-]

LW Women: Submissions on Misogyny was moved to main, but the article doesn't show up as New, Promoted, or Recent.

Comment author: bbleeker 12 April 2013 03:15:03PM 0 points [-]

I'm not sure if this is the right place for this, but I've just read a scary article that claims that "The financial system as a whole functions as a hostile AI", and I was wondering what LW thinks of that.

Comment author: TheOtherDave 12 April 2013 04:39:28PM 2 points [-]

There have been various threads in the past about whether corporations can be considered AIs. The general consensus seems to be "not in the sense of 'AI' that this community is concerned with."

Comment author: Kawoomba 10 April 2013 03:03:07PM 0 points [-]

Sudden Clarity.

(The OB memes wasn't me.)

Comment author: Oscar_Cunningham 09 April 2013 12:34:11PM *  0 points [-]

In Anki, LaTeX is rendered too large. Does anyone know an effective fix?

EDIT: I found one. In Anki, LaTeX is rendered to an image and from then on treated as one. Adding

img { zoom: 0.6; }

to a new line of the "Styling" section of the "Card Type" for whatever Note you're using rescales all the images in that card type. So provided you don't use LaTeX and images on the same Note, this fixes all your problems.

Comment deleted 06 April 2013 05:23:46PM [-]
Comment deleted 06 April 2013 05:22:04PM [-]