Open thread, Oct. 6 - Oct. 12, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Comments (330)
On the suggestion of Gunnar_Zarncke, this comment has been transformed into a Discussion post.
This should at least be in Discussion. It is very valuable high-level feedback about the value of LessWrong.
If you agree, and if you want to avoid duplicating it, you can remove the body of the text and replace it with a link to the Discussion post.
I'll do that. However, Peter Hurford listed his similar experience below. This post can generate even more value (and I don't mean karma, I mean people dovetailing on it) with better, stronger examples than just the ones I've provided above. I'm thinking this could be a repository, and a catalyst for ever more users to get value out of Less Wrong as both a community and a resource. If you know of other users with similar experiences, please ask them if they'd be willing to share their stories, and include them in this post.
Ways to act on this idea:
Collaborate with Peter Hurford to jointly post this.
Create a Positive LW Experience Thread
Create a LW Wiki page where this can be collected.
I think the last has the most permanent effect, but by itself it is likely to receive few contributions.
I had a similar experience asking about my career choices.
Gunnar_Zarncke also commented that I should at least turn my above comment into a post in Discussion. Before I do that, or go on to post it to Main if the reception goes well enough, I'd like to strengthen my own post by including your experience in it. I mean, the point I made above seems to be making enough headway on the few things I did alone, and if the weight of your clout as a well-known effective altruist and rationalist is thrown behind it, I believe we could make even more traction in generating positive externalities by encouraging others.
I remember there was a 'Less Wrong as a social catalyst' thread several months ago that we both posted in, found valuable, and got great receptions for the feedback we provided. I might mine the comments there for similar experiences, message some users, and see if they don't mind doing this. If you know of other friends or peers on Less Wrong who have had a similar experience, I'd encourage you to get them on board as well. The more examples we can provide, from a more diverse base of users, the stronger the case we can build. In doing so, I'd attribute you as a co-author/collaborator/provider of feedback when I make this a post in its own right.
Sounds good to me. I've wanted to write a "what EA/LW has done for me" post for awhile and may still do so.
Gunnar got enough upvotes for merely suggesting that I post this in Discussion that it shows a lot of promise. I didn't anticipate this, and now I'm feeling ambitious. More than just generating a single thread of positive Less Wrong responses, I'd prefer to call for any members of the site with a deep, broad enough experience of getting great advice from Less Wrong to make a post of their own, as I will. So, yes, make your own.
However, if we can get others to come out of the woodwork to write reports, inspire more users to ask personal questions of Less Wrong, and then get them to turn that into future posts, there's potential for personal growth for dozens of users on this site. I wouldn't call it a chain reaction, per se, but I anticipate an unknown amount of positive externalities to be generated, and I want us to capture that value.
I started keeping a diary about a month ago. The two initial reasons I had for adopting this habit were that, first of all, I thought that I would enjoy writing, and second of all, I wanted to have something relaxing to do for half an hour before my bedtime every evening, because I often have trouble getting to sleep at night.
I have found that I generally end up writing about my day-to-day social interactions in my journals. One really nice benefit of keeping a journal that I hadn't expected to reap was that writing has helped me weakly precommit to performing certain actions that help me improve at being sociable. For example, a few weeks back, there were a couple nights where I wrote about how I felt bad about how a new transfer student to my school didn't seem to know anyone in the class which we had together. A couple days after writing about this, I ended up asking him to hang out with me, which was something that I normally would have been too shy to do.
Another thing that I learned is that writing about your problems can help you digest them in ways which are helpful to you. On a meta level, I think that writing about my social interactions with others has helped me realize that I want to spend more time with my friends, at the expense of spending less time reading through e.g. posts on Reddit. Looking back on things, it is painfully obvious to me that spending time with my friends is much better than spending time on random internet sites, though I hadn't explicitly realized that I had been failing to spend time with my friends until I ended up writing about the fact that this was the case.
Actually, before I had even started journaling, I had known that thinking about problems by writing about them or making diagrams was, in general, a helpful thing to do-- after all, plenty of people benefit from drawing pictures when stuck on, say, math problems. However, it wasn't previously obvious to me that problems other than math and science problems could be analyzed by writing about them or drawing diagrams that represented the problem. Basically, I found a way (which was previously unknown to me) to identify and solve problems in my life.
Journaling, writing a diary, expressive writing, writing therapy. This practice goes by many names and seems to be effective at psychologically assisting a person.
Personally, I am forming a habit of writing at least 750 words a day in my diary on the computer. It seems to help me recognise trends, and I can't argue with what I have written down; it is plainly and simply there.
No such thing.
For any given problem, once a possible solution is reached, do you expect to be able to check that solution against reality with further observations? If so, you have constructed a theory with experimental implications, and are doing Science. If not, you have derived the truth, falsehood, or invalidity of a particular statement from a core set of axioms, and are doing Math.
I enjoy keeping a diary, to crystallise thoughts and experiences, but to restrain my tendency to blather it's a diary of haikus.
Don't you mean:
I've a diary
To get my thoughts in order
This is how it works:
To keep myself terse
All entries must be haikus
Thus I don't ramble.
[EDITED to add: of course strictly these aren't actually haiku since the 5-7-5 thing is just a surface feature, but I conjecture BenSix's diary entries also mostly aren't.]
Indeed. I attempt to juxtapose ideas but often there is too pressing a need to juxtapose my head and a pillow.
Beauty and reason
In reports of every day
Rationality
We should probably
Stop the running joke right here.
What is this, Reddit?
Here's a fun game: concepts, ideas, institutions and features of the world we (let's say 21st Century Westerners) think of as obvious, but aren't necessarily so. Extra points for particularly visceral or captivating cases.
For example: at some point in human history, the idea of a false identity or alias wouldn't have even made sense, because everyone you met would be either known to you or a novel outsider. These days, anyone familiar with, say, Batman, understands the concept of an assumed identity, it's that endemic in our culture. But there presumably must have been a time when you would have had to go to great lengths to explain to someone what an assumed identity was.
A few examples:
Accurate timekeeping and strict schedules (a very famous example). Although sundials and water clocks were known since antiquity, they weren't very accurate, and the length of an hour varied with the length of the day. It was rare for an average person to have a strict schedule. Even in monasteries and churches schedules probably could not be very strict: although clocks did strike hours, they usually weren't very accurate (13th-14th century mechanical clocks had no faces at all, and it wasn't until the late 17th century that they became precise enough to justify regular use of minute hands), and they would likely be reset regularly at local high noon each day. In fact, it was only after the invention of the pendulum clock by Christiaan Huygens in 1656 that timekeeping became accurate and independent of the length of the day. However, as late as 1773, towns were content to order clocks without minute hands as they saw no need for them. In 1840 railway time was introduced. It was "the first recorded occasion when different local times were synchronised and a single standard time applied. Railway time was progressively taken up by all railway companies in Great Britain over the following two to three years." According to Wikipedia, 98% of Great Britain's public clocks were using GMT by 1855. After the industrial revolution and the invention of the light bulb, most people have schedules which depend on the official time rather than the Sun's position in the sky.
Historian Roger Ekirch argues that before the industrial revolution segmented sleep was the dominant form of human slumber in Western civilization, and that it is a myth that we need eight hours of uninterrupted sleep each night.
Concepts/analogies/metaphors/models that depend on having certain technologies to be understood. Possible examples: the clockwork universe, the human mind as a computer. Although in some cases it is not clear whether a certain technology was necessary to inspire the creation of the philosophical concept, or whether it was simply a very nice example that helped to elucidate an already existing idea.
Historian David Wootton argues that until the mid-19th century and the discovery of germ theory, physicians did more harm than good to their patients. Nowadays most people expect positive results when they go to the doctor.
Many other inventions changed the landscape of ideas and what is taken for granted (ability to communicate over long distances, ability to store fresh food safely in the fridge (according to a documentary I watched, this was one of the main factors that enabled the growth of cities), large ships, accurate maps with no uncharted territories, etc.).
I think this question is very broad, perhaps too broad.
This raises two questions:
1) Why, despite this, was being a doctor in general a respected and well-paid profession?
2) What would have happened if the use of statistics in medicine had become widespread before germ theory? Could it have led to a ban on medicine?
The faith-healing preacher, the witch-doctor, and the traditional healer are respected professions in the cultures where they occur. The Hippocratic physician was basically the traditional healer of Western civilization. He offered interventions that might kill, might cure, and were certainly impressive.
(It's worth noting that surgery was not within the traditional province of physicians. The original Hippocratic oath forbids physicians from doing surgery since they were not trained in it.)
That's not a new idea!
Lewis Thomas ("The Youngest Science") dates net benefit to well past 1900.
Your first link seems to say that Wootton dates it to antiseptic surgery. But that's just one good thing, which needn't balance many bad things. I've heard that the harm doctors did increased in the 19th century. For example, Lewis Thomas says that homeopathy was a reaction to the increase in the harm of 19th-century drugs. Your second link seems to say that Wootton isn't talking about net effects, but about whether doctors did any good at all. That's a pretty strong claim.
I don't know about that -- the Odyssey, for example, doesn't have any trouble with the idea of a false identity...
Technically you are correct, of course. I don't know if the concept of false identity would have made sense to a paleolithic tribe, and if it did, we can always go earlier until it wouldn't. But at this stage, a LOT of contemporary concepts would disappear.
As to your game, I think you need to limit it in some way, otherwise too much stuff (from women's rights to telecommunications) qualifies.
That time is clearly before the Arthurian cycle, which contains several instances of knights taking someone else's armour and being taken for that person - most famously, Kay the Seneschal and Lancelot. Arguably also before the period in which Greek myths were composed; Zeus occasionally disguises himself as someone's husband for purposes of seduction. In the Bible, Jacob disguises himself as his brother Esau to obtain their father's blessing, although admittedly the deception hinges on their father being blind. Mistaken identity seems to be a fairly old concept, then.
Low infant mortality. In many time periods, you could expect to witness as many (more?) deaths before adulthood as deaths from old age.
Tentatively: that tolerance and intolerance of strangers should be a matter of law rather than local impulse.
What do you mean?
Strangers may not have been the best choice of word, but what I meant is that how people in more-or-less outgroups were treated wasn't so much a matter of public policy. They might be accepted. They might be murdered sporadically. There was no affirmative action, no Jim Crow laws. There were pogroms, but no Holocaust.
So, basically, that people-not-from-my-tribe should not be "outlaws" (in the original sense of "outside of the law")? Essentially, you are talking about the idea of law which covers everyone regardless of who/what they are?
Not just that-- instead of just having relations between people shake out under a neutral law, it's assumed that the government can achieve something better than neutrality.
In the general case, what is "better than neutrality"?
I don't know whether there is anything better than neutrality, but a great many people seem to think there is.
The ideal existed since antiquity, but — as today — wasn't consistently practiced.
"Do not mistreat or oppress a foreigner, for you were foreigners in Egypt." — Exodus 22:21
"The foreigner residing among you must be treated as your native-born. Love them as yourself, for you were foreigners in Egypt. I am the LORD your God." — Leviticus 19:34
"And I charged your judges at that time, 'Hear the disputes between your people and judge fairly, whether the case is between two Israelites or between an Israelite and a foreigner residing among you.'" — Deuteronomy 1:16
(All quotations NIV.)
The classical world also had related norms of xenia and hospitium.
The concept of adolescence:
With the trend towards an expectation of college education, we will need an extended concept to include the early twenties.
Edit: "Emerging adulthood is a phase of the life span between adolescence and full-fledged adulthood, proposed by Jeffrey Arnett in a 2000 article in the American Psychologist."
Leadership for limited time periods.
They already had it back in at least ancient Greece.
Excluding the concept of "leadership until you get killed".
Conversely, they also wouldn't be able to understand the modern totalitarian state.
Sustained, non-trivial economic growth.
(I am less sure about DeLong's remark, which I've excised, that before the Industrial Revolution, living standards were kept firmly in check by the Malthusian trap. The basic conclusion that pre-industrial economic growth was glacial nonetheless stands.)
The idea that melodies, or at least an approximation accurate to within a few cents, can be embedded into a harmonic context. Yet in Western art music, it took centuries for this to go from technically achievable but unthinkable, to experimental, to routine.
Medieval sacred music was a special case in many ways. We have some records (albeit comparatively scant ones) of secular/folk music from pre-Renaissance times, and it was a lot more tonally structured (a more meaningful term than "harmonic") than that.
Homosexual identity. Over much of human history men and women did engage in homosexual activity, but they didn't make it a matter of personal identity.
I wonder whether we can distinguish between these two hypotheses:
I have the impression that until recently most cultures have either (1) regarded same-sex sex as abominable and shameful, or (2) regarded it as a perfectly normal activity for anyone (at least in certain circumstances). In case 1, a few percent of (what we would now call) homosexual people would be best advised to try to avoid being noticed. In case 2, they might be lost in the noise. In neither case is it clear that we'd expect to see much written about (what we would now call) actually homosexual people.
(I am vastly ignorant of history, and would not be very surprised to find that the impression reported in the previous paragraph is wrong.)
Eric Raymond has a fairly good description of historical attitudes towards homosexuality here.
Edit: here is the key paragraph:
ESR's not basing his "analysis" on anywhere near enough evidence. His claim that he is working from "primary sources" is laughable at best.
And your criticism of his analysis is based on...
This would be improved by a more explicit comment on what, for you, would count as enough evidence and as using primary sources.
(That isn't a coded way of saying you're wrong.)
There are plenty of comprehensive histories of queerness. ESR just won't read or believe any of them.
We do have writing about people who engage in homosexual activity.
Today being homosexual doesn't mean "having sex with people of the same sex or even enjoying having sex with people of the same sex". It's something much more abstract.
In the middle of the 20th century we see a bunch of gay people speaking their own language, Polari. That's something very strange from many viewpoints in history, and given Polari's situation today, I don't think it will take that much time until we also find it strange. At the height of Polari, homosexual activity was illegal.
Sure. What did I say that suggested I thought or expected otherwise?
You put that in quotation marks as if I said it or something like it; I didn't. Of course there is more to being homosexual than having same-sex sex; at the very least homosexuality as understood nowadays involves (1) romantic love as well as sex and (2) a sustained preference for same-sex partners. I'm not sure whether that's all you're saying, or whether you're also saying that (e.g.) there's a whole lot of history and culture too. If the latter: I agree that there is, but I wouldn't regard that as strictly part of "homosexual identity", exactly, nor would I say it seems "obvious" in the same kind of way as the mere existence of homosexuality does (even though maybe in fact until recently there wasn't any such phenomenon).
Yes, I agree that that's a peculiar phenomenon. I think it's part of the transition from "abominable, shameful and illegal" to "accepted and normal", via "accepted and normal within a somewhat cohesive albeit marginal group".
I'm not sure whether any of what you wrote is intended as support for the claim that until recently no one regarded homosexuality as a matter of personal identity (as opposed to the weaker claim that until recently people didn't record instances of homosexuality being regarded as a matter of personal identity). If it is, I'm afraid I'm not seeing how it works. This may indicate that I'm misunderstanding exactly what meaning the term "homosexual identity" has in your original comment.
Being homosexual today is about making a choice to identify as homosexual.
I have a sustained preference for wearing glasses, but wearing glasses isn't part of my self-identity. I don't think of myself as a glasses-wearer.
Um, those kinds of low-status, shades-of-criminality subcultures have had separate dialects for quite a long time.
Note: I found the above link as the first link from Wikipedia's article on Polari.
Obvious notion that shouldn't be obvious: Getting what you want.
If you've had a good education, lived in an affluent society all your life, and learned useful social skills, the notion that goals are achievable will sound ridiculously redundant to you, barely worth pointing out in words.
Hypothesis: Poor societies do not develop game theories.
Are you talking about a sense of entitlement to what one wants, or the broader notion of goals as achievable future world-states that one can work towards?
I meant only the latter, but having the latter in your head may lead to the former.
I'm not sure which way this bears on that, but one of the ancient Greeks, I forget who, seeing ten thousand men prepared for battle, reflected that here also were gathered as many dreams and desires, and pondered how few of them would ever be achieved.
Could you expand on this? And are we using the definition of 'game theory': Strategies whose values depend on strategies of other people?
Societies conditioned to hopelessness by daily material frustration do not conceive of a systematized method for satisfying their needs.* They invent gods to plead with, and may backstab each other to ascend in power, but they will not develop an entire theory, involving other-modeling, based on the concept that goals are achievable by careful planning and effort.
*This puts me in a chicken-and-egg situation: What came first, mass-scale agriculture or plant breeding?
Machiavelli's "The Prince" is very illustrative in that regard. He spends a few pages arguing that a man can indeed control his own fate instead of just being at the mercy of the grace of God.
Interestingly, the answer seems to be "plant breeding". Evidence of selective breeding of bottle gourd plants predates the Neolithic Revolution, for example.
In the New World, too, it wasn't uncommon for people to selectively propagate plants without cultivating them; but it's hard to say whether that predates agriculture on this side of the Atlantic.
But the satisfaction of our non-social needs in a modern environment depends much, much less on other people's strategies. Today, you can obtain all your non-social needs with hardly any social interaction: living alone, working from home, buying groceries from strangers, ignoring news and local trends.
In the past, meeting non-social needs required more social support, and could be thwarted more easily by the whims of others. Think of living in a band or tribe level society!
I agree that certain sorts of planning are more modern, but these new forms seem to require less sophisticated social understanding than the old methods. Compare: investing in a retirement fund, and investing in connections with the next generation because you need them to feed you in your old age.
Given these examples, it might be interesting to add to this thread with examples of ideas assumed to be new that are in fact old.
The way you raise your children is very important for their life outcomes (common, recent, obvious, and wrong).
Wisdom literature of antiquity contains the same idea.
Whenever we meet the aliens, I'll feel ashamed to explain to them that it took us so many thousands of generations to arrive at the idea that it's wrong to beat your wife or kids.
How about explaining to them how you got this absurd notion that it's OK not to beat your wife and kids?
How would you feel about aliens who explain to you that, while they currently endorse all the same values that you endorse, it took their species twice as long to arrive at those values as it took humanity?
Here's an offer for anyone who writes blog posts or LW articles: I'm willing to proofread as well as provide feedback on your drafts. I would probably give the most useful feedback on material concerning computer science, personal productivity and ethics, as that's where most of my experience is allocated. However, I'd be glad to read just about anything.
Only LW content?
Nope. The drafts could be for a personal blog as well.
Here's a proposal: popular books for statistically literate people.
I've read several books from the Oxford University Press Very Short Introduction series. I like the general idea of these books: roughly 140 A6 pages concisely introducing a subject, and a list of further reading if you want it.
In practice, the ones on quantitative/scientific disciplines seem to put a lot of time and effort into writing around public ignorance of statistics. Those 140 pages would go a lot further if the author could just assume familiarity with statistical research methods.
This seems like a consistent enough body of knowledge to "factor out" of a lot of material, as much educational material with prerequisites does.
Good idea.
At the rationality meetup today, there was a great newcomer. He's read most of Eliezer Yudkowsky's original Sequences up to 2010, and he's also read a handful of posts promoted on the front page. As a landing pad for the rationalist community, Less Wrong seems to me to be about updating beyond the abstract reasoning principles of philosophy past, toward realizing that, through a combination of microeconomics, probability theory, decision theory, cognitive science, social psychology, and information theory, humans can each hack their own minds, notice how they use heuristics, and increase the rate at which they form functional beliefs and achieve their goals.
Then, I think about how if someone has only been following the rationalist community of Less Wrong for the last few years, and then they come to a meetup for the first time in 2014, everyone else who's been around for a few years will be talking about things that don't seem to fit with the above model of what the rationalist community is about. Putting myself back into a newcomer/outsider perspective, here are some memes that don't seem to immediately, obviously follow from 'cultivating rationality habits':
Citing Moloch, an ancient demon, as a metaphorical source of all the problems humanity currently faces.
How a long series of essays yearning for the days of yore has led to intensely insular discussion of polarized contrarian social movements. This doesn't square with how Less Wrong has historically avoided political debates because of how they often drift to ideological bickering, name-calling, and signaling allegiance to a coalition. Such debates aren't usually conducive to everyone reaching more accurate conclusions together, but we're having them anyway.
Some of us reversing our previous opinions on what's fundamentally true, or false.
Less Wrong also welcomes discussion of contrarian and controversial ideas, such as cryopreservation and transhumanism. If this is the first thing somebody learns about Less Wrong through the grapevine, the first independent sources they come across may be rather unflattering of the community as a whole, and disproportionately cynical about what most of us actually believe. Furthermore, controversy attracts media coverage like moths to a flame, which hasn't gone too well for Less Wrong, and which falsely paints divergent opinions as our majority beliefs.
I'm not calling for Less Wrong to write a press coverage package, or protocol. However, I want to foster a local community in which I can discuss cognitive science, and the applications of microeconomics to everyday life, without new friends getting hung up on the weird beliefs they associate me with.
Additionally, in growing the local meetup, my friends and I in Vancouver have gone to other meetups and seeded the idea that it's worth our friends' time to check out Less Wrong. We've made waves to the point that a local student newspaper may want to publish an article about what Less Wrong is about, and profile some of my friends in particular. However, this has backfired to the point where I meet new people, or talk to old friends, and they associate me with creepy beliefs I don't follow. It sucks that I feel I might have to do damage control for my personal standing in a close-knit community. So, I'm going to try writing another post detailing all the boring, useful ideas on Less Wrong nobody else notices, such as Luke's posts about scientific self-help, Scott's great arguments in favor of niceness, community, and having better debates by interpreting your opponent's arguments charitably, or the repositories of useful resources.
If you have links/resources about the most boring useful ideas on Less Wrong, or an introduction that highlights, e.g., all the discourse of Less Wrong which is merely the practical applications of scientific insight for everyday life, please share them below. I'll try including them in whatever guide I generate.
I think one of the things worth noting about LW is that Holden Karnofsky's Thoughts on the Singularity Institute is the top-rated post. LW is a space where you can argue against the orthodox views if you bring arguments. This distinguishes LW from nearly every other online forum.
I don't think that online forums need media exposure. The usual way to find an online forum is through a Google search or through a shared link to a discussion.
Holden Karnofsky is a high-status person, which is the most important factor. I don't think the same criticism by someone else would have received as many upvotes.
If that were the case, all posts by high-status people should get a high number of votes. I think it's hard to explain via status why Karnofsky's post got more votes than any single post of Yudkowsky's Sequences.
The big deal is him being a high status outsider who made a contribution with a great deal of effort in it. It can be taken for granted that high-status insiders make many contributions.
How many online communities are there that consider outsiders to be high-status to the extent that the highest-rated post is by an outsider?
I'd say that the Moloch thing isn't that much more weird than our other local eschatological shorthands ("FAI", "Great Filter", etc.). That's just my insider's perspective, though, so take with many grains of salt.
I believe you're right. I'm not familiar with the Great Filter being lambasted outside of Less Wrong, but I, and the people I know personally, have generally discussed the Great Filter less than we have Friendly A.I. On one hand, the Great Filter seems more associated with Overcoming Bias, so coverage of it is tangential to, and has a neutral impact upon, Less Wrong. On the other hand, I spend more time on this website, so my impression could be due to the availability heuristic only. In that case, please share any outside media coverage you know of about the Great Filter.
Anyway, I chose Moloch to stay current, and also because citing a baby-eating demon as the destroyer of the world seems even more eschatological than Friendly A.I. So, Moloch strikes me as potentially even more prone to misinterpretation. (un)Friendly A.I. has already been wholly conflated with a scandal about a counterfactual monster that need not be named. It seems to me that misinterpretations of Moloch could likewise snowball, bouncing across the blogosphere like a game of Broken Telephone, until there's a widely-read article about Less Wrong having gotten atheism and rationalism so thoroughly wrong that it flipped back around to ancient religions.
The fact that Less Wrong periodically has to do damage control because there is even anything on this website that can be misinterpreted as eschatology seems demonstrative of a persistent image problem. Morosely, the fact that the outside perspective misinterprets something from this site as dangerous eschatology, perhaps because someone would have to read lots of now relatively obscure blog posts to otherwise grok it, doesn't surprise me too much.
This seems pretty unlikely to me. I think the key difference from the uFAI thing is that if you ask a Less Wrong regular "So what is this 'unFriendly AI' thing you all talk about? It can't possibly be as ridiculous as what that article on Slate was saying, can it?"*, then the answer you get will probably sound exactly as silly as the caricature, if not worse, and you're likely to conclude that LW is some kind of crazy cult or something.
On the other hand, if you ask your LW-regular friend "So what's the deal with this 'Moloch' thing? You guys don't really believe in baby-eating demons, do you?", they'll say something like "What? No, of course not. We just use 'Moloch' as a sort of metaphorical shorthand for a certain kind of organizational failure. It comes from this great essay, you should read it..." which is all perfectly reasonable and will do nothing to perpetuate the original ludicrous rumor. Nobody will say "I know somebody who goes on LessWrong, and he says they really do worship the blasphemous gods of ancient Mesopotamia!", so the rumor will have much less plausibility and will be easily debunked.
* Overall this is of course an outdated example, since MIRI/FHI/etc. have pulled a spectacular public makeover of Friendly AI in the past year or so.
The Boring Advice Repository obviously.
And the repository repository, (which, sadly, does not contain itself)
Why not?
A better Wiki page on the community could be a start. Maybe Wiki's entry on "Less Wrong" (with a space in between) should redirect there, rather than to Eliezer's page (as it currently does) and that might attract attention from people who first google the term.
I am not quite sure what are you saying here. It does sound like you want LW to change so that it becomes more acceptable to your new friends and that seems to me a strange way to approach things.
Please note that none of those links points to a LessWrong page. They are two personal blogs. Personal blogs don't have to follow LW policies.
I consider Moldbug almost completely irrelevant for LW. He has a few fans here, but they are a tiny minority (probably fewer than e.g. religious LW members). We don't consider him a rationalist blogger, and don't link to him in a list of rationalist blogs.
Scott is a LW member who has posted a few articles here; that is much more relevant. But anyway, SSC is his personal blog. (Also, his articles seem sufficiently sane to me -- I would love to see more political debates be done like this.)
I guess we need a definition of some core principles of the LW community, so the newcomers know what is canonical and what is not. May I suggest the Sequences?
This seems like a significant understatement given that Scott has the second highest karma of all time on LW (after only Eliezer). Even if he doesn't post much here directly anymore, he's still probably the biggest thought leader the broader rationalist community has right now.
I agree with ahbwramc. Going From California with An Aching Heart doesn't seem to be something written by someone only kinda involved with the rationalist community.
First of all, mea culpa.
I should have provided more context to assuage confusion. The Talon is an alternative social justice publication at a local university. Their editorial board overlaps with the skeptic community in Vancouver, which is quite insular, and which in turn overlaps with the rationality meetup in Vancouver.
There has been some ideological bickering, name-calling, and signaling of allegiance to a coalition (classic skeptic community vs. Less Wrong perspectives) on the Internet, at various meetups, and maybe at pubs, in Vancouver. I myself, among others, may not have engaged in discussions or debates as judiciously as would have been prudent. This also involved arguments over articles written on Slate Star Codex, which 'social justice warriors', as some call them(selves), find upsetting.
However, none of us here on Less Wrong knew there was enough chatter going around that the first time I met a journalist, he knew who I was and asked me why my friends held such peculiar beliefs, out of line with mainstream scientific consensus, if we're 'rationalists'. He was a friendly guy I actually like, but his misconceptions seemed worrisome if he wanted to profile people I know personally. I don't want a schism rising in my neck of the woods where my friends and I are seen as kooky neckbeards as soon as we enter a public space.
Yeah, when someone is very famous on LW, then even if they publish something on their private blog, it feels like an "idea connected with LW", especially if the readerships overlap. :(
No idea what to do about this. I support Scott's right to write whatever he wants on his blog; and the rules of LW do not apply to his blog. On the other hand, yes, people will see the connection anyway. It's like when someone is a celebrity, they lose their private life, because everything they do is food for gossip.
(Heck, Scott doesn't even write under the same name on LW and SSC. But everyone knows anyway. What a horrible thing; not only does one have to hide their true name, but even keep their individual pseudonyms hidden from each other.)
Uhm, I missed the connection somewhere. As far as I know, social justice warriors are not mainstream scientific consensus. And Scott doesn't blog about many-worlds interpretation of quantum physics. :)
Okay, now seriously. I think you may overestimate the mainstream status of SJWs. What's upsetting for them is not necessarily upsetting for an average person. And optimizing for them... pretty much means following their doctrines, or avoiding discussing any social issues.
(Connotationally: I am not saying "upsetting SJWs is okay", although I am also not saying it isn't. Just that SJWs are not mainstream. So do we worry about the image in the eyes of mainstream, or in the eyes of SJWs?)
Optimization as Hobby
This may belong more in the Rational Diary; however, as it is not an account of any physical efforts, but rather a train of thought, I’ll put it here.
As of late (over the last three months), I've been suffering from ennui and listlessness. Several objectives (begin an independent study habit in Spanish and economics, write a novel, contribute to the online efforts of a library or archive) have proved either unfeasible or unsuccessful. Some of my other objectives (establish a daily exercise habit, begin writing again, obtain a tutoring job) have been slow in coming to fruition. Difficulties happen and I'm not here to complain about them. Indeed, I'm quite happy with the successes I've had. In the past four months, I have obtained a director job at a library, started an exercise routine that is noticeably beneficial, increased my writing output, and learned the history of the American Civil War.
What’s important to me is that these failures have generated a listlessness that I do not like and that seems to specifically rise out of my desire to avoid such listlessness. Many of the objectives I listed are not terminal objectives; most of what I have attempted to do or start in these past few months have been instrumental towards other goals (mostly: obtain a better, more permanent job and leave home for a more engaging location). However, this focus on optimization, combined with the setbacks of several optimizing habits, has made the very idea of optimizing wearisome. When I begin to think or feel “I should try to improve myself,” whatever causes the thought, the result is a lingering angst of, “But I’m not a machine. My goal in life isn’t to optimize. Can’t I just be happy?”
Anytime I argue about making myself happy, I doubt my intentions. “Happy,” of a certain manner, could easily be obtained through vegetation in front of a screen or monitor, absorbing the works of others without applying the content of that work to my own life. Just receiving a constant input of satisfaction, outputting nothing. Indeed, a problem I face is that my family believes this is what I do now. Since many of the resources I use for job hunting or skill learning are online, I spend much of my free time in front of a screen. From without, it appears I am doing just that. Receiving constant satisfaction, producing nothing. This has led to my being accused of laziness, despite now having two jobs. A simple misunderstanding that I don’t take personally, but it can make the ennui worse as a general obstacle in my environment, providing one more factor that needs optimizing.
I know that my terminal values are not to simply absorb constant input, because I do not find that existentially satisfying. It does not make me content imagining myself not learning or producing something new. However, with most of the low-hanging fruit plucked, I've come to a point of conflict. Thoughts of further optimization disrupt happiness (especially because now a lot of my personal optimization has reached levels with real effects: for instance, despite the excess of conflict it causes in my family, I have been slowly discarding a large portion of the material goods I accumulated during childhood. Excess books, movies, and video games, with the intent to one day discard all video games. This conflicts with my family's image of me and insults them, as they spent money on those items for me when I was a child, making it seem as if I were throwing away a gift. But now I'm off topic, as this is only tangentially related to the ennui). But I understand that "happiness" in this use is not a terminal value. It's momentary satisfaction. It's that desire to say, "Let me take a second to breathe," but secretly wanting to stretch that second out to a minute, to an hour, to a day.
So, I am trying to create a change in thought regarding optimization. So far, I have thought of optimization in terms of values: this is my terminal value, this is an instrumental value to reach such an end. It is an economical way of thinking that lets me weigh cost and benefit. This is useful thinking, but because I have begun to associate optimization with ennui and strife, it is no longer effective. Instead, I am now trying to think of optimization as a hobby.
First, I’ll admit this is a stop gap method against ennui, and probably (though I have nothing to prove it) less effective than thinking in a more economical way. Thinking of optimization as a hobby encourages not considering solid costs and benefits. It makes optimization a “can” rather than a “should” and I lose that pressing desire to really make something work that I have when I have sat down and determined if a way is the right way. But that’s just what I’m looking for. I’m creating associations between that pressing need and defeat, which I do not want. Such an association will cut the legs from under my efforts if I let it build.
Instead, I try to think of optimization as a hobby. This reminds me that it is something I engage in freely, that I have control over (so the successes and failures are mine alone), and is only a single aspect of my life. It disposes of that continuous thought that being “better” is everything, instrumental and terminal alike, which clouds my ability to think about what I want, why I want it, how I want it.
I apologize for this lengthy post that is more personal than intended for others. These have simply been my recent thoughts on optimization and my current manner of approaching the topic. Expressing them here prevents me from changing my mind later and then thinking, “I’ve always really thought like this.”
I think this is actually a decent way to think about it. It assists in giving yourself permission to "turn off" and have mindless leisure when you need it, without worrying that your leisure time is being spent optimally preparing you to work again.
Excellent news.
A promising idea in macroeconomics is that of NGDP level targeting. Instead of targeting the inflation rate, the central bank would try to maintain a trend rate of total spending in the economy. Here's Scott Sumner's excellent paper making the case for NGDP level targeting. As economic policy suggestions go, it's extremely popular among rationalists - I recall Eliezer endorsing it a while back.
At the moment we have real-time market-implied forecasts for a variety of things: commodity prices, interest rates, and inflation. These inflation expectations acted as an early warning sign of the Great Recession. Unfortunately, at present there does not exist a market in NGDP futures, so it's hard to get real-time information on how the economy as a whole is doing.
Fortunately, Scott Sumner is setting up a prediction market for NGDP targeting in New Zealand. A variety of work, including some by Robin Hanson, suggests that even quite small prediction markets can create much more accurate predictions than teams of experts. The market is in the early stages of creation, but if anyone is interested in supplying technical skills or financial assistance*, this could potentially have a huge payoff. Even if you don't want to contribute to the project, you could participate in the market when it launches, which would help improve liquidity and aid the quality of predictions.
There is more discussion here.
I'm trying to post an article on discussion about a personally important topic, but it's not going through. Any thoughts? It won't even save the material in drafts. I haven't posted much before, but this is a question that I think Less Wrong could be a great help on. It says I have enough karma, so that shouldn't be the problem.
What is the exact step-by-step process you use when you want to publish the article?
Are there any LWers in the area you could get to look over your shoulder to troubleshoot?
The rest of my probability is in the problem going away when you try posting from a different computer.
I'm looking for research (for instance from psychology) showing that detailed feedback and criticism improves performance greatly and would be very grateful if I could get any tips in this regard. (I think I've heard that this is the case but can't find any papers on this.) I'm especially interested in how criticisms of texts can improve authors' writings but am also interested in examples from other fields.
I'm thinking that such feedback could improve performance via (at least) two mechanisms:
a) It teaches you exactly what you're doing right and what you're doing wrong; i.e., it gives you knowledge.
b) It may incentivize you to do things right, if you want to get praise and avoid criticism. The strength of this incentive obviously depends on the context; for instance, public feedback should generally provide you with stronger incentives than private feedback.
This seems intuitively clear, but still it would be nice to have some research saying that the effect of feedback is very strong (if it is, as I suspect). Any help is greatly appreciated.
Are you looking for evidence to support your beliefs or are you looking for evidence to tell you what your beliefs should be? :-)
On the anecdata basis, results vary. For some people in some situations performance is improved by feedback, but not always and not for everyone. Two examples off the top of my head where feedback/criticism doesn't help: (1) if the person already reached the limits of his ability and/or motivation; (2) if criticism is used as a tool to wield power.
Thanks!
These pieces seem relevant:
http://www.ncbi.nlm.nih.gov/pubmed/24702833
http://www.ncbi.nlm.nih.gov/pubmed/24005703
http://www.ncbi.nlm.nih.gov/pubmed/23906385
http://www.ncbi.nlm.nih.gov/pubmed/23276127
http://www.ncbi.nlm.nih.gov/pubmed/22352981
http://www.ncbi.nlm.nih.gov/pubmed/17474045
http://www.ncbi.nlm.nih.gov/pubmed/16557357
http://www.ncbi.nlm.nih.gov/pubmed/16033667
http://www.ncbi.nlm.nih.gov/pubmed/12687924
http://www.ncbi.nlm.nih.gov/pubmed/12518979
Thanks a lot! Highly useful. The evidence seems to be mixed, though good feedback does increase performance. Very nice of you to dig up all those links so quickly! :)
The keyword you're looking for is "deliberate practice". A quick pubmed search should turn up relevant results.
I have started reading Qualia the Purple, a manga strongly recommended by a few LWers, such as Eliezer and Gwern. In his recommendation, Eliezer wrote: "The manga Qualia the Purple has updated with Ch. 14-15. This is what it looks like to “actually try” at something."
Does anyone else know good examples of "actually trying" in other media? Over the LW IRC channel, Gwern linked to this page (Warning, TV Tropes), and specifically recommended Monster. Any other suggestions?
The Martian is basically this non-stop.
It seems like Qualia the Purple is a manga where after a certain point, the author introduced magic and started giving philosophic explanations for how the main character can do magic, turn into other people, go back in time, and generally do whatever the fuck she wants except save one person. What does "actually try" mean?
Starting from chapter 10, the protagonist dedicates herself to a single goal, and never wavers from that goal no matter what it costs her throughout countless lifetimes. She cheats with many-worlds magic, but it's a kind of magic that still requires as much hard work as the real thing.
It may be too late now, but would it have been more appropriate to post this under the Media Thread? I was not sure whether the media thread was only for recommending media, or also for asking for recommendations.
Long-time lurker here. I just recently got accepted to App Academy (a big part of my inspiration for applying came from this post), and I'm really excited to attend some meetups in the area.
I have a few questions for Less Wrong people in the area, and this seemed like a good place to post them:
I'll be going in December, any chance I'll have Less Wrong company?
I understand that at least a few folks from here have been to App Academy. Any advice? I've got an Associate's in CS, and none of the prep work they've given me is too difficult, but is there anything else I should do to prepare?
They allow students to stay there, but I'm hoping to bring my fiancee with me. Unfortunately, rent seems to be ridiculous and I have no idea where to look (I've never moved to a city that wasn't driving distance from me). What's the best way to find apartments in SF?
Related to the above, is anyone in the SF area looking for a roommate (or two) starting December? We are clean, quiet, and can be very unobtrusive if need be. The main issue I see is that we would prefer to also bring our cat along.
1) There are a lot of LWers in the SF area. I think Ozy Frantz might be doing App Academy then.
3) Here is a Google Doc for finding LW roommates in the SF bay area.
I'll be moving there around the same time - look forward to seeing you there!
Thanks! I added myself to that list. I'll be looking at it in more detail when I get home tonight.
I'm really excited about the density of LWers there, as I tend to do better at in person things than online. Honestly, reading stuff about the community was a big part of the reason I applied. I look forward to meeting you too!
For the last few weeks, I've been using an alarm app that forces me to take a picture of my front door before it turns off. Previously, I had been using one that forced me to do two difficult arithmetic problems. This meant that I woke up mentally, but was still unwilling to leave my bed, and instead spent half an hour checking fb and browsing the net on my phone. Now, the design of the clock forces me to leave my room, which makes it much easier for me to start my day more quickly. The photo recognition is not great, so normally I need to take 2 or 3 photos before it recognises the door, but this helps me wake up even more. I would highly recommend the app, or something with the same functionality, for people who have difficulty leaving bed in the morning.
Some related advice:
Happy to see Elon Musk continuing to speak out about AI risk:
https://www.youtube.com/watch?v=Ze0_1vczikA
Are there any existing libraries for generating Anki decks?
It feels like generating Anki decks from data sources with defined object-relational schemas should be easy and fruitful. Alternatively, generating them from something like an R data frame seems like it could be worth doing.
ETA: I have Googled this before asking, by the way, but there are so many Anki decks about programming languages that it seems resistant to the obvious search terms.
Anki can import .csv files easily. I did create my Anki color perception deck via R and the process was very straightforward without the need for any special library.
On the other hand, great care has to be taken with auto-generating cards from existing data sources. Taking time to think about each card often makes sense. Bad cards cost a lot of review time, and automatically creating cards can frequently lead to a lot of bad cards.
Can you tell me something about your color perception deck? Are you trying to train yourself to be better at distinguishing (and naming?) colours for some reason?
Yes, I train color distinctions. Every card has two colors and shows them plus a color name; the user then has to decide which color Anki displayed. Over time the distance between the colors goes down, as I pick colors that are nearer to each other.
I have written about this on LW in the past.
I was wondering why. It doesn't seem all that useful, unless you are abnormally bad at color perception or you have a job or hobby that somehow needs good color perception (something in art or design?). I suppose it's fun and interesting to see how well that kind of thing can be trained, and how it changes your experience, but I was wondering if there was more to it.
Here and here.
When I do this, I write a little one-off program that spits out a tab-separated values file, then import the file with the Anki desktop app.
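For what it's worth, here's a minimal sketch of that kind of one-off program in Python. The records and the "deck.tsv" file name are made up for illustration; the only Anki-specific assumption is that the desktop app's importer accepts tab-separated text with one note per line, which it does via File -> Import.

```python
# Minimal sketch: dump (front, back) pairs into a tab-separated file
# that the Anki desktop app can import. Example data is illustrative only.
import csv

records = [
    ("sine qua non", "an essential condition; something absolutely necessary"),
    ("NGDP", "nominal gross domestic product, i.e. GDP at current prices"),
]

with open("deck.tsv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    for front, back in records:
        # One line per note; any additional columns become additional fields.
        writer.writerow([front, back])
```

The same pattern works whether the records come from a database query, a scraped web page, or an R data frame exported to CSV; the per-card thinking ChristianKl recommends above still has to happen before the loop, not after.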
The first NGDP futures market is getting started based on the ideas of economist Scott Sumner. The idea is that the expected U.S. NGDP (nominal gross domestic product) is the single most important macroeconomic variable, and that having a futures (prediction) market will provide valuable information into this variable (Scott estimates that if it works, it will be worth hundreds of billions of dollars).
Unfortunately, due to US gambling laws (I think), the market will be based in New Zealand and U.S. citizens will not be allowed to participate.
To what extent do traditional financial markets provide implicit prediction markets for future macroeconomic states?
A futures contract is one where you agree to buy a specific quantity of an asset today for a specific price, but you don't pay until a specified time in the future.
If you predict it will have future value $100, and it only costs $10 now, it's worth buying, hence there will be more demand, driving the price of the futures contract up. On the other hand, if it costs $100 today, but you expect it will cost $10 in the future, then the futures contract won't be worth as much, driving the price down.
We still expect the price to be about as good of an approximation of the future value as you can get (as long as the volume is high) - if you have a better prediction, you can make money off it! So the price of the future will reflect the best aggregate prediction of the future value of the asset. This is essentially the efficient-market hypothesis. This is the inspiration for idea futures, a.k.a prediction markets.
For NGDP futures, they would create contracts like this:
The prices of these contracts now would reflect the market's certainty that the future NGDP would be within that range.
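Since the concrete contract terms aren't spelled out here, a purely hypothetical sketch of the idea: suppose each contract is a binary claim paying $1 if year-ahead NGDP growth lands in a given range. The ranges and prices below are invented for illustration, but they show how a price per dollar of payoff can be read as a rough market probability.

```python
# Hypothetical binary NGDP-range contracts, each paying $1 if growth falls in
# the stated range. All numbers are invented for illustration.
contracts = {
    "NGDP growth below 3%": 0.15,  # hypothetical market price in dollars
    "NGDP growth 3% to 5%": 0.60,
    "NGDP growth above 5%": 0.25,
}

total = sum(contracts.values())  # may not be exactly 1 in a real, thin market
for outcome, price in contracts.items():
    # Normalising by the total is a crude correction for fees and thin trading.
    print(f"{outcome}: implied probability ~ {price / total:.0%}")
```

The normalisation is only a rough fix; the point is just that prices on a complete set of mutually exclusive contracts double as a probability distribution over outcomes.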
Well, technically speaking the price of the future will reflect the capital-weighted opinions of the market participants. That is not necessarily the "best aggregate prediction" -- it could be, but there are no guarantees.
A couple of points:
You don't agree to "buy it today", you "agree to buy" it today. Both exchange of payments and assets take place in the future
The price of a future isn't determined by prediction any more than the asset is. The price of a future is given explicitly by the price of the underlying, suitably adjusted for cost of carry and cost of financing
First, that's not true, technically speaking. The price of the future is whatever the market clears at. Arbitrage is a strong force that keeps the future and the underlying prices in a certain relationship, true, but only under certain (though common) conditions.
Second, here we are talking about NGDP futures and with them specifically there is no arbitrage against the underlying because the underlying is just an economic number that you cannot buy and warehouse. So in this particular case the price of the future is purely prediction-based.
I'm having trouble posting an article, so any help would be appreciated. I've tried to make a discussion post, and whenever I click "submit," it says, "submitting..." for a few seconds. However, the thread never appears in the discussion section. Also, when I try to close out of the submit article page, it says, "you've made changes to the article, but haven't submitted it." Does anyone have any idea what could be causing this? My only idea is that the target and class of hyperlinks aren't set, or that the article's too long; it's 27 pages.
Can you save it to Drafts and then see it in the Drafts section (formatted as it would be if published)? It's a good idea to do this in any case, to fix any issues with formatting before publishing. After that, "you've made changes to the article" at least won't be true, so the issue could be isolated further. If that doesn't work, start another article with "Hello World" content, make sure you succeed in saving it to Drafts and observing the result, then replace its content with your article.
Formatting stories - any good evidence?
I've started toying around with setting up my story at its permanent website, and have put a test page of the first chapter at this page (NSFW due to image of tasteful female nudity). I find myself faced with all sorts of options - font? line width? dark on light or light on dark? inline style or CSS? spot colors? hyphens or em-dashes? straight or curly quotes? etc, etc? - and I don't have a lot of evidence to base any answers on.
As a preliminary set of answers, I've drawn on Butterick's Practical Typography, even though I don't have any particular reasons to favour that set of advice over any other, other than that it's a concentrated dose of a /lot/ of advice. I don't know how to set up formatting for multiple viewing devices, I've never touched CSS, and my budget for fonts or professional advice is pretty much zero.
Does anyone reading this know where I can find evidence that any changes I could make to my preliminary formatting would do any better than what I already have?
You could just answer it experimentally with A/B tests, measuring time-on-page, or whether readers leave a review, or something like that.
I see you're running off Apache on Dreamhost, so there are no doubt plenty of libraries to help you there, but there are other strategies: static sites can, with some effort, hook into Google Analytics to record the effect of an experiment, which is the approach I've used on gwern.net since I didn't want to manage a host like Dreamhost.
If I knew when I started what I know now, I would have begun with a large multifactorial experiment covering the tweaks I tested one by one as I thought of them. It sounds like you are already aware of the options you want to test, so you have it easy in that regard. (With the Abalytics approach, testing all those options simultaneously would be a major pain in the ass and would probably hamper page loads, since all possibilities need to be specified in advance in the HTML source, but I suspect any Apache library for A/B testing would make it much less painful to run many concurrent interventions.)
It'll probably take a few thousand visits before you have an idea of the larger effects, but that just means you need to leave the test running for a few months.
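If it helps, here's a minimal sketch of the analysis end - a plain two-sample t-test on time-on-page plus the standard rule-of-thumb sample sizes (the numbers are invented; real visit data would go in their place):

    from scipy import stats

    # Time-on-page (seconds) for two layout variants; the numbers are invented.
    variant_a = [62, 75, 58, 90, 48, 71, 66, 80, 55, 69]
    variant_b = [70, 82, 64, 95, 52, 78, 73, 88, 61, 77]

    t, p = stats.ttest_ind(variant_a, variant_b)
    print("t = {:.2f}, p = {:.3f}".format(t, p))

    # Rule of thumb: per-group sample size for ~80% power at alpha = 0.05
    # (two-sided) to detect a standardized effect size d is roughly 16 / d^2.
    for d in (0.5, 0.2, 0.1):
        print("d = {}: roughly {:.0f} visitors per variant".format(d, 16 / d ** 2))

The 16/d^2 line is where the "few thousand visits" figure comes from: small typographic effects (d around 0.1) need on the order of a couple of thousand visitors per variant.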
I've been reading studies on interventions to improve mood.
It seems worth taking seriously the possibility that we live in a world in which all single interventions have small to tiny effect sizes, and that, once we've removed factors known to have large negative impact, the mutable difference between people with mostly good mood and people with poorer mood comes down to a huge number of these small differences.
Some forms of therapy resemble this (examining a bunch of different thought patterns in CBT). Some studies claim to examine "lifestyle changes", but they often do it in a really lackluster, low-compliance way, such as "we gave this group of depressed people a pamphlet with 50 things they should change about their lives, and compared them to the group we aggressively tracked and encouraged to exercise daily".
Since we have good evidence for small positive effect sizes for a bunch of different things, I'd love to see good evidence on how those effect sizes combine. But I can't find this research.
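A toy sketch of the naive model I have in mind (all effect sizes invented): if interventions were independent and simply additive, the small standardized shifts would just sum, while any overlap between them would shrink the total.

    # Toy model: combine many small standardized mood effects (all numbers invented).
    effects = [0.05, 0.08, 0.10, 0.06, 0.12, 0.07, 0.09, 0.05]  # Cohen's d per intervention

    # Naive independent-and-additive model: the total shift is just the sum.
    print("Additive total: d = {:.2f}".format(sum(effects)))

    # Crude diminishing-returns variant: each additional intervention only
    # contributes a fraction of its effect, because interventions overlap.
    overlap = 0.5
    total, remaining = 0.0, 1.0
    for d in sorted(effects, reverse=True):
        total += d * remaining
        remaining *= overlap
    print("With 50% overlap per added intervention: d = {:.2f}".format(total))

The interesting empirical question is which of those two curves (or something worse) the real world follows.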
Any thoughts? Pointers?
The fact that an intervention's average effect size in a population is small doesn't mean that there aren't individual members of that population who benefit a great deal.
Over time I have also come to think of mood as less of a one-dimensional thing. People often change the way they judge their happiness, so you don't have a constant standard.
Getting compliance is really hard.
A question regarding polls: I have used the polls feature quite a bit now, and I get the feeling that many more people vote on the poll options than on the poll comment. Given that there is usually an option "just show me", which could be interpreted as "I don't care about this poll but want to satisfy my curiosity", we could estimate the number of people who like the poll. Shouldn't these people also up-vote the poll comment as a whole? Is it just laziness not to up-vote, or is there some higher standard for LW comments than I think?
I think it's the same phenomenon where a top-level comment can get a single upvote (or no upvotes) but still spark a pretty long comment thread. Seems a bit strange to me, as I feel that most non-troll comments that spark or contribute to a good discussion are worthy of an upvote, but I think the answer is that there is some higher standard for LW comments than (for example) reddit comments. Jokes and playful misinterpretation don't seem to do well here, even when they're funny.
As far as I can tell, the phenomenon is self-reinforcing; the sparsity of upvotes in general on LW probably discourages people from upvoting things unless they meet a higher threshold. It seems to me that people upvote based not just on whether they agree or see value in the discussion, but on whether they think it matches the ethos of the community. The end result is that the top-voted comments are almost always either "Very Less Wrong" things to say or very well-thought-out and well-said.
That sounds like a good result. Reddit, by contrast, has a problem where the top-voted things are almost always contentless fluff, because contentless fluff takes less time to consume and most people downvote much less than they upvote.
I don't think there's any valid inference from "thinks the poll worth voting on" to "thinks the comment with a poll in worth upvoting".
Suppose I see a poll, think it's a reasonable question to ask but no more, and am feeling helpful. Then I'll likely fill it in. But why does that mean I should be upvoting it? I mean, if I see a comment, think it's a reasonable thing to say but no more, should I be upvoting that? That would seem to lead to the conclusion that most people should be upvoting most comments, which seems waaaay off to me.
I upvote things that seem to me especially interesting or insightful or useful or witty or whatever. A poll can easily be worth filling in (if only because whoever posted it presumably is interested in the answers) without meeting that condition.
It's possible to be interested in / get value from a poll without approving of the fact that it's being conducted.
There is also the hypothesis that they don't like the poll, but still think that one result is worse than another.
Do not read the following if you haven't voted yet!
SERIOUSLY!
It's interesting to notice the almost perfect bell curve, centered but slightly nudged toward more activity. My hypothesis: the majority of people noticed no change (me included), but were swayed by your opening comment.
Possibly not because there wasn't any change, but because our brains aren't so good at measuring this kind of data.
Not exactly on topic, but if you are measuring something objective (like quantity of activity) rather than something subjective (like quality of activity), you are usually better off using an objective measure (like number of posts) instead of a subjective one (like a self-report Likert scale).
Maybe someone can do the database query again and post the result?
But the pure number of posts (which I'd bet has increased) isn't the only criterion, is it?
You would get more reliable results if you tried not to prime respondents.
Ah yes. Very well observed. I should have rot13'd it. Or posted it separately.
[tangential] The price of Bitcoin has been dropping significantly in the past few weeks, and dropped below $300 yesterday. I've read many theories about why this can't happen, but it is happening. What's going on?
It's a bear market. The price moved from ~$100 to ~$1100 in the fall of 2013. The price action of the past 10 months is a correction of that move. After an 11x price increase, a retracement of 70% is perfectly normal market behavior. This is just the bitcoin boom-and-bust market cycle.
A large holder did sell 30,000 coins yesterday at $300 each. (And in fact, he did so in a much less sophisticated way than normal - he simply put 30,000 coins out there at a price of $300 and then just sat there. A more sophisticated trader selling in smaller increments could have gotten more money for them.) This action did control the price of bitcoin for a number of hours. It was one small piece of the decline from $1100 in Dec 2013 to $300 now, but obviously it wasn't the main driver.
There is nothing special about the decline in bitcoin from $1100 to $300. It is merely the result of the fact that the price previously rose from $100 to $1100 in a short time. This is how markets work. The price does not move smoothly in straight lines. It moves three steps forward and two back. It overshoots massively to the upside and to the downside.
It is very hard to tell exactly where the top and the bottom are going to be. Back last November, it would have been hard to guess whether the top would be at $500, or $1100 as it was, or $2000. And it's hard to guess the bottom now. You might have thought it was done falling when it was at $500. It might be done now. Or it might drop to $200 or lower. (You can make a pretty decent case for $275 on Sunday morning having been the absolute low, based on the enormous volume of trading and the extreme distance the price moved away from the moving average. Of course, it is possible that even larger volume and an even more extreme drop could be coming. We will not be able to say for sure what the bottom was until well after the fact.)
The entire market is based on speculation right now, as well as being small enough for a few big players to significantly move the needle. This is a combination that means that one or two people can cause a drop, which causes a mass sell off (the inverse can happen too). Of course, this is a "just so" story... the reality is more complicated.
Point being, you won't be able to predict bitcoin prices until bitcoin as a payment network and store of value overcomes bitcoin as speculation.
I don't know why someone would believe it couldn't happen. The price of bitcoin is determined exactly like stock prices and is subject to the same variations for the same reasons.
The growth of the bitcoin market is below expectations, so people sell their bitcoins to cash out their gains, and the price drops. That's economics 101.
Stock prices are anchored to the expected discounted present value of firms' profits. Bitcoins have no anchor. Think of it this way: If the market went crazy and valued Apple at zero you would do very well to buy the entire company for $1000. But if the market decided to value Bitcoins at zero, you would not want to buy them all up for $1000.
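To spell the anchor out with invented numbers: a stock's fundamental value is roughly the sum of expected future profits discounted back to today, and a bitcoin has no analogue of this calculation.

    # Toy discounted-cash-flow anchor for a stock (all numbers invented).
    expected_profits = [10.0, 11.0, 12.0, 13.0, 14.0]  # per-share profits, next 5 years
    discount_rate = 0.08

    present_value = sum(p / (1 + discount_rate) ** (t + 1)
                        for t, p in enumerate(expected_profits))
    print("Anchor value per share: {:.2f}".format(present_value))
    # A bitcoin produces no cash flows, so there is nothing to discount here.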
At least with tulip bulbs you can, like, grow tulips.
In five years the go-to example for speculative bubbles that popped might be bitcoins rather than tulip bulbs.
At least some recent research suggests that the Dutch tulip bubble was in fact a tulip contracts bubble, which expanded when legal changes converted commodity futures contracts to options and collapsed when authorities halted trading.
Is there an important difference between a tulip contracts bubble and a tulip bubble?
Sure, there's the question of whether any actual tulip bulbs were exchanged.
No, not at all. Buying stock gives you a legally recognized claim on the company's assets. Companies are rarely valued below their "book value", just the price of everything they own (in cases where this happens it usually means that the market thinks the management will waste these assets before someone pries their hands off them). Bitcoin gives you a legally recognized claim on... nothing.
Even if you think of Bitcoin as a bona fide currency, foreign exchange rates are NOT determined "exactly like stock prices". Thinking bitcoins are equity shares is a category error.
There are (or were) many, many Bitcoin advocates in the world who can't see it being anything other than deflationary (there is a limited supply, it does interesting things, etc.). Then the world turns around and sends Bitcoin inflationary for this whole year. Empiricism beats praxeology (again).
We can't live in a world in which the market expects Bitcoins to steadily increase in value compared to the dollar because of the arbitrage opportunities this would create.
Yes we can, because of the relative opportunity costs of holding Bitcoins and dollars. You may as well say we can't live in a world in which the market expects stocks to steadily increase in value compared to the dollar. Dollars are far more liquid than Bitcoins (or stocks) and, in equilibrium, you have to pay for that liquidity.
If the market expected the price of Bitcoins to steadily increase, people would buy Bitcoins today, increasing Bitcoin's price, until the price of Bitcoins was high enough that people would no longer be confident it would keep increasing.
Again, you are neglecting the cost of holding Bitcoins.
Suppose today a Bitcoin is worth $100, and everyone thinks that Bitcoins will be worth $102 in one year. Anyone could gain an expected $2 by a buy-and-hold strategy on Bitcoins, but that means they will have to hold their money in Bitcoins for the next year, which is not as useful as holding dollars, because it's much easier to turn their dollars into other assets (or consumption). If people think that this liquidity cost of holding Bitcoins is at least $2, then they will not bid up Bitcoins now.
Moreover, buying-and-holding Bitcoins has an opportunity cost because it means you aren't (for example) buying stocks, bonds, land, gold, or engaging in immediate consumption. So, if we suppose the market interest rate is 3%, then even though Bitcoins are expected to go up $2 over the next year, no-one is going to bid them up to $102 now, because they can get a $3 return elsewhere.
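Putting numbers on that, using the figures from the example above plus the assumed 3% alternative return:

    # Worked version of the example above; nothing here is measured data.
    price_now = 100.0
    expected_price_next_year = 102.0
    market_return = 0.03  # return available elsewhere (stocks, bonds, ...)

    btc_gain = expected_price_next_year - price_now   # expected $2 from holding BTC
    alt_gain = price_now * market_return              # $3 from the 3% alternative
    print("Expected gain from holding BTC for a year: ${:.0f}".format(btc_gain))
    print("Gain from the 3% alternative:              ${:.0f}".format(alt_gain))
    # Since $2 < $3 (before even counting the liquidity cost), nobody bids the
    # price up to $102 today merely because $102 is expected next year.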
Implied by your logic:
Needless to say, none of these are the case.
There is substantial evidence that a giant whale dumped $9m worth of coins at $300. Now that the sell wall is gone, the price is back up.
Otherwise, just the typical accumulation phase of a boom-bust cycle.
Evidence or speculation? I saw the $300 sell wall, but that does not account for the previous week's dip, which is when the "bearwhale" speculation started. I did see plenty of speculation to this end ... but humans, particularly bagholders in a bubble, will grasp for any explanation that is not "we were foolish".
Really, everything is based on the assumption of conspiracy:
It's a cliche for good reason that everything and its opposite is "great news for Bitcoin!"
The ridiculously inflated prices peaking in December 2013 are almost completely explained by Mt. Gox's blatant fraud and the Willy and Marcus bots. A decline from that would be the expectation.
So what was the solid evidence for (and against) conspiracy, as opposed to the null hypothesis that this is just one week in a bubble on its way down?
Neither of those are true (probably).
The price is the result of normal market forces, not a 'conspiracy' to decrease the price, or 'manipulation' upward last November due to bots. All the 'manipulation' talk is complete bullshit. There is nothing at all unusual in the price movements of bitcoin. It is completely normal for an asset that is growing from essentially no value into the billions-of-dollars range in five years.
If a startup company went public with shares at its inception and grew to a $10 billion market valuation over a 5-year period, its chart would look a LOT like the bitcoin price chart. Huge, massive increases in value as it reached new milestones of adoption. Massive contractions as it looked like it might fail. But the thing is, you DON'T see the valuations of startup companies like this, because they are owned by a small number of founders, venture capitalists, and angel investors. So the bitcoin price looks unusual to most people.
So the first hypothesis, "there is a conspiracy to manipulate the price", is complete BS. (There are tons of idiots on places like bitcointalk.org who believe things like this, because they are clueless. But this is not a truth about reality. It is a rationalization made up after the fact by people for whom bitcoin is essentially a religion that you must take on faith, to explain why the price is now going down.)
The second hypothesis, "Bitcoin is a bubble on the way down", is also probably not true. (Though the chance that it is true is a lot greater than the idiotic manipulation theory.) The reason this is unlikely is that the blockchain is a truly revolutionary technology, whose impact on the world is going to be MUCH greater than a mere $10 billion company. (If you look at bitcoin's market cap as the valuation of a company, it peaked around $10 billion.)
It might be true that the bitcoin blockchain is about to be surpassed by a competitor. That is, if you think of bitcoin as a startup, as the first startup to pioneer a technology, it is possible that now it is losing its place to a competing startup. If this were true, maybe bitcoin is going down because some altcoin is doing a much better job and is going to surpass it.
Even if bitcoin is going to be eclipsed by a competitor, it is much more likely that bitcoin would have another rise, but during that rise, its successor will greatly outperform it, and eventually surpass it.
To summarize:
Chance that the bitcoin price movements, in the long term, are due to a conspiracy: Close to 0%.
Chance that the blockchain technology will just die, and bitcoin and all altcoins will just die: Very Low.
Chance that bitcoin is currently going straight to $0 because a competitor blockchain technology is surpassing it right now: Low.
I do think that bitcoin is at risk of being surpassed by competing blockchains in the future, but not so imminently that it is going straight to zero right now. I think that in the end, both bitcoin and a few different other blockchains will survive. Most of those probably have not even been created yet.
In the same way that there is not only one surviving internet company today, and in fact there are many, in the future there will be many surviving blockchain technologies. This is true, even while the vast majority of the coins currently in existence will fail, just as most of the dot com bubble companies failed.
Yes. The conspiracy theories are rationalizations that have been invented because reality contradicted their belief system. It is absolutely possible for the price to decline the way it did. It could go much lower. It could even go a lot lower and still recover, and go much higher in the future! It already did exactly that in 2011-2013.
Random fluctuations have moved the price that much on a daily basis. The fact that we are trending downward generally is certainly expected -- it is exactly what has happened 5x earlier in bitcoin's history, and numerous other times in speculative bubbles. It's possible for the near-term trend to be down 90% of the time, and yet the overall long term trend to be up. Indeed, this is expected due to the typical behavior of whales. They moderate demand so that prices continue to gradually fall, all the while accumulating coins. Eventually the bottom is reached when they no longer are in control of demand. Then the bull market starts and you have a very quick run up to an all-new high.
This is a very common pattern. It happens in commodities, it happens in stock markets, it happens in real estate prices. It has repeated over and over in the history of bitcoin.
Cryonics and transhumanism are laughably irrational. Guess that's what happens with a cult based on a Harry Potter fanfiction by a dropout
This seems like an obvious enough thought experiment that there is probably a literature on it, but I have not found any: how much vacation would it be ethical for a superhero to take? The first kneejerk reaction seems to be: none. Assuming even one life saved per hour, he'd be "killing" a handful of people just by going on a date (beyond, or assuming away, a bare psychological minimum for sanity, et cetera). The next kneejerk reaction is that the first one is nuts. Thoughts/references?
And despite involving superheroes, I am seriously interested.
As long as we're talking psychologically human superheroes rather than, say, aliens with perfect ethics and unlimited willpower from the planet Krypton, this seems equivalent to the problem of maximizing worker productivity (adjusted if necessary for the type of work). There's a substantial literature on that.
This seems to have two mostly-orthogonal components to consider, both of which don't seem like things we can actually discuss:
And this applies as much to superheroes as to the rest of us.
Among other fictional examples, the character of Panacea in Worm (and several wormfics) faces this issue.
I am reminded of this.
See, I KNEW there was a literature ;)
More seriously, John McCarthy.
[EDITED to add:] Note the link at the end saying "solution" which has McCarthy's own proposal.
I don't think the issue is much different from that of normal people donating money for bednets.
Random thought.
So, minor changes in designs of things sometimes result in better versions of things. You then build those things, and make minor design changes there. Repeat. Eventually you often get a version of a thing that no longer sees improvement from minor change.
"Global maximum!" declares a PhD biologist at a good university.
How common is this defect?
Could you give an example of a PhD biologist making that error?
I think I'd expect PhD biologists at good universities (or, at least, those working with evolutionary systems) to be aware that hill-climbing processes often get stuck in local optima.
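For anyone who wants to see it rather than take it on faith, here's a minimal hill-climbing sketch on a deliberately two-peaked function (everything in it is made up for illustration):

    # Minimal hill climbing: small local steps get stuck on whichever peak
    # is nearest the starting point, not necessarily the global maximum.
    import math

    def fitness(x):
        # Two peaks: a small one near x = 2 (height ~1), a big one near x = 8 (height ~3).
        return math.exp(-(x - 2) ** 2) + 3 * math.exp(-((x - 8) ** 2) / 4)

    def hill_climb(x, step=0.1, iterations=1000):
        for _ in range(iterations):
            for candidate in (x - step, x + step):
                if fitness(candidate) > fitness(x):
                    x = candidate
        return x

    for start in (0.0, 5.0, 9.0):
        end = hill_climb(start)
        print("start {:>4}: ends near x = {:.1f} with fitness {:.2f}".format(start, end, fitness(end)))
    # Starting at 0.0 climbs the small peak near 2 and stays there: a local optimum.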
By the way, this video showing the laryngeal nerve of the giraffe is a fascinating example of the role of local optima in evolution.
Criticize the following idea at will: let's see if there's a nugget of truth in it or if it busts under its own weight.
Moral progress can be modeled as the strengthening of a society's memetic immune system.
Two extreme cases to illustrate this. First, a society where power is held by an elite with a very strict and very uniform moral code: any stray meme will likely be in conflict with the elite's memeplex, and so it poses a threat to the society itself. The immune system is very weak, and the society is very oppressive. Second, a society where power is distributed, or where the elite derives its authority from something other than a very tight set of ideas: in this case, not many memes will undermine the society's stability, its immune system is much stronger, and the society is more tolerant and progressive.
I think you're only looking at one failure mode-- the excessively brittle society. Living organisms have boundaries that let some things in and refuse others.
A society needs to support some ideas, but not all ideas. For example, your society with the strong immune system needs some way to stabilize itself against going authoritarian.
Anyone have general thoughts on distributed computing/grid computing projects? Any in the following list interest you? Any in the following list appear to be a waste of resources?
http://en.wikipedia.org/wiki/List_of_distributed_computing_projects
On this topic, you may be interested in Gwern's brutal takedown of Folding@home.
Should The Great Internet Mersenne Prime Search be considered a bad idea by that measure?
Are there any benefits to knowing prime numbers so large they can't even be used in cryptography?
No?
Then I guess it's a bad idea.
You never know what's going to shake out from pure math. Still, hunting for extremely large primes might not be efficient, even by the standards of pure math.
This is only tangentially my field, but I'd expect the numbers themselves to be much less potentially useful than the algorithms needed to find them. Since GIMPS is just throwing FLOPs at the problem through established math, it doesn't look like an especially good approach to me.
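For the curious, the established math in question is essentially the Lucas-Lehmer test; a toy version for small exponents looks like this (GIMPS's real work is in doing the squaring step efficiently for exponents in the tens of millions):

    # Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime iff
    # s_(p-2) == 0 (mod M_p), where s_0 = 4 and s_(k+1) = s_k^2 - 2.
    def lucas_lehmer(p):
        m = 2 ** p - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # Small prime exponents; only some of them yield Mersenne primes.
    for p in (3, 5, 7, 11, 13, 17, 19, 23, 31):
        print("2^{} - 1 is {}".format(p, "prime" if lucas_lehmer(p) else "composite"))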
I'm Giving Away Money!
I recently posted about a writing wager I have developed for myself and a group of friends. The wager is simple: everyone chooses a charity and writes a novel. Finish the novel: donate to your charity. Give up or go a month without progress: donate to another charity (we have yet to decide whether it should be one charity or all three other charities).
I plan to choose an effective charity; however, I am still in the decision process. So, here's your chance to influence who gets the money: tell me who I should donate to.
I'd like to hear any suggestions for the most effective charity you know. Support your choice as much or as little as you wish; I'll still be making the final decision. Interpret the word "effective" however you wish. The point is: tell me who you think I should give money to.
I'm going to be the person who does the obvious because he saw the comment early, and link you to GiveWell with minimal commentary :P
Why is Quixey associated with rationalism? From its website it doesn't seem different from any of many other startups.
Several LW members were involved in its founding.
Scott's map was more about social connections than ideological ones.