If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Received word a few days ago that (unofficially, pending several unresolved questions) my GJP performance is on track to make me eligible for "super forecaster" status (last year these were picked from the top 2%).
ETA, May 9th: received the official invitation.
I'm glad to report that I am one of those who make this achievement possible by occupying the other 98%. Indeed I believe I am supporting the high ranking of a good 50% of the forecasters.
More seriously, congratulations. :)
For kicks, and reminded by all my recent digging for long-forgotten launch and shutdown dates of Google properties, I've compiled a partial list of times I've posted searches & results on LW:
http://lesswrong.com/lw/h4e/differential_reproduction_for_men_and_women/8ovg
http://lesswrong.com/lw/h2h/i_hate_preparing_food_my_solution/8o19
http://lesswrong.com/lw/h1t/link_the_power_of_fiction_for_moral_instruction/8nkc
http://lesswrong.com/lw/gnk/link_scott_and_scurvy_a_reminder_of_the_messiness/8gfc
http://lesswrong.com/lw/g75/psa_please_list_your_references_dont_just_link/87ds
http://lesswrong.com/lw/f8x/rationality_quotes_november_2012/7tdc
http://lesswrong.com/lw/3dq/medieval_ballistics_and_experiment/7i2k
http://lesswrong.com/lw/e26/who_wants_to_start_an_important_startup/7adg
http://lesswrong.com/lw/if/your_strength_as_a_rationalist/768t
http://lesswrong.com/lw/dx7/link_holistic_learning_ebook/74yj
http://lesswrong.com/lw/c3g/seq_rerun_quantum_nonrealism/6h7e
http://lesswrong.com/lw/bws/stupid_questions_open_thread_round_2/6g25?context=1#6g25
My belief is that the more familiar and skilled you are with a tool, the more willing you are to reach for it. Someone who has been programming for decades will be far more willing to write a short one-off program to solve a problem than someone who is unfamiliar and unsure about programs (even if they suspect that they could get a canned script copied from StackExchange running in a few minutes). So the unwillingness to try googling at all is at least partially a lack of googling skill and familiarity.
Iain (sometimes M.) Banks is dying of terminal gall bladder cancer.
Of more interest is the discussion thread on Hacker News regarding cryonics. There are a lot of cached responses and a lot of misinformation going around on both sides.
In the Culture novels, he has all humans just sorta choosing to die after a millennium of life, despite there being absolutely no reason for humans to die since available resources are unlimited, almost all other problems solved, aging irrelevant, and clear upgrade paths available (like growing into a Mind).
A thousand years instead of 70 is just deathism with a slightly different n.
Eh, I kinda agree with you in a sense, but I'd say there's still a qualitative difference if one has successfully moved away from the deathist assumption that the current status quo for life-span durations is also roughly the optimal life-span duration.
now that "Iain Banks death" and variants are nearly-meaningless search terms
If you want to search the past, go to google, search, click "Search tools," "Any time," "Custom range..." and fill in the "To" field with a date, such as "2008."
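For the lazy, the same custom range can be reached directly via URL. The `tbs=cdr` parameter names below are what the search UI itself produces, not an official API, so treat this sketch as an assumption that may break without notice:

```python
from urllib.parse import urlencode

def dated_search_url(query, cd_min, cd_max):
    """Build a Google search URL restricted to a custom date range.

    cd_min/cd_max use the M/D/YYYY format the search UI generates.
    Note: 'tbs=cdr:1,...' is an undocumented UI parameter, not a stable API.
    """
    params = {"q": query, "tbs": f"cdr:1,cd_min:{cd_min},cd_max:{cd_max}"}
    return "https://www.google.com/search?" + urlencode(params)

# e.g. search pre-2008 only:
print(dated_search_url("Iain Banks", "1/1/2000", "12/31/2008"))
```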
I've been writing blog articles on the potential of educational games, which may be of interest to some people here:
I'd be curious to hear any comments.
I realise it's a constructed example, but a videogame that would be even remotely accurate in modelling the causes of the fall of the Roman Empire strikes me as unrealistically ambitious. I would at any rate start out with Easter Island, which at least is a relatively small and closed system.
Another point is that, if you gave the player the same levers that the actual emperors had, it's not completely clear that the fall could be prevented; but I suppose you could give points on doing better than historically.
Pokemon is an example of what an educational game which doesn't care about realism could look like. People should be expected to learn the game, not the reality, and that will especially be the case when the game diverges from reality to make it more fun/interesting/memorable. If you decide that the most interesting way to get people to play an interactive version of Charles Darwin collecting specimens is to make him be a trainer that battles those specimens, then it's likely they will remember best the battles, because those are the most interesting part.
One of the research projects I got to see up close was an educational game about the Chesapeake; if I remember correctly, children got to play as a fish that swam around and ate other fish (and all were species that actually lived in the Chesapeake). If you ate enough other fish, you changed species upwards; if you got eaten, you changed species downwards. In the testing they did afterwards, they discovered that many of the children had incorporated that into their model of how the Chesapeake worked: if a trout eats enough, it becomes a shark.
I mentioned that I was attending a Landmark seminar. Here is my review of their free introductory class that hopefully adds to the conversation for those who want to know:
Coaches - They are the people who lead the class and I found them to be genuine in their belief in the benefits of taking the courses. These coaches were unpaid volunteers. I found their motives for coaching were for self-improvement and to some degree altruism. In short, it helped them, and they really want to share it.
Material - The intro course consists more of informative ideas than of exercises. Their informative ideas are also trademarked phrases, which makes it gimmicky and gives an idea more importance than it really warrants. We were not told these ideas were evidence-based. Lots of information on how to improve one's life was thrown around, but no research or empirical evidence was given. Not once were the words "cognitive science" or "rationality" used. I speculate that the value the course gives its students is not from the informative ideas, but probably from the exercises and the motivation one gets from being actively pushed by the coaches to pursue goals. ...
I am generally still very bad at steelmanning, but I think I am now capable of a very specific form of it. Namely, when people say something that sounds like a universal statement ("foos are bar") I have learned to at least occasionally give them the benefit of the doubt instead of assuming that they literally mean "all foos are bar" and subsequently feeling smug when I point out a single counterexample to their statement. I have seen several people do this on LW lately and I am happy to report that I am now more annoyed at the strawmanners than the strawmanned in this scenario.
Looks like Scott Adams has given Metamed a mention. (lotta m's there...)
I find it particularly interesting because a while back he himself was a great example of a patient independently discovering, against official advice, that their rare, debilitating illness could be cured -- specifically, that of losing his voice due to a neurological condition. He doesn't mention it in the blog post though.
(At least, I think this is a better example to use than the woman who found out how to regenerate her pinky.)
[Aside] I'm not sure how I feel about Scott Adams in general. I enjoyed his work a lot when I was younger, but he seems very prone to being contrarian for its own sake and over-estimating his competence in unrelated domains.
I'm working on an analysis of Google services/products shutdown, inspired by http://www.guardian.co.uk/technology/2013/mar/22/google-keep-services-closed
The idea is to collate as many shuttered Google services/products as possible, and still live services/products, with their start and end dates. I'm also collecting a few covariates: number of Google hits, type (program/service/physical object/other), and maybe Alexa rank of the home page & whether source code was released.
This turns out to be much more difficult than it looks because many shutdowns are not prominently advertised, and many start dates are lost to our ongoing digital dark age (for example, when did the famous & popular Google Translate open? After an hour of applying my excellent research skills, plus help from #lesswrong and no fewer than 5 people on Google+, the best we can say is that it opened some time between 02 and 08 March 2001). Regardless, I'm up to 274 entries.
The idea is to graph the data, look for trends, and do a survival analysis with the covariates to extrapolate how much longer random Google things have to live.
Does anyone have suggestions as to additional predictive variables which could be found with a reasonable amount of effort for >274 Google things?
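For what it's worth, the core of such a survival analysis fits in a few lines. Here is a minimal Kaplan-Meier estimator in plain Python; the lifetimes and censoring flags at the bottom are made up for illustration, and a real analysis would use something like R's `survival` package (and handle tied event times properly):

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier estimate of the survival function.

    durations: years from launch to shutdown, or to today if still alive
    observed:  True if the product was actually shut down (an 'event'),
               False if it is still running (a censored observation)
    Returns a list of (time, estimated survival probability) steps.
    """
    at_risk = len(durations)
    survival = 1.0
    curve = []
    # Walk through products in order of lifetime; censored ones just
    # shrink the risk set, while observed shutdowns knock the curve down.
    for t, shut_down in sorted(zip(durations, observed)):
        if shut_down:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1
    return curve

# Hypothetical lifetimes (years) for four products; the 3-year one is still alive.
print(kaplan_meier([1, 2, 3, 4], [True, True, False, True]))
```

The covariates (hits, type, Alexa rank) would then go into a Cox proportional-hazards regression rather than this bare estimator.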
It was recently brought to my attention that Eliezer Yudkowsky regards the monetary theories of Scott Sumner (short overview) to be (what we might call) a "correct contrarian cluster", or an island of sanity when most experts (though apparently a decreasing number) believe the opposite.
I would be interested in knowing why. To me, Sumner's views are a combination of:
a) Goodhart's folly ("Historically, an economic metric [that nobody cared about until he started talking about] has been correlated with economic goodness; if we only targeted this metric with policy, we would get that goodness. Here are some plausible mechanisms why ..." -- my paraphrase, of course)
b) Belief that "hoarded" money is pure waste with no upside. (For how long? A day? A month?)
If you are likewise surprised by Eliezer's high regard for these theories, please join me in encouraging him to explain his reasoning.
And right now I don't feel like buying something that costs 20% more than it did literally yesterday.
Forgive me for stating the obvious: this sounds like the sunk cost fallacy. There's a cost in that you did not buy coins when they were cheaper, and though this does affect how you feel about the issue, it shouldn't (instrumental-rationally) affect your choices.
I did buy coins when they were at ~40$, and I was then regretting that I hadn't bought more when two weeks earlier they were at 10$. When they were at 70$ I chose to buy some more -- and I regretted not buying more when they were at 40$. But both my buy at 40$ and my buy at 70$ were good ones.
Now bitcoins are at around 141$ to 143$. Whether to buy or not buy at this point should depend on an estimation of whether the price is going to go up or down from here -- and your estimation of how soon and how far the price of bitcoin is going to rise or crash from this point onwards. There's always a risk and a chance.
The free will page is obnoxious. There have been several times in recent months when I have needed to link to a description of the relationship between choice, determinism and prediction but the wiki still goes out of its way to obfuscate that knowledge.
One of the easiest hard questions, as millennia-old philosophical dilemmas go. Though this impossible question is fully and completely dissolved on Less Wrong, aspiring reductionists should try to solve it on their own.
That's a nice thought. But it turns out that many lesswrong participants don't try to solve it on their own. They just stay confused.
There have been some other discussions of the subject here (and countless elsewhere). Can someone suggest the best reference available that I could link to?
You know you spend too much time on LW when someone mentioning paperclips within earshot startles you.
I’m doing a research project on attraction and various romantic strategies. I’ve made several short online courses organizing several different approaches to seduction, and am looking for men 18 and older who are interested in taking them, along with a short pre- and post-survey designed to gauge the effectiveness of the techniques taught. If you want to sign up, or know anyone who might be interested, you can use this google form to register. If you have any questions, comment or PM me and I’ll get back to you.
ETA: Since someone mentioned publication I thought I should clarify. This is specifically a student research project, so unlike a class project I am aiming for a peer-reviewed publication; however, the odds are much slimmer than if someone more experienced/academically higher-status were running it. Also, even if it doesn't get formally published I will follow the "Gwern model". That is to say, I'll publish my results online along with as much of my materials as I can (the courses are my own work + publicly available texts, but I only have a limited license for the measures I'm using).
Here is a blog which asserts that a global conspiracy of transhumanists controls the media and places subliminal messages in pop music such as the Black Eyed Peas music video "Imma Be" in order to persuade people to join the future hive-mind. It is remarkably lucid and articulate given the hysterical nature of the claim, and even includes a somewhat reasonable treatment of transhumanism.
http://vigilantcitizen.com/musicbusiness/transhumanism-psychological-warfare-and-b-e-p-s-imma-be/
Transhumanism is the name of a movement that claims to support the use of all forms of technology to improve human beings. It is far more than just a bunch of harmless and misguided techie nerds, dreaming of sci-fi movies and making robots. It is a highly organized and well financed movement that is extremely focused on subverting and replacing every aspect of what we are as human beings – including our physical biology, the individuality of our minds and purposes of our lives – and the replacement of all existing religious and spiritual beliefs with a new religion of their own – which is actually not new at all.
EDIT: I see this was previously posted back in 2010, but if you haven't witnessed this blog yet it is worth a look.
Good to know that someone's keeping the ol' Illuminati flame burning. Pope Bob would be proud.
The thing I find most curious about the Illuminati conspiracy theory is that if you look at the doctrines of the historical Bavarian Illuminati, they are pretty unremarkable to any educated person today. The Illuminati were basically secular humanists — they wanted secular government, morality and charity founded on "the brotherhood of man" rather than on religious obedience, education for women, and so on. They were secret because these ideas were illegal in the conservative Catholic dictatorship of 18th-century Bavaria — which suppressed the group promptly when their security failed.
If CFAR becomes at all successful, conspiracists will start referring to it as an Illuminati group. They will not be entirely wrong.
Offhand, I haven't seen any LWers write about having chemical addictions, which seems a little surprising considering the number of people here. Have I missed some, or is it too embarrassing to mention, or is it just that people who are attracted to LW are very unlikely to have chemical addictions?
or is it just that people who are attracted to LW are very unlikely to have chemical addictions?
Too busy with the internet addictions?
As usual, caffeine addiction is so common that it needs to either be explicitly excluded or else its inclusion pointed out so readers know how meaningless the results may be for what they think of as 'chemical addiction'.
I started a blog about a month or two ago. I use it as a "people might read this so I better do what I'm committing to do!" tool.
Link: Am I There Yet?
Feel free to read/comment.
I get the impression that there is something extremely broken in my social skills system (or lack thereof). Something subtle, since professionals have been unable to point it out to me.
I find that my interests rarely overlap with anyone else's enough to sustain (or start, really) conversation. I don't feel motivated to force myself to look at whatever everyone else is talking about in order to participate in a conversation about it.
But it feels like there's something beyond that. I was given the custom title of "the confusenator" on one forum....
If I stay up ~4 hours past my normal waking period, I get into a flow state and it becomes really easy to read heavy literature. It's like the part of my brain that usually wants to shift attention to something low effort is silenced. I've had a similar, but less intense increase in concentration after sex / masturbation.
Anyone else had that experience?
Is there any particular protocol on reviving previously-recurring threads that are now dormant? I had some things to put in a Group Rationality Diary entry, but there hasn't been a post since early January. I sent cata a message a few days ago; haven't heard back.
Strong AI is hard to predict: see this recent study. Thus, my own position on Strong AI timelines is one of normative agnosticism: "I don't know, and neither does anyone else!"
Increases in computing power are pretty predictable, but for AI you probably need fundamental mathematical insights, and it's damn hard to predict those.
In 1900, David Hilbert posed 23 unsolved problems in mathematics. Imagine trying to predict when those would be solved. His 3rd problem was solved that same year. His 7th problem was solved in 1935. His 8th problem still hasn't been solved.
Or imagine trying to predict, back in 1990, when we'd have self-driving cars. Even in 2003 it wasn't obvious we were very close. Now it's 2013 and they totally work, they're just not legal yet.
Same problem with Strong AI. We can't be confident AI will come in the next 30 years, and we can't be confident it'll take more than 100 years, and anyone who is confident of either claim is pretending to know too much.
you probably need fundamental mathematical insights, and it's damn hard to predict those.
We can still try. As it happens, a perfectly relevant paper was just released: "On the distribution of time-to-proof of mathematical conjectures"
...What is the productivity of Science? Can we measure an evolution of the production of mathematicians over history? Can we predict the waiting time till the proof of a challenging conjecture such as the P-versus-NP problem? Motivated by these questions, we revisit a suggestion published recently and debated in the "New Scientist" that the historical distribution of time-to-proof's, i.e., of waiting times between formulation of a mathematical conjecture and its proof, can be quantified and gives meaningful insights in the future development of still open conjectures. We find however evidence that the mathematical process of creation is too much non-stationary, with too little data and constraints, to allow for a meaningful conclusion. In particular, the approximate unsteady exponential growth of human population, and arguably that of mathematicians, essentially hides the true distribution. Another issue is the incompleteness of t...
I've been looking at postgraduate programmes in the philosophy of Artificial intelligence, (primarily but not necessarily in the UK). Does anyone have any advice or suggestions?
I'm going to Hacker School this summer, and I need a place to stay in NYC between approximately June 1 and August 23. Does anyone want an intrepid 20-year-old rationalist and aspiring hacker splitting the rent with them?
Also, applications for this batch of Hacker School are still open, if you're looking for something great to do this summer.
After rereading the metaethics sequence, a possible reason occurred to me why people can enjoy tragedy (the artistic genre). I think there's an argument to be made along the lines of "watching tragedy is about not feeling guilty when you can't predict the future well enough to see what right is."
Grading is the bane of my existence. Every time I have to grade homework assignments, I employ various tricks to keep myself working.
My normal approach is to grade 5 homework papers, take a short break, then grade 5 more. It occurred to me just now that this is similar to the "pomodoro" technique so many people here like, except work-based instead of time-based. Is the time-based method better? Should I switch?
Anyway, back to grading 5 more homework papers.
I've known for a while that for every user there's an RSS feed of their comments, but for some reason it's taken me a while to get in the habit of adding interesting people in Google Reader. I'm glad I have.
(Effort in adding them now isn't wasted, since when I move from Google Reader I'll use some sort of tool to move all my subscriptions across at once to whatever I move to)
Trying to get a handle on the concept of agency. EY tends to mean something extreme, like "heroic responsibility", where all the non-heroic rest of us are NPCs. Luke's description is slightly less ambitious: an 'agent' is something that makes choices so as to maximize the fulfillment of explicit desires, given explicit beliefs. Wikipedia defines it as a "capacity to act", which is not overly useful (do ants have agency?). The LW wiki defines it as the ability to take actions which one's beliefs indicate would lead to the accomplishment ...
In HP:MoR, Harry mentioned that breaking conservation of energy allows for faster-than-light signalling. Can someone explain how?
I had some students complaining about test-taking anxiety! One guy came in and solved the last midterm problem 5 minutes after he had turned in the exam, so I think this is a real thing. One girl said that calling it something that's not "exam" made her perform better. However, it seems like none of them had ever really confronted the problem? They just sort of take tests and go "Oh yeah, I should have gotten that. I'm bad at taking tests."
Have any of you guys experienced this? If so, have you tried to tackle it head-on? It seems like ...
I watched an awesome movie, and now I'm coasting in far mode. I really like being in far mode, but is this useful? What if I don't want to lose my awesome-movie high?
Are there some things that far mode is especially good for? Should I be managing finances in this state? Reading a textbook? Is far mode instrumentally valuable in any way? Or should I make the unfortunate transition back to near mode?
How much do we know about reasoning about subjective concepts? Bayes' law tells you how probable you should consider any given black-and-white no-room-for-interpretation statement, but it doesn't tell you when you should come up with a new subjective concept, nor (I think) what to do once you've got one.
Hazards of botched IT: cost overruns are nothing compared to what can go wrong when you actually use the software.
Software which can answer "is this obviously stupid?" would be a step towards FAI.
Does anyone here have thoughts on the x-risk implications of Bitcoin? Rebalancing is a way to make money off of high-volatility investments like Bitcoin (the more volatility, the more money you make through rebalancing). If lots of people included Bitcoin in their portfolios, and started rebalancing them this way, then the price of Bitcoin would also become less volatile as a side effect. (It might even start growing in price at whatever the market rate of return for stocks/bonds/etc. is, though I'd have to think about that.)
So given that I could spread this meme on how you can get paid to decrease Bitcoin's volatility, should I do it?
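The volatility-harvesting claim is easy to check with a toy simulation: hold a portfolio that starts half cash, half Bitcoin, through an oscillating price path, with and without rebalancing. The price path and the 50/50 split are illustrative assumptions, not a claim about real Bitcoin returns:

```python
def final_value(prices, rebalance, start_wealth=100.0):
    """Track a portfolio that starts half cash, half Bitcoin.

    If rebalance is True, restore the 50/50 split after every price move;
    otherwise just buy and hold. Returns the final portfolio value.
    """
    cash = start_wealth / 2
    units = (start_wealth / 2) / prices[0]  # BTC bought at the first price
    for p in prices[1:]:
        total = cash + units * p
        if rebalance:
            cash = total / 2
            units = cash / p  # sell high / buy low, back to 50/50
    return cash + units * prices[-1]

# A made-up price that ends where it started but oscillates in between:
path = [100, 50, 100, 50, 100]
print(final_value(path, rebalance=False))  # buy-and-hold: back to 100.0
print(final_value(path, rebalance=True))   # rebalancing captures the swings
```

Of course this says nothing about Bitcoin's expected return itself; rebalancing converts volatility around a flat trend into gains, at the cost of capping upside in a sustained rally.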
So I'm running through the Quantum Mechanics sequence, and am about 2/3 of the way through. Wanted to check in here to ask a few questions, and see if there aren't some hidden gotchas from people knowledgeable about the subject who have also read the sequence.
My biggest hangup so far has been understanding when it is that different quantum configurations sum, versus when they don't. All of the experiments from the earlier posts (such as distinct configurations) seem to indicate that configurations sum when they are in the "same" time and place....
I don't know anything about quantum computing, so please tell me if this idea makes sense... if you imagine many-worlds, can it help you develop better intuitions about quantum algorithms? Anyone tried that? Any results?
I assume an analogy: In mathematics, proper imagination can see you some results faster, even if you could get the same results by computation. For example it is easier to imagine a "sphere" than a "set of points with distance at most D from a given center C". You can see that an intersection of a sphere and a plane is a ...
Shameless self-promotion:
Recently, for a philosophy course on (roughly) the implications of AI for society, I wrote an essay on whether we should take fears about AI risks seriously, and I had the thought that it might be worth posting to LW discussion. Is there/would there be interest in such a thing? TBH, there's not a great deal of original content, but I'd still be interested in the comments of anyone who is interested.
LW Women: Submissions on Misogyny was moved to main, but the article doesn't show up as New, Promoted, or Recent.
I'm not sure if this is the right place for this, but I've just read a scary article that claims that "The financial system as a whole functions as a hostile AI", and I was wondering what LW thinks of that.
In Anki, LaTeX is rendered too large. Does anyone know an effective fix?
EDIT: I found one. In Anki, LaTeX is rendered to an image and from then on treated as one. Adding
img { zoom: 0.6; }
to a new line of the "Styling" section of the "Card Type" for whatever Note you're using rescales all the images in that card type. So provided you don't use LaTeX and images on the same Note then this fixes all your problems.
What can you usefully do with underutilised processing power? (E.g. spare computer and server time).
So far the best I can come up with is running Folding@home. But it seems like there should be a way to sell server space etc.
So far the best I can come up with is running Folding@home.
Remember the power consumption entailed: http://www.gwern.net/Charity%20is%20not%20about%20helping
Site suggestion:
When somebody attempts to post a comment with the words "why", "comment", and "downvoted", it should open a prompt directing them to an FAQ explaining most likely reasons for their downvote, and also warning them prior to actually submitting the comment that it's likely to be unproductive and just lead to more downvotes.
(Personally I think this site needs to have a little more patience with people asking these questions, as they almost always come from new users who are still getting accustomed to the community norms, but that's just me.)
If the advice of the welcome thread doesn't match the actual LW norms, we should change either the welcome thread or the norms.