Anyone want a LW Enhancement Suite?
If anyone cares, I could probably port this to work on LW without too much trouble. Optimistically it'd just involve opening up the source and replacing reddit.com with lesswrong.com. More realistically, there'd probably be a lot of baked-in assumptions about DOM structure that'd need to be updated to have the UI enhancements make sense.
Anyway, this is mostly just a straw poll to see how many others would be interested in such a thing.
Sticky threads?
It annoys me that there's no way to sticky a thread in the discussion section.
Therefore, I propose creating an LW wiki page called "Stickies", where sticky-worthy threads would be linked to. Would that be acceptable?
These are the threads I'm planning to add:
- the current Welcome to LW thread
- the current MoR Discussion thread
- the current Rationality Quotes thread (OK, they're posted in Main, but still...)
- the current Open Thread
- the current Media Thread
- the current 'What are you working on?' thread
ETA: Following a tip-off by peaigr, I re-purposed the Special Threads wiki page for this. (The stickies are in the 'periodic threads' section.) Now if there were a way to make this page more conspicuous...
Meanwhile, dbaupp has submitted a feature request.
The Singularity Institute needs remote researchers (writing skill not required)
The Singularity Institute needs researchers capable of doing literature searches, critically analyzing studies, and summarizing their findings. The fields involved are mostly psychology (biases and debiasing, effective learning, goal-directed behavior / self help), computer science (AI and AGI), technological forecasting, and existential risks.
Gwern's work (e.g. on sunk costs and spaced repetition) is near the apex of what we need, but you don't need to be as skilled as Gwern or write as much as he does to do most of the work that we need.
Pay is hourly and starts at $14/hr but that will rise if the product is good. You must be available to work at least 20 hrs/week to be considered.
Perks:
- Work from home, with flexible hours.
- Age and credentials are irrelevant; only the product matters.
- Get paid to research things you're probably interested in anyway.
- Contribute to human knowledge in immediately actionable ways. We need this research because we're about to act on it. Your work will not fall into the journal abyss that most academic research falls into.
If you're interested, apply here.
Why post this job ad on LessWrong? We need people with some measure of genuine curiosity.
Also see Scholarship: How to Do It Efficiently.
Mathematicians & mathletes: the Singularity Institute wants your strategic input!
The Singularity Institute is undergoing a series of important strategic discussions. There are many questions for which we wish we had more confident answers. We can get more confident answers on some of them by asking top-level mathematicians & mathletes (e.g. Putnam fellow, IMO top score, or successful academic mathematician / CS researcher).
If you are such a person and want to directly affect Singularity Institute strategy, contact me at luke@intelligence.org.
Thank you.
Now back to your regularly scheduled rationality programming...
"Politics is the mind-killer" is the mind-killer
Summary: I propose we somewhat relax our stance on political speech on Less Wrong.
Related: The mind-killer, Mind-killer
[META] ajax.googleapis.com
Apparently, by not unblocking scripts for "ajax.googleapis.com", I am unable to vote on LW. I generally dislike enabling scripting for domains that are used in many places -- unblocking Google APIs would unblock it everywhere, not just here -- so the result is that I am no longer voting. I suspect that I am not alone in this.
(Apparently I can't post without enabling it either. Looks like I'll have to make an exception and do the script-on-script-off dance after all. Whee.)
The Singularity Institute's Arrogance Problem
I intended Leveling Up in Rationality to communicate this:
Despite worries that extreme rationality isn't that great, I think there's reason to hope that it can be great if some other causal factors are flipped the right way (e.g. mastery over akrasia). Here are some detailed examples I can share because they're from my own life...
But some people seem to have read it and heard this instead:
I'm super-awesome. Don't you wish you were more like me? Yay rationality!
This failure (on my part) fits into a larger pattern of the Singularity Institute seeming too arrogant and (perhaps) being too arrogant. As one friend recently told me:
At least among Caltech undergrads and academic mathematicians, it's taboo to toot your own horn. In these worlds, one's achievements speak for themselves, so whether one is a Fields Medalist or a failure, one gains status purely passively, and must appear not to care about being smart or accomplished. I think because you and Eliezer don't have formal technical training, you don't instinctively grasp this taboo. Thus Eliezer's claim of world-class mathematical ability, in combination with his lack of technical publications, makes it hard for a mathematician to take him seriously, because his social stance doesn't pattern-match to anything good. Reading Eliezer's arrogance as evidence of technical cluelessness was one of the reasons I didn't donate until I met [someone at SI in person]. So for instance, your boast that at SI discussions "everyone at the table knows and applies an insane amount of all the major sciences" would make any Caltech undergrad roll their eyes; your standard of an "insane amount" seems to be relative to the general population, not relative to actual scientists. And posting a list of powers you've acquired doesn't make anyone any more impressed than they already were, and isn't a high-status move.
So, I have a few questions:
- What are the most egregious examples of SI's arrogance?
- On which subjects and in which ways is SI too arrogant? Are there subjects and ways in which SI isn't arrogant enough?
- What should SI do about this?
New SI publications design
The Singularity Institute's publications are badly and inconsistently formatted.
Our research associate Daniel Dewey has made us a nice-looking LaTeX template for them (example), but we need some help moving them from the original files into LaTeX. If you have a little experience with LaTeX and might be willing to help, please let me know!
luke [at] intelligence.org
Or, in general, please send an email to volunteers+subscribe@intelligence.org to be added to the volunteers email list.
Volunteer project requests go out only a few times a month, so we won't flood your inbox, and you never know: maybe a project sent to that list will be something you are in the mood to do and have the skills to do.
Singularity Institute Executive Director Q&A #2
Previously: Interview as a researcher, Q&A #1
This is my second Q&A as Executive Director of the Singularity Institute. I'll skip the video this time.
Singularity Institute Activities
Bugmaster asks:
...what does the SIAI actually do? You don't submit your work to rigorous scrutiny by your peers in the field... you either aren't doing any AGI research, or are keeping it so secret that no one knows about it... and you aren't developing any practical applications of AI, either... So, what is it that you are actually working on, other than growing the SIAI itself ?
It's a good question, and my own biggest concern right now. Donors would like to know: Where is the visible return on investment? How can I see that I'm buying existential risk reduction when I donate to the Singularity Institute?
SI has a problem, here, because it has done so much invisible work lately. Our researchers have done a ton of work that hasn't been written up and published yet; Eliezer has been writing his rationality books that aren't yet published; Anna and Eliezer have been developing a new rationality curriculum for the future "Rationality Org" that will be spun off from the Singularity Institute; Carl has been doing a lot of mostly invisible work in the optimal philanthropy community; and so on. I believe this is all valuable x-risk-reducing work, but of course not all of our supporters are willing to just take our word for it that we're doing valuable work. Our supporters want to see tangible results, and all they see is the Singularity Summit, a few papers a year, some web pages and Less Wrong posts, and a couple rationality training camps. That's good, but not good enough!
I agree with this concern, which is why I'm focused on doing things that happen to be both x-risk-reducing and visible.
First, we've been working on visible "meta" work that makes the Singularity Institute more transparent and effective in general: a strategic plan, a donor database ("visible" to donors in the form of thank-yous), a new website (forthcoming), and an annual report (forthcoming).
Second, we're pushing to publish more research results this year. We have three chapters forthcoming in The Singularity Hypothesis, one chapter forthcoming in The Cambridge Handbook of Artificial Intelligence, one forthcoming article on the difficulty of AI, and several other articles and working papers we're planning to publish in 2012. I've also begun writing the first comprehensive outline of open problems in Singularity research, so that interested researchers from around the world can participate in solving the world's most important problems.
Third, there is visible rationality work forthcoming. One of Eliezer's books is now being shopped to agents and publishers, and we're field-testing different versions of rationality curriculum material for use in Less Wrong meetups and classes.
Fourth, we're expanding the Singularity Summit brand, an important platform for spreading the memes of x-risk reduction and AI safety.
So my answer to the question is: "Yes, visible return on investment has been a problem lately due to our choice of projects. Even before I was made Executive Director, it was one of my top concerns to help correct that situation, and this is still the case today."
What if?
XiXiDu asks:
What would SI do if it became apparent that AGI is at most 10 years away?
This would be a serious problem because by default, AGI will be extremely destructive, and we don't yet know how to make AGI not be destructive.
What would we do if we thought AGI was at most 10 years away?
This depends on whether it's apparent to a wider public that AGI is at most 10 years away, or a conclusion based only on a nonpublic analysis.
If it becomes apparent to a wide variety of folks that AGI is close, then it should be much easier to get people and support for Friendly AI work, so a big intensification of effort would be a good move. If the analysis that AGI is 10 years away leads to hundreds of well-staffed and well-funded AGI research programs and a rich public literature, then trying to outrace the rest with a Friendly AI project becomes much harder. After an intensified Friendly AI effort, one could try to build up knowledge in Friendly AI theory and practice that could be applied (somewhat less effectively) to systems not designed from the ground up for Friendliness. This knowledge could then be distributed widely to increase the odds of a project pulling through, calling in real Friendliness experts, etc. But in general, a widespread belief that AGI is only 10 years away would be a much hairier situation than the one we're in now.
But if the basis for thinking AI was 10 years away was nonpublic (but nonetheless persuasive to supporters who have lots of resources), then it could be used to differentially attract support to a Friendly AI project, hopefully without provoking dozens of AGI teams to intensify their efforts. So if we had a convincing case that AGI was only 10 years away, we might not publicize this but would instead make the case to individual supporters that we needed to immediately intensify our efforts toward a theory of Friendly AI in a way that only much greater funding can allow.
Budget
MileyCyrus asks:
What kind of budget would be required to solve the friendly AI problem?
Large research projects always come with large uncertainties concerning how difficult they will be, especially ones that require fundamental breakthroughs in mathematics and philosophy like Friendly AI does.
Even a small, 10-person team of top-level Friendly AI researchers taking academic-level salaries for a decade would require tens of millions of dollars. And even getting to the point where you can raise that kind of money requires a slow "ramping up" of researcher recruitment and output. We need enough money to attract the kinds of mathematicians who are also being recruited by hedge funds, Google, and the NSA, and have a funded "chair" for each of them such that they can be prepared to dedicate their careers to the problem. That part alone requires tens of millions of dollars for just a few researchers.
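To make the scale concrete, here is a back-of-the-envelope sketch. The per-researcher costs are illustrative assumptions of my own, not SI budget figures:

```python
# Rough check of the "tens of millions" figure: 10 researchers, a decade,
# at an assumed fully loaded cost per researcher (salary plus overhead).
researchers = 10
years = 10
for yearly_cost in (150_000, 300_000):  # assumed USD per researcher per year
    total = researchers * years * yearly_cost
    print("$%s/yr each -> $%s over a decade" % (format(yearly_cost, ","), format(total, ",")))
# -> $15,000,000 to $30,000,000, i.e. tens of millions of dollars
```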
Other efforts like the Summit, Less Wrong, outreach work, and early publications cost money, and they work toward having the community and infrastructure required to start funding chairs for top-level mathematicians to be career Friendly AI researchers. This kind of work costs between $500,000 and $3 million per year, with more money per year of course producing more progress.
Predictions
Wix asks:
How much do members' predictions of when the singularity will happen differ within the Singularity Institute?
I asked some Singularity Institute staff members to answer a slightly different question, one pulled from the Future of Humanity Institute's 2011 machine intelligence survey:
Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.
In short, the survey participants' median estimates (excepting 5 outliers) for 10%/50%/90% were:
2028 / 2050 / 2150
Here are five of the Singularity Institute's staff members' responses, names unattached, for the years by which they would assign a 10%/50%/90% chance of HLAI creation, conditioning on no global catastrophe halting scientific progress:
- 2025 / 2073 / 2168
- 2030 / 2060 / 2200
- 2027 / 2055 / 2160
- 2025 / 2045 / 2100
- 2040 / 2080 / 2200
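For comparison with the FHI survey medians quoted above, the medians of these five responses take only a couple of lines to compute; this is just arithmetic on the numbers already listed, nothing more:

```python
from statistics import median

# The five staff responses above, as (10%, 50%, 90%) years.
responses = [
    (2025, 2073, 2168),
    (2030, 2060, 2200),
    (2027, 2055, 2160),
    (2025, 2045, 2100),
    (2040, 2080, 2200),
]

for label, column in zip(("10%", "50%", "90%"), zip(*responses)):
    print(label, median(column))
# -> 10% 2027, 50% 2060, 90% 2168 (vs. the FHI survey's 2028 / 2050 / 2150)
```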
Those are all the answers I had time to prepare in this round; I hope they are helpful!
[META] Trackbacks
When I was going through the sequences, I often found that reading about a fallacy in passing, like when it was hyperlinked in the middle of a sentence, really helped me get the idea.
I know that on the wiki, there is a feature where you can see all the trackbacks to a page.
Is there a way to do this for non-wiki pages? This could be useful even for non-LW pages.
'Next post' and 'Previous post' links for posts in a sequence
Less Wrong would be stickier if there were links at the bottom of each post in The Sequences to the next and previous posts in that sequence.
I just added those links for each post in my own sequences: The Science of Winning at Life, Rationality and Philosophy, and No-Nonsense Metaethics.
I can't do that for sequences written by somebody else. Perhaps one or more of the LW editors would be willing to start hacking away on that project?
Here's the algorithm I executed:
1. Open all the posts from one sequence, in order, in browser tabs.
2. Go to first post in the sequence.
3. Click 'Edit'.
4. Click 'HTML' and uncheck 'Word Wrap.'
5. Scroll to the bottom of the post (not counting notes and references) and paste in the following:
<p> </p>
<p align="right">Next post: <a href=""></a></p>
<p align="right">Previous post: <a href=""></a></p>
<p> </p>
<p> </p>
6. If post is first in sequence, remove 'Previous post' line.
7. If post is last in sequence, remove 'Next post' line.
8. Paste in URL and post title for remaining 'Next post' or 'Previous post' lines of HTML.
9. Click 'Update', then click 'Submit'.
10. If this is the last post in the sequence, return 0. Else, move to next post in sequence and go to step #3.
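If you'd rather not type the footer by hand for every post, the block below is a minimal sketch of how it could be generated for a whole sequence. The titles and URLs in it are placeholders, not real posts, and this isn't an official Less Wrong tool; you would still paste the output into each post's HTML editor manually.

```python
# Generate the 'Next post' / 'Previous post' footer HTML for each post in a
# sequence, given an ordered list of (title, url) pairs. Placeholder data only.

sequence = [
    ("First Post", "http://lesswrong.com/lw/aaa/first_post/"),
    ("Second Post", "http://lesswrong.com/lw/bbb/second_post/"),
    ("Third Post", "http://lesswrong.com/lw/ccc/third_post/"),
]

def footer_html(prev_post, next_post):
    """Build the footer block pasted at the bottom of a post (step 5 above)."""
    lines = ["<p> </p>"]
    if next_post:
        title, url = next_post
        lines.append('<p align="right">Next post: <a href="%s">%s</a></p>' % (url, title))
    if prev_post:
        title, url = prev_post
        lines.append('<p align="right">Previous post: <a href="%s">%s</a></p>' % (url, title))
    lines += ["<p> </p>", "<p> </p>"]
    return "\n".join(lines)

for i, (title, url) in enumerate(sequence):
    prev_post = sequence[i - 1] if i > 0 else None
    next_post = sequence[i + 1] if i + 1 < len(sequence) else None
    print("----- paste into: %s -----" % title)
    print(footer_html(prev_post, next_post))
```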
Tell me what you think of me
Time and time again, honest feedback has improved my life. I have sought it out on many specific occasions, but now I have a static, anonymous way for people to give me feedback — for any reason, at any time.
You can give me feedback on my personality, my conduct, or the organization for which I work by following this link right here.
Thank you. I apologize for making a discussion post that is all about me.
I operate by Crocker's Rules.
Volunteer needed to change the front page 'Featured Articles' each week
Less Wrong needs a volunteer to change the 'Featured Articles' once a week. I have detailed instructions on what to do (which wiki entry to edit, which articles have already been featured, which articles to feature in the future, etc.). I've been doing it so far, but not changing it as often as it should be changed. It takes about 5 minutes each time you do it.
Who is willing to do this?
LessWrong running very slow?
LessWrong pages are taking a long time to load. Today they are especially bad, to the point where if I make a comment the page times out before it is posted. Is this true for other people? Do those who run the site know the cause? Can it be fixed?
EDIT: Confirmed: It's not just me, it's probably everyone.
EDIT2: I also apologise for my appalling grammar in the title.
Who owns LessWrong?
The LessWrong wiki contains a biased and offensive entry on group selection. I edited the wiki page, to append some points representing an opposing view at the end. Eliezer removed my points, leaving only a link at the end. He said he thought my points were wrong, but would not say which points he thought were wrong, or why he thought they were wrong.
Is it reasonable for me to restore my changes over Eliezer's edit, since he is unwilling to give reasons for his edit? What sort of rights or privileges does Eliezer have over LW or LW wiki content?
(Please try not to turn this into a discussion of group selection.)
ADDED: Please go meta, folks. I am not trying to argue about this specific Wiki article. I am not asking for redress. Specifics about this wiki article are irrelevant. I am asking whether this is still a benevolent dictatorship.
The relevant questions are not what the appropriate form of debate is, or anything about this wiki article. The relevant questions are:
- Who owns the domain?
- Who created the Wiki?
- Who owns the code?
- Who pays for the servers?
- If someone is in charge, what rights do they reserve for themselves?
- At what point does the ratio of community contributions to Eliezer's contributions mean we have the right to claim some ownership?
The Wiki main page says, "The wiki about rationality that anyone who is logged in can edit". Apparently that is a lie. If I do not have as much right as Eliezer does to write a wiki post, I want that point explicitly spelled out.
How to incentivize people doing useful stuff on Less Wrong
Currently, LWers get +1 karma for a comment upvote, and +10 karma for a main post upvote. But clearly, there are other valuable things LWers could do for the community besides writing comments and posts. Writing isn't everyone's forte. Why not award karma for doing productive non-writing things? It's probably not optimal that karma, and the community status that comes with it, are awarded only for the thing that I and a few other people happen to be good at. For example, I really wish LW could award karma to programmers for improving LW.
The challenge is doing it fairly, in a way that doesn't alienate too many people. But there might be a workable way to do this, so let's explore.
Perhaps tasks could be assigned karma award amounts by LW editors (Nesov, Eliezer, Louie, etc.), or even just one person who is appointed as the Karma Genie.
Examples:
- Write a 5-page document describing how to use the Less Wrong virtual machine to hack new features into Less Wrong. 900 points.
- Add a Facebook 'Like' button to the left of the up-down vote buttons on every post. 700 points.
- Collect PDFs for every paper on debiasing thinking error X, upload the ZIP file to mediafire. 700 points.
- Write a single-page introduction to The Sequences that makes them easier to navigate and see the value of. 800 points.
- Launch a new LessWrong meetup group and hold at least three meetings. 1200 points.
Useful Things Volunteers Can Do Right Now
Per Kaj's suggestion, I'm posting my list of useful things volunteers can do right now. Without help, most of these things won't occur, because I need to be spending my time writing papers, promoting the Singularity Summit, collaborating with other researchers, improving Singularity Institute's transparency, etc.
Previously, I tried to have volunteers contact me so that I could assign people tasks, but that has become too time consuming (as Eliezer predicted), largely because (in my experience) the odds that a volunteer will actually perform a task given that they've agreed to perform it are very low.
So, I'll post my list here and hope that a few people self-organize to get some of them done. It's worth a shot! My sincere thanks to anyone who completes any task on this list.
- Translate the Singularity FAQ into other languages, besides English and Italian.
- I have about 8 fashion photos each from 3 minicampers, which need to be shown in random order to 5 straight females (in meatspace) who will judge which they prefer. I have the exact experimental design and the photos. Please email me [lukeprog at gmail] if you'd like to do this; it's perfect for somebody social.
- Begin to develop a list of AI technology predictions; a step on this path is to create a list of sources for AI technology predictions. We want to eventually be able to write up a report of correlates between predictions and the properties of prediction-makers at the time of their predictions.
- Find out how much money the U.S. government/military has spent researching machine ethics (e.g. via Ronald Arkin), and how much of that money was given to whom and for which projects (citing sources along the way).
- Work with XiXiDu to interview (via email) more AI researchers about AI risks; see here.
- Come up with ways to illustrate the idea of an intelligence explosion or Friendly AI with a static graphic or a very short animation; create those graphics or animations if possible.
- Make a list of additional media for IntelligenceExplosion.com (e.g. summit videos).
- Find a free tool that backs up all your Google Docs, and is available for Mac, Linux, or PC. (thanks Dreaded_Anomaly!)
- Research nootropics for rationality, especially for increasing cognitive reflectiveness / need for cognition. Make a list of recent, useful papers on the topic.
- Join the Less Wrong Public Goods Team google group and work on, for example, the project of making it much easier to add features to Less Wrong.
- There is no end to the useful things that you can do at SingularityVolunteers.org.
If you live in Berkeley and want to work directly with me on tasks, even better. I have a bottomless need for in-person personal assistants who want to do things that decrease existential risk, and you'll get to see the inner workings of an org currently making a run at being the world's leading independent transhumanist organization.
Questions for a Friendly AI FAQ
I've begun work (with a few others) on a somewhat comprehensive Friendly AI F.A.Q. The answers will be much longer and more detailed than in the Singularity FAQ. I'd appreciate feedback on which questions should be added.
1. Friendly AI: History and Concepts
1. What is Friendly AI?
2. What is the Singularity? [w/ explanation of all three types]
3. What is the history of the Friendly AI Concept?
4. What is nanotechnology?
5. What is biological cognitive enhancement?
6. What are brain-computer interfaces?
7. What is whole brain emulation?
8. What is general intelligence? [w/ explanation of why 'optimization power' may be less confusing than 'intelligence', which tempts anthropomorphic bias]
9. What is greater-than-human intelligence?
10. What is superintelligence, and what powers might it have?
2. The Need for Friendly AI
1. What are the paths to an intelligence explosion?
2. When might an intelligence explosion occur?
3. What are AI takeoff scenarios?
4. What are the likely consequences of an intelligence explosion? [survey of possible effects, good and bad]
5. Can we just keep the machine superintelligence in a box, with no access to the internet?
6. Can we just create an Oracle AI that informs us but doesn't do anything?
7. Can we just program machines not to harm us?
8. Can we program a machine superintelligence to maximize human pleasure or desire satisfaction?
9. Can we teach a machine superintelligence a moral code with machine learning?
10. Won’t some other sophisticated system constrain AGI behavior?
3. Coherent Extrapolated Volition
1. What is Coherent Extrapolated Volition (CEV)?
2. ...
4. Alternatives to CEV
1. ...
5. Open Problems in Friendly AI Research
1. What is reflective decision theory?
2. What is timeless decision theory?
3. How can an AI preserve its utility function throughout ontological shifts?
4. How can an AI have preferences over the external world?
5. How can an AI choose an ideal prior given infinite computing power?
6. How can an AI deal with logical uncertainty?
7. How can we elicit a utility function from human behavior and function?
8. How can we develop microeconomic models for self-improving systems?
9. How can temporal, bounded agents approximate ideal Bayesianism?
Consolidated link thread, September 2011
Recently the Discussion section has been full of link threads, most of them with a pretty low karma score and few if any comments. While many of them are interesting, I'd prefer to have fewer of them around. Right now they clutter up the discussion section so that it's getting hard to find the threads with actual discussion going on.
Therefore I'd suggest having regular link threads, in the same manner as rationality quotes and open threads. If you're only posting a link together with a brief description or excerpt, and it isn't something really really interesting, please post it as a comment in a link thread.
My intentions for my metaethics sequence
Recently a friend of mine told me that he and a few others were debating how likely it is that I've 'solved metaethics.' Others on this site have gotten the impression that I'm claiming to have made a fundamental breakthrough that I'm currently keeping a secret, and that's what my metaethics sequence is leading up to. Alas, it isn't the case. The first post in my sequence began:
A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then. I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'.) My upcoming sequence 'No-Nonsense Metaethics' will solve the part that can be solved, and make headway on the parts of metaethics that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.
The part I consider 'solved' is the part discussed in Conceptual Analysis and Moral Theory and Pluralistic Moral Reductionism. These posts represent an application of the lessons learned from Eliezer's free will sequence and his words sequence to the subject of metaethics.
I did this because Eliezer mostly skipped this step in his metaethics sequence, perhaps assuming that readers had already applied these lessons to metaethics to solve the easy problems of metaethics, so he could skip right to discussing the harder problems of metaethics. But I think this move was a source of confusion for many LWers, so I wanted to go back and work through the details of what it looks like to solve the easy parts of metaethics with lessons learned from Eliezer's sequences.
The next part of my metaethics sequence will be devoted to "bringing us all up to speed" on several lines of research that seem relevant to solving open problems in metaethics: the literature on how human values work (in brain and behavior), the literature on extracting preferences from what human brains actually do, and the literature on value extrapolation algorithms. For the most part, these literature sets haven't been discussed on Less Wrong despite their apparent relevance to metaethics, so I'm trying to share them with LW myself (e.g. A Crash Course in the Neuroscience of Human Motivation).
Technically, most of these posts will not be listed as being part of my metaethics sequence, but I will refer to them from posts that are technically part of my metaethics sequence, drawing lessons for metaethics from them.
After "bringing us all up to speed" on these topics and perhaps a couple others, I'll use my metaethics sequence to clarify the open problems in metaethics and suggest some places we can hack away at and perhaps make progress. Thus, my metaethics sequence aims to end with something like a Polymath Project set up for collaboratively solving metaethics problems.
I hope this clarifies my intentions for my metaethics sequence.
Call for Personal Volunteers
Those who wish to volunteer some of their time toward reducing existential risk and increasing our chances of a positive singularity can follow the directions on SingularityVolunteers.org.
And as a freshly hired Singularity Institute researcher, I also have my own list of tasks that, if completed by volunteers instead of myself, will speed along the delivery of the projects I'm working on: an 'FAI Open Problems' document, two papers bound for peer review, metaethics research, and more.
So if you'd like to help me out with any of my volunteer-doable tasks, please contact me: luke [at] singinst [dot] org.
Thanks!
How much is karma worth, after all?
It's been a couple of days since the funding plea, so I thought I'd take this chance to compare self-reported donations to short-term karma gains. Naturally, I voted on none of these comments. Note that after posting this, the karma on these posts will almost definitely change; the values here are for 27/8/11 at around 9:00 GMT.
So, the data:
- Kaj_Sotala ~172USD, 5 karma
- Rain 12000USD, 25 karma
- Nisan 100USD, 16 karma
- pengvado 10000USD, 36 karma
- JGWeissman 2000USD, 24 karma
- Benquo 1000USD, 18 karma
- AlexMennen 285USD, 7 karma; and 30USD, 2 karma
- wmorgan 1000USD, 13 karma
Note: two people (Kaj_Sotala and Rain) reported monthly commitments, but as far as I understand only the yearly pledge is matched, so for the purposes of this informal study I treat them as reporting X*12 USD donations, instead of X/month.
There's not enough data for an honest causal analysis (I tried), but there are a few observations one can make. Intuitively one expects karma to be determined by the donation amount, the duration of time since the posting, and some unknown error.
First observation: the users with the best USD/karma exchange rate made modest contributions early. Nisan came out best, with $6.25/karma — though some of this karma may be due also to the fantastic signal, on their part, that they overcame a rational hazard to make this donation. (Also, EY responded afterward, confounding the karmic flow with his wake.)
In this spirit, we now name "doing the least restrictive, obviously acceptable thing, instead of doing nothing while contemplating alternatives" Nisan's razor, (ニサンの剃刀, perhaps) unless it happens to have a better, previously-existing name.
Second observation: Hyperbolic discounting is alive and well. Those reporting monthly donations have karma below comparable one-shot donations, though both monthly data points did come slightly later than their one-shot counterparts.
- Third observation: Large donations are really inefficient at netting karma. pengvado paid about $277.78/karma; no one above 1000USD paid less than $50/karma.
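For anyone who wants to check the arithmetic, a few lines of Python reproduce the exchange rates from the figures listed above. AlexMennen's two reports are combined, and the script does nothing beyond dividing the numbers already given:

```python
# Reported (USD, karma) pairs from the list above.
donations = {
    "Kaj_Sotala": (172, 5), "Rain": (12000, 25), "Nisan": (100, 16),
    "pengvado": (10000, 36), "JGWeissman": (2000, 24), "Benquo": (1000, 18),
    "AlexMennen": (285 + 30, 7 + 2), "wmorgan": (1000, 13),
}
for name, (usd, karma) in sorted(donations.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    print("%-12s $%7.2f per karma point" % (name, usd / karma))
# Nisan comes out cheapest at $6.25/karma; Rain's $480.00/karma is the dearest.
```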
Naturally, there's little point to this analysis. If anyone is trying to maximize net karma by donating to SIAI, something is probably wrong with their priorities.
New Favicon
LW appears to have acquired a new favicon, "<X" in place of the prior "Lw". This change wasn't announced and I don't know what the new icon means. Can someone explain it to me?
Where to report bugs on the new site?
I assume it's the same as before, but I don't recall where that is and it doesn't seem to be listed on the about page anymore. One of my comments will not mark as read on my userpage even though it will everywhere else...
Akrasic Reasoning
This post is in a constant state of revision, similar to this post. This is mainly because I do not have a beta and this is based on many personal experiences that are unclear at times.
This subject has been touched on many times throughout LessWrong, because Akrasia is the most dangerous foe of any true follower of rationality. When you know you could be amazing but find yourself unable to change because of the havoc that feelings can play with your thoughts, you feel helpless, and I want to help you surpass that. I am beginning a journey to fight Akrasia directly in all its forms; in the past such journeys have been abandoned without much progress, and I'm hoping a different approach will help me succeed (or at least make new and different mistakes). In this mini-sequence of posts I plan to document my fight to push past the depressing weight of Akrasia and, as a tool to keep me on the path, to provide anti-Akrasia reports on my progress with different techniques that fellow LessWrongians can look back on and draw strength from in times of despair and laziness.
My name is Matthew Baker and I want to save the world.
I think most people share the feeling that the world should be saved and that only true sociopaths can discount the value of all sentient life. This is so important because the majority of people aren't able to defeat their innate Akrasic reasoning, ugh fields, and other factors that prevent them from functioning in a way that aligns with their beliefs. I think that if you believe in something, and you wish to be more rational towards the world then you should either push your beliefs towards the current state of reality or push reality towards your current state of beliefs.
When I was younger and sought something that I could devote effort to that would change the world for the better, I was quite disillusioned by the fact that nearly every cause relied on its innate biases to deal with the problems facing it. From political struggles to moral tribulation, humanity is very good at ignoring things that don't coincide with its worldview. I always sought to surpass that, but for a long time I failed to find anything to believe in that coincided with reality. Now that my skepticism is satisfied, I have to logically take a look at what is preventing me from promoting my beliefs, and the biggest obstacle is the Akrasia I described above.
My goals for this quest are varied yet connected. I don't intend to take them all on at once, but instead to phase them in over the upcoming month and see if I can find the limit of my ability to avoid wasting time.
- My goal is to make myself more fit and transition to eating healthier food. Right now I'm fairly skinny, and I want to build some muscle to match my height (6'1"): enough that I don't have trouble picking things up and carrying them without much outward signalling of effort. I'm not looking to become a bodybuilder or anything; I just want to optimize the vessel carrying my consciousness with better food and habits.
- My goal is to become more skilled socially. I rested on my social laurels for a long time and focused on associating with people who fit my views on set issues. For maximum success I will focus on general social group construction as I advance into my second year of college. I want to see how much fun and rationality I can spread if I focus on being skilled at gathering smart and interesting people into the fun vortex I can create around me.
- My goal is to get a substantially higher GPA than I did last semester. I spent very little time on school but managed to pull off a 3.1, which was lower than my first-semester GPA, and I want this trend to reverse as I spend more focused time on school and actually study for the first time in my life.
Things that prevent me from achieving my goals are mostly random web browsing and gaming, lots of ugh fields I've only recently been able to write down and start purging from my thought process, negative emotions that sap my willpower, and other factors currently unknown. Hopefully I will be able to surpass these problems with the power of self-reflection and sharing, classical conditioning, and positive substance use.
My goals for the upcoming week involve some social and fitness goals until school starts on the 20th. Hopefully I can get these partially phased in and be able to focus more on academia once I'm back up at school. For specific milestones, I want to dance closely with at least 1 girl at a rave I'm going to tonight up in LA, and I want to start working on pull-ups so I can get back up to my previous total (3) and start building from there.
I expect I'll have to deal with some social anxiety at the rave and some ugh fields towards the fitness, but hopefully this form of specific goal setting and reflection will work well. I will also have substances available for backup in case I fail to perform to my personal expectations. Combined, this should allow me to surpass my Akrasic Reasoning of the past for the sake of our combined future.
What can you gain from my efforts as fellow rationalists? Hopefully, once I've completed my journey I'll be able to explain my mind state well enough that you can learn from it and apply it to your own goals. When my mental state is low, reading about how someone else was able to push back up from a similarly bad state can be amazingly helpful, and I hope that I can provide that to others.
Tsuyoku Naritai, my friends!
P.S. If luck exists, I wish to gain more of it and believe in it, so wish me luck with my first top level post. :) Edit: It's now in Discussion until I see a surge of excitement towards the idea of this mini-sequence.
I can't see comments anymore -- what was recently changed?
I can't see the comments under posts anymore. This just started happening in the last day or so. (If I know a comment is there, I can still see it in the "recent comments" section.) I'm using IE8 at work (CORRECTION: IE7), where a lot of stuff is blocked for security reasons, but not in a way that kept me from seeing comments until recently.
What changes were recently made to the site that would cause this? It would be really nice to undo those.
Btw, I won't be able to see the replies to this post unless I find them under "recent comments", so ... if you want more specifics about how it looks on my end, you'll probably have to PM me.
A WordCloud Visual of LW Main
I created this word cloud based on http://lesswrong.com/promoted/.rss.

The words "Rational" and "Rationality" might trigger negative stereotypes for some people, but this image is positive and inspiring. I'm glad to see we are promoting top level posts on these issues and thought you would be too.
LW's front page freezes, hangs and bugs on Chrome
Browser is Chrome 12.0.742.122. It doesn't happen on Firefox. "It" is:
- sometimes I can't click on links and eventually I get Chrome's "dead tab" notification
- other times it keeps loading, even though while I wait for it to load I can go to, say, my user page and have it load right away
- twice I've gotten weird graphical bugs on it. Screenshots: 1 2
To reiterate, it only happens on the front page, the one you get when you go to lesswrong.com. Other pages are fine. Perhaps it's the map?
Call for volunteers: clean up the LW issue tracker
I'm looking for a volunteer (or volunteers).
We've let the Lesswrong issue tracker get out of hand - there are 99 open issues on it, and I think that many of them have been resolved by changes made since they were opened, some are less-than-awesome ideas, and the remainder are valid ideas that still need attention.
I'd love someone to volunteer to go through all of the open issues, close those that are complete or silly, and tag/prioritise those that remain. I'll need to give you the power to do that, so please nominate yourself in the comments.
Once the list is cleaned up, I think Trike can keep it organised.
ETA: Nic seems to have this well in hand - serious kudos, Nic. Thank you.
Recent site changes, Mon 4th July
We've pushed out some new changes, including a new front page. Promoted articles are now available at http://lesswrong.com/promoted/ and from the MAIN link in the top nav.
Most of the recent changes were requested here or here.
This is the place to comment on the new changes… but please remember that:
- we're watching this thread, and if we've done anything really horrible, you or someone else will tell us, and we'll fix it; so
- be polite, and explain clearly what we need to do to make you happy.
Front page "Featured Articles" are edited here. I humbly suggest that willing featured article editors nominate themselves in the comments, and everyone who doesn't get a lot of support forgets this link and leaves that page alone.
And if spam becomes a problem for the front page, the about page, or the comment markup help, poke a wiki admin and ask them to "protect" the special pages.
Meta: "Less Wrong" connotations?
I've always interpreted this site's title, "Less Wrong" (link is to description of origin of phrase), to name a goal that its members strive towards. After factoring in the slightly self-deprecating or human-deprecating connotations, the title's message sorta feels like "We know it's impossible to become completely right, but we should at least aim to be less wrong". But earlier today I realized that it could also be interpreted as an implicit comparison of social groups: "We are less wrong than others."[1][2] Does anyone know if there's a sizable minority of people who interpret "Less Wrong" to be primarily a boastful comparison of social groups? Edit: To make it clearer, I'm worried about potential negative effects via the social psychology of credibility and people maneuvering to uncharitably resolve unintended ambiguities to make contemptible caricatures of complex social groups whose perceived-members can thenceforth be disregarded. An example is how someone might be a lot more suspicious of an intellectual if that intellectual had been seen as somehow in league with those heartless Rand-worshipping Objectivists.
[1] I'm not sure if this connotation was intended, but I suspect that if it was, it was meant as a secondary and subtle message, and I suspect that if people were trying to sneak in secondary or subtle messages they would have been smart enough to realize that putting two negative-affect words next to each other to make a title isn't a good idea. But perhaps I underestimate the ratio of intellectualish cleverness to practical wisdom among those who named Less Wrong "Less Wrong".
[2] This kinda worried me because of all those somewhat-misguided[3] comment replies that start off with "For a site titled 'Less Wrong', you guys sure are wrong about [probably controversial topic].", where it's unclear what they think "Less Wrong" is supposed to mean.
[3] Here's one good exception to the general awfulness of this meme.
Please vote -- What topic would be best for an investigation and brief post?
Followup to: Systematic Search for Useful Ideas
I've set up a pairwise poll for this question and additional suggestions are welcome. My original proposal was to examine topics that haven't already been covered here, but instead of that, I'd like to ask people to consider the existing level of discussion on a topic in evaluating what would be "best."
ETA: There are currently over 500 pairs. You don't have to go through all of them -- answer as many or as few as you like.
Recent site changes
Recent site changes have generated more unhappiness than I expected. This post is a brief note to share resources that will make it easier for concerned site users to track what's happening and what we intend.
- First, know that we're listening. We'll make further site changes next week that will likely include some reversions.
- The official site issue tracker remains unchanged, but for the next week or so we'll work from this public Google Doc (just because it's lighter weight). Nothing on that document is a promise - just evidence of our current thinking. We'll strike out items on that list as we deliver them to our (private) staging server, and will roll them out onto the live site soon after.
- I've reached out to a small handful of SIAI and LessWrong heavyweights to track my balance as we make these changes. My feed should make it clear that I'm trying to act with calm rationality, but I'm obviously invested in the work we've shared to date and asking for some external help seems prudent.
- I'll track discussion on this post.
What can you teach us?
In a recent thread, SarahC said:
I'd prefer more posts that aim to teach something the author knows a lot about, as opposed to an insight somebody just thought of. Even something less immediately related to rationality -- I'd love, say, posts on science, or how-to posts, at the epistemic standard of LessWrong.
... so here's the place to float ideas around: is there an area you know a lot about? A topic you've been considering writing about? Here's the place to mention it!
From a poll on what people want to see more of, the most votes went to:
- Statistics
- Game Theory
- Direct advice for young people
- General cognitive enhancing tools (such as Adderall and N-Back)
- Information Theory
- Economics
Some that got fewer votes:
- Data visualization
- (Defence against the) Dark Arts
- Moral Philosophy (looks like that's being done already)
- Postmodernism
- Getting along in an irrational world
- Existential risks
- Medicine, Applied Human Biology
... but there are certainly many more things that would be interesting and useful to the community. So what can you teach us?
Seeking suggestions: Less Wrong Biology 101
I’ve been a reader and occasional commenter here for a while now, but previously have not had a solid idea of what I could or wanted to contribute to the community in posting. In light of recent comments stating an interest in more posts that offer concrete, factual information as well as remembering lukeprog’s call for such things in his Back to the Basics of Rationality post, I am considering a series of condensed posts about biology. As someone who has spent my formal education on biologically-focused engineering (bioengineering BS, now studying bioinformatics under a chemical engineering department for my PhD) but has always had the bulk of my friends in electrical engineering, computer science, and more traditional chemical engineering, I’ve gotten used to offering such condensed explanations whenever biology works its way into a discussion. From what I’ve seen on LW thus far, the community educational base leans more in those (non-biology) directions, so I believe this is a niche that could use filling.
Since biology is a rather broad subject, and you could all go read Wikipedia or a textbook if you wanted a very detailed survey course, my intent is to pick targeted topics that are relevant to current events and scientific developments. Each post would focus on one such event/Awesome New Study, discussing the biological background and potential implications, including either short explanations or links to the basics needed to understand the subject. If there are any political ties to the subject, I will withhold my explicit opinions on those aspects unless asked in the comments.
My questions, then, are the following:
- Is this something that people here would find interesting/useful in the general sense? (While I do enjoy talking to myself, doing so on this topic has gotten a bit old, so I really do want to know if no one really thinks this will be helpful.)
- How long/in-depth would you like? This question is intended to gauge what the ratio of background explanation to background links should be.
- And most importantly, what are some topics you would like to see discussed?
UPDATE: Having followed the comments so far and done some preliminary outlining, I'm leaning toward a more organized progression of topics that will still tie into current interests and developments, but not be centered on them. A bit more thought and putting ideas to text indicated that I could group the interest areas into biological categories (molecular, populations, developmental, neuro, etc) fairly easily, which would then allow for a 'foundations' post to introduce each major category, followed by posts that go over What We Know Now, Why We Care, and Where It's Going.
Official Less Wrong Redesign: Special pages
Following along from Louie's post and the discussion around it…
User:orthonormal suggested (and many seconded) a better Welcome section and improvements to the About page.
User:bentarm suggested doing something with the comment help link, User:jimrandomh suggested making it a wiki page, and User:Alicorn requested that it be more extensive.
Responding to the above suggestions, we propose adding functionality to Less Wrong. We'll add a special page type that collects its content from the wiki. We propose that /about, the home page, the comment help text, and each user's user page be of this type (I imagine that this change to the homepage may be controversial).
We propose that those pages link to:
- http://wiki.lesswrong.com/wiki/Lesswrong:Homepage
- http://wiki.lesswrong.com/wiki/Lesswrong:Aboutpage
- http://wiki.lesswrong.com/wiki/Lesswrong:Commentmarkuphelp
- and each user's wiki userpage if an exact name match exists.
These pages would cache wiki content for at least several hours, so would be fast to render. They would include a publicly usable "refetch content from the wiki" button (detailed placement, wording and design to follow) so that if the source page was spammed anyone could fix it on the wiki then clear the cache. If abuse became a problem we could easily "protect" those pages.
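As a rough illustration of the behaviour described above (cache for several hours, plus a public refetch button), here is a sketch in Python. It is not the actual Less Wrong implementation, and the page-to-wiki mapping below is just the proposal restated as data, with the cache lifetime and the comment-help path assumed for the example:

```python
import time
import urllib.request

CACHE_TTL_SECONDS = 6 * 60 * 60  # "at least several hours"; the exact value is a guess
_cache = {}  # wiki URL -> (fetched_at, html)

SPECIAL_PAGES = {
    "/": "http://wiki.lesswrong.com/wiki/Lesswrong:Homepage",
    "/about": "http://wiki.lesswrong.com/wiki/Lesswrong:Aboutpage",
    "/comment-help": "http://wiki.lesswrong.com/wiki/Lesswrong:Commentmarkuphelp",
}

def special_page_html(path, force_refetch=False):
    """Return wiki content for a special page, serving from cache unless it is
    stale or a user pressed the 'refetch content from the wiki' button."""
    url = SPECIAL_PAGES[path]
    cached = _cache.get(url)
    if cached and not force_refetch and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]
    html = urllib.request.urlopen(url).read().decode("utf-8")
    _cache[url] = (time.time(), html)
    return html
```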
Official Less Wrong Redesign: View defaults for new users
Following along from Louie's post and the discussion around it…
How should new visitors to Lesswrong see posts and comments ordered and filtered? (Assume we'll address the issue of how new visitors should be introduced to the site separately.) These will remain settings that are easily changed, but how should they start?
Current defaults:
Promoted posts, ordered by recency, make up our most prominent post list.
Comments are sorted by "Popular", which is Top with a very strong ageing of points (so recently voted comments rise to the top of the list, and very high voted comments fairly quickly drop away).
Options seeded in comments by me below, my karma balance at bottom. Please vote on at least one "Posts:" comment and one "Comments:" comment.
Proposal: consolidate meetup announcements before promotion
The Less Wrong feed is getting crowded with meetups rather than substantive posts. Hopefully, this should be fixed in the redesign, but one way to work around it in the meanwhile would be to make top-level posts announcing several meetups at once.
Folks would post meetups under the 'NEW' category, and each week or even every several days one of the meetup organizers could edit her post to announce all the meetups since the last consolidated post. This would greatly reduce the clutter while still getting meetups in the main feed. On the other hand, it would reduce average warning time before meetups, and the additional activation energy might deter some meetups.
If you have thoughts on the workability of this scheme, or an adjustment to make it workable, please comment below.
[HT: Anna Salamon]