MIRI's 2016 Fundraiser

18 So8res 25 September 2016 04:55PM

Our 2016 fundraiser is underway! Unlike in past years, we'll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):  

 


Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.


 

MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science that’s aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments:

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.
  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.
  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.  

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed -- our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually decisively surpass humans on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI's role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.
  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like — something closer to a credible roadmap — we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.
  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems — systems just capable enough to serve as useful levers for preventing AI accidents and misuse.
  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity -- take our time to dot every i and cross every t before we risk "locking in" design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.  

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.1 Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.2 Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. We’re uncertain about how many donations we'll receive between November and next September,3 but projecting from current trends, we expect about 4/5ths of our total donations to come from the fundraiser and 1/5th to come in off-fundraiser.4 Based on this, we have the following fundraiser goals:


Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.


Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.


Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. Past $1.25M we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.5


If we hit our growth and stretch targets, we’ll be able to execute several additional programs we’re considering with more confidence. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires. As always, you're invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.  

Donate Now

or

Pledge to Give

 

1 This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative. (back)

2 We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year. (back)

3 We're imagining continuing to run one fundraiser per year in future years, possibly in September. (back)

4 Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute's three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser. (back)

5 At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details. (back)

Attention! Financial scam targeting Less Wrong users

24 Viliam_Bur 14 May 2016 05:38PM

Recently, multiple suspicious user accounts were created on Less Wrong. These accounts don't post any content on the forum; instead, they are used only to send private messages to existing users.

Many users have received a copy of the same message, but different variants exist, too. Here are the examples I know about. If you have received a different variant, please post it in a comment below this article:

 

Hi good day. My boss is interested on donating to MIRI's project and he is wondering if he could send money through you and you donate to miri through your company and thus accelertaing the value created. He wants to use "match donations" as a way of donating thats why he is looking for people in companies like you. I want to discuss more about this so if you could see this message please give me a reply. Thank you!

 

hi. ive made 500k+ the last half year on esport betting and i can show proof. i was a great poker player before that so i have reason to believe i am good and wellsuited at this. i want to offer free education to one of the efw people that have their priorities straight in this world and will work towards minimising existential risk. the higher intelligence the better. ultimately i would like to offload some work to someone because currently i am gettin gquite a bit burnt out and i would like to study finance, and having someone take advantage of the incredible ineffeciencies in this area is of huge importance. i would like to discuss this with someone and how to make it real, and have exchange of thoughts on all of the aspects on how to best do it. i can post proof and make donations to miri to show im serious so that we or someone else could have a discussion about it

 

I don't yet know of anyone who replied and got scammed, so this is all based on indirect evidence. If you got scammed, please tell me. If you are ashamed, I can publish your story anonymously. Your story could help other potential victims.

Most likely, the scheme is the following:

  1. The scammer will send you money.
  2. Then they will ask some of the money back because they changed their mind, or they mistakenly sent you more than they wanted, or their financial situation suddenly changed, or whatever.
  3. After receiving the money from you, they will flag the original transaction as fraud, so they get back the money they originally sent you, plus the money you sent them back. Then they disappear, or it turns out they used a stolen identity, etc.

(Thanks to ChristianKl for explaining the system in the Open Thread.)

If you replied to the original message and are now already in the middle of the process, please inform your bank as soon as possible! Even if step 2 hasn't happened yet, so you can still get out without losing money, warning your bank about the scammer could help other potential victims.

 

Warning: If you have already received a check or a payment confirmation, and someone is asking you to send the overpayment back quickly, do not send anything. The check or the payment confirmation is fake, and the goal is to make you send money before you find out. (Thanks to qsz for explaining.)

2016 LessWrong Diaspora Survey Results

32 ingres 14 May 2016 05:38PM

Foreword:

As we wrap up the 2016 survey, I'd like to start by thanking everybody who took
the time to fill it out. This year we had 3083 respondents, more than twice the
number we had last year. (Source: http://lesswrong.com/lw/lhg/2014_survey_results/)
This seems consistent with the hypothesis that the LW community hasn't declined
in population so much as migrated into different communities. Being the *diaspora*
survey I had expectations for more responses than usual, but twice as many was
far beyond them.

Before we move on to the survey results, I feel obligated to put a few affairs
in order in regards to what should be done next time. The copyright situation
for the survey was ambiguous this year, and to prevent that from happening again
I'm pleased to announce that this year's survey questions will be released jointly
by me and Scott Alexander as Creative Commons licensed content. We haven't
finalized the details of this yet so expect it sometime this month.

I would also be remiss not to mention the large amount of feedback we received
on the survey. Some of which led to actionable recommendations I'm going to
preserve here for whoever does it next:

- Put free response form at the very end to suggest improvements/complain.

- Fix metaethics question in general, lots of options people felt were missing.

- Clean up definitions of political affiliations in the short politics section.
  In particular, 'Communist' has an overly aggressive/negative definition.

- Possibly completely overhaul short politics section.

- Everywhere that a non-answer is taken as an answer should be changed so that
  non answer means what it ought to, no answer or opinion. "Absence of a signal
  should never be used as a signal." - Julian Bigelow, 1947

- Give a definition for the singularity on the question asking when you think it
  will occur.

- Ask if people are *currently* suffering from depression. Possibly add more
  probing questions on depression in general since the rates are so extraordinarily
  high.

- Include a link to what cisgender means on the gender question.

- Specify if the income question is before or after taxes.

- Add charity questions about time donated.

- Add "ineligible to vote" option to the voting question.

- Adding some way for those who are pregnant to indicate it on the number of
  children question would be nice. It might be onerous however so don't feel
  obligated. (Remember that it's more important to have a smooth survey than it
  is to catch every edge case.)

And read this thread: http://lesswrong.com/lw/nfk/lesswrong_2016_survey/,
it's full of suggestions, corrections and criticism.

Without further ado,

Basic Results:

2016 LessWrong Diaspora Survey Questions (PDF Format)

2016 LessWrong Diaspora Survey Results (PDF Format, Missing 23 Responses)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Included)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Excluded)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Included, 13 Responses Filtered, Percentages)

2016 LessWrong Diaspora Survey Results Complete (Text Format, Null Entries Excluded, 13 Responses Filtered, Percentages)

2016 LessWrong Diaspora Survey Results Complete (HTML Format, Null Entries Excluded)

Our report system is currently on the fritz and isn't calculating numeric questions. If I'd known this earlier I'd have prepared the results for said questions ahead of time. Instead they'll be coming out later today or tomorrow. (EDIT: These results are now in the text format survey results.)

 

Philosophy and Community Issues At LessWrong's Peak (Write Ins)

Peak Philosophy Issues Write Ins (Part One)

Peak Philosophy Issues Write Ins (Part Two)

Peak Community Issues Write Ins (Part One)

Peak Community Issues Write Ins (Part Two)


Philosophy and Community Issues Now (Write Ins)

Philosophy Issues Now Write Ins (Part One)

Philosophy Issues Now Write Ins (Part Two)

Community Issues Now Write Ins (Part One)

Community Issues Now Write Ins (Part Two)

 

Rejoin Conditions

Rejoin Condition Write Ins (Part One)

Rejoin Condition Write Ins (Part Two)

Rejoin Condition Write Ins (Part Three)

Rejoin Condition Write Ins (Part Four)

Rejoin Condition Write Ins (Part Five)

 

CC-Licensed Machine Readable Survey and Public Data

2016 LessWrong Diaspora Survey Structure (License)

2016 LessWrong Diaspora Survey Public Dataset

(Note for people looking to work with the dataset: My survey analysis code repository includes a sqlite converter, examples, and more coming soon. It's a great way to get up and running with the dataset really quickly.)
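For readers who would rather query the public dataset with SQL directly, a minimal sketch of loading a CSV export into sqlite might look like the following. The file name, table name, and all-text schema are assumptions for illustration; the repository's own converter is the authoritative tool:

```python
import csv
import sqlite3

def csv_to_sqlite(csv_path, db_path, table="survey"):
    """Load a CSV survey export into a sqlite table, treating every column
    as text. Returns the open connection for querying."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    conn = sqlite3.connect(db_path)
    # Quote column names, since survey question codes may not be valid identifiers.
    cols = ", ".join('"%s" TEXT' % c.replace('"', '""') for c in header)
    conn.execute('CREATE TABLE IF NOT EXISTS "%s" (%s)' % (table, cols))
    placeholders = ", ".join("?" * len(header))
    conn.executemany('INSERT INTO "%s" VALUES (%s)' % (table, placeholders), rows)
    conn.commit()
    return conn
```

Once loaded, ordinary SQL (`SELECT COUNT(*) FROM survey WHERE ...`) works for quick cross-tabs without any further tooling.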

In depth analysis:

Analysis Posts

Part One: Meta and Demographics

Part Two: LessWrong Use, Successorship, Diaspora

Part Three: Mental Health, Basilisk, Blogs and Media

Part Four: Politics, Calibration & Probability, Futurology, Charity & Effective Altruism

Aggregated Data

Effective Altruism and Charitable Giving Analysis

Mental Health Stats By Diaspora Community (Including self dxers)

How Diaspora Communities Compare On Mental Health Stats (I suspect these charts are subtly broken somehow, will investigate later)

Improved Mental Health Charts By Obormot (Using public survey data)

Improved Mental Health Charts By Anonymous (Using full survey data)

Political Opinions By Political Affiliation

Political Opinions By Political Affiliation Charts (By anonymous)

Blogs And Media Demographic Clusters

Blogs And Media Demographic Clusters (HTML Format, Impossible Answers Excluded)

Calibration Question And Brier Score Analysis

More coming soon!

Survey Analysis Code

Some notes:

1. FortForecast on the communities section, Bayesed And Confused on the blogs section, and Synthesis on the stories section were all 'troll' answers designed to catch people who just put down everything. Somebody noted that the three 'FortForecast' users had the entire DSM split up between them; that's why.

2. Lots of people asked me for a list of all those cool blogs and stories and communities on the survey, they're included in the survey questions PDF above.

Public TODO:

1. Add more in depth analysis, fix the ones that decided to suddenly break at the last minute or I suspect were always broken.

2. Add a compatibility mode so that the current question codes are converted to older ones for third-party analyses that rely on them.

If anybody would like to help with these, write to jd@fortforecast.com

Several free CFAR summer programs on rationality and AI safety

18 AnnaSalamon 14 April 2016 02:35AM
CFAR will be running several free summer programs this summer which are currently taking applications.  Please apply if you’re interested, and forward the programs also to anyone else who may be a good fit!
continue reading »

Lesswrong 2016 Survey

28 Elo 30 March 2016 06:17PM

It’s time for a new survey!

Take the survey now


The details of the last survey can be found here.  And the results can be found here.

 

I posted a few weeks back asking for suggestions for questions to include on the survey.  As much as we’d like to include more of them, we all know what happens when we have too many questions. The following graph is from the last survey.


http://i.imgur.com/KFTn2Bt.png


(Source: JD’s analysis of 2014 survey data)


Two factors seem to predict if a question will get an answer:

  1. The position

  2. Whether people want to answer it. (Obviously)


People answer fewer questions as we approach the end. They also skip tricky questions. The least answered question on the last survey was "what is your favourite lw post, provide a link", which I assume was mostly skipped because of the effort required either in choosing a favourite or in finding a link to it. The second most skipped questions were the digit-ratio questions, which require more work (get out a ruler and measure) compared to the others. This is unsurprising.
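The position effect described here is easy to check from a response matrix. A minimal sketch of that calculation (the data shape, with `None` marking a skipped answer, is an assumption for illustration, not the survey's actual storage format):

```python
def answer_rates(responses):
    """Given a list of respondent rows, where each row holds one entry per
    question (None = skipped), return the fraction of respondents who
    answered each question position."""
    n = len(responses)
    width = len(responses[0])
    rates = []
    for i in range(width):
        answered = sum(1 for row in responses if row[i] is not None)
        rates.append(answered / n)
    return rates
```

Plotting these rates against question index is what produces a falloff chart like the one above: a downward slope with dips at high-effort questions.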


This year’s survey is almost the same size as the last one (though just a wee bit smaller).  Preliminary estimates suggest you should put aside 25 minutes to take the survey, however you can pause at any time and come back to the survey when you have more time.  If you’re interested in helping process the survey data please speak up either in a comment or a PM.


We’re focusing this year particularly on getting a glimpse of the size and shape of the LessWrong diaspora. With that in mind, if possible, please make sure that your friends (who might be less connected but still hang around in associated circles) get a chance to see that the survey exists, and if you’re up to it, encourage them to fill out a copy of the survey.


The survey is hosted and managed by the team at FortForecast, you’ll be hearing more from them soon. The survey can be accessed through http://lesswrong.com/2016survey.


Survey responses are anonymous in that you’re not asked for your name. At the end we plan to do an opt-in public dump of the data. Before publication, the row order will be scrambled; datestamps, IP addresses, and any other non-survey-question information will be stripped; and certain questions marked private, such as the (optional) sign-up for our mailing list, will not be included. It helps the most if you say yes, but we can understand if you don’t.


Thanks to Namespace (JD) and the FortForecast team, the Slack, the #lesswrong IRC on freenode, and everyone else who offered help in putting the survey together, special thanks to Scott Alexander whose 2014 survey was the foundation for this one.


When answering the survey, I ask that you be careful with the format of your answers if you want them to be useful. For example, if a question asks for a number, please reply with “4”, not “four”. Going by the last survey we may very well get thousands of responses, and cleaning them all by hand would cost a fortune on Mechanical Turk. (And that’s just for the ones we can put on Mechanical Turk!) Thanks for your consideration.
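A cleaning pass of the kind this is trying to avoid (mapping written-out numbers back to digits) can be sketched like this. The word list and function name are illustrative assumptions, not the survey's actual cleaning code, and the list only covers small numbers:

```python
# Illustrative mapping for small written-out numbers only.
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def parse_numeric(answer):
    """Return an int for answers like '4', ' 4 ', or 'four'; None if unparseable."""
    s = answer.strip().lower()
    if s in WORDS:
        return WORDS[s]
    try:
        return int(s)
    except ValueError:
        return None
```

Anything this sort of pass can't normalize automatically is what ends up needing human review, which is why consistent answer formats matter.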

 

The survey will be open until the 1st of May 2016.

 


Addendum from JD at FortForecast: During user testing we’ve encountered reports of an error some users get when they try to take the survey which erroneously reports that our database is down. We think we’ve finally stamped it out but this particular bug has proven resilient. If you get this error and still want to take the survey here are the steps to mitigate it:

 

  1. Refresh the survey, it will still be broken. You should see a screen with question titles but no questions.

  2. Press the “Exit and clear survey” button, this will reset your survey responses and allow you to try again fresh.

  3. Rinse and repeat until you manage to successfully answer the first two questions and move on. It usually doesn’t take more than one or two tries. We haven’t received reports of the bug occurring past this stage.


If you encounter this please mail jd@fortforecast.com with details. Screenshots would be appreciated but if you don’t have the time just copy and paste the error message you get into the email.

 

Take the survey now


Meta - this took 2 hours to write and was reviewed by the slack.


My Table of contents can be found here.

"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism"

8 PhilGoetz 29 March 2016 03:16PM

The lead article on everydayfeminism.com on March 25:

3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism

The scenario is always the same: I say we should abolish prisons, police, and the American settler state — someone tells me I’m irrational. I say we need decolonization of the land — someone tells me I’m not being realistic.... When those who are the loudest, the most disruptive — the ones who want to destroy America and all of the oppression it has brought into the world — are being silenced even by others in social justice groups, that is unacceptable.

(The link from "decolonization" is to "Decolonization is not a metaphor", to make it clear s/he means actually giving the land back to the Native Americans.)

I regularly see people accused of setting up a straw man when they describe how social justice activists act. This article shows that the bias of some SJWs against reason is impossible to strawman. The author argues at length that rationality is bad, and that justice arguments shouldn't be rational or be defended rationally. Ze is, or was, confused about what "rationality" means, but clearly now means it to include reason-based argumentation.

This isn't just some wacko's blog; it was chosen as the headline article for the website.  I had to click around to a few other articles to make sure it wasn't a parody site.

But it isn't just a sign of how irrational the social justice movement is—it has clues to how it got that way.

continue reading »

Weekly LW Meetups

1 FrankAdamek 19 February 2016 04:50PM

The Value of Those in Effective Altruism

14 Gleb_Tsipursky 17 February 2016 12:59AM

Summary/TL;DR: this piece offers Fermi Estimates of the value of those in EA, focusing on the distinctions between typical EA members and dedicated members (defined below). These estimates suggest that, compared to the current movement baseline, we should prioritize increasing the number of “typical” EA members and getting more non-EA people to behave like typical EA members, rather than getting typical EAs to become dedicated ones.

 

[Acknowledgments: Thanks to Tom Ash, Jon Behar, Ryan Carey, Denis Drescher, Michael Dickens, Stefan Schubert, Claire Zabel, Owen Cotton-Barratt, Ozzie Gooen, Linchuan Zheng, Chris Watkins, Julia Wise, Kyle Bogosian, Max Chapnick, Kaj Sotala, Taryn East, Kathy Forth, Scott Weathers, Hunter Glenn, Alfredo Parra, William Kiely, Jay Quigley, and others who prefer to remain anonymous for looking at various draft versions of this post. Thanks to their feedback, the post underwent heavy revisions. Any remaining oversights, as well as all opinions expressed, are my responsibility.]

 

This article is a follow-up to "Celebrating All Who Are In Effective Altruism"

continue reading »

Unofficial Canon on Applied Rationality

28 ScottL 15 February 2016 01:03PM

I have been thinking for a while that it would be useful if there were something similar to the Less Wrong Canon on Rationality for the CFAR material. Maybe it could be called the 'CFAR Canon on Applied Rationality'. To start on this, I have compiled a collection of descriptions of the CFAR techniques that I could find. I have separated the techniques into a few different sections. The sections and descriptions have mostly been written by me, with a lot of borrowing from other material, which means that they may not accurately reflect what CFAR actually teaches.

Please note that I have not attended any CFAR workshops, nor am I affiliated with CFAR in any way. My understanding of these techniques comes from CFAR videos, blogs and other websites which I have provided links to. If I have missed any important techniques or if my understanding of any of the techniques is incorrect or if you can provide links to the research that these techniques are based on, please let me know and I will update this post. 

Warning:

Learning this material based solely on the descriptions written here may be unhelpful, arduous, or even harmful. (See Duncan_Sabien's full comment for more information on this.) This is because the material is very hard to learn correctly. Most of the techniques below involve, in one way or another, volitionally overriding your instinctual, intuitive, or ingrained behaviours and thoughts. These are thoughts which not only often feel enticing and alluring, but that also often feel unmistakably right. If you are anything like me, then you should be very careful if you are trying to learn this material alone, for you will be prone to rationalization, taking shortcuts, and making mistakes.

My recommendations for trying to learn this material are:

  • learn it deeply and be sure to put what you have learnt into practice. It will often help if you take notes on what works for you and what doesn't. Also take note of the 'Mindsets and perspectives that help you in discovering potential situations that you could end up valuing' section as these are very important.
  • get the help of experts or other people who have already expended great amounts of effort in trying to implement this material like the people at cfar. This will save you a great amount of stress and effort as it will allow you to avoid a plethora of potential mistakes and inefficiencies. If you really want to learn this material, then you should deeply consider attending a CFAR workshop. 
  • get the help of or involve friends. As Duncan_Sabien has said:
    "It is better on almost every axis with instructors, mentors, friends, companions—people to help you avoid the biggest pitfalls, help you understand the subtle points, tease apart the interesting implications, shore up your motivation, assist you in seeing your own mistakes and weaknesses. None of that is impossible on your own, but it's somewhere between one and two orders of magnitude more efficient and more efficacious with guidance."
  • be dubious of your mental models. Beware thoughts and ideas that feel unequivocally right especially if they are solely located internally rather than also being expressed or formulated externally. 
  • You might want to bookmark this page instead of reading it all at once as it is quite long.

Sections:

continue reading »

Less Wrong Karma Chart Website

21 ScottL 15 February 2016 12:36PM

As a learning exercise, I wrote a web app which shows some charts on your karma score.

I recommend just going to the website and trying it out, but here is a description of it as well if you're interested. To use it just enter your user id in the text box at the top and then press the go button. It will show that it is loading and after a while five charts will be shown.

  • The first chart is a time series chart which shows you when you have posted a comment or discussion post. This chart allows you to zoom in on any desired area.
  • The second chart is a time series chart which shows you when you have posted a main post. This chart allows you to zoom in on any desired area.
  • The third chart shows your cumulative score. This chart allows you to zoom in on any desired area.
  • The fourth chart shows proportions, i.e. how many comments/posts you have made and how many were positive, neutral or negative. 
  • The fifth and final chart shows information on your total positive and negative scores. This chart allows you to drill down and see where your points have originated from, i.e. from comments or discussion posts or main posts. 
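The cumulative-score chart above is just a running sum over scraped items ordered by date. A minimal sketch of that computation (the `(timestamp, score)` data shape is an assumption about the scraped data, not the app's actual internals):

```python
def cumulative_scores(items):
    """items: iterable of (timestamp, score) pairs for comments and posts.
    Returns (timestamps, running_totals) sorted by time, ready to plot."""
    items = sorted(items)  # sort chronologically by timestamp
    times, totals = [], []
    total = 0
    for t, score in items:
        total += score
        times.append(t)
        totals.append(total)
    return times, totals
```

Feeding the two returned lists to any time-series charting library reproduces the shape of the third chart.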

Please note that it may take a while to load. I am scraping all the information from your user page. It shouldn't take too long though. On my computer it takes less than a minute to load all the information on my karma score, but it did take around half an hour to load Eliezer_Yudkowsky's karma information. YMMV depending on what computer you are using. It is not your score that determines how long it will take, but the number of comments and posts that you have made. I recommend using Chrome as I haven't tested it in any other browsers.

The karma calculated by LessWrong also might be slightly different from what my web app shows. For example, my web app shows Eliezer's total karma score as 290096, while on LessWrong it is 290174. I am pretty sure that my code is right, since I counted one example out by hand, and I do know of one bug in the LessWrong code that would affect the total score. There are also other things that LessWrong takes into account that I don't, e.g. karma awards and the troll tax. The difference shouldn't be too major, though, so it shouldn't be a big problem.

The website is hosted on github and the code can be found here.

TLDR: Try out the Less Wrong Karma Chart Website and let me know what you think or if you run into any issues.

View more: Next