
Against lone wolf self-improvement

27 cousin_it 07 July 2017 03:31PM

LW has a problem. Openly or covertly, many posts here promote the idea that a rational person ought to be able to self-improve on their own. Some of it comes from Eliezer's refusal to attend college (and Luke dropping out of his bachelor's, etc). Some of it comes from our concept of rationality, that all agents can be approximated as perfect utility maximizers with a bunch of nonessential bugs. Some of it is due to our psychological makeup and introversion. Some of it comes from trying to tackle hard problems that aren't well understood anywhere else. And some of it is just the plain old meme of heroism and forging your own way.

I'm not saying all these things are 100% harmful. But the end result is a mindset of lone wolf self-improvement, which I believe has harmed LWers more than any other part of our belief system.

Any time you force yourself to do X alone in your room, or blame yourself for not doing X, or feel isolated while doing X, or surf the web to feel some human contact instead of doing X, or wonder if X might improve your life but can't bring yourself to start... your problem comes from believing that lone wolf self-improvement is fundamentally the right approach. That belief is comforting in many ways, but noticing it is enough to break the spell. The fault wasn't with the operator all along. Lone wolf self-improvement doesn't work.

Doesn't work compared to what? Joining a class. With a fixed schedule, a group of students, a teacher, and an exam at the end. Compared to any "anti-akrasia technique" ever proposed on LW or adjacent self-help blogs, joining a class works ridiculously well. You don't need constant willpower: just show up on time and you'll be carried along. You don't get lonely: other students are there and you can't help but interact. You don't wonder if you're doing it right: just ask the teacher.

Can't find a class? Find a club, a meetup, a group of people sharing your interest, any environment where social momentum will work in your favor. Even an online community for X that will reward your progress with upvotes is much better than pursuing X completely alone. But any regular meeting you can attend in person, which doesn't depend on your enthusiasm to keep going, is exponentially more powerful.

Avoiding lone wolf self-improvement seems like embarrassingly obvious advice. But somehow I see people trying to learn X alone in their rooms all the time, swimming against the current for years, blaming themselves when their willpower isn't enough. My message to such people: give up. Your brain is right and what you're forcing it to do is wrong. Put down your X, open your laptop, find a class near you, send them a quick email, and spend the rest of the day surfing the web. It will be your most productive day in months.

Bi-Weekly Rational Feed

21 deluks917 24 June 2017 12:07AM

===Highly Recommended Articles:

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on S-risk. Probability: Artificial sentience, lack of communication, badly aligned AI and competitive pressures. Tractability: Relationship with x-risk. Going meta, cooperation. Neglectedness: little attention, people conflate x-risk with s-risk.

Projects I'd Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why it's always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.
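For readers who want a feel for the numbers, here is a minimal sketch (mine, not from the linked post) of how power behaves: with a small true effect and a small sample, the chance of detecting the effect at p < 0.05 is low, so a literature built on such studies is dominated by flukes.

```python
# Illustrative only: normal-approximation power of a two-sided,
# two-sample test for a standardized effect size (Cohen's d).
from scipy.stats import norm

def approx_power(effect_size, n_per_group, alpha=0.05):
    se = (2 / n_per_group) ** 0.5        # std. error of the mean difference (unit variance)
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    z_eff = effect_size / se             # noncentrality parameter
    # probability of landing outside the critical region
    return 1 - norm.cdf(z_crit - z_eff) + norm.cdf(-z_crit - z_eff)

print(round(approx_power(0.3, 20), 2))    # small effect, n=20 per group  -> ~0.16
print(round(approx_power(0.3, 175), 2))   # same effect, n=175 per group -> ~0.80
```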

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs, it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.

===Scott:

To Understand Polarization Understand The Extent Of Republican Failure by Scott Alexander - Conservative voters voted for “smaller government”, “fewer regulations”, and “less welfare state”. Their reps control most branches of the government. They got more of all three (probably thanks to cost disease).

Against Murderism by Scott Alexander - Three definitions of racism. Why 'Racism as motivation' fits best. The futility of blaming the murder rate in the USA on 'murderism'. Why it's often best to focus on motivations other than racism.

Open Thread Comment by John Nerst (SSC) - Bi-weekly public open thread. I am linking to a very interesting comment. The author made a list of the most statistically over-represented words in the SSC comment section.

Some Unsong Guys by Scott Alexander (Scratchpad) - Pictures of Unsong Fan Art.

Silinks Is Golden by Scott Alexander - Standard SSC links post.

What is Depression Anyway: The Synapse Hypothesis - Six seemingly distinct treatments for depression. How at least six can be explained by considering synapse generation rates. Skepticism that this method can be used to explain anything since the body is so inter-connected. Six points that confuse Scott and deserve more research. Very technical.

===Rationalist:

Idea For Lesswrong Video Tutoring by adamzerner (lesswrong) - Community Video Tutoring. Sign up to either give or receive tutoring. Teaching others is a good way to learn and lots of people enjoy teaching. Hopefully enough people want to learn similar things. This could be a great community project and I recommend taking a look.

Regulatory Arbitrage For Medical Research What I Know So Far by Sarah Constantin (Otium) - Economics of avoiding the USA/FDA. Lots of research is already conducted in other countries. The USA is too large of a market not to sell to. Investors aren't interested in cheap preliminary trials. Other options: supplements, medical tourism, clinic ships, cryptocurrency.

Responses To Folk Ontologies by Ferocious Truth - Folk ontology: Concepts and categories held by ordinary people with regard to an idea. Especially pre-scientific or unreflective ones. Responses: Transform/Rescue, Deny or Restrict/Recognize. Rescuing free will and failing to rescue personal identity. Rejecting objective morality. Restricting personal identity and moral language. When to use each approach.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why it's always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.

A Tangled Task Future by Robin Hanson - We need to untangle the economy to automate it. What tasks are heavily tangled and which are not. Ems and the human brain as a legacy system. Human brains are well-integrated and good at tangled tasks.

Epistemic Spot Check Update by Aceso Under Glass - Reviewing self-help books. Properties of a good self-help model: As simple as possible but not more so, explained well, testable on a reasonable timescale, seriously handles the fact that the techniques might not work, useful. The author would appreciate feedback.

Skin In The Game by Elo (BearLamp) - Armchair activism and philosophy. Questions to ask yourself about your life. Actually do the five minute exercise at the end.

Momentum Reflectiveness Peace by Sarah Constantin (Otium) - Rationality requires a reflective mindset; a willingness to change course and consider how things could be very different. Momentum, keeping things as they are except more so, is the opposite of reflectivity. Cultivating reflectiveness: rest, contentment, considering ideas lightly and abstractly. “Turn — slowly.”

The Fallacy Fork: Why It's Time To Get Rid Of by theFriendlyDoomer (r/SSC) - "The main thesis of our paper is that each and every fallacy in the traditional list runs afoul of the Fallacy Fork. Either you construe the fallacy in a clear-cut and deductive fashion, which means that your definition has normative bite, but also that you hardly find any instances in real life; or you relax your formal definition, making it defeasible and adding contextual qualifications, but then your definition loses its teeth. Your “fallacy” is no longer a fallacy."

Instrumental Rationality 1 Starting Advice by lifelonglerner (lesswrong) - "This is the first post in the Instrumental Rationality Sequence. This is a collection of four concepts that I think are central to instrumental rationality: caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations."

Concrete Ways You Can Help Make The Community Better by deluks917 (lesswrong) - Write more comments on blog posts and non-controversial posts on lw and r/SSC. Especially consider commenting on posts you agree with. People are more likely to comment if other people are posting high quality comments. Projects: Gaming Server, aggregate tumblr effort-posts, improve lesswrong wiki, leadership in local rationalist groups.

Daring Greatly by Bayesian Investor - Fairly positive book review, some chapters were valuable and it was an easy read. How to overcome shame and how it differs from guilt. Perfectionism vs healthy striving. If you stop caring about what others think you lose your capacity for connection.

A Call To Adventure by Robin Hanson - Meaning in life can be found by joining or starting a grand project. Two possible adventures: Promoting and implementing futarchy (decision making via prediction markets). Getting a real understanding of human motivation.

Thought Experiment Coarsegrained Vr Utopia by cousin_it (lesswrong) - Assume an AI is running a VR simulation that is hooked up to actual human brains. This means that the AI only has to simulate nature at a coarse-grained level. How hard would it be to make that virtual reality a utopia?

[The Rationalist-sphere and the Lesswrong Wiki](http://lesswrong.com/r/discussion/lw/p4y/the_rationalistsphere_and_the_less_wrong_wiki/) - What's next for the Lesswrong wiki. A distillation of Lesswrong. Fully indexing the diaspora. A list of communities. Spreading rationalist ideas. Rationalist Research.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Where Do Hypotheses Come From by c0rw1n (lesswrong) - Link to a 25 page article. "Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where the candidate hypotheses are explicitly available, whereas tasks in similar domains requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes’ rule."

The Precept Of Universalism by Hivewired - "Universality, the idea that all humans experience life in roughly the same way. Do not put things or ideas above people. Honor and protect all peoples." Eight points expanding on how to put people first and honor everyone.

We Are The Athenians Not The Spartans by wubbles (lesswrong) - "Our values should be Athenian: individualistic, open, trusting, enamored of beauty. When we build social technology, it should not aim to cultivate values that stand against these. High trust, open, societies are the societies where human lives are most improved."

===EA:

Updating My Risk Estimate of Geomagnetic Big One by Open Philanthropy - Risk from magnetic storms caused by the sun. "I have raised my best estimate of the chance of a really big storm, like the storied one of 1859, from 0.33% to 0.70% per decade. And I have expanded my 95% confidence interval for this estimate from 0.0–4.0% to 0.0–11.6% per decade."

Links by GiveDirectly - Eight Media articles on Cash Transfers, Basic Income and Effective Altruism.

Are GiveWell's Top Charities The Best Option For Every Donor by The GiveWell Blog - Why GiveWell-recommended charities are a good option for most donors. Which donors have better options: Donors with lots of time, high trust in a particular institution or values different from GiveWell's.

A New President of GWWC by Giving What We Can - Julia Wise is the new president of Giving What We Can.

Angst Ennui And Guilt In Effective Altruism by Gordon (Map and Territory) - Learning about existential risk can cause psychological harm. Guilt about being unable to help solve X-risk. Akrasia. Reasons to not be guilty: comparative advantage, ability is unequally distributed.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on S-risk. Probability: Artificial sentience, lack of communication, badly aligned AI and competitive pressures. Tractability: Relationship with x-risk. Going meta, cooperation. Neglectedness: little attention, people conflate x-risk with s-risk.

Update On Sepsis Donations Probably Unnecessary by Sarah Constantin (Otium) - Sarah C had asked people to crowdfund a sepsis RCT. The trial will probably get funded by charitable foundations. Diminishing returns. Finding good giving opportunities is hard and talking to people in the know is a good way to find things out.

What Is Valuable About Effective Altruism by Owen_Cotton-Barratt (EA forum) - Why should people join EA? The impersonal and personal perspectives. Tensions and synergies between the two perspectives. Bullet point conclusions for researchers, community leaders and normal members.

QALYs/$ Are More Intuitive Than $/QALYs by ThomasSittler (EA forum) - QALYs/$ are preferable to $/QALYs. Visual representations on graphs. Avoiding small numbers by re-normalizing to QALYs/$10K.
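To make the re-normalization concrete (hypothetical numbers, not taken from the linked post): a program that buys one QALY for $2,500 can be expressed either way, and the per-$10K form avoids tiny decimals:

$$ \frac{\$2{,}500}{\text{QALY}} \;=\; 0.0004\ \frac{\text{QALYs}}{\$} \;=\; 4\ \frac{\text{QALYs}}{\$10\text{K}} $$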

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Cash is King by GiveDirectly - Eight media articles about Effective Altruism and Cash transfers.

Separating GiveWell and the Open Philanthropy Project by The GiveWell Blog - The GiveWell perspective. Context for the sale. Effect on donors who rely on GiveWell. Organization changes at GiveWell. Steps taken to sell Open Phil assets. The new relationship between GiveWell and Open Phil.

Open Philanthropy Project is Now an Independent Organization by Open Philanthropy - The evolution of Open Phil. Why Open Phil should split from GiveWell. LLC structure.

Projects I'd Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

===Politics and Economics:

No, US School Funding Is Actually Somewhat Progressive by Random Critical Analysis - Many people think that wealthy public school districts spend more per pupil. This information is outdated. Within most states spending is higher on disadvantaged students. This is despite the fact that school funding is mostly local. Extremely thorough with loads of graphs.

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs, it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

Greece Gdp Forecasting by João Eira (Lettuce be Cereal) - Transforming the Data. Evaluating the Model with Exponential Smoothing, Bagged ETS and ARIMA. The regression results and forecast.

Links 9 by Artir (Nintil) - Economics, Psychology, Artificial Intelligence, Philosophy and other links.

Amazon Buying Whole Foods by Tyler Cowen - Quotes from Matt Yglesias, Alex Tabarrock, Ross Douthat and Tyler. “Dow opens down 10 points. Amazon jumps 3% after deal to buy Whole Foods. Walmart slumps 7%, Kroger plunges 16%”

Historical Returns Market Portfolio by Tyler Cowen - From 1960 to 2015 the global market portfolio realized a compounded real return of 4.38% with a std of 11.6%. Investors beat savers by 3.24%. Link to the original paper.
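As a rough illustration of what that compounding means (my back-of-the-envelope arithmetic, not a figure from the linked paper), a 4.38% real return sustained over the 55 years from 1960 to 2015 multiplies purchasing power by roughly a factor of ten:

$$ (1.0438)^{55} \approx 10.6 $$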

Trust And Diversity by Bryan Caplan - Robert Putnam's work is often cited as showing the costs of diversity. However, Putnam's work shows the negative effect of diversity on trust is rather modest. On the other hand, Putnam found multiple variables that are much more correlated with trust (such as home ownership).

Why Optimism is More Rational than Pessimism by TheMoneyIllusion - Splitting 1900-2017 into Good and Bad periods. We learn something from our mistakes. Huge areas where things have improved long term. Top 25 movies of the 21st Century. Artforms in decline.

Is Economics Science by Noah Smith - No one knows what a Science is. Theories that work (4 examples). The empirical and credibility revolutions. Why we still need structural models. Ways economics could be more scientific. Data needs to kill bad theories. Slides from Noah's talk are included and worth reading but assume familiarity with the economics profession.

===Misc:

Clojure Concurrency And Blocking With Coreasync by Eli Bendersky - Concurrent applications and blocking operations using core.async. Most of the article compares threads and go-blocks. Lots of code and well presented test results.

Optopt by Ben Kuhn - Startup options are surprisingly valuable once you factor in that you can quit if the startup does badly. A mathematical model of the value of startup options and the optimal time to quit. The ability to quit raised the option value by over 50%. The sensitivity of the analysis with respect to parameters (opportunity cost, volatility, etc).

Epistemic Spot Check: The Demon Under The Microscope by Aceso Under Glass - Biography of the man who invented sulfa drugs, the early antibacterial treatments which were replaced by penicillin. Interesting fact checks of various claims.

Sequential Conversion Rates by Chris Stucchio - Estimating success rates when you have noisy reporting. The article is a sketch of how the author handled such a problem in practice.

Set Theory Problem by protokol2020 - Bring down ZFC. Aleph-zero spheres and Aleph-one circles.

Connectome Specific Harmonic Waves On Lsd by Qualia Computing - Transcript and video of a talk on neuroimaging the brain on LSD. "Today thanks to the recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex."

Approval Maximizing Representations by Paul Christiano - Representing images. Manipulating representations. Iterative and compound encodings. Compressed representations. Putting it all together and bootstrapping reinforcement learning.

Travel by Ben Kuhn - Advice for traveling frequently. Sleeping on the plane and taking redeyes. Be robust. Bring extra clothes, medicine, backup chargers and things to read when delayed. Minimize stress. Buy good luggage and travel bags.

Learning To Cooperate, Compete And Communicate by OpenAI - Competitive multi-agent models are a step towards AGI. An algorithm for centralized learning and decentralized execution in multi-agent environments. Initial Research. Next Steps. Lots of visuals demonstrating the algorithm in practice.

Openai Baselines Dqn by OpenAI - "We’re open-sourcing OpenAI Baselines, our internal effort to reproduce reinforcement learning algorithms with performance on par with published results." Best practices we use for correct RL algorithm implementations. First release: DQN and three of its variants, algorithms developed by DeepMind.

Corrigibility by Paul Christiano - Paul defines the sort of AI he wants to build; he refers to such systems as "corrigible". Paul argues that a sufficiently corrigible agent will become more corrigible over time. This implies that friendly AI is not a narrow target but a broad basin of attraction. Corrigible agents prefer to build other agents that share the overseer's preferences, not their own. Predicting that the overseer wants the agent to turn off when he hits the off-button is not complicated relative to being deceitful. Comparison with Eliezer's views.

G Reliant Skills Seem Most Susceptible To Automation by Freddie deBoer - Computers already outperform humans in g-loaded domains such as Go and Chess. Many g-loaded jobs might get automated. Jobs involving soft or people skills are resilient to automation.

Persona 5: Spoiler Free Review - Persona games are long but deeply worthwhile if you enjoy the gameplay and the story. Persona 5 is much more polished but Persona 3 has a more meaningful story and more interesting decisions. Tips for Maximum Enjoyment of Persona 5. Very few spoilers.

Sea Problem by protokol2020 - A fun problem. Measuring sea level rise.

===Podcast:

83 The Politics Of Emergency by Waking Up with Sam Harris - Fareed Zakaria. "His career as a journalist, Samuel Huntington's "clash of civilizations," political partisanship, Trump, the health of the news media, the connection between Islam and intolerance"

On Risk, Statistics, And Improving The Public Understanding Of Science by 80,000 Hours - A lifetime of communicating science. Early career advice. Getting people to intuitively understand hazards and their effect on life expectancy.

Ed Luce by Tyler Cowen - The Retreat of Western Liberalism "What a future liberalism will look like, to what extent current populism is an Anglo-American phenomenon, Modi’s India, whether Kubrick, Hitchcock, and John Lennon are overrated or underrated, and what it is like to be a speechwriter for Larry Summers."

Thomas Ricks by EconTalk - Thomas Ricks' book Churchill and Orwell. Overlapping lives and the fight to preserve individual liberty.

The End Of The World According To Isis by Waking Up with Sam Harris - Graeme Wood. His experience reporting on ISIS, the myth of online recruitment, the theology of ISIS, the quality of their propaganda, the most important American recruit to the organization, the roles of Jesus and the Anti-Christ in Islamic prophecy, free speech and the ongoing threat of jihadism.

Jason Khalipa by Tim Ferriss - "8-time CrossFit Games competitor, a 3-time Team USA CrossFit member, and — among other athletic feats — he has deadlifted 550 pounds, squatted 450 pounds, and performed 64 pullups at a bodyweight of 210 pounds."

Dario Amodei, Paul Christiano & Alex Ray by 80,000 Hours - 80K Hours released a detailed guide to careers in AI policy. "We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far." Transcript included.

Don Boudreaux Emergent Order by EconTalk - "Why is it that people in large cities like Paris or New York City sleep peacefully, unworried about whether there will be enough bread or other necessities available for purchase the next morning? No one is in charge--no bread czar. No flour czar."

Tania Lombrozo On Why We Evolved The Urge To Explain by Rational Speaking - "Research on what purpose explanation serves -- i.e., why it helps us more than our brains just running prediction algorithms. Tania and Julia also discuss whether simple explanations are more likely to be true, and why we're drawn to teleological explanations"

Rescuing the Extropy Magazine archives

18 Deku-shrub 01 July 2017 02:25PM

This is possibly of more interest to old-school Extropians. You may be aware that the defunct Extropy Institute's website is very slow and broken, and certainly inaccessible to newcomers.

Anyhow, I have recently pieced together most of the early publications (1988-1996) of 'Extropy: Vaccine For Future Shock', later 'Extropy: Journal of Transhumanist Thought', as part of mapping the history of Extropianism.

You'll find some really interesting very early articles on neural augmentation, transhumanism, libertarianism, AI (featuring Eliezer), radical economics (featuring Robin Hanson of course) and even decentralised payment systems.

Along with the ExI mailing list, which is not yet wikified, it provides great insight into early radical technological thinking, from an era mostly known for the early hacker movement.

Let me know your thoughts/feedback!

https://hpluspedia.org/wiki/Extropy_Magazines

Becoming stronger together

14 b4yes 11 July 2017 01:00PM

I want people to go forth, but also to return.  Or maybe even to go forth and stay simultaneously, because this is the Internet and we can get away with that sort of thing; I've learned some interesting things on Less Wrong, lately, and if continuing motivation over years is any sort of problem, talking to others (or even seeing that others are also trying) does often help.

But at any rate, if I have affected you at all, then I hope you will go forth and confront challenges, and achieve somewhere beyond your armchair, and create new Art; and then, remembering whence you came, radio back to tell others what you learned.

Eliezer Yudkowsky, Rationality: From AI to Zombies

If you want to go fast, go alone. If you want to go far, go together.

African proverb (possibly just made up)

About a year ago, a secret rationalist group was founded. This is a report of what the group did during that year.

The Purpose

“Rationality, once seen, cannot be unseen,” are words that resonate with all of us. Having glimpsed the general shape of the thing, we feel like we no longer have a choice. I mean, of course we still have an option to think and act in stupid ways, and we probably do it a lot more than we would be willing to admit! We just no longer have an option to do it knowingly without feeling stupid about it. We can stray from the way, but we cannot pretend anymore that it does not exist. And we strongly feel that more is possible, both in our private lives and for society in general.

Less Wrong is the website and the community that brought us together. Rationalist meetups are a great place to find smart, interesting, and nice people; awesome people to spend your time with. But feeling good was not enough for us; we also wanted to become stronger. We wanted to live awesome lives, not just to have an awesome afternoon once in a while. But many participants seemed to be there only to enjoy the debate. Or perhaps they were too busy doing important things in their lives. We wanted to achieve something together; not just as individual aspiring rationalists, but as a rationalist group. To make peer pressure a positive force in our lives; to overcome akrasia and become more productive, to provide each other feedback and to hold each other accountable, to support each other. To win, both individually and together.

The Group

We are not super secret really; some people may recognize us by reading this article. (If you are one of them, please keep it to yourself.) We just do not want to be unnecessarily public. We know who we are and what we do, and we are doing it to win at life; trying to impress random people online could easily become a distraction, a lost purpose. (This article, of course, is an exception.) This is not supposed to be about specific individuals, but an inspiration for you.

We started as a group of about ten members, but for various reasons some people soon stopped participating; seven members remained. We feel that the current number is probably optimal for our group dynamic (see Parkinson's law), and we are not recruiting new members. We have a rule “what happens in the group, stays in the group”, which allows us to be more open to each other. We seem to fit together quite well, personality-wise. We desire to protect the status quo, because it seems to work for us.

But we would be happy to see other groups like ours, and to cooperate with them. If you want to have a similar kind of experience, we suggest starting your own group. Being in contact with other rationalists, and holding each other accountable, seems to benefit people a lot. CFAR also tries to keep their alumni in regular contact after the rationality workshops, and some have reported this as a huge added value.

To paint a bit more specific picture of us, here is some summary data:

  • Our ages are between 20 and 40, mostly in the middle of the interval.
  • Most of us, but not all, are men.
  • Most of us, but not all, are childless.
  • All of us are of majority ethnicity.
  • Most of us speak the majority language as our first language.
  • All of us are atheists; most of us come from atheist families.
  • Most of us have middle-class family background.
  • Most of us are, or were at some moment, software developers.

I guess this is more or less what you could have expected, if you are already familiar with the rationalist community.

We share many core values, but have some different perspectives, which adds value and guards against groupthink. We have entrepreneurs, employees, students, and unemployed bums; the ratio changes quite often. It is probably the combination of all of us having a good sense of epistemology, but different upbringings, educations and professions, that makes supporting each other and giving advice more effective (i.e. beyond the usual benefits of the outside view); there have been plenty of situations which were trivial for one member, but not for another.

Some of us knew each other for years before starting the group, even before the local Less Wrong meetups. Some of us met the others at the meetups. And finally, some of us talked to some other members for the first time after joining the group. It is surprising how well we fit, considering that we didn’t apply any membership filter (although we were prepared to); people probably filtered themselves by their own interest, or a lack thereof, to join this kind of a group, specifically with the productivity and accountability requirements.

We live in different cities. About once a month we meet in person; typically before or after the local Less Wrong meetup. We spend a weekend together. We walk around the city and debate random stuff in the evening. In the morning, we have a “round table” where each of us provides a summary of what they did during the previous month, and what they are planning to do during the following month; about 20 minutes per person. That takes a lot of time, and you have to be careful not to go off-topic too often.

In between meetups, we have a Slack team that we use daily. Various channels for different topics; the most important one is a “daily log”, where members can write briefly what they did during the day, and optionally what they are planning to do. In addition to providing extra visibility and accountability, it helps us feel like we are together, despite the geographical distances.

Besides mutual accountability, we are also fans of various forms of self-tracking. We share tips about tools and techniques, and show each other our data. Journaling, time tracking, exercise logging, step counting, finance tracking...

Even before starting the group, most of us were interested in various productivity systems: Getting Things Done, PJ Eby; one of us even wrote and sold their own productivity software.

We do not share a specific plan or goal, besides “winning” in general. Everyone follows their own plan. Everything is voluntary; there are no obligations or punishments. Still, some convergent goals have emerged.

Also, good habits seem to be contagious, at least in our group. If a single person was doing some useful thing consistently, eventually the majority of the group seems to pick it up, whether it is related to productivity, exercise, diet, or finance.

Exercise

All of us exercise regularly. Now it seems like obviously the right thing to do. Exercise improves your health and stamina, including mental stamina. For example, the best chess players exercise a lot, because it helps them stay focused and keep thinking for a long time. Exercise increases your expected lifespan, which should be especially important for transhumanists, because it increases your chances of surviving until the Singularity. Exercise also makes you more attractive, creating a halo effect that brings many other benefits.

If you don’t consider these benefits worth at least 2 hours of your time a week, we find it difficult to consider you a rational person who takes their ideas seriously. Yes, even if you are busy doing important things; the physical and mental stamina gained from exercising is a multiplier to whatever you are doing in the rest of your time.

Most of us lift weights (see e.g. StrongLifts 5×5, Alan Thrall); some of us even have a power rack and/or treadmill desk at home. Others exercise using their body weight (see Convict Conditioning). Exercising at home saves time, and in the long term also money. Muscle mass correlates with longevity, in addition to the effect of exercise itself; and having more muscle allows you to eat more food. Speaking of which...

Diet

Most of us are, mostly or completely, vegetarian or vegan. Ignoring the ethical aspects and focusing only on health benefits, there is a lot of nutrition research summarized in the book How Not to Die and on the website NutritionFacts.org. The short version is that a whole-food vegan diet seems to work best, but you really should look into the details. (Not all vegan food is automatically healthy; there is also vegan junk food. It is important to eat a lot of unprocessed vegetables, fruit, nuts, flax seeds, broccoli, beans. Read the book, seriously. Or download the Daily Dozen app.) We often share tasty recipes when we meet.

We also helped each other research food supplements, and actually find the best and cheapest sources. Most of us take extra B12 to supplement the vegan diet, as well as creatine monohydrate and vitamin D3; some of us also use omega-3, broccoli sprouts, and a couple of other things that are generally aimed at health and longevity.

Finance

We strategize and brainstorm career decisions or just debug office politics. Most of us are software developers. This year, one member spent nine months learning how to program (using Codecademy, Codewars, and freeCodeCamp at the beginning; reading tutorials and documentation later); as a result their income more than doubled, and they got a job they can do fully remotely.

Recently we started researching cryptocurrencies and investing in them. Some of us started doing P2P lending.

Personal life

Many of us are polyamorous. We openly discuss sex and body image issues in the group. We generally feel comfortable sharing this information with each other; women say they do not feel the typical chilling effects.

Summary

Different members report different benefits from their membership in the group. Some quotes:

“During the first half of the year, my life was more or less the same. I was already very productive before the group, so I kept the same habits, but benefited from sharing research. Recently, my life changed more noticeably. I started training myself to think of more high-leverage moves (inspired by a talk on self-hypnosis). This changed my asset allocation, and my short-term career plans. I realize more and more that I am very much monkey see, monkey do.”

“Before stumbling over the local Less Wrong meetup, I had been longing (and looking) for people who shared, or even just understood, my interest and enthusiasm for global, long-term, and meta thinking (what I now know to be epistemic rationality). After the initial thrill of the discovery had worn off however, I soon felt another type of dissonance creeping up on me: "Wait, didn't we agree that this was ultimately about winning? Where is the second, instrumental half of rationality, that was supposedly part of the package?" Well, it turned out that the solution to erasing this lingering dissatisfaction was to be found in yet a smaller subgroup.

So, like receiving a signal free of interference for the first time, I finally feel like I'm in a "place" where I can truly belong, i.e. a tribe, or at least a precursor to one, because I believe that things hold the potential to be way more awesome still, and that just time alone may already be enough to take us there.

On a practical level, the speed of adoption of healthy habits is truly remarkable. I've always been able to generally stick to any goals and commitments I've settled on, however the process of convergence is just so much faster and easier when you can rely on the judgment of other epistemically trustworthy people. Going at full speed is orders of magnitude easier when multiple people illuminate the path (i.e. figure out what is truly worth it), while simultaneously sharing the burdens (of research, efficient implementation, trial-and-error, etc.)”

“Now I'm on a whole-food vegan diet and I exercise 2 times a week, and I also improved in introspection and solving my life problems. But most importantly, the group provides me companionship and emotional support; for example, starting a new career is a lot easier in the presence of a group where reinventing yourself is the norm.”

“It usually takes grit and willpower to change if you do it alone; on the other hand, I think it's fairly effortless if you're simply aligning your behavior with a preexisting strong group norm. I used to eat garbage, smoke weed, and have no direction in life. Now I lift weights, eat ~healthy, and I learned programming well enough to land a great job.

The group provides existential mooring; it is a homebase out of which I can explore life. I don't think I'm completely un-lost, but instead of being alone in the middle of a jungle, I'm at a friendly village in the middle of a jungle.”

“I was already weightlifting and eating vegan, but got motivated to get more into raw and whole foods. I get confronted more with math, programming and finance, and can broaden my horizon. Sharing daily tasks in Slack helps me to reflect about my priorities. I already could discuss many current career and personal challenges with the whole group or individuals.”

“I started exercising regularly, and despite remaining an omnivore I eat much more fresh vegetables now than before. People keep telling me that my body shape improved a lot during this year. Other habits did not stick (yet).”

“Finding a tribe of sane people in an insane world was a big deal for me, now I feel more self-assured and less alone. Our tribe has helped me to improve my habits—some more than others (for example, it has inspired me to buy a power-rack for my living room and start weightlifting daily, instead of going to the gym). The friendly bragging we do among our group is our way of celebrating success and inspires me to keep going and growing.”

Random

Despite having met each other thanks to Less Wrong, most of us do not read it anymore, because our impression is that “Less Wrong is dead”. We do read Slate Star Codex.

From other rationalist blogs, we really liked the article about Ra, and we discussed it a lot.

The proposal of a Dragon Army evoked mixed reactions. On one hand, we approve of rationalists living closer to each other, and we want to encourage fellow rationalists to try it. On the other hand, we don’t like the idea of living in a command hierarchy; we are adults, and we all have our own projects. Our preferred model would be living close to each other; optimally in the same apartment building with some shared communal space, but also with a completely self-contained unit for each of us. So far our shared living happened mostly by chance, but it always worked out very well.

Jordan Peterson and his Self-Authoring Suite is very popular with about half of the group.

What next?

Well, we are obviously going to continue doing what we are doing now, hopefully even better than before, because it works for us.

You, dear reader, if you feel serious about becoming stronger and winning at life, but are not yet a member of a productive rationalist group, are encouraged to join one or start one. Geographical distances are annoying, but Slack helps you overcome the intervals between meetups. Talking to other rationalists can be a lot of fun, but accountability can make the difference between productivity and mere talking. Remember: “If this is your first night at fight club, you have to fight!”

Even seemingly small things, such as doing an exercise or adding some fiber to your diet, accumulate over time and can increase your quality of life a lot. The most important habit is the meta-habit of creating and maintaining good habits. And it is always easier when your tribe is doing the same thing.

Any questions? It may take some time for our hive mind to generate an answer, and in case of too many or too complex questions we may have to prioritize. Don’t feel shy, though. We care about helping others.

 

(This account was created for the purpose of making this post, and after a week or two it will stop being used. It may be resurrected after another year, or maybe not. Please do not send private messages; they will most likely be ignored.)

In praise of fake frameworks

13 Valentine 11 July 2017 02:12AM

Related to: Bucket errors, Categorizing Has Consequences, Fallacies of Compression

Followup to: Gears in Understanding


I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.


I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the "pitfalls" can even sometimes be epistemically useful.


Here I want to share why. This is for two reasons:


  • I think fake framework use is a wonderful skill. I want it represented more in rationality in practice. Or, I want to know where I'm missing something, and Less Wrong is a great place for that.

  • I'm building toward something. This is actually a continuation of Gears in Understanding, although I imagine it won't be at all clear here how. I need a suite of tools in order to describe something. Talking about fake frameworks is a good way to demo tool #2.


With that, let's get started.


Idea for LessWrong: Video Tutoring

13 adamzerner 23 June 2017 09:40PM

Update 7/9/17: I propose that Learners individually reach out to Teachers, and set up meetings. It seems like the most practical way of getting started, but I am not sure and am definitely open to other ideas. Other notes:

  • There seems to be agreement that the best way to do this is individualized guidance, rather than lectures and curriculums. Eg. the Teacher "debugging" the Learner. Assuming that approach, it is probably best for the number of Learners in a session to be small.
  • Consider that it may make sense for you to act as a Teacher, even if you don't have a super strong grasp of the topic. For example, I know a decent amount about computer science, but don't have a super strong grasp of it. Still, I believe it would be valuable for me to teach computer science to others. I can definitely offer value to people with no CS background. And for people who do have a CS background, there could be value in us taking turns teaching/learning, and debugging each other.
  • We may not be perfect at this in the beginning, but let's dive in and see what we can do! I think it'd be a good idea to comment on this post with what did/didn't work for you, so we as a group could learn and improve.
  • I pinned http://lesswrong.com/r/discussion/lw/p69/idea_for_lesswrong_video_tutoring/ to #productivity on the LessWrongers Slack group.

Update 6/28/17: With 14 people currently interested, it does seem that there's enough to get started. However, I'd like to give it a bit more time and see how much overall interest we get.

Idea: we coordinate to teach each other things via video chat.

  • We (mostly) all like learning. Whether it be for fun, curiosity, a stepping stone towards our goals.
  • My intuition is that there's a lot of us who also enjoy teaching. I do, personally.
  • Enjoyment aside, teaching is a good way of solidifying one's knowledge.
  • Perhaps there would be positive unintended consequences. Eg. socially.
  • Why video? a) I assume that medium is better for education than simply text. b) Social and motivational benefits, maybe. A downside to video is that some may find it intimidating.
  • It may be nice to evolve this into a group project where we iteratively figure out how to do a really good job teaching certain topics.
  • I see the main value in personalization, as opposed to passive lectures/seminars. Those already exist, and are plentiful for most topics. What isn't easily accessible is personalization. With that said, I figure it'd make sense to have about 5 learners per teacher.

So, this seems like something that would be mutually beneficial. To get started, we'd need:

  1. A place to do this. No problem: there's Hangouts, Skype, https://talky.io/, etc.
  2. To coordinate topics and times.

Personally, I'm not sure how much I can offer as far as doing the teaching. I worked as a web developer for 1.5 years and have been teaching myself computer science. I could be helpful to those unfamiliar with those fields, but probably not too much help for those already in the field and looking to grow. But I'm interested in learning about lots of things!

Perhaps a good place to start would be to record in some spreadsheet, a) people who want to teach, b) what topics, and c) who is interested in being a Learner. Getting more specific about who wants to learn what may be overkill, as we all seem to have roughly similar interests. Or maybe it isn't.

If you're interested in being a Learner or a Teacher, please add yourself to this spreadsheet.

Bi-Weekly Rational Feed

11 deluks917 09 July 2017 07:11PM

===Highly Recommended Articles:

Just Saying What You Mean Is Impossible by Zvi Moshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.

In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: Lacking sympathy, few good defaults. Defenses: It's very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world, prepare for the failure modes. Modernity has big upsides, some people will make better choices than the traditional rules allow.

My Current Thoughts On MIRI's Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.

Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.

Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.

The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.

On Dragon Army by Zvi Moshowitz - Long response to many quotes from "Dragon Army Barracks". Duncan's attitude to criticism. Tyler Durden shouldn't appeal to Duncan. Authoritarian group houses haven't been tried. Rationalists undervalue exploration. Loneliness and doing big things. The pendulum model of social progress. Sticking to commitments even when it's painful. Saving face when you screw up. True Reliability: The bay is way too unreliable but Duncan goes too far. Trust and power dynamics. Pragmatic criticism of the charter.

Without Belief In A God But Never Without Belief In A Devil by Lou (sam[]zdat) - The nature of mass movements. The beats and the John Birchers. The taxonomy of the frustrated. Horseshoe theory. The frustrated cannot derive satisfaction from action; something else has to fill the void. Poverty, work and meaning. Mass movements need to sow resentment. Hatred is the strongest unifier. Modernity inevitably causes justified resentment. Tocqueville, Polanyi, Hoffer and Scott's theories. Helpful and unhelpful responses.

On The Effects Of Inequality On Economic Growth by Artir (Nintil) - Most of the article tries to explain and analyze the economic consensus on whether inequality harms growth. A very large number of papers are cited and discussed. A conclusion that the effect is at most small.

===Scott:

Two Kinds Of Caution by Scott Alexander - Sometimes boring technologies (ex container ships) wind up being far more important than flashy tech. However Scott argues that often the flashy tech really is important. There is too much contrarianism and not enough meta-contrarianism. AI risk.

Open Road by Scott Alexander - Bi-weekly public open thread. Some messages from Scott Alexander.

To The Great City by Scott Alexander - Scott's Karass is in San Francisco. He is going home.

Open Thread 78.75 by Scott Alexander - Bi-weekly public open thread.

Why Are Transgender People Immune To Optical Illusions by Scott Alexander - Scott's community survey showed, with a huge effect size, that transgender individuals are less susceptible to the spinning mask and dancer illusions. Trans people suffer from dissociative disorders at a high rate. Connections between the two phenomena and NMDA. Commentary on the study methodology.

Contra Otium On Individualism by Scott Alexander (Scratchpad) - Eight point summary of Sarah's defense of individualism. Scott is terrified the marketplace of ideas doesn't work and his own values aren't memetically fit.

Conversation Deliberately Skirts The Border Of Incomprehensibility by Scott Alexander - Communication is often designed to be confusing so as to preserve plausible deniability.

===Rationalist:

Rethinking Reality And Rationality by mindlevelup - Productivity is almost a solved problem. Much current rationalist research is very esoteric. Finally grokking effective altruism. Getting people good enough at rationality that they are self correcting. Pedagogy and making research fields legible.

The Power Of Pettiness by Sarah Perry (ribbonfarm) - "These emotions – pettiness and shame – are the engines driving epistemic progress" Four virtues: Loneliness, ignorance, pettiness and overconfidence.

Irrationality is in the Eye of the Beholder by João Eira (Lettuce be Cereal) - Is eating a chocolate croissant on a diet always irrational? Context, hidden motivations and the curse of knowledge.

The Abyss Of Want by AellaGirl - The infinite regress of 'Asking why'. Taking acid and ego death. You can't imagine the experience of death. Coming back to life. Wanting to want things. Humility and fake enlightenment.

Epistemic Laws Of Motion by SquirrelInHell - Newton's three laws re-interpreted in terms of psychology and people's strategies. A worked example using 'physics' to determine if someone will change their mind. Short and clever.

Against Lone Wolf Self-improvement by cousin_it (lesswrong) - Lone wolf self-improvement is hard. Too many rationalists attempt it for cultural and historical reasons. It's often better to take a class or find a group.

Fictional Body Language by Eukaryote - Body language in literature is often very extreme compared to real life. Emojis don't easily map to irl body language. A 'random' sample of how emotion is represented in American Gods, Earth and Lirael. Three strategies: Explicitly describing feelings vs describing actions vs metaphors.

Bayesian Probability Theory As Extended Logic A by ksvanhorn (lesswrong) - Cox's theorem is often cited to support that Bayesian probability is the only valid fundamental method of plausible reasoning. A simplified guide to Cox's theorem. The author presents their own paper, which uses weaker assumptions than Cox's theorem. The author's full paper and a more detailed exposition of Cox's theorem are linked.
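For context, the standard conclusion that Cox-style arguments aim at (this is the textbook statement, not anything specific to the linked paper) is that any plausibility measure satisfying the desiderata must obey the product and sum rules, i.e. be probability:

$$ P(A, B \mid X) = P(A \mid B, X)\,P(B \mid X), \qquad P(A \mid X) + P(\lnot A \mid X) = 1 $$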

Steelmanning The Chinese Room Argument by cousin_it (lesswrong) - A short thought experiment about consciousness and inferring knowledge from behavior.

Ideas On A Spectrum by Elo (BearLamp) - Putting ideas like 'selfishness' on a spectrum. Putting yourself and others on the spectrum. People who give you advice might disagree with you about where you fall on the spectrum. Where do you actually stand?

A Post Em Era Hint by Robin Hanson - In past ages there were pairs of innovations that enabled the emulation age without changing the growth rate. Forager: Reasoning and language. Farmer: Writing and math. Industrial: Computers and Digital Communication. What will the em-age equivalents be?

Zen Koans by Elo (BearLamp) - Connections between koans and rationalist ideas. A large number of koans are included at the end of the post. Audio of the associated meetup is included.

Fermi Paradox Resolved by Tyler Cowen - Link to a presentation. Don't just multiply point estimates. Which Drake parameters are uncertain. The Great filter is probably in the past. Lots of interesting graphs and statistics. Social norms and laws. Religion. Eusocial society.

Developmental Psychology In The Age Of Ems by Gordan (Map and Territory) - Brief intro to the Age of Em. Farmer values. Robin's approach to futurism. Psychological implications of most ems being middle aged. Em conservatism and maturity.

Call To Action by Elo (BearLamp) - Culmination of a 21 article series on life improvement and getting things done. A review of the series as a whole and thoughts on moving forward.

Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.

Onemagisterium Bayes by tristanm (lesswrong) - Toolbox-ism is the dominant mode of thinking today. Downsides of toolbox-ism. Desiderata that imply Bayesianism. Major problems: Assigning priors and encountering new hypotheses. Four minor problems. Why the author is still a strong Bayesian. Strong Bayesians can still use frequentist tools. AI Risk.

Selfconscious Ideology by casebash (lesswrong) - Lesswrong has a self conscious ideology. Self conscious ideologies have major advantages even if any given self-conscious ideology is flawed.

Intellectuals As Artists by Robin Hanson - Many norms function to show off individual impressiveness: Conversations, modern songs, taking positions on diverse subjects. Much intellectualism is optimized for status gains, not finding truth.

Just Saying What You Mean Is Impossible by Zvi Moshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.

In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: Lacking sympathy, few good defaults. Defenses: It's very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world, prepare for the failure modes. Modernity has big upsides, some people will make better choices than the traditional rules allow.

Forget The Maine by Robin Hanson - Monuments are not optimized for reminding people to do better. Instead they largely serve as vehicles for simplistic ideology.

The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.

On Dragon Army by Zvi Moshowitz - Long response to many quotes from "Dragon Army Barracks". Duncan's attitude to criticism. Tyler Durden shouldn't appeal to Duncan. Authoritarian group houses haven't been tried. Rationalists undervalue exploration. Loneliness and doing big things. The pendulum model of social progress. Sticking to commitments even when it's painful. Saving face when you screw up. True Reliability: The bay is way too unreliable but Duncan goes too far. Trust and power dynamics. Pragmatic criticism of the charter.

===AI:

Updates To The Research Team And A Major Donation by The MIRI Blog - MIRI received a $1 million donation. Two new full-time researchers. Two researchers leaving. Medium-term financial plans.

Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.

Why Don't Ai Researchers Panic by Bayesian Investor - AI researchers predict a 5% chance of "extremely bad" (extinction level) events, why aren't they panicking? Answers: They are thinking of less bad worst cases, optimism about counter-measures, risks will be easy to deal with later, three "star theories" (MIRI, Paul Christiano, GOFAI). More answers: Fatal pessimism and resignation. It would be weird to openly worry. Benefits of AI-safety measures are less than the costs. Risks are distant.

Strategic Implications Of Ai Scenarios by Tobias Baumann (EA forum) - Questions and topics: Advanced AI timelines. Hard or soft takeoff? Goal alignment? Will advanced AI act as a single entity or a distributed system? Implications for estimating the EV of donating to AI-safety.

Tool Use Intelligence Conversation by The Foundational Research Institute - A dialogue. Comparisons between humans and chimps/lions. The value of intelligence depends on the available tools. Defining intelligence. An addendum on "general intelligence" and factors that make intelligence useful.

Self-modification As A Game Theory Problem by cousin_it (lesswrong) - "If I'm right, then any good theory for cooperation between AIs will also double as a theory of stable self-modification for a single AI, and vice versa." An article with mathematical details is linked.

Looking Into Ai Risk by Jeff Kaufman - Jeff is trying to decide if AI risk is a serious concern and whether he should consider working in the field. Jeff's AI-risk reading list. A large comment section with interesting arguments.

===EA:

Ea Marketing And A Plea For Moral Inclusivity by MichaelPlant (EA forum) - EA markets itself as being about poverty reduction. Many EAs think other topics are more important (far future, AI, animal welfare, etc). The author suggests becoming both more inclusive and more openly honest.

My Current Thoughts On Miris Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.

How Can We Best Coordinate As A Community by Ben Todd (EA forum) - 'Replaceability' is a bad reason not to do direct work, lots of positions are very hard to fill. Comparative Advantage and division of labor. Concrete ways to boost productivity: 5 minute favours, Operations roles, Community infrastructure, Sharing knowledge and Specialization. EA Global Video is included.

Deciding Whether to Recommend Fistula Management Charities by The GiveWell Blog - "An obstetric fistula, or gynecologic fistula, is an abnormal opening between the vagina and the bladder or rectum." Fistula management, including surgery. Open questions and uncertainty particularly around costs. Our plans to partner with IDinsight to answer these questions.

Allocating the Capital by GiveDirectly - Eight media links on Give Directly, Basic Income and Cash Transfers.

Testing An Ea Networkbuilding Strategy by remmelt (EA forum) - Pivot from supporting EA charities to cooperating with EA networks. Detailed goals, strategy, assumptions, metrics, collaborators and example actions.

How Long Does It Take To Research And Develop A Vaccine by (EA forum) - How long it takes to make a vaccine. Literature review. Historical data on how long a large number of vaccines took to develop. Conclusions.

Hi Im Luke Muehlhauser Ama About Open by Luke Muehlhauser (EA forum) - Animal and computer consciousness. Luke wrote a report for the Open Philanthropy Project on consciousness. Lots of high quality questions have been posted.

Hidden Cost Digital Convenience by Innovations for Poverty - Moving from in person to digital micro-finance can harm saving rates in developing countries. Reduction in group cohesion and visible transaction fees. Linked paper with details.

Projects People And Processes by Open Philanthropy - Three approaches used by donors and decision makers: Choose from projects presented by experts, defer near-fully to trusted individuals, or establish systematic criteria. Pros and cons of each. Open Phil's current approach.

Effective Altruism An Idea Repository by Onemorenickname (lesswrong) - Effective altruism is less of a closed organization than the author thought. Building a better platform for effective altruist idea sharing.

Effective Altruism As Costly Signaling by Raemon (EA forum) - " 'a bunch of people saying that rich people should donate to X' is a less credible signal than 'a bunch of people saying X thing is important enough that they are willing to donate to it themselves.' "

The Person Affecting Philanthropists Paradox by MichaelPlant (EA forum) - Population ethics. The value of creating more happy people as opposed to making pre-existing people happy. Application to the question of whether to donate now or invest and donate later.

Oops Prize by Ben Hoffman (Compass Rose) - Positive norms around admitting you were wrong. Charity Science publicly admitted they were wrong about grant writing. Did any organization at EA Global admit they made a costly mistake? A 1K oops prize.

===Politics and Economics:

Scraps 3 Hoffer And Performance Art by Lou (sam[]zdat) - Growing out of radicalism. Either economic or family instability can cause mass movements. Why the left has adopted Freud. The Left's economic platform is popular, its cultural platform is not. Performance art: Marina Abramović's 'Rhythm 0'. Recognizing and denying your own power.

What Replaces Rights And Discourse by Freddie deBoer - Lots of current leftist discourse is dismissive of rights and open discussion. But what alternative is there? The Soviets had bad justifications and a terrible system, but at least they had an explicit philosophical alternative.

Why Do You Hate Elua by H i v e w i r e d - Scott's Elua as an Eldritch Abomination that threatens traditional culture. An extended sci-fi quote about Ra the great computer. "The forces of traditional values remembered an important fact: once you have access to the hardware, it’s over."

Why Did Europe Lose Crusades by Noah Smith - Technological comparison between Europe and the Middle East. Political divisions on both sides. Geographic distance. Lack of motivation.

Econtalk On Generic Medications by Aceso Under Glass - A few egregious ways that big pharma games the patent system. Short.

Data On Campus Free Speech Cases by Ozy (Thing of Things) - Ozy classifies a sample of 77 cases handled by the Foundation for Individual Rights in Education as conservative, liberal or apolitical censorship. Conservative ideas were censored in 52% of cases, liberal in 26% and apolitical in 22%.

Beware The Moral Spotlight by Robin Hanson - The stated goals of government/business don't much matter compared to the selective pressures on their leadership; don't obsess over which sex has the worse deal overall; don't overrate the benefits of democracy while ignoring higher impact changes to government.

Reply To Yudkowsky by Bryan Caplan - Caplan quotes and replies to many sections of Yudkowsky's response. Points: Yudkowsky's theory is a special case of Caplan's. The left has myriad complaints about markets. Empirically the market actually has consistently provided large benefits in many countries and times.

Without Belief In A God But Never Without Belief In A Devil by Lou (sam[]zdat) - The nature of mass movements. The beats and the John Birchers. The taxonomy of the frustrated. Horseshoe theory. The frustrated cannot derive satisfaction from action; something else has to fill the void. Poverty, work and meaning. Mass movements need to sow resentment. Hatred is the strongest unifier. Modernity inevitably causes justified resentment. Tocqueville, Polanyi, Hoffer and Scott's theories. Helpful and unhelpful responses.

Genetic Behaviorism Supports The Influence Of Chance On Life Outcomes by Freddie deBoer - Much of the variance in many traits is non-shared-environment. Much non-shared-environment can be thought of as luck. In addition no one chooses or deserves their genes.

Yudkowsky On My Simplistic Theory of Left and Right by Bryan Caplan - Yudkowsky claims the left holds the market to the same standards as human beings. The market as a ritual holding back a dangerous Alien God. Caplan doesn't respond; he just quotes Yudkowsky.

On The Effects Of Inequality On Economic Growth by Artir (Nintil) - Most of the article tries to explain and analyze the economic consensus on whether inequality harms growth. A very large number of papers are cited and discussed. A conclusion that the effect is at most small.

===Misc:

Erisology Of Self And Will Representative Campbell Speaks by Everything Studies - An exposition of the "mainstream" view of the self and free will.

What Is The Ein Sof The Meaning Of Perfection In by arisen (lesswrong) - "Kabbalah is based on the analogy of the soul as a cup and G-d as the light that fills the cup. Ein Sof, nothing ("Ein") can be grasped ("Sof"-limitation)."

Sexualtaboos by AellaGirl - A graph of sexual fetishes. The axes are "taboo-ness" and "reported interest". Taboo correlated negatively with interest (p < 0.01). Lots of fetishes are included and the sample size is pretty large.

Huffman Codes Problem by protokol2020 - Find the possible Huffman Codes for all twenty-six English letters.

If You're In School Try The Curriculum by Freddie deBoer - Ironic detachment "leaves you with the burden of the work but without the emotional support of genuine resolve". Don't be the sort of person who tweets hundreds of thousands of times but pretends they don't care about online.

Media Recommendations by Sailor Vulcan (BYS) - Various Reviews including: Games, Animated TV shows, Rationalist Pokemon. A more detailed review of Harry Potter and the Methods of Rationality.

Sunday Assorted Links by Tyler Cowen - Variety of Topics. Ethereum Cryptocurrency, NYC Diner decline, Building Chinese Airports, Soccer Images, Drone Wars, Harberger Taxation, Douthat on Heathcare.

Summary Of Reading April June 2017 by Eli Bendersky - Brief reviews. Various topics: Heavy on Economics. Some politics, literature and other topics.

Rescuing The Extropy Magazine Archives by deku_shrub (lesswrong) - "You'll find some really interesting very early articles on neural augmentation, transhumanism, libertarianism, AI (featuring Eliezer), radical economics (featuring Robin Hanson of course) and even decentralized payment systems."

Epistemic Spot Check A Guide To Better Movement Todd Hargrove by Aceso Under Glass - Flexibility and Chronic Pain. Early section on flexibility fails check badly. Section on psychosomatic pain does much better. Model: Simplicity (Good), Explanation (Fantastic), Explicit Predictions (Good), Useful Predictions (Poor), Acknowledge Limits (Poor), Measurability (Poor).

Book Review Barriers by Eukaryote - Even cell culturing is surprisingly hard if you don't know the details. There is not much institutional knowledge left in the field of bioweapons. Forcing labs underground makes bioterrorism even harder. However synthetic biology might make things much more dangerous.

Physics Problem 2 by protokol2020 - Can tidal forces rotate a metal wheel?

Poems by Scott Alexander (Scratchpad) - Violets aren't blue.

Evaluating Employers As Junior Software by Particular Virtue - You need to write a lot of code and get detailed feedback to improve as an engineer. Practical suggestions to ensure your first job fulfills both conditions.

===Podcast:

Kyle Maynard Without Limits by Tim Ferriss - "Kyle Maynard is a motivational speaker, bestselling author, entrepreneur, and ESPY award-winning mixed martial arts athlete, known for becoming the first quadruple amputee to reach the summit of Mount Kilimanjaro and Mount Aconcagua without the aid of prosthetics."

85 Is This The End Of Europe by Waking Up with Sam Harris - Douglas Murray and his book 'The Strange Death of Europe: Immigration, Identity, Islam'.

Myers Briggs, Diet, Mistakes And Immortality by Tim Ferriss - Ask me anything podcast. Topics beyond the title: Questions to prompt introspection, being a Jack of All Trades, balancing future and present goals, don't follow your passion, 80/20 memory retention, advice to your past selves.

Interview Ro Khanna Regional Development by Tyler Cowen - Bloomberg Podcast. "Technology, jobs and economic lessons from his perspective as Silicon Valley’s congressman."

Avik Roy by The Ezra Klein Show - Better Care Reconciliation Act, broader health care philosophies that fracture the right. Roy's disagreements with the CBO's methodology. The many ways he thinks the Senate bill needs to improve. How the GOP has moved left on health care policy. Medicaid, welfare reform, and the needy who are hard to help. The American health care system subsidizes the rich, etc.

Chris Blattman 2 by EconTalk - "Whether it's better to give poor Africans cash or chickens and the role of experiments in helping us figure out the answer. Along the way he discusses the importance of growth vs. smaller interventions and the state of development economics."

Landscapes Of Mind by Waking Up with Sam Harris - "why it’s so hard to predict future technology, the nature of intelligence, the 'singularity', artificial consciousness."

Blake Mycoskie by Tim Ferriss - Early entrepreneurial ventures. The power of journaling. How “the stool analogy” changed Blake’s life. Lessons from Ben Franklin.

Ben Sasse by Tyler Cowen - "Kansas vs. Nebraska, famous Nebraskans, Chaucer and Luther, unicameral legislatures, the decline of small towns, Ben’s prize-winning Yale Ph.d thesis on the origins of conservatism, what he learned as a university president, Stephen Curry, Chevy Chase, Margaret Chase Smith"

Danah Boyd on why Fake News is so Easy to Believe by The Ezra Klein Show - Fake news, digital white flight, how an anthropologist studies social media, machine learning algorithms reflect our prejudices rather than fixing them, what Netflix initially got wrong about their recommendations engine, the value of pretending your audience is only six people, the early utopian visions of the internet.

Robin Feldman by EconTalk - Ways pharmaceutical companies fight generics.

Jason Weeden On Do People Vote Based On Self Interest by Rational Speaking - Do people vote based on personality, their upbringing, blind loyalty or do they follow their self interest? What does self-interest even mean?

Reid Hoffman 2 by Tim Ferriss - The 10 Commandments of Startup Success according to the extremely successful investor Reid Hoffman.

Self-modification as a game theory problem

11 cousin_it 26 June 2017 08:47PM

In this post I'll try to show a surprising link between two research topics on LW: game-theoretic cooperation between AIs (quining, Loebian cooperation, modal combat, etc) and stable self-modification of AIs (tiling agents, Loebian obstacle, etc).

When you're trying to cooperate with another AI, you need to ensure that its action will fulfill your utility function. And when doing self-modification, you also need to ensure that the successor AI will fulfill your utility function. In both cases, naive utility maximization doesn't work, because you can't fully understand another agent that's as powerful and complex as you. That's a familiar difficulty in game theory, and in self-modification it's known as the Loebian obstacle (fully understandable successors become weaker and weaker).

In general, any AI will be faced with two kinds of situations. In "single player" situations, you're faced with a choice like eating chocolate or not, where you can figure out the outcome of each action. (Most situations covered by UDT are also "single player", involving identical copies of yourself.) Whereas in "multiplayer" situations your action gets combined with the actions of other agents to determine the outcome. Both cooperation and self-modification are "multiplayer" situations, and are hard for the same reason. When someone proposes a self-modification to you, you might as well evaluate it with the same code that you use for game theory contests.

If I'm right, then any good theory for cooperation between AIs will also double as a theory of stable self-modification for a single AI. That means neither problem can be much easier than the other, and in particular self-modification won't be a special case of utility maximization, as some people seem to hope. But on the plus side, we need to solve one problem instead of two, so creating FAI becomes a little bit easier.

The idea came to me while working on this mathy post on IAFF, which translates some game theory ideas into the self-modification world. For example, Loebian cooperation (from the game theory world) might lead to a solution for the Loebian obstacle (from the self-modification world) - two LW ideas with the same name that people didn't think to combine before!

[Link] Putanumonit: What statistical power means, and why I'm terrified about psychology

11 Jacobian 21 June 2017 06:29PM

LessWrong Is Not about Forum Software, LessWrong Is about Posts (Or: How to Immanentize the LW 2.0 Eschaton in 2.5 Easy Steps!)

10 enye-word 15 July 2017 09:35PM

[epistemic status: I was going to do a lot of research for this post, but I decided not to as there are no sources on the internet so I'd have to interview people directly and I'd rather have this post be imperfect than never exist.]

Many words have been written about how LessWrong is now shit. Opinions vary about how shit exactly it is. I refer you to http://lesswrong.com/lw/n0l/lesswrong_20/ and http://lesswrong.com/lw/o5z/on_the_importance_of_less_wrong_or_another_single/ for more comments about LessWrong being shit and the LessWrong diaspora being suboptimal.

However, how to make LessWrong stop being shit seems remarkably simple to me. Here are the steps to resurrect it:

1. Get Eliezer: The lifeblood of LessWrong is Eliezer Yudkowsky's writing. If you don't have that, what's the point of being on this website? Currently Eliezer is posting his writings on Facebook, (https://www.facebook.com/groups/674486385982694/) which I consider foolish, for the same reasons I would consider it foolish to house the Mona Lisa in a run-down motel.

2. Get Scott: Once you have Eliezer back, and you sound the alarm that LW is coming back, I'm fairly certain that Scott "Yvain" Alexander will begin posting on LessWrong again. As far as I can tell he's never wanted to have to moderate a comment section, and the growing pains are stressing his website at the seams. He's even mused publicly about arbitrarily splitting the Slate Star Codex comment section in two (http://slatestarcodex.com/2017/04/09/ot73-i-lik-the-thred/) which is a crazy idea on its own but completely reasonable in the context of (cross)posting to LW. Once you have Yudkowsky and Yvain, you have about 80% of what made LessWrong not shit.

3. Get Gwern: I don't read many of Gwern's posts; I just like having him around. Luckily for us, he never left!

After this is done, everyone else should wander back in, more or less.

Possible objections, with replies:

Objection: Most SSC articles and Yudkowsky essays are not on the subject of rationality and thus for your plan to work LessWrong's focus would have to subtly shift.

Reply: Shift away, then! It's LessWrong 2! We no longer have to be a community dedicated to reading Rationality: From AI to Zombies as it's written in real time; we can now be a community that takes Rationality: From AI to Zombies as a starting point and discusses whatever we find interesting! Thus the demarcation between 1.0 and 2.0!

Objection: People on LessWrong are mean and I do not like them.

Reply: The influx of new readers from the Yudkowsky-Yvain in-migration should make the tone on this website more upbeat and positive. Failing that, I don't know, ban the problem children, I guess. I don't know if it's poor form to declare this but I'd rather have a LessWrong Principate than a LessWrong Ruins. See also: http://lesswrong.com/lw/c1/against_online_pacifism/

Objection: I'd prefer, for various reasons, to just let LessWrong die.

Reply: Then kill it with your own hands! Don't let it lie here on the ground, bleeding out! Make a post called "The discussion thread at the end of the universe" that reads "LessWrong is over, piss off to r/SlateStarCodex", disallow new submissions, and be done with it! Let it end with dignity and bring a close to its history for good.

[Link] Dissolving the Fermi Paradox (Applied Bayesianism)

10 shin_getter 03 July 2017 09:44AM

One-Magisterium Bayes

9 tristanm 29 June 2017 11:02PM

[Epistemic Status: Very partisan / opinionated. Kinda long, kinda rambling.]

In my conversations with members of the rationalist community as well as in my readings of various articles and blog posts produced by them (as well as outside), I’ve noticed a recent trend towards skepticism of Bayesian principles and philosophy (see Nostalgebraist’s recent post for an example), which I have regarded with both surprise and a little bit of dismay, because I think progress within a community tends to be indicated by moving forward to new subjects and problems rather than a return to old ones that have already been extensively argued for and discussed. So the intent of this post is to summarize a few of the claims I’ve seen being put forward and try to point out where I believe these have gone wrong.

It’s also somewhat an odd direction for discussion to be going in, because the academic statistics community has largely moved on from debates between Bayesian and Frequentist theory, and has largely come to accept both the Bayesian and the Frequentist / Fisherian viewpoints as valid. When E.T. Jaynes wrote his famous book, the debate was mostly still raging on, and many questions had yet to be answered. In the 21st century, statisticians have mostly come to accept a world in which both approaches exist and have their merits.

Because I will be defending the Bayesian side here, there is a risk that this post will come off as being dogmatic. We are a community devoted to free-thought after all, and any argument towards a form of orthodoxy might be perceived as an attempt to stifle dissenting viewpoints. That is not my intent here, and in fact I plan on arguing against Bayesian dogmatism as well. My goal is to argue that having a base framework in which we can feel relatively high confidence is useful to the goals of the community, and that if we feel high enough confidence in it, then spending extra effort trying to prove it false might be wasting brainpower that can potentially be used on more interesting or useful tasks. There could always be a point we reach where most of us strongly feel that unless we abandon Bayesianism, we can’t make any further progress. I highly doubt that we have reached such a point or that we ever will.

This is also a personal exercise to test my understanding of Bayesian theory and my ability to communicate it. My hope is that if my ideas here are well presented, it should be much easier for both myself and others to find flaws with it and allow me to update.

I will start with an outline of philosophical Bayesianism, also called “Strong Bayesianism”, or what I prefer to call “One Magisterium Bayes.” The reason for wanting to refer to it as being a single magisterium will hopefully become clear. The Sequences did argue for this point of view; however, I think the strength of the Sequences had more to do with why you should update your beliefs in the face of new evidence, rather than why Bayes' theorem was the correct way to do this. In contrast, I think the argument for using Bayesian principles as the correct set of reasoning principles was made more strongly by E.T. Jaynes. Unfortunately, I feel like his exposition of the subject tends to get ignored relative to the material presented in the Sequences. Not that the information in the Sequences isn’t highly relevant and important, just that Jaynes' arguments are much more technical, and their strength can be overlooked for this reason.

The way to start an exposition on one-magisterium rationality is by contrast to multi-magisteria modes of thought. I would go as far as to argue that the multi-magisterium view, or what I sometimes prefer to call tool-boxism, is by far the most dominant way of thinking today. Tool-boxism can be summarized by “There is no one correct way to arrive at the truth. Every model we have today about how to arrive at the correct answer is just that – a model. And there are many, many models. The only way to get better at finding the correct answer is through experience and wisdom, with a lot of insight and luck, just as one would master a trade such as woodworking. There’s nothing that can replace or supersede the magic of human creativity. [Sometimes it will be added:] Also, don’t forget that the models you have about the world are heavily, if not completely, determined by your culture and upbringing, and there’s no reason to favor your culture over anyone else’s.”

As I hope to argue in this post, tool-boxism has many downsides that should push us further towards accepting the one-magisterium view. It also very dramatically differs in how it suggests we should approach the problem of intelligence and cognition, with many corollaries in both rationalism and artificial intelligence. Some of these corollaries are the following:

  • If there is no unified theory of intelligence, we are led towards the view that recursive self-improvement is not possible, since an increase in one type of intelligence does not necessarily lead to an improvement in a different type of intelligence.
  • With a diversification in different notions of correct reasoning within different domains, it heavily limits what can be done to reach agreement on different topics. In the end we are often forced to agree to disagree, which while preserving social cohesion in different contexts, can be quite unsatisfying from a philosophical standpoint.
  • Related to the previous corollary, it may lead to beliefs that are sacred, untouchable, or based on intuition, feeling, or difficult to articulate concepts. This produces a complex web of topics that have to be avoided or tread carefully around, or a heavy emphasis on difficult to articulate reasons for preferring one view over the other.
  • Developing AI around a tool-box / multi-magisteria approach, where systems are made up of a wide array of various components, limits generalizability and leads to brittleness. 

One very specific trend I’ve noticed lately in articles that aim to discredit the AGI intelligence explosion hypothesis, is that they tend to take the tool-box approach when discussing intelligence, and use that to argue that recursive self-improvement is likely impossible. So rationalists should be highly interested in this kind of reasoning. One of Eliezer’s primary motivations for writing the Sequences was to make the case for a unified approach to reasoning, because it lends credence to the view of intelligence in which intelligence can be replicated by machines, and where intelligence is potentially unbounded. And also that this was a subtle and tough enough subject that it required hundreds of blog posts to argue for it. So because of the subtle nature of the arguments I’m not particularly surprised by this drift, but I am concerned about it. I would prefer if we didn’t drift.

I’m trying not to sound No-True-Scotsman-y here, but I wonder what it is that could make one a rationalist if they take the tool-box perspective. After all, even if you have a multi-magisterium world-view, there still always is an underlying guiding principle directing the use of the proper tools. Often times, this guiding principle is based on intuition, which is a remarkably hard thing to pin down and describe well. I personally interpret the word ‘rationalism’ as meaning in the weakest and most general sense that there is an explanation for everything – so intelligence isn’t irreducibly based on hand-wavy concepts such as ingenuity and creativity. Rationalists believe that those things have explanations, and once we have those explanations, then there is no further use for tool-boxism.

I’ll repeat the distinction between tool-boxism and one-magisterium Bayes, because I believe it’s that important: Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence. And this assumption basically implies that intelligence is either composed of irreducible components (where one component does not necessarily help you understand a different component) or some kind of essential property that cannot be replicated by algorithms or computation.

Why is tool-boxism the dominant paradigm then? Probably because it is the most pragmatically useful position to take in most circumstances when we don’t actually possess an underlying theory. But the fact that we sometimes don’t have an underlying theory or that the theory we do have isn’t developed to the point where it is empirically beating the tool box approach is sometimes taken as evidence that there isn't a unifying theory. This is, in my opinion, the incorrect conclusion to draw from these observations.

Nevertheless, it seems like a startlingly common conclusion to draw. I think the great mystery is why this is so. I don’t have very convincing answers to that question, but I suspect it has something to do with how heavily our priors are biased against a unified theory of reasoning. It may also be due to the subtlety and complexity of the arguments for a unified theory. For that reason, I highly recommend reviewing those arguments (and few people other than E.T. Jaynes and Yudkowsky have made them). So with that said, let’s review a few of those arguments, starting with one of the myths surrounding Bayes theorem I’d like to debunk:

Bayes Theorem is a trivial consequence of the Kolmogorov Axioms, and is therefore not powerful.

This claim is usually presented as part of a larger claim that “Bayesian” probability is just a small part of regular probability theory, and therefore does not give us any more useful information than you’d get from just studying probability theory. And as a consequence of that, if you insist that you’re a “Strong” Bayesian, that means you’re insisting on using only that small subset of probability theory and associated tools we call Bayesian.

And the part of the statement that says the theorem is a trivial consequence of the Kolmogorov axioms is technically true. It’s the implication typically drawn from this that is false. The reason it’s false has to do with Bayes theorem being a non-trivial consequence of a simpler set of axioms / desiderata. This consequence is usually formalized by Cox’s theorem, which is usually glossed over or not quite appreciated for how far-reaching it actually is.

Recall that the qualitative desiderata for a set of reasoning rules were:

  1. Degrees of plausibility are represented by real numbers.
  2. Qualitative correspondence with common sense.
  3. Consistency. 

You can read the first two chapters of Jaynes’ book, Probability Theory: The Logic of Science, if you want more detail on what those desiderata mean. But the important thing to note is that they are merely desiderata, not axioms. This means we are not assuming those things are already true; we just want to devise a system that satisfies those properties. The beauty of Cox’s theorem is that it specifies exactly one set of rules satisfying these properties, and both Bayes’ Theorem and the Kolmogorov Axioms are consequences of those rules.

The other nice thing about this is that degrees of plausibility can be assigned to any proposition, or any statement that you could possibly assign a truth value to. It does not limit plausibility to “events” that take place in some kind of space of possible events like whether a coin flip comes up heads or tails. What’s typically considered the alternative to Bayesian reasoning is Classical probability, sometimes called Frequentist probability, which only deals with events drawn from a sample space, and is not able to provide methods for probabilistic inference of a set of hypotheses.

For axioms, Cox’s theorem merely requires you to accept Boolean algebra and Calculus to be true, and then you can derive probability theory as extended logic from that. So this should be mindblowing, right? One Magisterium Bayes? QED? Well apparently this set of arguments is not convincing to everyone, and it’s not because people find Boolean logic and calculus hard to accept.

Rather, there are two major and several somewhat minor difficulties encountered within the Bayesian paradigm. The two major ones are as follows:

  • The problem of hypothesis generation.
  • The problem of assigning priors. 

The list of minor problems are as follows, although like any list of minor issues, this is definitely not exhaustive:

  • Should you treat “evidence” for a hypothesis, or “data”, as having probability 1?
  • Bayesian methods are often computationally intractable.
  • How to update when you discover a “new” hypothesis.
  • Divergence in posterior beliefs for different individuals upon the acquisition of new data.

Most Bayesians typically never deny the existence of the first two problems. What some anti-Bayesians conclude from them, though, is that Bayesianism must be fatally flawed due to those problems, and that there is some other way of reasoning that would avoid or provide solutions to those problems. I’m skeptical about this, and the reason I’m skeptical is because if you really had a method for say, hypothesis generation, this would actually imply logical omniscience, and would basically allow us to create full AGI, RIGHT NOW. If you really had the ability to produce a finite list containing the correct hypothesis for any problem, the existence of the other hypotheses in this list is practically a moot point – you have some way of generating the CORRECT hypothesis in a finite, computable algorithm. And that would make you a God.

As far as I know, being able to do this would imply that P = NP is true, and as far as I know, most computer scientists do not think it’s likely to be true (And even if it were true, we might not get a constructive proof from it).  But I would ask: Is this really a strike against Bayesianism? Is the inability of Bayesian theory to provide a method for providing the correct hypothesis evidence that we can’t use it to analyze and update our own beliefs?

I would add that there are plenty of ways to generate hypotheses by other methods. For example, you can try to make the hypothesis space gargantuan, and encode different hypotheses in a vector of parameters, and then use different optimization or search procedures like evolutionary algorithms or gradient descent to find the most likely set of parameters. Not all of these methods are considered “Bayesian” in the sense that you are summarizing a posterior distribution over the parameters (although stochastic gradient descent might be). It seems like a full theory of intelligence might include methods for generating possible hypotheses. I think this is probably true, but I don’t know of any arguments that it would contradict Bayesian theory.

The reason assigning prior probabilities is such a huge concern is that it forces Bayesians to hold “subjective” probabilities, where in most cases, if you’re not an expert in the domain of interest, you don’t really have a good argument for why you should hold one prior over another. Frequentists often contrast this with their methods which do not require priors, and thus hold some measure of objectivity.

E.T. Jaynes never considered this to be a flaw in Bayesian probability, per se. Rather, he considered hypothesis generation, as well as assigning priors, to be outside the scope of “plausible inference”, which is what he considered to be the domain of Bayesian probability. He himself argued for using the principle of maximum entropy for creating a prior distribution, and there are also more modern techniques such as Empirical Bayes.

In general, Frequentists often have the advantage that their methods are often simpler and easier to compute, while also having strong guarantees about the results, as long as certain constraints are satisfied. Bayesians have the advantage that their methods are “ideal” in the sense that you’ll get the same answer each time you run an analysis. And this is the most common form of the examples that Bayesians use when they profess the superiority of their approach. They typically show how Frequentist methods can give both “significant” and “non-significant” labels to their results depending on how you perform the analysis, whereas the Bayesian way just gives you the probability of the hypothesis, plain and simple.

I think that in general, one could say that Frequentist methods are a lot more “tool-boxy” and Bayesian methods are more “generally applicable” (if computational tractability weren’t an issue). That gets me to the second myth I’d like to debunk:

Being a “Strong Bayesian” means avoiding all techniques not labeled with the stamp of approval from the Bayes Council.

Does this mean that Frequentist methods, because they are tool box approaches, are wrong or somehow bad to use, as some argue that Strong Bayesians claim? Not at all. There’s no reason not to use a specific tool, if it seems like the best way to get what you want, as long as you understand exactly what the results you’re getting mean. Sometimes I just want a prediction, and I don’t care how I get it – I know that a specific algorithm being labeled “Bayesian” doesn’t confer it any magical properties. Any Bayesian may want to know the frequentist properties of their model. It’s easy to forget that different communities of researchers flying the flag of their tribe developed some methods and then labeled them according to their tribal affiliation. That’s ok. The point is, if you really want to have a Strong Bayesian view, then you also have to assign probabilities to various properties of each tool in the toolbox.

Chances are, if you’re a statistics/data science practitioner with a few years of experience applying different techniques to different problems and different data sets, and you have some general intuitions about which techniques apply better to which domains, you’re probably doing this in a Bayesian way. That means, you hold some prior beliefs about whether Bayesian Logistic Regression or Random Forests is more likely to get what you want on this particular problem, you try one, and possibly update your beliefs once you get a result, according to what your models predicted.

Being a Bayesian often requires you to work with “black boxes”, or tools that you know give you a specific result, but you don’t have a full explanation of how it arrives at the result or how it fits in to the grand scheme of things. A Bayesian fundamentalist may refuse to work with any statistical tool like that, not realizing that in their everyday lives they often use tools, objects, or devices that aren’t fully transparent to them. But you can, and in fact do, have models about how those tools can be used and the results you’d get if you used them. The way you handle these models, even if they are held in intuition, probably looks pretty Bayesian upon deeper inspection.

I would suggest that instead of using the term “Fully Bayesian” we use the phrase “Infinitely Bayesian” to refer to using a Bayesian method for literally everything, because it more accurately shows that it would be impossible to actually model every single atom of knowledge probabilistically. It also makes it easier to see that even the Strongest Bayesian you know probably isn’t advocating this.

Let me return to the “minor problems” I mentioned earlier, because they are pretty interesting.  Some epistemologists have a problem with Bayesian updating because it requires you to assume that the “evidence” you receive at any given point is completely true with probability 1. I don’t really understand why it requires this. I’m easily able to handle the case where I’m uncertain about my data. Take the situation where my friend is rolling a six-sided die, and I want to know the probability of it coming up 6. I assume all sides are equally likely, so my prior probability for 6 is 1/6. Let’s say that he rolls it where I can’t see it, and then tells me the die came up even. What do I update p(6) to?

Let’s say that I take my data as saying “the die came up even.” Then p(6 | even) = p(even | 6) * p(6) / p(even) = 1 * (1/6) / (1 / 2) = 1/3. Ok, so I should update p(6) to 1/3 now right? Well, that’s only if I take the evidence of “the die came up even” as being completely true with probability one. But what actually happened is that my friend TOLD ME the die came up even. He could have been lying, maybe he forgot what “even” meant, maybe his glasses were really smudged, or maybe aliens took over his brain at that exact moment and made him say that. So let’s say I give a 90% chance to him telling the truth, or equivalently, a 90% chance that my data is true. What do I update p(6) to now?

It’s pretty simple. I just expand p(6) over “even” as p(6) = p(6 | even) p(even)  + p(6 | odd) p(odd). Before he said anything, p(even) = p(odd) and this formula evaluated to (1/3)(1/2) + (0)(1/2) = 1/6, my prior. After he told me the die came up even, I update p(even) to 0.9, and this formula becomes (1/3)(9/10) + (0)(1/10) = 9/30. A little less than 1/3. Makes sense.
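
Here is a minimal Python sketch of that arithmetic, just to make the update explicit (the 1/3, 0 and 90% figures are the same ones assumed above):

    # p(6 | even) = 1/3, p(6 | odd) = 0, and p_even is my degree of belief
    # that the report "the die came up even" is true.
    def p_six(p_even):
        p_six_given_even = 1 / 3
        p_six_given_odd = 0.0
        return p_six_given_even * p_even + p_six_given_odd * (1 - p_even)

    print(p_six(0.5))  # before the report: 1/6, my prior
    print(p_six(0.9))  # after the 90%-reliable report: 0.3, a little less than 1/3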

In general, I am able to model anything probabilistically in the Bayesian framework, including my data. So I’m not sure where the objection comes from. It’s true that from a modeling perspective, and a computational one, I have to stop somewhere, and just accept for the sake of pragmatism that probabilities very close to 1 should be treated as if they were 1, and not model those. Not doing that, and just going on forever, would mean being Infinitely Bayesian. But I don’t see why this counts as problem for Bayesianism. Again, I’m not trying to be omniscient. I just want a framework for working with any part of reality, not all of reality at once. The former is what I consider “One Magisterium” to mean, not the latter.

The rest of the minor issues are also related to limitations that any finite intelligence is going to have no matter what. They should all, though, get easier as access to data increases, models get better, and computational ability gets better.

Finally, I’d like to return to an issue that I think is most relevant to the ideas I’ve been discussing here. In AI risk, it is commonly argued that a sufficiently intelligent agent will be able to modify itself to become more intelligent. This premise assumes that an agent will have some theory of intelligence that allows it to understand which updates to itself are more likely to be improvements. Because of that, many who argue against “AI Alarmism” will argue against the premise that there is a unified theory of intelligence. In “Superintelligence: The Idea that Eats Smart People”, I think most of the arguments can be reduced to basically saying as much.

From what I can tell, most arguments against AI risk in general will take the form of anecdotes about how really really smart people like Albert Einstein were very bad at certain other tasks, and that this is proof that there is no theory of intelligence that can be used to create a self-improving AI. Well, more accurately, these arguments are worded as “There is no single axis on which to measure intelligence” but what they mean is the former, since even multiple axes of intelligence (such as measure of success on different tasks) would not actually imply that there isn’t one theory of reasoning. What multiple axes of measuring intelligence do imply is that within a given brain, the brain may have devoted more space to better modeling certain tasks than others, and that maybe the brain isn’t quite that elastic, and has a hard time picking up new tasks.

The other direction in which to argue against AI risk is to argue against the proposed theories of reasoning themselves, like Bayesianism. The alternative, it seems, is tool-boxism. I really want to avoid tool-boxism because it makes it difficult to be a rationalist. Even if Bayesianism turns out to be wrong, does this exclude other, possibly undiscovered theories of reasoning? I’ve never seen that touched upon by any of the AI risk deniers. As long as there is a theory of reasoning, then presumably a machine intelligence could come to understand that theory and all of its consequences, and use that to update itself.

I think the simplest summary of my post is this: A Bayesian need not be Bayesian in all things, for reasons of practicality. But a Bayesian can be Bayesian in any given thing, and this is what is meant by “One Magisterium”.

I didn’t get to cover every corollary of tool-boxing or every issue with Bayesian statistics, but this post is already really long, and for the sake of brevity I will probably end it here. Perhaps I can cover those issues more thoroughly in a future post. 

[Link] Daniel Dewey on MIRI's Highly Reliable Agent Design Work

8 lifelonglearner 09 July 2017 04:35AM

Bayesian probability theory as extended logic -- a new result

8 ksvanhorn 06 July 2017 07:14PM

I have a new paper that strengthens the case for strong Bayesianism, a.k.a. One Magisterium Bayes. The paper is entitled "From propositional logic to plausible reasoning: a uniqueness theorem." (The preceding link will be good for a few weeks, after which only the preprint version will be available for free. I couldn't come up with the $2500 that Elsevier makes you pay to make your paper open-access.)

Some background: E. T. Jaynes took the position that (Bayesian) probability theory is an extension of propositional logic to handle degrees of certainty -- and appealed to Cox's Theorem to argue that probability theory is the only viable such extension, "the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind." This position is sometimes called strong Bayesianism. In a nutshell, frequentist statistics is fine for reasoning about frequencies of repeated events, but that's a very narrow class of questions; most of the time when researchers appeal to statistics, they want to know what they can conclude with what degree of certainty, and that is an epistemic question for which Bayesian statistics is the right tool, according to Cox's Theorem.

You can find a "guided tour" of Cox's Theorem here (see "Constructing a logic of plausible inference"). Here's a very brief summary. We write A | X for "the reasonable credibility" (plausibility) of proposition A when X is known to be true. Here X represents whatever information we have available. We are not at this point assuming that A | X is any sort of probability. A system of plausible reasoning is a set of rules for evaluating A | X. Cox proposed a handful of intuitively-appealing, qualitative requirements for any system of plausible reasoning, and showed that these requirements imply that any such system is just probability theory in disguise. That is, there necessarily exists an order-preserving isomorphism between plausibilities and probabilities such that A | X, after mapping from plausibilities to probabilities, respects the laws of probability.

Here is one (simplified and not 100% accurate) version of the assumptions required to obtain Cox's result:

 

  1. A | X is a real number.
  2. (A | X) = (B | X) whenever A and B are logically equivalent; furthermore, (A | X) ≤ (B | X) if B is a tautology (an expression that is logically true, such as (a or not a)).
  3. We can obtain (not A | X) from A | X via some non-increasing function S. That is, (not A | X) = S(A | X).
  4. We can obtain (A and B | X) from (B | X) and (A | B and X) via some continuous function F that is strictly increasing in both arguments: (A and B | X) = F((A | B and X), (B | X)).
  5. The set of triples (x,y,z) such that x = A|X, y = (B | A and X), and z = (C | A and B and X) for some proposition A, proposition B, proposition C, and state of information X, is dense. Loosely speaking, this means that if you give me any (x',y',z') in the appropriate range, I can find an (x,y,z) of the above form that is arbitrarily close to (x',y',z').
The "guided tour" mentioned above gives detailed rationales for all of these requirements.

Not everyone agrees that these assumptions are reasonable. My paper proposes an alternative set of assumptions that are intended to be less disputable, as every one of them is simply a requirement that some property already true of propositional logic continue to be true in our extended logic for plausible reasoning. Here are the alternative requirements:
  1. If X and Y are logically equivalent, and A and B are logically equivalent assuming X, then (A | X) = (B | Y).
  2. We may define a new propositional symbol s without affecting the plausibility of any proposition that does not mention that symbol. Specifically, if s is a propositional symbol not appearing in A, X, or E, then (A | X) = (A | (s ↔ E) and X).
  3. Adding irrelevant background information does not alter plausibilities. Specifically, if Y is a satisfiable propositional formula that uses no propositional symbol occurring in A or X, then (A | X) = (A | Y and X).
  4. The implication ordering is preserved: if A → B is a logical consequence of X, but B → A is not, then A | X < B | X; that is, A is strictly less plausible than B, assuming X.
Note that I do not assume that A | X is a real number. Item 4 above assumes only that there is some partial ordering on plausibility values: in some cases we can say that one plausibility is greater than another.

 

I also explicitly take the state of information X to be a propositional formula: all the background knowledge to which we have access is expressed in the form of logical statements. So, for example, if your background information is that you are tossing a six-sided die, you could express this by letting s1 mean "the die comes up 1," s2 mean "the die comes up 2," and so on, and your background information X would be a logical formula stating that exactly one of s1, ..., s6 is true, that is,

(s1 or s2 or s3 or s4 or s5 or s6) and
not (s1 and s2) and not (s1 and s3) and not (s1 and s4) and
not (s1 and s5) and not (s1 and s6) and not (s2 and s3) and
not (s2 and s4) and not (s2 and s5) and not (s2 and s6) and
not (s3 and s4) and not (s3 and s5) and not (s3 and s6) and
not (s4 and s5) and not (s4 and s6) and not (s5 and s6).

Just like Cox, I then show that there is an order-preserving isomorphism between plausibilities and probabilities that respects the laws of probability.

My result goes further, however, in that it gives actual numeric values for the probabilities. Imagine creating a truth table containing one row for each possible combination of truth values assigned to each atomic proposition appearing in either A or X. Let n be the number of rows in this table for which X evaluates true. Let m be the number of rows in this table for which both A and X evaluate true. If P is the function that maps plausibilities to probabilities, then P(A | X) = m / n.

For example, suppose that a and b are atomic propositions (not decomposable in terms of more primitive propositions), and suppose that we only know that at least one of them is true; what then is the probability that a is true? Start by enumerating all possible combinations of truth values for a and b:
  1. a false, b false: (a or b) is false, a is false.
  2. a false, b true : (a or b) is true,  a is false.
  3. a true,  b false: (a or b) is true,  a is true.
  4. a true,  b true : (a or b) is true,  a is true.
There are 3 cases (2, 3, and 4) in which (a or b) is true, and in 2 of these cases (3 and 4) a is also true. Therefore,

    P(a | a or b) = 2/3.
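
As a quick sanity check, here is a small Python sketch of the truth-table counting described above (it is not code from the paper; prob, A, X and symbols are names introduced here for illustration): it enumerates all truth assignments, counts the rows where X is true and the rows where both A and X are true, and returns the ratio.

    from itertools import product

    def prob(A, X, symbols):
        rows = [dict(zip(symbols, values))
                for values in product([False, True], repeat=len(symbols))]
        n = sum(1 for row in rows if X(row))             # rows where X is true
        m = sum(1 for row in rows if X(row) and A(row))  # rows where A and X are true
        return m / n

    # P(a | a or b) from the example above:
    print(prob(lambda r: r["a"], lambda r: r["a"] or r["b"], ["a", "b"]))  # 0.666...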

This concords with the classical definition of probability, which Laplace expressed as

The probability of an event is the ratio of the number of cases favorable to it, to the number of possible cases, when there is nothing to make us believe that one case should occur rather than any other, so that these cases are, for us, equally possible.

This definition fell out of favor, in part because of its apparent circularity. My result validates the classical definition and sharpens it. We can now say that a “possible case” is simply a truth assignment satisfying the premise X. We can simply drop the problematic phrase “these cases are, for us, equally possible.” The phrase “there is nothing to make us believe that one case should occur rather than any other” means that we possess no additional information that, if added to X, would expand by differing multiplicities the rows of the truth table for which X evaluates true.

For more details, see the paper linked above.

Machine Learning Group

7 Regex 16 July 2017 08:58PM

Following the sign-ups in this post, those of us who want to study machine learning have formed a team.

In an effort to get high returns on our time we won't delay, and will instead start building the skills right away. First project: work through Python Machine Learning by Sebastian Raschka, with the mid-term goal of being able to implement the "recognizing handwritten digits" code near the end.

As a matter of short-term practicality, we currently don't have the hardware for GPU acceleration. This limits the things we can do, but at this stage of learning most of the time is spent on understanding and implementing the basic concepts anyway.
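For a sense of scale, here is a minimal sketch of that milestone using scikit-learn's small built-in digits dataset rather than the book's from-scratch MNIST code; this is my own illustration, not material from the book, and it runs comfortably without a GPU.

```python
# A toy digit-recognition baseline (not from the book): a small MLP on
# scikit-learn's 8x8 digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1797 grayscale images of digits, 8x8 pixels each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))  # typically around 0.95 or better
```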

Here is our discord invite link if you're interested in joining in on the fun.

What useless things did you understand recently?

7 cousin_it 28 June 2017 07:32PM

Please reply in the comments with things you understood recently. The only condition is that they have to be useless in your daily life. For example, "I found this idea that defeats procrastination" doesn't count, because it sounds useful and you might be deluded about its truth. Whereas "I figured out how construction cranes are constructed" qualifies, because you aren't likely to use it and it will stay true tomorrow.

I'll start. Today I understood how Heyting algebras work as a model for intuitionistic logic. The main idea is that you represent sentences as shapes. So you might have two sentences A and B shown as two circles, then "A and B" is their intersection, "A or B" is their union, etc. But "A implies B" doesn't mean one circle lies inside the other, as you might think! Instead it's a shape too, consisting of all points that lie outside A or inside B (or both). There were some other details about closed and open sets, but these didn't cause a problem for me, while "A implies B" made me stumble for some reason. I probably won't use Heyting algebras for anything ever, but it was pretty fun to figure out.
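For reference, here is the open-set reading in symbols (my summary of the standard semantics, not part of the original comment), writing int for topological interior and A^c for the complement within the ambient space:

$$A \wedge B = A \cap B, \qquad A \vee B = A \cup B, \qquad A \to B = \operatorname{int}(A^{c} \cup B), \qquad \neg A = \operatorname{int}(A^{c})$$

Taking interiors is what keeps everything an open set, and it is also why excluded middle can fail: if A is an open half-plane, then A ∨ ¬A misses the boundary line, so it is not the whole space.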

Your turn!

PS: please don't feel pressured to post something super advanced. It's really, honestly okay to post basic things, like why a stream of tap water narrows as it falls, or why the sky is blue (though I don't claim to understand that one :-))

The dark arts: Examples from the Harris-Adams conversation

6 Stabilizer 20 July 2017 11:42PM

Recently, James_Miller posted a conversation between Sam Harris and Scott Adams about Donald Trump. James_Miller titled it "a model rationalist disagreement". While I agree that the tone in which the conversation was conducted was helpful, I think Scott Adams is a top practitioner of the Dark Arts. Indeed, he often prides himself on his persuasion ability. To me, he is very far from a model for a rationalist, and he is the kind of figure we rationalists should know how to fight against.

 

Here are some techniques that Adams uses:

 

  1. Changing the subject: (a) Harris says Trump is unethical and cites the example of Trump gate-crashing a charity event to falsely get credit for himself. Adams responds by saying that others are equally bad—that all politicians do morally dubious things. When Harris points out that Obama would never do such a thing, Adams says Trump is a very public figure and hence people have lots of dirt on him. (b) When Harris points out that almost all climate scientists agree that climate change is happening and that it is wrong for Trump to have called climate change a hoax, Adams changes the subject to how it is unclear what economic policies one ought to pursue if climate change is true.
  2. Motte-and-bailey: When Harris points out that the Trump University scandal and Trump's response to it mean Trump is unethical, Adams says that Trump was not responsible for the university because it was only a licensing deal. Then Harris points out that Trump is unethical because he shortchanged his contractors. Adams says that that’s what happens with big construction projects. Harris tries to argue that it’s the entirety of Trump’s behavior that makes it clear that he is unethical—i.e., Trump University, his non-payment to contractors, his charity gate-crashing, and so on. At this point Adams says we ought to stop expecting ethical behavior from our Presidents. This is a classic motte-and-bailey defense: try to defend an indefensible position (the bailey) for a while, but once it becomes untenable to defend, retreat to the motte (something much more defensible).
  3. Euphemisation: (a) When Harris tells Adams that Trump lies constantly and has a dangerous disregard for the truth, Adams says, I agree that Trump doesn’t pass fact checks. Indeed, throughout the conversation Adams never refers to Trump as lying or as making false statements. Instead, Adams always says, Trump “doesn’t pass the fact checks”. This move essentially makes it sound as if there’s some organization whose arbitrary and biased standards are what Trump doesn’t pass, and so downplays the much more important fact that Trump lies. (b) When Harris calls Trump's actions morally wrong, Adams makes it seem as if he is agreeing with Harris but then rephrases it as: “he does things that you or I may not do in the same situation”. Indeed, that's Adams's constant euphemism for a morally wrong action. This is a very different statement from saying that what Trump did was wrong, and makes it seem as if Trump is just a normal person doing what normal people do.
  4. Diagnosis: Rather than debate the substance of Harris’s claims, Adams will often embark on a diagnosis of Harris’s beliefs or of someone else who has that belief. For example, when Harris says that Trump is not persuasive and does not seem to have any coherent views, Adams says that that's Harris's "tell" and that Harris is "triggered" by Trump's speeches. Adams constantly diagnoses Trump critics as seeing a different movie, or as being hypnotized by the mainstream media. By doing this, he moves away from the substance of the criticisms.
  5. Excusing: (a) When Harris says that it is wrong to not condemn, and wrong to support, the intervention of Russia in America’s election, Adams says that the US would exact revenge via its intelligence agencies and we would never know about it. He provides no evidence for the claim that Trump is indeed exacting revenge via the CIA. He also says America interferes in other elections too. (b) When Harris says that Trump degraded democratic institutions by promising to lock up his political opponent after the election, Adams says that was just a joke. (c) When Harris says Trump is using the office of the President for personal gain, Adams tries to spin the narrative as Trump trying to give as much as possible late in his life for his country.
  6. Cherry-picking evidence: (a) When Harris points out that seventeen different intelligence agencies agreed that Russia’s government interfered in the US elections, Adams says that the intelligence agencies have been known to be wrong before. (b) When Harris points out that almost all climate scientists agree on climate change, Adams points to some point in the 1970s where (he claims) climate scientists got something wrong, and therefore we should be skeptical about the claims of climate scientists.

Overall, I think what Adams is doing is wrong. He is an ethical and epistemological relativist: he does not seem to believe in truth or in morality. At the very least, he does not care about what is true and false and what is right and wrong. He exploits his relativism to push his agenda, which is blindingly clear: support Trump.

 

(Note: I wanted to work on this essay more carefully, and find out all the different ways in which Adams subverts the truth and sound reasoning. I also wanted to cite more clearly the problematic passages from the conversations. But I don't have the time. So I relied on memory and highlighted the Dark Arts moves that struck me immediately. So please, contribute in the comments with your own observations about the Dark Arts involved here.)

90% of problems are recommendation and adaption problems

6 casebash 12 July 2017 04:53AM

Want to improve your memory? Start a business? Fix your dating life?

The chances are that out of the thousands upon thousands of books and blogs out there on each of these topics there are already several that will tell you all that you need. I'm not saying that this will immediately solve your problem - you will still need to put in the hard yards of experiment and practice - just that lack of knowledge will no longer be the limiting factor.

This suggests if we want to be winning at life (as any good rationalist should), what is most important isn't creating brilliant and completely unprecedented approaches to solve these problems, but rather taking ideas that already exist.

The first problem is recommendation - finding which out of all of the thousands of books out there are the most helpful for a particular problem. Unfortunately, recommendation is not an easy problem at all. Two people may both be dealing with procrastination problems, but what works for one person may not work for another. Further, even for the same idea, it is incredibly subjective what counts as a clear explanation - some people may want more detail, others less; some people may find certain examples really compelling, others won't. Recommendations are generally either one person's individual picks or those which received the highest vote, but there are probably other methods of producing a recommendation that should be looked into, such as asking people survey questions and matching on that, or asking people to rate a book on different factors.

The second problem is adaption. Although you shouldn't need to create any new ideas, it is likely that certain elements will need more explanation and certain elements less. For example, when writing for the rationalist community, you may need to be more precise and be clearer when you are talking figuratively, rather than literally. Alternatively, you can probably just link people to certain common ideas such as the map and territory without having to explain it.

I'll finish with a rhetorical question - what percent of solutions here are new ideas and what percentage are existing solutions? Are these in the right ratio?

UPDATE: (Please note: This article is not about time spent learning vs. time spent practising, but about existing ideas vs. new ideas. The reason why this is the focus is because LW can potentially recommend resources or adapt resources, but it can't practise for you!).


Call to action

6 Elo 07 July 2017 09:10AM

Core knowledge: List of common human goals
Part 1: Exploration-Exploitation
Part 1a: The application of the secretary problem to real life dating
Part 1b: adding and removing complexity from models
Part 2: Bargaining Trade-offs to your brain.
Part 2a.1: A strategy against the call of the void.
Part 2a.2: The call of the void
Part 2b.1: Empirical time management
Part 2b.2: Memory and notepads
Part 3: The time that you have
Part 3a: A purpose finding exercise
Part 3b: Schelling points, trajectories and iteration cycles
Part 4: What does that look like in practice?
Part 4a: Lost purposes – Doing what’s easy or what’s important
Part 4b.1: In support of yak shaving
Part 4b.2: Yak shaving 2
Part 4c: Filter on the way in, Filter on the way out…
Part 4d.1: Scientific method
Part 4d.2: Quantified self
Part 5: Skin in the game
Part 6: Call to action

A note about the contents list; you can find the list in the main parts, the a,b,c parts are linked to from the main posts.  If you understand them in the context they are mentioned you can probably skip them, but if you need the explanation, click through.


If you understand exploration and exploitation, you realise that sometimes you need to stop exploring and take advantage of what you know based on the value of the information that you have. At other times you will find your exploitations are giving you diminishing returns, you are stagnating and you need to dive into the currents again, take some risks.  If you are accurately calibrated, you will know what to do, whether to sharpen the saw, educate yourself more or cut down the tree right now.

If you are not calibrated yet and you want to start, you might want to empirically assess your time.  You might like to ask yourself, in light of the information of your time use all on one page – am I exploring and exploiting enough?  Remember you probably make the most measurable and ongoing returns in the exploitation phase; exploration might seem more fun (finding exciting and new knowledge) and is where you grow, but are you sure that’s what you want to be doing, in regard to the value returned by exploiting?

Why were you not already exploring and exploiting in the right ratio?  Brains are tricky things.  You might need to bargain trade-offs to your own brain.  You might be dealing with a System2!understanding of what you want to do and trying to carry out a System1!motivated_action.  The best thing to do is to ask the internal disagreeing parts, “How could I resolve this disagreement in my head?”, “How will I resolve my indecision at this time?“, “How do I go about gathering evidence for better making this decision?”.  This all starts with noticing.  Noticing that disagreement, noticing the chance to resolve the stress in your head…

Sometimes we do things for bad, dumb, silly, irrational, frustrating, self-defeating, or irrelevant reasons.  All you really have is the time you have.  People take actions based on their desires and goals.  That’s fine.  You have 168 hours a week. As long as you are happy with how you spend it.  If you are not content, that’s when you have the choice to do something else.

Look at all the things that you are doing or not doing that do not contribute to a specific goal (a process called the immunity to change).  This fundamentally hits on a universal: namely, what you are doing with your time is everything you are choosing not to do with your time.  There is an equal and opposite opportunity cost to each thing that you do.  And that’s where we come to revealed preferences.

Revealed preferences are different from stated preferences; in fact they are distinctly different.  I would argue that revealed preferences are much more real, the only real preference, because they are made up of what actually happens, not just what you say you want to happen.  They are firmly grounded in reality: the reality of what you choose to do with your time (what you chose to do with your time yesterday).

On the one hand you can introspect, consider your existing revealed preferences and let that inform your future judgements and future actions.  As a person who has always watched every season of your favourite TV show, you might decide to be the type of person for which TV shows matter more than <exercise|relationships|learning> or any number of things.  Good!  Make that decision with pride!  What you cared about can be what you want to care about in the future, but it also might not be.  That’s why you might want to take stock of what you are doing and align what you are doing with your desired goals.  Change what you reveal with your ongoing actions so that they reflect who you want to be as a person.

Do you have skin in the game?  Who do you want to be as a person?  It’s a hard problem.  You want to figure out your desired goals.  I don’t know how exactly to do that but I have some ideas.  You can look around you at how other people do it, you can consider common human goals.  Without explaining why, “knowing what your goals are” is important, even if it takes a while to work that out.

If you know what your goals are you can compare your goals and the list of your empirical time use.  Realise that everything that you do will take time.  If these were your revealed preferences, what do you reveal that you care about?  But wait, don’t stop there, consider your potential:

Potential To:

  • Discover/Define/Declare what you really care about.
  • Define what results you think you can aim for within what you really care about.
  • Define what actions you can take to yield a trajectory towards those results.
  • Stick to it because it’s what you really want to do.  What you care about.

That’s what’s important right?  Doing the work you value because it leads towards your goals (which are the things you care about).  If you are not doing that, then maybe your revealed preferences are showing that you are not a very strategic human.  There is a solution to that.  Keeping yourself on track looks pretty easy when you think about it.

And if you find parts of your brain doing what they want to the detriment of your other goals, you need to reason with them.  This whole process of defining what you really care about and then heading towards it needs doing ASAP, or you are already making bad trade-offs with your time.

Consider this post a call to action as a chance to be the you that you really want to be!  Get to it! With passion and joy!



Meta: This took about 3 hours to write, and was held up by many distractions in my life.

I am not done.  Not by any means.  I feel like I left some unanswered questions along the way.  Things like:

  • “I don’t know what is good, am I somehow bound by a duty to go seeking out what is good or truly important to go do that?”
  • “So maybe I know what’s good, but I keep wondering if it is the best thing to do.  How can I be sure?”
  • “I am sure it is the best thing but I don’t seem to be doing it.  What’s up?”
  • “I am doing the things I think are right but other people keep trying to tell me I am not.  What now?”
  • “I have a track record of getting it wrong a lot.  How do I even trust myself this time?”
  • “I am doing the thing but I feel wrong, what should I do about that?”

And many more.  But I see other problems worth writing about first.

[Link] Timeline of Machine Intelligence Research Institute

5 riceissa 15 July 2017 04:57PM

Best Of Rationality Blogs RSS Feed

5 SquirrelInHell 10 July 2017 11:11AM

[Note: There's already a gather-it-all feed by deluks917, and the lw summary recently had a "most recommended" section, so it covers some of what I'm doing here.]

This is an RSS feed that aggregates the most valuable posts (according to me) from around 40 or 50 rationality blogs. It's relatively uncluttered, averaging 3-5 articles per week.

Feed URL: http://www.inoreader.com/stream/user/1005752783/tag/user-favorites

There's also a Facebook page version, and you can view it online using any of the available free RSS viewers.

Edit: see my comment below for details of the heuristics I use for selecting articles for the feed.

Self-conscious ideology

5 casebash 28 June 2017 05:32AM

Operating outside of ideology is extremely hard, if not impossible. Even groups that see themselves as non-ideological still seem to end up operating within an ideology of some sort.

Take for example Less Wrong. It seems to operate within a few assumptions:

  1. That studying rationality will provide us with a greater understanding of the world.
  2. That studying rationality will improve you as a person.
  3. That science is one of our most important tools for understanding the world.

...

These assumptions are also subject to some criticisms. Here's one criticism for each of the previous points:

  1. But will it, or are we dealing with problems that are simply beyond our ability to understand (see epistemic learned helplessness)? Do we really understand how minds work well enough to know whether an uploaded mind would still be "you"?
  2. But religious people are happier.
  3. Hume's critique of induction

I could continue discussing assumptions and possible criticisms, but that would be a distraction from the core point, which is that there are advantages to having a concrete ideology that is aware of its own limitations, as opposed to an implicit ideology that is beyond all criticism.

Self-conscious ideologies also have other advantages:

  • Quick and easy to write since you don't have to deal with all of the special cases.
  • Easy to share and explain. Imagine trying to explain to someone, "Rationality gives us a better understanding of the world, except when it does not". Okay, I'm exaggerating, epistemic humility typically isn't explained that badly, but it certainly complicates sharing.
  • Easier for people to adopt the ideology as a lens through which to examine the world, without needing to assume that it is literally true.

I wrote this post so that people can create self-conscious ideologies and have something to link to so as to avoid having to write up an explanation themselves. Go out into the world and create =P.

[Link] The Use and Abuse of Witchdoctors for Life

5 lifelonglearner 24 June 2017 08:59PM

[Link] Examples of Superintelligence Risk (by Jeff Kaufman)

4 Wei_Dai 15 July 2017 04:03PM

[Link] The Internet as an existential threat

4 Kaj_Sotala 09 July 2017 11:40AM

Steelmanning the Chinese Room Argument

4 cousin_it 06 July 2017 09:37AM

(This post grew out of an old conversation with Wei Dai.)

Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.

Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn't know the secret, or knows some other secret instead?

Clearly the only reasonable answer is "no, not in general".

Now imagine a person in the same situation, claiming to possess some mental skill that's hard for you to verify (e.g. visualizing four-dimensional objects in their mind's eye). Can you, by observing the input-output behavior, distinguish it from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?

Again, clearly, the only reasonable answer is "not in general".

Now imagine a sealed box that behaves exactly like a human, dutifully saying things like "I'm conscious", "I experience red" and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?

A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I'm not sure! Not at all!

Open thread, July 10 - July 16, 2017

3 Thomas 10 July 2017 06:31AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Postdoc opening at U. of Washington in AI law and policy

3 mindspillage 07 July 2017 06:53PM

The Unreasonable Effectiveness of Certain Questions

3 ig0r 04 July 2017 03:37AM

Cross-posted on my blog: http://garybasin.com/the-unreasonable-effectiveness-of-certain-questions/

About a year ago I was sitting around trying to grok the concept of Evil — where does it come from and how does it work? After a few hours of spinning in circles, I experienced a sudden shift. My mind conjured up the question: “Is this a thing out in the world or just a projection?” (Map vs Territory). Immediately, a part of my mind replied with “Well, this may not be anything other than a story we tell about the behavior of people we dislike”. Let’s ignore the truth value for today and notice the process. I’m interested in this mechanism of how a simple query — checking if I’m looking at a confusion of map with the territory — was able to instantly reframe a problem in a way that allowed me to effortlessly make a mental leap. What’s fascinating is that you don’t even need someone else’s brain to come up with these questions (although that often helps) — you can try to explain your problem to a rubber duck which creates a conversation with yourself and generates queries, or just go through a list of things to ask yourself when stuck.

 

There are a few different categories of these types of queries and many examples of each. For instance, when thinking about plans we can ask ourselves to perform prehindsight/inner simulator or reference class forecasting/outside view. When introspecting on our own behavior, we can perform sentence completion to check for limiting beliefs, ask questions like “Why aren’t I done yet?” or “What can I do to 10x my results?”. When thinking about problems or situations, we can ask ourselves to invert, reframe into something falsifiable, and taboo your words or perform paradjitsu. Or consider the miracle question: Imagine you wake up and the problem is entirely solved — what do you see, as concretely as possible, such that you know this is true?

So “we know more than we can tell” — somewhere in our head often lies the answer, if only we could get to it. In some sense, parts of our brain are not speaking to each other (do they even share the same ontologies?) except through our language processor, and only then if the sentences are constructed in specific ways. This may make you feel relieved if you think you can rely on your subconscious processing — which may have access to this knowledge — to guide you to effective action, or terrified if you need to use conscious reasoning to think through a chain of consequences.

My thoughts on Evil have continued to evolve since that initial revelation, partially driven by trying new queries on the concept (and partially from finally reading Nietzsche). Once you have a set of tools to throw at problems, the bottleneck to clearer thinking becomes remembering to apply them and actually having the time to do so. This makes me wonder about people that have formed habits to automatically apply a litany of these mental moves whenever approaching a problem — how much of their effectiveness and intelligence can this explain?

[Link] The evolution of superstitious and superstition-like behaviour

3 c0rw1n 23 June 2017 04:14PM

Sleeping Beauty Problem Can Be Explained by Perspectivism (II)

2 Xianda_GAO 19 July 2017 11:00PM

This is the second part of my argument. It mainly involves a counter example to SIA and Thirdism.

First part of my argument can be found here.

 

The 81-Day Experiment(81D):

There is a circular corridor connected to 81 rooms with identical doors. At the beginning all rooms have blue walls. A random number R is generated between 1 and 81. Then a painter randomly selects R rooms and paints them red. Beauty would be put into a drug-induced sleep lasting 81 days, spending one day in each room. An experimenter would wake her up if the room she currently sleeps in is red and let her sleep through the day if the room is blue. Her memory of each awakening would be wiped at the end of the day. Each time after beauty wakes up she is allowed to exit her room and open some other doors in the corridor to check the colour of those rooms. Now suppose one day, after opening 8 random doors, she sees 2 red rooms and 6 blue rooms. How should beauty estimate the total number of red rooms (R)?

For halfers, waking up in a red room does not give beauty any more information except that R>0. Randomly opening 8 doors means she took a simple random sample of size 8 from a population of 80. In the sample 2 rooms (1/4) are red. Therefore the total number of red rooms (R) can be easily estimated as 1/4 of the 80 rooms plus her own room, 21 in total.

For thirders, beauty's own room is treated differently. As SIA states, finding herself awake is as if she chose a random room from the 81 rooms and found out it is red. Therefore her room and the other 8 rooms she checked are all in the same sample. This means she has a simple random sample of size 9 from a population of 81. 3 out of 9 rooms in the sample (1/3) are red. The total number of red rooms can be easily estimated as a third of the 81 rooms, 27 in total.
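Spelling out the arithmetic behind the two point estimates (nothing new, just the numbers above):

$$\hat{R}_{\text{halfer}} = \tfrac{2}{8}\times 80 + 1 = 21, \qquad \hat{R}_{\text{thirder}} = \tfrac{3}{9}\times 81 = 27$$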

If a Bayesian analysis is performed, R=21 and R=27 would also be the cases with highest credence according to halfers and thirders respectively. It is worth mentioning that if an outside Selector randomly chooses 9 rooms and checks them, and it just so happens those 9 are the same 9 rooms beauty saw (her own room plus the 8 randomly chosen rooms), the Selector would estimate R=27 and have the highest credence for R=27. Because he and beauty have the exact same information about the rooms, their answers would not change even if they are allowed to communicate. So again, there will be a perspective disagreement according to halfers but not according to thirders, same as mentioned in part I.

However, the thirder's estimation is very problematic. Because beauty believes the 9 rooms she knows are a fair sample of all 81 rooms, it means red rooms (and blue rooms) are not systematically over- or under-represented. Since beauty is always going to wake up in a red room, she has to conclude the other 8 rooms are not a fair sample. Red rooms have to be systematically underrepresented in those 8 rooms. This means even before beauty decides which doors she wants to open, we can already predict with some confidence that those 8 rooms are going to contain fewer reds than the average of the 80 suggests. This supernatural predicting power is strong evidence against SIA and thirding.

Another way to see the problem is to ask beauty how many red rooms she would expect to see if we let her open another 8 rooms. According to SIA she should expect to see 24/72 × 8 ≈ 2.67 reds, meaning even after seeing 2 reds in the first 8 random rooms she would expect to see almost 3 in another set of randomly chosen rooms, which is counterintuitive to say the least.

The argument can also be structured this way. Consider the following three statements:

A: The 9 rooms are an unbiased sample of the 81 rooms.

B: Beauty is guaranteed to wake up in a red room.

C: The 8 rooms beauty chose are an unbiased sample of the other 80 rooms.

These statements cannot all be true at the same time. Thirders accept A and B, meaning they must reject C. In fact they must conclude the 8 rooms she chose would be biased towards blue. This contradicts the fact that the 8 rooms are randomly chosen.

It is also easy to see why beauty should not estimate R the same way the selector does. There are about 260 billion distinct combinations of 9 rooms out of 81. The selector has an equal chance to see any one of those 260 billion combinations. Beauty, on the other hand, could only possibly see a subset of the combinations. If a combination does not contain a red room, beauty would never see it. Furthermore, the more red rooms a combination contains, the more awakenings it has, leading to a greater chance for beauty to select said combination. Therefore, while the same 9 rooms are an unbiased sample for the selector, they are a sample biased towards red for beauty.
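For reference, the "about 260 billion" figure is just the binomial coefficient:

$$\binom{81}{9} = \frac{81!}{9!\,72!} \approx 2.6\times 10^{11}$$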

One might want to argue that after the selector learns a beauty has knowledge of the same 9 rooms, he should lower his estimation of R to match beauty's. After all, beauty could only know combinations from a subset biased towards red, so the selector should also reason his sample is biased towards red. This argument is especially tempting for SSA supporters, since if true it means their answer also yields no disagreements. Sadly this notion is wrong; the selector ought to retain his initial estimation. To the selector, a beauty knowing the same 9 rooms simply means that after waking up in one of the red rooms in his sample, beauty made a particular set of random choices coinciding with said sample. It offers him no new information about the other rooms. This point can be made clearer if we look at how people reach an agreement in an ordinary problem, which will be shown by another thought experiment in the next part.

Book Reviews

2 Torello 18 July 2017 02:19PM

Mini map of s-risks

2 turchin 08 July 2017 12:33PM
S-risks are risks of future global infinite suffering. The Foundational Research Institute has suggested they are the most serious class of existential risks, even more serious than painless human extinction. So it is time to explore the types of s-risks and what to do about them.

Possible causes and types of s-risks:
"Normal Level" - some forms of extreme global suffering exist now, but we ignore them:
1. Aging, loss of loved ones, moral illness, infinite suffering, dying, death and non-existence - for almost everyone, because humans are mortal
2. Nature as a place of suffering, where animals constantly eat each other. Evolution as a superintelligence, which created suffering and uses it for its own advance.

Colossal level:
1. Quantum immortality creates bad immortality - I survive as an old but always-dying person, because of weird observation selection.
2. AI goes wrong. 2.1 Roko's basilisk 2.2 Error in programming 2.3 Hacker's joke 2.4 Indexical blackmail.
3. Two AIs go to war with each other, and one of them is benevolent to humans, so the other AI tortures humans to gain a bargaining position in the future deal.
4. X-risks which include infinite suffering for everyone - natural pandemic, cancer epidemic, etc.
5. Possible worlds (in Lewis's terms) with infinite-suffering qualia in them. For any human, a possible world with his infinite suffering exists. Modal realism makes them real.

Ways to fight s-risks:
1. Ignore them by boxing personal identity inside today
2. Benevolent AI fights "measure war" to create infinitely more copies of happy beings, as well as trajectories in the space of the possible minds from sufferings to happiness

Types of most intensive sufferings:

Qualia based, listed from bad to worse:
1. Eternal, but bearable in each moment suffering (Anhedonia)
2. Unbearable sufferings - sufferings to which death is the preferable outcome (cancer, death in fire, death by hanging). However, as Marcus Aurelius said: “Unbearable pain kills. If it does not kill, it is bearable."
3. Infinite suffering - qualia of the infinite pain, so the duration doesn’t matter (not known if it exists)
4. Infinitely growing eternal sufferings, created by constant upgrade of the suffering’s subject (hypothetical type of sufferings created by malevolent superintelligence)

Value based s-risks:
1. Most violent action against one’s main values: like "brutal murder of children”
2. Meaninglessness, acute existential terror or derealisation with depression (Nabokov’s short story “Terror”) - an incurable and logically proven understanding of the meaninglessness of life
3. Death and non-existence are forms of counter-value sufferings.

Time-based:
1. Infinite time without happiness.

Subjects, who may suffer from s-risks:

1. Anyone as individual person
2. Currently living human population
3. Future generation of humans
4. Sapient beings
5. Animals
6. Computers, neural nets with reinforcement learning, robots and AIs.
7. Aliens
8. Unembodied sufferings in stones, Boltzmann brains, pure qualia etc.

My position

It is important to prevent s-risks, but not by increasing the probability of human extinction, as that would mean we have already fallen victim to blackmail by non-existent things.

Also, s-risk is already the default outcome for anyone personally (so it is global), because of inevitable aging and death (and maybe bad quantum immortality).

People prefer the illusory certainty of non-existence to the hypothetical possibility of infinite suffering. But nothing is certain after death.

In the same way, overestimating animal suffering results in underestimating human suffering and the risks of human extinction. But animals suffer more in the forests than on animal farms, where they are fed every day, get basic healthcare, and there are no predators who will eat them alive, etc.

The hope that we will prevent future infinite suffering if we stop progress or commit suicide on the personal or civilizational level is wrong. It will not help animals. It will not help with suffering in the possible worlds. It will not even prevent suffering after death, if quantum immortality in some form is true.

But the fear of infinite suffering makes us vulnerable to any type of "acausal" blackmail. The only way to fight suffering in possible worlds is to create an infinitely larger possible world with happiness.


Effective Altruism : An idea repository

2 Onemorenickname 25 June 2017 12:56AM

Metainformations :

Personal Introduction

I came to define myself as a non-standard Effective Altruist. I’ve always been interested in Effective Altruism, way before I’ve even heard of EA. When I was younger, I simply thought I was altruist, and that what people did was … noise at best. Basically, naive ways to relieve one’s conscience and perpetuate one’s culture.

Since primary school I thought about global problems and solutions to these problems. So much so that the word “project” internally connotes “project solving some global problems”. As such, EA should have interested me.

However, it didn’t. The main reason was that I saw EA as just another group of charitists. I’ve always been skeptical toward charity, the reasons being “They think too small” and “There is too much funding in standard solutions rather than in finding new ones”.

I think this exemplifies a problem about EA’s communication.

A Communication Problem

Most people I know got to know Effective Altruism through EffectiveAltruism.org.

Because of that website, these people see EA as a closed organization that helps people direct funds to better charities and find better careers.

That was my opinion of EA until I saw the grant offer : a closed organization with already defined solutions wouldn’t fund new ideas. As such, I changed my outlook of EA. I researched a bit more about it, and found an open and diverse community.

But I am a busy person, therefore I have to use filters before putting more time into researching something. I made my impression from:

What convinced me of that impression was the website’s content :

  • The tabs are “About, Blog, Donate, Effectively, Resources, Grants, Get Involved”. This looks like a standard showcase website of a closed organization with a call to donate.

  • The first four reading suggestions after the introduction are about charity and career choice. This leads people to thinking that EA is solely about that.

  • In the introduction, the three main questions are “Which cause/career/charity ?”.

I didn’t stop there, and I read more of that website, but it was along those same lines.

Counting me, my friends and people I met on LW and SSC, this directly led to losing 10-15 potential altruists in the community. Given that we were already interested in applying rationality to changing the world and my situation is not isolated (the aforementioned website is the first hit for “Effective Altruism” on Google), I do think that it is an important issue to EA.

Solutions

Well, about the website :

  • Adding a tab “Open Ideas”/“Open projects”, “Forum” and/or “Communities”. The “Get Involved” is the only tab that offers (and only implicitly) some interaction. The new Involvement Guide is an action in the right direction.

  • Putting emphasis on the different communities and approaches. Digging, I’ve seen that there are several communities. However, the most prominent discriminating factor was the location. It would be nice to see a presentation of various approaches of EA, especially in the first resources new members get in touch with.

But more than changing the website, I think that what EA lacks is a platform dedicated to collective thinking about new ideas.

Projects don’t happen magically : people think, come to an idea, think more about that idea, criticize it, and if all goes well, maybe build a plan out of it, gather, and begin a project together. If we truly want new projects to emerge, having such a platform is of utmost importance.

The current forum doesn't cut it: it isn't meant for that end. It's easier to build a forum dedicated to that than to try to artificially maintain a balance between "New Ideas" posts and "Information Sharing" posts so that neither gets overshadowed. The same problem applies to existing reddit boards and facebook groups.

That platform should contain at least the following :

  • A place where new ideas are posted and criticized. A Reddit board, a Facebook group, a forum.

  • A place where ideas are discussed interactively. An IRC channel, a web chat, a Discord server.

  • A place where ideas/projects are improved collectively and incrementally. A web pad, a Google doc, a Git repository.

  • A basic method to deal with new ideas / project collaboration. Some formatting, some questions that every idea should answer (What problem does it solve ?, How critical is it ?, What’s the solution variance ?), content deletion policy. A sticky-post on the forum, an other Google Doc.

Questions

  • Do you think such a platform would be useful ? Why ?

  • Would you be interested in building such a platform ? Either technically (by setting up the required tools), marketing-ly (by gathering people) or content-ly (by posting and criticizing ideas).

[Link] Sam Harris and Scott Adams debate Trump: a model rationalist disagreement

1 James_Miller 20 July 2017 12:18AM

Looking for ideas about Epistemology related topics

1 Onemorenickname 19 July 2017 06:56PM

Notes :

  • "Epistemy" refers to the second meaning of epistemology : "A particular theory of knowledge".
  • I'm more interested in ideas to further the thoughts exposed here than exposing them.

 

Good Experiments

The point of "Priors are useless" is that if you update after enough experiments, you tend to the truth distribution regardless of your initial prior distribution (assuming its codomain doesn't include 0 and 1, or at least that it doesn't assign 1 to a non-truth and 0 to a truth); a small sketch after the list below illustrates this convergence. However, "enough experiments" is magic:

  1. The pure quantitative aspect : you might not have time to do these experiments in your lifetime.
  2. Having independent experiments is not defined. Knowing which experiments are pairwise independent embeds higher-level knowledge that could easily be used to derive truths directly. If we try to prove a mathematical theorem, comparing the pairwise success probability correlations of different approaches would give much more insights and results than trying to prove it as usual.
  3. We don't need pairwise independence. For instance, if we assume P ≠ NP because we couldn't prove it, we assume so because we expect the techniques used not to be all correlated together. However, this expectation is either wrong (Small list of fairly accepted conjectures that were later disproved), or stems from higher-order knowledge (knowledge about knowledge). Infinite regress.
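As a toy illustration of the convergence claim in the opening paragraph (my own sketch, not something from the post), here is a Beta-Bernoulli example in which two very different priors end up agreeing after many coin flips; the point remains that "many" has to be large:

```python
# Two different Beta priors over a coin's bias converge to similar
# posterior means once enough independent flips have been observed.
import random

def posterior_mean(alpha, beta, heads, tails):
    # Mean of the Beta(alpha + heads, beta + tails) posterior.
    return (alpha + heads) / (alpha + beta + heads + tails)

random.seed(0)
true_bias = 0.7
flips = [random.random() < true_bias for _ in range(10000)]
heads = sum(flips)
tails = len(flips) - heads

print(posterior_mean(1, 1, heads, tails))   # uniform prior
print(posterior_mean(50, 2, heads, tails))  # prior strongly biased the wrong way
# Both posterior means land close to 0.7; the residual gap is what
# "enough experiments" has to wash out.
```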

Good Priors

However, conversely, having a good prior distribution is magic too. You can have a prior distribution assigning 1 to truths and 0 to non-truths. So you might want the additional requirement that the prior distribution be computable. But there are two problems:

  1. There aren't many known computable prior distributions. Occam's razor (in terms of Kolmogorov complexity in a given language) is one, but it fails miserably in most interesting situations. Think of poker, or a simplified version thereof: A+K+Q. If someone bets, the simplest explanation is that he has good cards. Most interesting situations where we want to apply Bayesianism come from human interactions (we managed to do hard sciences before Bayesianism, and we still have trouble with the social sciences). As such, failing to take bluffing into account is a big epistemic fault for a prior distribution.
  2. Evaluating the efficiency of a given prior distribution will be done over the course of several experiments, and hence requires a higher order prior distribution (a prior distribution over prior distributions). Infinite regress.

Epistemies

In real life, we don't encounter these infinite regresses. We use epistemies. An epistemy is usually a set of axioms and a methodology to derive truths with these axioms. They form a trusted core that we can use if we understand the limits of the underlying meta-assumptions and methodology.

Epistemies are good because, instead of thinking about the infinite chain of higher priors every time we want to prove a simple statement, we can rely on an epistemy. But they are regularly not defined, not properly followed, or not even understood, leading to epistemic faults.

Questions

As such, I'm interested in the following :

  • When and how do we define new epistemies ? Eg, "Should we define an epistemy for evaluating the Utility of actions for EA ?",  "How should we define an epistemy to build new models of human psychology ?", etc.
  • How to account for epistemic changes in Bayesianism ? (This requires self-reference, which Bayesianism lacks.)
  • How to make sense of Scott Alexander's yearly predictions ? Is it only a blackbox telling us to bet more on future predictions, or do we have a better analysis ?
  • What prior distributions are interesting to study human behavior ? (For a given restricted class of situations, of course.)
  • Are answers to the previous questions useful ? Are the previous questions meaningful ?

I'm looking for ideas and pointers/links.

Even if your thought seems obvious, if I didn't explicitly mention it, it's worth commenting it. I'll add it to this post.

Even if you only have idea for one of the question, or a particular criticism of a point made in the post, go on.

 

Thank you for reading this far.

Open thread, Jul. 17 - Jul. 23, 2017

1 MrMind 17 July 2017 08:15AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Red Teaming Climate Change Research - Should someone be red-teaming Rationality/EA too?

1 casebash 07 July 2017 02:16AM

We need a better theory of happiness and suffering

1 toonalfrink 04 July 2017 08:14PM

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away, as we just mumble something like "QALYs" and make a few guesses about what a five year old would like.

I'd like to dispel the myth that a 5 year old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can come quite far with introspection plus assuming that similar brains yield similar qualia. We can even correlate happiness with brain scans.

The former is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up, if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)

Open thread, Jul. 03 - Jul. 09, 2017

1 MrMind 03 July 2017 07:20AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

View more: Next