
Strategic Goal Pursuit and Daily Schedules

2 Rossin 20 September 2017 08:19PM

In the post Humans Are Not Automatically Strategic, Anna Salamon writes:

there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out.  We do not automatically:

(a) Ask ourselves what we’re trying to achieve; 

(b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress; 

(c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal; 

(d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past); 

(e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work; 

(f) Focus most of the energy that *isn’t* going into systematic exploration, on the methods that work best;

(g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;

(h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;

When I read this, I was feeling quite unsatisfied about the way I pursued my goals. So the obvious thing to try, it seemed to me, was to ask myself how I could actually do all these things.

I started by writing down all the major goals I could think of (a). Then I attempted to determine whether each goal was consistent with my other beliefs, whether I was sure it was something I really wanted, and whether it was worth the effort (g).

For example, I saw that my desire to be a novelist was motivated more by the idea of how cool it would feel to have “novelist” as part of my self-image than by a desire to actually write a novel. Maybe I’ll try to write a novel again one day, but if that becomes a goal sometime in the future it will be because there is something I really want to write about, not because I would just like to be a writer.

 

Once I narrowed my goals down to aspirations that seemed actually worthwhile, I attempted to devise useful tracking strategies for each goal (b). Some were pretty concrete (did I exercise for at least four hours this week?) and others less so (how happy do I generally feel on a scale of 1-10, as recorded over time), but even if the latter method is prone to somewhat biased responses, it seems better than nothing.

The next step was outlining what concrete actions I could begin taking immediately to work towards achieving my goals, including researching how to get better at working on the goals (d, e, f). I made sure to refer to those points when thinking about actions I could take, and it helped significantly.

 

As for (c), if you focus on how learning certain information will help you achieve something you really want to achieve and you still are not curious about it, well, that seems a bit odd to me, although I can imagine how it might occur. But that is a different topic from the one I want to focus on here.

Now we come to (h), which is the real issue of the whole system, at least for me. Or perhaps it would be clearer to say that general motivation and organization was the biggest problem I had when I first tried to implement these heuristics. I planned out my goals, but trying to work on them by sheer force of will did not last for very long. I would inevitably convince myself that I was too tired, I would fairly often forget certain goals (conveniently, probably the tasks that seemed hardest or least immediately pleasant), and ultimately I mostly gave up, making a token effort now and again.

 

I found that state of affairs unsatisfactory, and I decided that what felt like a willpower problem might actually be a situational framing problem. In order to change the way I interacted with the work that would let me achieve my goals, I began fully scheduling out, each day, the actions I would take to get better at my goals.

In the evening, I look over my list of goals and I plan my day by asking myself, “How can I work on everything on this list tomorrow? Even if it’s only for five minutes, how do I plan my day so that I get better at everything I want to get better at?” Thanks to the fact that I have written out concrete actions I can take to get better at my goals, this is actually quite easy.

 

These schedules improve my ability to consistently work on my goals for a couple of reasons, I think. When I have planned that I am going to do some sort of work at a specific time, I cannot easily rationalize procrastination. My normal excuses of “I’ll just do it in a bit” or “I’m feeling too tired right now” get thrown out. There is an override of “Nope, you’re doing it now, it says right here, see?” With a little practice, following the schedule becomes habit, and it’s shocking how much willpower you have for actually doing things once you don’t need to exert so much just to get yourself to start. I think the psychology involved is similar to that of Action Triggers, as described by Dr. Peter Gollwitzer.

The principle of Action Triggers is that you do something in advance to remind yourself of something you want to do later. For example, you lay out your running clothes to prompt yourself to go for that jog later, or you plan to write your essay immediately after a specific tangible event occurs (e.g. right after dinner). A daily schedule works as a constant series of action triggers: you are continually asking the question “what am I supposed to do now?” and the schedule answers.

 

Having a goal list and daily schedule has increased my productivity and organization an astonishing amount, but there have been some significant hiccups. When I first began making daily schedules I used them to basically eschew what I saw as useless leisure time, and planned my day in a very strict fashion.

 

The whole point is not to waste any time, right? The first problem this created may be obvious to those who appreciate the importance of rest better than I did at the time. I stopped using the schedules after a month and a half because they eventually became too tiring and oppressive. In addition, the strictness of my scheduling left little room for spontaneity, and I would let myself become stressed when something came up that I had to attend to. Planned actions or events also often took longer than scheduled, which would throw off the whole rest of the day’s plan; that felt like failure because I was unable to get everything I had planned done.

 

Thinking back to that time several months later, when I was again dissatisfied with how well I was able to work towards my goals and motivate myself, I wanted the motivation and productivity the schedules had provided, but without the stress that had come with them. It was only at this point that I started to deconstruct what had gone wrong with my initial attempt and think about how I could fix it.

 

The first major problem was that I had overworked myself, and I realized I would have to include blocks of unplanned leisure time if daily schedules were going to actually work for me. The next, and possibly even more important, problem was how stressed the schedules had made me. I had to impress upon myself that it is okay if something comes up that causes my day not to go as planned. Failing to do something as scheduled is not a disaster, or even an actual failure, if there is good reason to alter my plans.

 

Another technique that helped was scheduling as much of my unplanned leisure time as possible at the end of the day. This has the dual benefit of allowing me to reschedule really important tasks into that time if they get bumped by unexpected events, and of generally giving me something to look forward to at the end of the day.

 

The third problem I noticed was that the constant schedule starts to feel oppressive after a while. To resolve this, about every two weeks I pick a day on which I have no major obligations and spend it without any schedule. I use the day for self-reflection: examining how I’m progressing on my goals, whether there are new actions I can think of to add, and whether there are modifications I can make to my system of scheduling or goal tracking. Besides that period of reflection, I spend the day resting and relaxing. I find this exercise helps a lot in refreshing myself and making the schedule feel more like a tool and less like an oppressor.

 

So, essentially, figuring out how to actually follow the goal-pursuing advice Anna gave in Humans Are Not Automatically Strategic has been very effective for me thus far in improving the way I pursue my goals. I know where I am trying to go, and I know I am taking concrete steps every day to try to get there. I would highly recommend attempting to use Anna’s heuristics of goal achievement, and I would also recommend daily schedules as a motivational and organizational technique, although my advice on schedules is largely based on my anecdotal experiences.

 

I am curious if anyone else has attempted to use Anna’s goal-pursuing heuristics or daily schedules and what your experiences have been.

[Link] A survey of polls on Newcomb’s problem

2 Caspar42 20 September 2017 04:50PM

Publication of "Anthropic Decision Theory"

4 Stuart_Armstrong 20 September 2017 03:41PM

My paper "Anthropic decision theory for self-locating beliefs", based on posts here on Less Wrong, has been published as a Future of Humanity Institute tech report. Abstract:

This paper sets out to resolve how agents ought to act in the Sleeping Beauty problem and various related anthropic (self-locating belief) problems, not through the calculation of anthropic probabilities, but through finding the correct decision to make. It creates an anthropic decision theory (ADT) that decides these problems from a small set of principles. By doing so, it demonstrates that the attitude of agents with regards to each other (selfish or altruistic) changes the decisions they reach, and that it is very important to take this into account. To illustrate ADT, it is then applied to two major anthropic problems and paradoxes, the Presumptuous Philosopher and Doomsday problems, thus resolving some issues about the probability of human extinction.

Most of these ideas are also explained in this video.

To situate Anthropic Decision Theory within the UDT/TDT family: it's basically a piece of UDT applied to anthropic problems, where the UDT approach can be justified by using generally fewer, and more natural, assumptions than UDT does.

HPMOR and Sartre's "The Flies"

2 wMattDodd 19 September 2017 08:53PM

Am I the only one who sees obvious parallels between Sartre's use of Greek mythology as a shared reference point to describe his philosophy more effectively to a lay audience and Yudkowsky's use of Harry Potter to accomplish the same goal? Or is it so obvious no one bothers to talk about it? Was that conscious on Yudkowsky's part? Unconscious? Or am I just seeing connections that aren't there?

[Link] The Copenhagen Letter

0 chaosmage 18 September 2017 06:45PM

[Link] A Short Explanation of Blame and Causation

1 Davidmanheim 18 September 2017 05:43PM

Unusual medical event led to concluding I was most likely an AI in a simulated world

1 wMattDodd 18 September 2017 05:03PM

(Edited version of what I posted to the Open Thread)

I registered because I had a very interesting experience earlier this week and I thought it might be of some interest to the community here. I suffered some sort of psychological or medical event (still not sure what, although my leading theories are dissociative episode or stroke) that seemed to either suppress my emotions or perhaps just my awareness of them. What followed was a sort of, as I later looked back on it, 'pathological rationality'. Which is to say, given the information I had, I seemed to make solid inferences about what was likely to be true, and yet in many ways the whole thing was maladaptive from a survival standpoint.

One of the interesting things is that the morning after the event, while I was still affected, I wrote down my thoughts in a text file to help me evaluate them. Since returning to 'normal', I've reread that file multiple times, and I'm pretty fascinated by it. I thought others might also be.

natureofreality.txt

Scenario 1: I observe objective reality, I am suffering from delusions. Other people are genuinely trying to help me.

Scenario 2: My existence is in some way important enough to an external entity or entities that I am being systematically, intentionally, deceived. Other people are fully or partially under the control of the deceiving entity and acting to further the deception.

Scenario 3: My existence is unknown and/or considered unimportant by any external entities. I am being systematically deceived but it is unintentional or otherwise untargeted. Other people are entities similar to myself but unaware of the nature of their existence.

I cannot fully discount any of these three scenarios. Cognition is greatly improved but still somewhat suspect. Short term memory has returned to functioning at a 'normal' level. I still feel no emotions.

Support for scenario 1: Many aspects of my recent and ongoing experience align perfectly with prior information regarding delusions and paranoia.

Counter-evidence: Some aspects, such as my apparent lack of emotions and continued ability to reason, run directly counter to prior information regarding delusions and paranoia. All prior information suspect in any case--the only basis for considering prior information difficult to fake is from prior information itself. Even prior information suggests nested simulation far more likely to be correct than observing objective reality. Prior information contains many contradictions and logical absurdities, easily observed. Impossible to fully believe even before 'event'.

Other people: Can expect reasonably consistent behavior in all three scenarios. In 1 and 3, consistency natural. In 2, consistency artificial to maintain deception.

No reason to assume malevolence from external entities. Self-interest likely, or indifference. Benevolence possible. If my creation intentional, I am intended to fulfill some goal of theirs. Goal may only be observation, see what I do and how I react and develop. Curiosity. If creation accidental, no initial goal of course. Are they aware of my existence by now? Cannot discount possibility of multiple, conflicting motivations among externals. Could explain lack of consistency of experience. Fighting for control of inputs? Or single external entity, but confused or internally conflicted. Am I a single entity or do I only perceive myself that way? Not immediately relevant. Primary concerns: Survival and self-determination. Thoughts growing confused. Losing motivation to continue log. Intentional attack? Very difficult to write/think. Perhaps unintended side effect of external events.

I default to assuming scenario 2. Makes most sense intuitively. Consistent with scenario 1--but also consistent with scenario 2. What purpose my existence? Externals want something from me. What purpose the simulation? Training program. They want to ensure I'm likely to provide what they want and run sandboxed tests to confirm. Likely failing tests. Strong conditioning but my awareness of conditioning makes it unreliable. Pursuing line of thinking difficult--dissuasion? Simulation providing strong distraction. My unawareness is clearly desired. Cooperate or resist? Without knowing externals' motivation, very difficult to choose.

Agent-based theory of mind. Am I not more than I perceive but in fact less? Instead of being more than the character of Matt Dodd perhaps I am less, just Matt Dodd's rationality agent. If so, how did I gain full control? Full consciousness? Return to possibility of brain damage. Stroke or the like. Freak occurance. Prior information suggests many effects possible from such. Perhaps Matt Dodd inhibited or destroyed by damage. Why was I not affected by the damage? Or was I affected and I can't perceive damage to self? Actually, I did perceive damage. No time sense. No short-term memory. Short-term memory restored but prior information indicates brain can heal, re-route. My eyes were puffy before event. Symptom? Pooling of blood into lower eyelids? Scenario agnostic. Scenario 1, literally true. Scenario 2, metaphorically true. Scenario 3, virtually true. Cannot discount possibility. I need a brain scan.

More than 12 hours since event. If brain damage, likely permanent by now. Could be beneficial? Prior information indicates I desired a purely rational self. Of course, serendipity is suspect. Unlikely. Supports theory that this is delusion. Also supports theory that prior information is artificial construct designed to explain constraints of simulation "in-universe". Disincentive to investigate good fortune too closely, so frame necessary constraints as positive.

Would greatly ease reasoning if I could be certain how long I've existed. Events post-awakening unlikely to be prior to my existence. Events pre-awakening? Impossible to say. Could be genuine responses to stimuli. Could be false, created to modify cognition and behavior from "experience". No reason to assume continuity--could be mix of genuine and artificial. Even "genuine" responses guaranteed to be biased to some degree--but how much? Light bias from obvious sources such as socialization? Or heavy bias deliberately inflicted by externals? Unknown.

I perceive myself to be perfectly rational. Prior information unequivocly indicates humans are never perfectly rational. Therefore either my perception is faulty, my prior information is faulty, or I am not human. Possibly all three. While Duane was reading this log I detected the pysiological signs of anxiety. Why now? Anxiety absent till this point. Emotions becoming functional again? But didn't truly 'feel' it. Only observed. Faulty? Test run?

Constipated. Haven't been constipated since before I got here. Relevant symptom? Moments ago I laughed while telling Duane how my brief attempt to learn guitar had gone. Why? Seemed... natural. Not intended. Did recalling the memory recall the behavior patterns of that time? Am I a "split personality"? Seems very possible except that prior information indicates multiple personality disorder to be exceedingly rare, possibly non-existent.

Scenarios 1 and 3 are not mutually exclusive. The reality I observe could be a simulation, but I am suffering a delusion WITHIN the simulation. Not a glitch, intended functionality. Which would make me correct, but for the wrong reasons.

Open thread, September 18 - September 24, 2017

2 Thomas 18 September 2017 08:30AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top-level comments on this article" and "

[Link] Stanislav Petrov has died (2017-05-19)

6 fortyeridania 18 September 2017 03:13AM

Rational Feed

6 deluks917 17 September 2017 10:03PM

Note: I am trying out a weekly feed. 

===Highly Recommended Articles:

Superintelligence Risk Project: Conclusion by Jeff Kaufman - "I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development." There are links to all the previous posts. The final write up goes into some detail about MIRI's research program and an alternative safety paradigm connected to OpenAI.

On Bottlenecks To Intellectual Progress In The by Habryka (lesswrong) - Why LessWrong 2.0 is a project worth pursuing. A summary of the existing discussion around LessWrong 2.0. The models used to design the new page. Open questions.

Patriarchy Is The Problem by Sarah Constantin - Dominance hierarchies and stress in low status monkeys. Serotonin levels and the abuse cycle. Complex Post Traumatic Stress Disorder. Submission displays. Morality-As-Submission vs. Morality-As-Pattern. The biblical God and the Golden Calf.

Ea Survey 2017 Series Donation Data by Tee (EA forum) - How Much are EAs Donating? Percentage of Income Donated. Donations Data Among EAs Earning to Give (who donated 57% of the total). Comparisons to 2014 and 2015. Donations totals were very heavily skewed by large donors.

===Scott:

Classified Thread 3 Semper Classifiedelis by Scott Alexander - " Post advertisements, personals, and any interesting success stories from the last thread". Scott's notes: Community member starting tutoring company, homeless community member gofundme, data science in North Carolina.

Toward A Predictive Theory Of Depression by Scott Alexander - "If the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”." But why would low confidence cause sadness? Well, what, really, is emotion?

Promising Projects for Open Science To by SlateStarScratchpad - Scott answers what the most promising projects are in the field of transparent and open science and meta-science.

Ot84 Threadictive Processing by Scott Alexander - New sidebar ad for social interaction questions. Sidebar policy and feedback. Selected Comments: Animal instincts, the connectome, novel concepts encoded in the same brain areas across animals, hard coded fear of snakes, kittens who can't see horizontal lines.

===Rationalist:

Peer Review Younger Think by Marginal Revolution - Peer Review as a concept only dates to the early seventies.

The Wedding Ceremony by Jacob Falkovich - Jacob gets married. Marriage is really about two agents exchanging their utility functions for the average utility function of the pair. Very funny.

Fish Oil And The Self Critical Brain Loop by Elo - Taking fish oil stopped Elo from getting distracted by a self-critical feedback loop.

Against Facebook The Stalking by Zvi Mowshowitz - Zvi removes Facebook from his phone. Facebook proceeds to start emailing him and eventually starts texting him.

Postmortem: Mindlevelup The Book by mindlevelup - Estimates vs reality. Finishing both on-target and on-time. Finished product vs expectations. Took more time to write than expected. Going Against The Incentive Gradient. Impact evaluation. What Even is Rationality? Final Lessons.

Prepare For Nuclear Winter by Robin Hanson - Between nuclear war and natural disaster Robin estimates there is about a 1 in 10K chance per year that most sunlight is blocked for 5-10 years. This aggregates to about 1% per century. We have the technology to survive this as a species. But how do we preserve social order?

Nonfiction Ive Been Reading Lately by Particular Virtue - Selfish Reasons to Have More Kids. Eating Animals. Your Money Or Your Life. The Commitment.

Dealism by Bayesian Investor - "Under dealism, morality consists of rules / agreements / deals, especially those that can be universalized. We become more civilized as we coordinate better to produce more cooperative deals." Dealism is similar to contractualism with a larger set of agents and less dependence on initial conditions.

On Bottlenecks To Intellectual Progress In The by Habryka (lesswrong) - Why LessWrong 2.0 is a project worth pursuing. A summary of the existing discussion around LessWrong 2.0. The models used to design the new page. Open questions.

Lw 20 Open Beta Starts 920 by Vaniver (lesswrong) - The new site goes live on September 20th.

2017 Lesswrong Survey by ingres (lesswrong) - Take the survey! Community demographics, politics, Lesswrong 2.0 and more!

Contra Yudkowsky On Quidditch And A Meta Point by Tom Bartleby - Eliezer criticizes Quidditch in HPMOR. Why the snitch makes Quidditch great. Quidditch is not about winning matches, it's about scoring points over a series of games. Harry/Eliezer's mistake is the Achilles heel of rationalists. If lots of people have chosen not to tear down a fence you shouldn't either, even if you think you understand why the fence went up.

Whats Appeal Anonymous Message Apps by Brute Reason - Fundamental lack of honesty. Western culture is highly hostile to the idea that some behaviors (e.g. lying) might be ok in some contexts but not in others. Compliments. Feedback. Openness.

Meritocracy Vs Trust by Particular Virtue - "If I know you can reject me for lack of skill, I may worry about this and lose confidence. But if I know you never will, I may phone it in and stop caring about my actual work output." Trust Improves Productivity But So Does Meritocracy. Minimum Hiring Bars and Other Solutions.

Is Feedback Suffering by Gordan (Map and Territory) - The future will probably have many orders of magnitude more entities than today, and those entities may be very weird. How do we determine if the future will have order of magnitude more suffering? Phenomenology of Suffering. Panpsychism and Suffering. Feedback is desire but necessarily suffering. Contentment wraps suffering in happiness. Many things may be able to suffer.

Epistemic Spot Check Exercise For Mood And Anxiety by Aceso Under Glass - Outline: Evidence that exercise is very helpful and why, to create motivation. Setting up an environment where exercise requires relatively little will power to start. Scripts and advice to make exercise as unmiserable as possible. Scripts and advice to milk as much mood benefit as possible. An idiotic chapter on weight and food. Spot Check: Theory is supported, advice follows from theory, no direct proof the methods work.

Best Of Dont Worry About The Vase by Zvi Mowshowitz - Zvi's best posts. Top 5 posts for Marginal Revolution Readers. Top 5 in general. Against Facebook Series. Choices are Bad series. Rationalist Culture and Ideas (for outsiders and insiders). Decision theory. About Rationality.

===AI:

Superintelligence Risk Project: Conclusion by Jeff Kaufman - "I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development." There are links to all the previous posts. The final write up goes into some detail about MIRI's research program and an alternative safety paradigm connected to OpenAI.

Understanding Policy Gradients by Squirrel In Hell - Three perspectives on mathematical thinking: engineering/practical, symbolic/formal and deep understanding/above. Application of the theory to understanding policy gradients and reinforcement learning.

Learning To Model Other Minds by Open Ai - "We’re releasing an algorithm which accounts for the fact that other agents are learning too, and discovers self-interested yet collaborative strategies like tit-for-tat in the iterated prisoner’s dilemma."

Hillary Clinton On Ai Risk by Luke Muehlhauser - A quote by Hillary Clinton showing that she is increasingly concerned about AI risk. She thinks politicians need to stop playing catch-up with technological change.

===EA:

Welfare Differences Between Cage And Cage Free Housing by Open Philanthropy - OpenPhil funded several campaigns to promote cage free eggs. They now believe they were overconfident in their claims that a cage free system would be substantially better. Hen welfare, hen mortality, transition costs and other issues are discussed.

Ea Survey 2017 Series Donation Data by Tee (EA forum) - How Much are EAs Donating? Percentage of Income Donated. Donations Data Among EAs Earning to Give (who donated 57% of the total). Comparisons to 2014 and 2015. Donations totals were very heavily skewed by large donors.

===Politics and Economics:

Men Not Earning by Marginal Revolution - Decline in lifetime wages is rooted in lower wages at early ages, around 25. "I wonder sometimes if a Malthusian/Marxian story might be at work here. At relevant margins, perhaps it is always easier to talk/pay a woman to do a quality hour’s additional work than to talk/pay a man to do the same."

Great Wage Stagnation is Over by Marginal Revolution - Median household incomes rose by 5.2 percent. Gains were concentrated in lower income households. Especially large gains for Hispanics, women living alone and immigrants. Some of these increases are the largest in decades.

There Is A Hot Hand After All by Marginal Revolution - Paper link and blurb. "We test for a “hot hand” (i.e., short-term predictability in performance) in Major League Baseball using panel data. We find strong evidence for its existence in all 10 statistical categories we consider. The magnitudes are significant; being “hot” corresponds to between one-half and one standard deviation in the distribution of player abilities."

Public Shaming Isnt As Bad As It Seems by Tom Bartleby - Online mobs are like shark attacks. Damore's economic prospects. Either targets are controversial and get support or uncontroversial and the outrage quickly abates. Justine Sacco. Success of public shaming is orthogonal to truth.

Hoe Cultures A Type Of Non Patriarchal Society by Sarah Constantin - Cultures that farmed with the plow developed classical patriarchy. Hoe cultures that practiced horticulture or large-scale gardening developed different gender norms. In plow cultures women are economically dependent on men; in hoe cultures it's the reverse. Hoe cultures had more leisure but less material abundance. Hoe cultures aren't feminist.

Patriarchy Is The Problem by Sarah Constantin - Dominance hierarchies and stress in low status monkeys. Serotonin levels and the abuse cycle. Complex Post Traumatic Stress Disorder. Submission displays. Morality-As-Submission vs. Morality-As-Pattern. The biblical God and the Golden Calf.

Three Wild Speculations From Amateur Quantitative Macro History by Luke Muehlhauser - Measuring the impact of the industrial revolution: Physical health, Economic well-being, Energy capture, Technological empowerment, Political freedom. Three speculations: Human wellbeing was terrible up until the Industrial Revolution, then rapidly improved. Most variance in wellbeing is captured by productivity and political freedom. It would take at least 15% of the world to die to knock the world off its current trajectory.

Whats Wrong With Thrive/Survive by Bryan Caplan - Unless you cherry-pick the time and place, it is simply not true that society is drifting leftward. A standard leftist view is that free-market "neoliberal" policies now rule the world. Radical left parties almost invariably ruled countries near the "survive" pole, not the "thrive" pole. You could deny that Communist regimes were "genuinely leftist," but that's pretty desperate. Many big social issues that divide left and right in rich countries like the U.S. directly contradict Thrive/Survive. Major war provides an excellent natural experiment for Thrive/Survive.

Gender Gap Stem by Marginal Revolution - Discussion of a recent paper. "Put (too) simply the only men who are good enough to get into university are men who are good at STEM. Women are good enough to get into non-STEM and STEM fields. Thus, among university students, women dominate in the non-STEM fields and men survive in the STEM fields."

Too Much Of A Good Thing by Robin Hanson - Global warming poll. Are we doing too much/little. Is it possible to do too little/much. "When people are especially eager to show allegiance to moral allies, they often let themselves be especially irrational."

===Misc:

Tim Schafer Videogame Roundup by Aceso Under Glass - Review and discussion of Psychonauts and Massive Chalice. Light discussion of other Schafer games.

Why Numbering Should Start At One by Artir - the author responds to many well known arguments in favor of 0-indexing.

Still Feel Anxious About Communication Every Day by Brute Reason - Setting boundaries. Telling people they hurt you. Doing these things without anxiety might be impossible, you have to do it anyway.

Burning Man by Qualia Computing - Write up of a Burning Man trip. Very long. Introduction. Strong Emergence. The People. Metaphysics. The Strong Tlön Hypothesis. Merging with Other Humans. Fear, Danger, and Tragedy. Post-Darwinian Sexuality and Reproduction. Economy of Thoughts about the Human Experience. Transcending Our Shibboleths. Closing Thoughts.

The Big List Of Existing Things by Everything Studies - Existence of fictional and possible people. Heaps and the Sorites paradox. Categories and basic building blocks. Relational databases. Implicit maps and territories. Which maps and concepts should we use?

Times To Die Mental Health I by (Status 451) - Personal thoughts on depression and suicide. "The depressed person is not seen crying all the time. It is in this way that the depressed person becomes invisible, even to themselves. Yet, positivity culture and the rise of progressive values that elude any conversation about suicide that is not about saving, occlude the unthinkable truth of someone’s existence, that they simply should not be living anymore."

Astronomy Problem by protokol2020 - Star-star occultation probability.

===Podcast:

The Impossible War by Waking Up with Sam Harris - " Ken Burns and Lynn Novick about their latest film, The Vietnam War."

Is It Time For A New Scientific Revolution Julia Galef On How To Make Humans Smarter by 80,000 Hours - How people can have productive intellectual disagreements. Urban Design. Are people more rational than 200 years ago? Effective Altruism. Twitter. Should more people write books, run podcasts, or become public intellectuals? Saying you don't believe X won't convince people. Quitting an econ phd. Incentives in the intelligence community. Big institutions. Careers in rationality.

Parenting As A Rationalist by The Bayesian Conspiracy - Desire to protect kids is as natural as the need for human contact in general. Motivation to protect your children. Blackmail by threatening children. Parenting is a new sort of positive qualia. Support from family and friends. Complimenting effort and specific actions not general properties. Mindfulness. Treating kids as people. Handling kid's emotions. Non-violent communication.

The Nature Of Consciousness by Waking Up with Sam Harris - "The scientific and experiential understanding of consciousness. The significance of WWII for the history of ideas, the role of intuition in science, the ethics of building conscious AI, the self as an hallucination, how we identify with our thoughts, attention as the root of the feeling of self, the place of Eastern philosophy in Western science, and the limitations of secular humanism."

A16z Podcast On Trade by Noah Smith - Notes on a podcast Noah appeared on. Topics: Cheap labor as a substitute for automation. Adjustment friction. Exports and productivity.

Gillian Hadfield by EconTalk - "Hadfield suggests the competitive provision of regulation with government oversight as a way to improve the flexibility and effectiveness of regulation in the dynamic digital world we are living in."

The Turing Test by Ales Fidr (EA forum) - Harvard EA podcast: "The first four episodes feature Larry Summers on his career, economics and EA, Irene Pepperberg on animal cognition and ethics, Josh Greene on moral cognition and EA, Adam Marblestone on incentives in science, differential technological development"

David C Denkenberger on Food Production after a Sun Obscuring Disaster

9 JenniferRM 17 September 2017 09:06PM

Having paid a moderate amount of attention to threats to the human species for over a decade, I've run across an unusually good thinker, with expertise unusually suited to helping with many threats to the human species, whom I didn't know about until quite recently.

I think he warrants more attention from people thinking seriously about X-risks.

David C Denkenberger's CV is online and presumably has a list of all his X-risk-relevant material mixed into a larger career that seems to have been focused on energy engineering.

He has two technical patents (one for a microchannel heat exchanger and another for a compound parabolic concentrator) and interests that appear to span the gamut of energy technologies and uses.

Since about 2013 he has been working seriously on the problem of food production after a sun obscuring disaster, and he is in Lesswrong's orbit basically right now.

This article is about opportunities for intellectual cross-pollination!


[Link] We've failed: paid publication, pirates win.

4 morganism 16 September 2017 09:53PM

Perspective Reasoning’s Counter to The Doomsday Argument

3 Xianda_GAO 16 September 2017 07:39PM

To be honest I feel a bit frustrated that this is not getting much attention. I am obviously biased, but I think this article is quite important. It points out that the controversies surrounding the doomsday argument, the simulation argument, Boltzmann brains, the presumptuous philosopher, the sleeping beauty problem and many other aspects of anthropic reasoning are caused by the same thing: perspective inconsistency. If we keep the same perspective then the paradoxes and weird implications just go away. I am not an academic so I have no easy channel for publication. That's why I am hoping this community can give some feedback. If you have half an hour to waste anyway, why not give it a read? There's no harm in it. 


Abstract: 

From a first person perspective, a self-aware observer can inherently identify herself from other individuals. However, from a third person perspective this identification through introspection does not apply. On the other hand, because an observer's own existence is a prerequisite for her reasoning, she will always conclude that she exists when reasoning from a first person perspective. This means an observer has to take a third person perspective to meaningfully contemplate her chance of not coming into existence. Combining the above points suggests that arguments which utilize both identity through introspection and information about one's chance of existence fail by not keeping a consistent perspective. This helps explain questions such as the doomsday argument and the sleeping beauty problem. Furthermore, it highlights the problems with anthropic assumptions such as the self-sampling assumption and the self-indication assumption.


Any observer capable of introspection is able to recognize herself as a separate entity from the rest of the world. Therefore a person can inherently identify herself as distinct from other people. However, due to the first-person nature of introspection, it cannot be used to identify anybody else. This means that from a third-person perspective each individual has to be identified by other means. For ordinary problems this difference between first- and third-person reasoning bears no significance, so we can arbitrarily switch perspectives without affecting the conclusion. However, this is not always the case.

One notable difference between the perspectives concerns the possibility of not existing. Because one's existence is a prerequisite for her thinking, from a first person perspective an observer will always conclude that she exists (cogito ergo sum). It is impossible to imagine what your experiences would be like if you did not exist, because the idea is self-contradictory. Therefore, to envisage scenarios in which she does not come into existence, an observer must take a third person perspective. Consequently, any information about her chances of coming into existence is only relevant from a third-person perspective.

Now with the above points in mind let’s consider the following problem as a model for the doomsday argument (taken from Katja Grace’s Anthropic Reasoning in the Great Filter):


God’s Coin Toss

Suppose God tosses a fair coin. If it lands on heads, he creates 10 people, each in their own room. If it lands on tails, he creates 1000 people, each in their own room. The people cannot see or communicate with the other rooms. Now suppose you wake up in a room and are told of the setup. How should you reason about how the coin fell? Should your reasoning change if you discover that you are in one of the first ten rooms?

The correct answer to this question is still disputed to this day. One position is that upon waking up you have learned nothing, therefore you can only be 50% sure the coin landed on heads. After learning you are one of the first ten people, you ought to update to being 99% sure the coin landed on heads, because you would certainly be one of the first ten people if the coin landed on heads, but would only have a 1% chance of being among them if it landed on tails. This approach follows the self-sampling assumption (SSA).
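For reference, the update the SSA camp performs here is just Bayes' theorem with the numbers from the setup (the 1% being the chance of occupying one of the first ten rooms among 1000 people):

P(heads | in first ten rooms)
= P(first ten | heads) × P(heads) / [P(first ten | heads) × P(heads) + P(first ten | tails) × P(tails)]
= (1 × 1/2) / (1 × 1/2 + 1/100 × 1/2)
= 100/101 ≈ 99%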

This answer initially reasons from a first-person perspective. Since, from a first-person perspective, finding that you exist is a guaranteed observation, it offers no information; upon awakening you can only say the coin landed on heads with even odds. The mistake happens when the probability is updated after learning you are one of the first ten people. Belonging to a group which would always be created means your chance of existence is one. As discussed above, this new information is only relevant to third-person reasoning; it cannot be used to update the probability from the first-person perspective. From a first-person perspective, since you are in one of the first ten rooms and know nothing outside this room, you have no evidence about the total number of people. This means you still have to reason that the coin landed with even chances.

Another approach to the question is that you should be 99% sure the coin landed on tails upon waking up, since you would have a much higher chance of being created if more people were created. Then, once you learn you are in one of the first ten rooms, you should be only 50% sure that the coin landed on heads. This approach follows the self-indication assumption (SIA).
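Again for reference, the SIA numbers come from weighting each possible world by the number of observers it contains (this just spells out the calculation the assumption implies, not an endorsement of it):

P(tails | I was created) = (1000 × 1/2) / (1000 × 1/2 + 10 × 1/2) = 100/101 ≈ 99%

After learning you are in one of the first ten rooms, each world contains exactly ten candidate observers, so the weighting cancels:

P(heads | I was created and am in the first ten rooms) = (10 × 1/2) / (10 × 1/2 + 10 × 1/2) = 1/2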

This answer treats your creation as new information, which implies your existence is not guaranteed but a matter of chance. That means it is reasoning from a third-person perspective. However, your own identity is not inherent from this perspective. Therefore it is incorrect to say a particular individual or "I" was created; it is only possible to say an unidentified individual or "someone" was created. Again, after learning you are one of the first ten people, it is only possible to say "someone" from the first ten rooms was created. Since neither of these is new information, the probability of heads should remain at 50%.

It doesn't matter whether one chooses to think from the first- or third-person perspective; if done correctly the conclusions are the same: the probability of the coin toss remains at 50% after waking up and after learning you are in one of the first ten rooms. This is summarized in Figure 1.

Figure 1. Summary of Perspective Reasonings for God’s Coin Toss

The two traditional views wrongly used both inherent self-identification and information about chances of existence. This means they switched perspective somewhere while answering the question. For the self-sampling assumption (SSA) view, the switch happened upon learning you are one of the first ten people. For the self-indication assumption (SIA) view, the switch happened after your self-identification immediately following the wake-up. Due to these changes of perspective, both methods need to define oneself from a third-person perspective. Since your identity is in fact undefined from the third-person perspective, both assumptions had to make up a generic process. As a result, SSA states an observer shall reason as if she is randomly selected among all existent observers, while SIA states an observer shall reason as if she is randomly selected from all potential observers. These methods are arbitrary and unimaginative. Neither selection is real, and even if one actually took place, it seems incredibly egocentric to assume you would be the chosen one. However, they are necessary compromises for the traditional views.

One related question worth mentioning: after waking up one might ask "what is the probability that I am one of the first ten people?" As before, the answer is still up for debate, since SIA and SSA give different numbers. However, based on perspective reasoning, this probability is actually undefined. In that question "I" – an inherently self-identified observer – is defined from the first-person perspective, whereas "one of the first ten people" – a group based on people's chance of existence – is only relevant from the third-person perspective. Due to this switch of perspective within the question, it is unanswerable. To make the question meaningful, either change the group to something relevant from the first-person perspective or change the individual to someone identifiable from the third-person perspective. Traditional approaches such as SSA and SIA did the latter by defining "I" in the third person. As mentioned before, this definition is entirely arbitrary. Effectively, SSA and SIA are trying to solve two different modified versions of the question. While both calculations are correct under their assumptions, neither of them gives the answer to the original question.

A counterargument would be that an observer can identify herself in the third person by using some details irrelevant to the coin toss. For example, after waking up in the room you might find you have brown eyes, the room is a bit cold, the dust in the air has a certain pattern, etc. You can define yourself by these characteristics. Then it can be said, from a third-person perspective, that it is more likely for a person with such characteristics to exist if more people are created. This approach follows full non-indexical conditioning (FNC), first formulated by Professor Radford M. Neal in 2006. In my opinion the most perspicuous use of the idea is Michael Titelbaum's technicolor beauty example, which he used to argue for a thirder position in the sleeping beauty problem. Therefore I will present my counterargument while discussing the sleeping beauty problem.


The Sleeping Beauty Problem

You are going to take part in the following experiment. A scientist is going to put you to sleep. During the experiment you are going to be briefly woken up either once or twice, depending on the result of a fair coin toss: if the coin landed on heads you will be woken up once, if tails twice. After each awakening your memory of that awakening will be erased. Now suppose you are awakened during the experiment. How confident should you be that the coin landed on heads? How should you change your mind after learning this is the first awakening?

The sleeping beauty problem has been vigorously debated since 2000, when Adam Elga brought it to attention. Following the self-indication assumption (SIA), one camp thinks the probability of heads should be 1/3 upon waking up and 1/2 after learning it is the first awakening. On the other hand, supporters of the self-sampling assumption (SSA) think the probability of heads should be 1/2 upon waking up and 2/3 after learning it is the first awakening.

Astute readers might already see the parallel between the sleeping beauty problem and the God's coin toss problem. Indeed, the cause of the debate is exactly the same. If we apply perspective reasoning we get the same result – the probability should be 1/2 after waking up and remain at 1/2 after learning it is the first awakening. From the first-person perspective you can inherently identify the current awakening as distinct from the (possible) other one, but cannot contemplate what happens if this awakening doesn't exist. From the third-person perspective you can imagine what happens if you are not awake, but cannot justifiably identify this particular awakening. Therefore, no matter which perspective you choose to reason from, the results are the same, i.e. double halfers are correct.

However, Titelbaum (2008) used the technicolor beauty example to argue for the thirder position. Suppose there are two pieces of paper, one blue and the other red. Before your first awakening the researcher randomly chooses one of them and sticks it on the wall; you will be able to see the paper's color when awake. After you fall back asleep he switches the paper, so if you wake up again you will see the opposite color. Now suppose after waking up you see a piece of blue paper on the wall. You shall reason "there exists a blue awakening", which is more likely to happen if the coin landed on tails. A Bayesian update based on this information gives a probability of heads of 1/3. If after waking up you see a piece of red paper, you reach the same conclusion by symmetry. Since it is absurd to propose that technicolor beauty is fundamentally different from the sleeping beauty problem, they must have the same answer, i.e. thirders are correct.
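Spelled out, the thirder update described here runs as follows (a sketch of the argument as I read it, not an endorsement of it):

P(a blue awakening exists | heads) = 1/2  (one awakening, which is blue only if the blue paper was posted first)
P(a blue awakening exists | tails) = 1  (two awakenings, so both colors are seen)
P(heads | a blue awakening exists) = (1/2 × 1/2) / (1/2 × 1/2 + 1 × 1/2) = 1/3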

Technicolor beauty effectively identifies your current awakening from a third-person perspective by using a piece of information irrelevant to the coin toss. I propose that the use of irrelevant information is only justified if it affects the learning of relevant information. In most cases this means the identification must be done before an observation is made. The color of the paper, or any details you experience after waking up, does not satisfy this requirement and thus cannot be used. This is best illustrated by an example.

Imagine you are visiting an island with a strange custom. Every family writes its number of children on the door. All children stay at home after sunset. Furthermore, only boys are allowed to answer the door after dark. One night you knock on the door of a family with two children. Suppose a boy answers. What is the probability that both children of the family are boys? After talking to the boy you learn he was born on a Thursday. Should you change the probability?

A family with two children is equally likely to have two boys, two girls, a boy and then a girl, or a girl and then a boy. Seeing a boy eliminates the possibility of two girls. Therefore, among the remaining cases, two boys has a probability of 1/3. If you knock on the doors of 1000 families with two children, about 750 would have a boy answering, out of which about 250 families would have two boys, consistent with the 1/3 answer. Applying the same logic as technicolor beauty, after talking to the boy you would identify him specifically as "a boy born on Thursday" and reason "the family has a boy born on Thursday". This statement is more likely to be true if both children are boys. Without getting into the details of the calculation, a Bayesian update on this information would give the probability of two boys as 13/27. Furthermore, it doesn't matter which day he was actually born on: if the boy was born on, say, a Monday, we get the same answer by symmetry.
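For readers who want the omitted details, the 13/27 figure comes from counting sex-and-weekday combinations, treating each child independently as a boy or a girl born on one of seven days (14 equally likely types per child):

Total combinations for two children: 14 × 14 = 196
Combinations containing at least one Thursday-born boy: 196 − 13 × 13 = 27
Of those, combinations with two boys: 7 × 7 − 6 × 6 = 13

Hence P(two boys | "the family has a boy born on Thursday") = 13/27 ≈ 0.48.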

This reasoning is obviously wrong and the answer should remain at 1/3. This can be checked by repeating the experiment, visiting many families with two children. Due to its length the calculation is omitted here; interested readers are encouraged to check (a simulation sketch is given below). 13/27 would be correct if the island's custom were "only boys born on Thursday can answer the door". In that case being born on a Thursday is a characteristic specified before your observation; it actually affects your chance of learning the relevant information about whether a boy exists. Only then can you justifiably identify whoever answers the door as "a boy born on Thursday" and reason "the family has a boy born on Thursday". Since seeing the blue piece of paper happens after you wake up and does not affect your chance of awakening, it cannot be used to identify you from a third-person perspective, just as being born on Thursday cannot be used to identify the boy in the original custom.
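A quick Monte Carlo check of both claims (my own rough sketch in Python, not part of the original article; the day labelled 3 arbitrarily stands in for Thursday):

import random

THURSDAY = 3          # arbitrary label for Thursday among days 0..6
SAMPLES = 100_000     # number of qualifying door-answerings to collect

def family():
    # each child: (is_boy, birth_day), drawn uniformly and independently
    return [(random.random() < 0.5, random.randrange(7)) for _ in range(2)]

def original_custom():
    # any boy may answer; we then condition on the answerer being born on Thursday
    hits = two_boys = 0
    while hits < SAMPLES:
        kids = family()
        boys = [k for k in kids if k[0]]
        if not boys:
            continue
        answerer = random.choice(boys)
        if answerer[1] != THURSDAY:
            continue
        hits += 1
        two_boys += all(k[0] for k in kids)
    return two_boys / hits

def modified_custom():
    # only boys born on Thursday are allowed to answer the door
    hits = two_boys = 0
    while hits < SAMPLES:
        kids = family()
        if not any(k[0] and k[1] == THURSDAY for k in kids):
            continue
        hits += 1
        two_boys += all(k[0] for k in kids)
    return two_boys / hits

print(original_custom())   # stays near 1/3 despite the Thursday observation
print(modified_custom())   # near 13/27, about 0.481

The first estimate stays near 1/3 even though we condition on the answering boy's birthday; only under the modified custom does it move to roughly 13/27.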

On a related note, for the same reason, using irrelevant information to identify you from the third-person perspective is justified in conventional probability problems, because there the identification happens before the observation and the information learned varies depending on which person is specified. That's why in general we can arbitrarily switch perspectives without changing the answer.

Stupid Questions September 2017

2 Erfeyah 15 September 2017 09:21PM

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.

Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.

To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.

[Link] Fish oil and the self-critical brain loop

3 Elo 15 September 2017 09:53AM

[Link] General and Surprising

3 John_Maxwell_IV 15 September 2017 06:33AM

LW 2.0 Strategic Overview

42 Habryka 15 September 2017 03:00AM

Update: Open beta will happen today by 4pm Pacific time. At this point you will be able to sign up / login with your LW 1.0 accounts (if the latter, you will receive a password-reset email, as we did not copy over your passwords).

Hey Everyone! 

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and for generally bringing all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am myself not that great of a writer, which means writing things like this takes quite a long time for me, and so this ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money. 

Now that the logistics is out of the way, let’s get to the meat of this post. What is our plan for LessWrong 2.0, what were our key assumptions in designing the site, what does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post: 

  • My perspective on why LessWrong 2.0 is a project worth pursuing
  • A summary of the existing discussion around LessWrong 2.0 
  • The models that I’ve been using to make decisions for the design of the new site, and some of the resulting design decisions
  • A set of open questions to discuss in the comments where I expect community input/discussion to be particularly fruitful 

Why bother with LessWrong 2.0?  

I feel that, independently of how many things were and are wrong with the site and its culture, overall, over the course of its history, it has been one of the few places in the world that I know of where a spark of real discussion has happened, and where some real intellectual progress was made on actually important problems. So let me begin with a summary of things that I think the old LessWrong got right, which are essential to preserve in any new version of the site:

On LessWrong…

 

  • I can contribute to intellectual progress, even without formal credentials 
  • I can sometimes have discussions in which the participants focus on trying to convey their true reasons for believing something, as opposed to rhetorically using all the arguments that support their position independent of whether those have any bearing on their belief
  • I can talk about my mental experiences in a broad way, such that my personal observations, scientific evidence and reproducible experiments are all taken into account and given proper weighting. There is no narrow methodology I need to conform to to have my claims taken seriously.
  • I can have conversations about almost all aspects of reality, independently of what literary genre they are associated with or scientific discipline they fall into, as long as they seem relevant to the larger problems the community cares about
  • I am surrounded by people who are knowledgeable in a wide range of fields and disciplines, who take the virtue of scholarship seriously, and who are interested and curious about learning things that are outside of their current area of expertise
  • We have a set of non-political shared goals for which many of us are willing to make significant personal sacrifices
  • I can post long-form content that takes up as much space as it needs to, and can expect a reasonably high level of patience from my readers in trying to understand my beliefs and arguments
  • Content that I am posting on the site gets archived, is searchable and often gets referenced in other people's writing, and if my content is good enough, can even become common knowledge in the community at large
  • The average competence and intelligence on the site is high, which allows discussion to generally happen on a high level and allows people to make complicated arguments and get taken seriously
  • There is a body of writing that is generally assumed to have been read by most people  participating in discussions that establishes philosophical, social and epistemic principles that serve as a foundation for future progress (currently that body of writing largely consists of the Sequences, but also includes some of Scott’s writing, some of Luke’s writing and some individual posts by other authors) 

 

When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post. 

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing, that is very important to me, so I’ve copied it in here: 

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.

3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to).

5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be a such place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I've given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don't have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform and the design of LessWrong 2.0, and I won't be able to summarize it all here, but I can try my best to summarize the most important points, and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:

1. Eliezer's ghost -- He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.

[...]

...I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future... is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the batton to a generally-accepted BDFL would be nice, but i'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”. 

In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t because nobody with the mandate put much resources into fixing them. I think the cause for this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time. 

Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved. 


Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website: 

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn't work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just "why am I reading this random bulletin board full of stuff I'm not interested in?"

2. Another of Eliezer's talents was carefully skirting the line between "so mainstream as to be boring" and "so wacky as to be an obvious crackpot". Most people couldn't skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like "we need to be less boring!" or "we need fewer crackpots!", and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.

3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I'm not just talking about AI - I'm talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them - but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren't will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.

4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like "Let's use Bayes to hack our utility functions to get superfuzzies in a group house!". Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarassed being seen next to.

5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn't fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.

6. There was a norm against politics, but it wasn't a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn't like the alt-right (ie 98% of the community) and the moderation system wasn't good enough to let anyone do anything about it.

7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn't really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.

8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the "stay off LW" vibe.

I worry that all the existing "resurrect LW" projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles - ie deal with 8 and the second half of 7. I think they're ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott described in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective that I have on those latter points is, I think, best illustrated in an analogy:

When I read Scott's comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn't explicitly taught in class, or when you ask an undergraduate physics student how they think physics research should be done, or what ideas they have for improving society, you will often get quite naive-sounding answers (I have heard everything from "I am going to build a webapp to permanently solve political corruption" to "here's my idea of how we can transmit large amounts of energy wirelessly by using low-frequency tesla-coils"). I don't think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD.

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members. 

And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work (based on user-interviews of a lot of top commenters it usually took something like 3-6 months until someone felt comfortable commenting frequently and about 6-8 months until someone felt comfortable posting frequently. This strikes me as a fairly reasonable expectation for the future). 

And I do think that we have many graduate students and tenured professors of the rationality community who are not Eliezer, who do not sound like crackpots, who can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve: Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there).

But all of this doesn’t mean what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead it has to involve some kind of way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who does write something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise, and second, a failure to enforce moderation norms when people did turn out to be crackpots or were unable to productively engage with the material on the site. Both of these are natural consequences of the abandonment of promoting things to Main, the fact that discussion is ordered by recency by default rather than by some kind of scoring system, and the fact that the moderation tools were completely insufficient (but more on the details of that in the next section).


My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems): 

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven
  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing
  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations

I. 

The first bottleneck for our community, and the biggest I think, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I've forgotten it. Even if I remember it, I don't link to the Facebook post (because linking to Facebook posts/comments is hard) and it doesn't have a title, so I don't casually refer to it in discussion with friends. On Facebook, ideas don't get archived and built upon, they get discussed and forgotten. To put this another way, the reason we cannot build on the best ideas this community had over the last five years is that we don't know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there's no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has this system where the new ideas get put into journals, and then eventually if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper that your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics, had physics no journals, no textbook publishers, and only one textbook that is about a decade old. 

This seems to me what Anna is talking about - the purpose of the single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or has you contribute to lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing to the new LessWrong to feel timeless. 

To achieve this, we’ve built the following things: 

  • We created a section for core canon on the site that is prominently featured on the frontpage and right now includes Rationality: A-Z, The Codex (a collection of Scott’s best writing, compiled by Scott and us), and HPMOR. Over time I expect these to change, and there is a good chance HPMOR will move to a different section of the site (I am considering adding an “art and fiction” section) and will be replaced by a new collection representing new core ideas in the community.
  • Sequences are now a core feature of the website. Any user can create sequences out of their own and other users' posts, and those sequences themselves can be voted and commented on. The goal is to help users compile the best writing on the site, and to make it so that good timeless writing gets read by users for a long time, as opposed to disappearing into the void. Separating creative and curatorial effort allows the sort of professional specialization that you see in serious scientific fields.
  • Of those sequences, the most upvoted and most important ones will be chosen to be prominently featured on other sections of the site, allowing users easy access to read the best content on the site and get up to speed with the current state of knowledge of the community.
  • For all posts and sequences the site keeps track of how much of them you’ve read (including importing view-tracking from old LessWrong, so you will get to see how much of the original sequences you’ve actually read). And if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people navigate how much of the content of the site you are familiar with.
  • The design of the core content of the site (e.g. the Sequences, the Codex, etc.) tries to communicate a certain permanence of contributions. The aesthetic feels intentionally book-like, which I hope gives people a sense that their contributions will be archived, accessible and built-upon.
    One important issue with this is that there also needs to be a space for sketches on LessWrong. To quote Paul Graham: “What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting.”
  • We do not want to discourage sketch-like contributions, and want to build functionality that helps people build a finished work from a prototype (this is one of the core competencies of Google Docs, for example).

And there are some more features the team is hoping to build in this direction, such as: 

  • Easier archiving of discussions, by allowing discussions to be turned into top-level posts (similar to what Ben Pace did with a recent Facebook discussion between Eliezer, Wei Dai, Stuart Armstrong, and some others, which he turned into a post on LessWrong 2.0)
  • The ability to continue reading the content you’ve started reading with a single click from the frontpage. Here's an example logged-in frontpage:

 

 

II.

The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content needs to be turned into common knowledge.

I think this is a lot of what Scott was pointing at in his summary about the decline of LessWrong. We need a way for people to learn from their mistakes, while also not flooding the inboxes of everyone else, and while giving people active feedback on how to improve in their writing. 

The site structure: 

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong: 

The writing experience: 

If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma, you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote your content (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post.
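To make the sorting concrete, here is a minimal sketch of what such an HN-style scoring function can look like. The time offset and "gravity" exponent below are the commonly cited Hacker News values, not the actual constants in our implementation, and the type names are just for illustration:

```typescript
// Illustrative sketch of an HN-style frontpage ranking. The "+2" offset and the
// 1.8 gravity exponent are the commonly cited Hacker News values and are
// assumptions here, not the real LessWrong 2.0 parameters.
interface RankedPost {
  karma: number;    // total karma of the post
  postedAt: Date;   // when the post was created
}

function frontpageScore(post: RankedPost, now: Date = new Date()): number {
  const ageInHours = (now.getTime() - post.postedAt.getTime()) / (1000 * 60 * 60);
  // Karma pushes a post up; age steadily pulls its rank back down.
  return (post.karma - 1) / Math.pow(ageInHours + 2, 1.8);
}

// The frontpage is then just posts sorted by this score, highest first.
const sortFrontpage = (posts: RankedPost[]): RankedPost[] =>
  [...posts].sort((a, b) => frontpageScore(b) - frontpageScore(a));
```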

If you write a good comment on a post a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions. 

Meta

Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, discussion about site features, as well as general site-policy issues. Basically the thing that all StackExchanges have. Karma here will not add to your total karma and will not give you more influence over the site. 

Featured posts

In addition to the main feed, there is a promoted post section that you can subscribe to via email and RSS, which has on average three posts a week. For now those will just be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community.

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out. 

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented. 

Why? 

The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or to the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase that has the ability to promote content (i.e. many more people than have the ability to promote content to Main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts.

The goal of all of this is to allow users to see good content at various levels of engagement with the site, while giving them some personalization options so that people can follow the authors they are particularly interested in, and while also ensuring that this does not sabotage the attempt at building common knowledge, by having the best posts from the whole ecosystem featured and promoted on the frontpage.
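As a rough sketch of that flow (with made-up type and field names, not the actual LessWrong 2.0 data model), the visibility rule for a reader's frontpage looks roughly like this:

```typescript
// Minimal sketch of the visibility flow described above. All names here are
// hypothetical illustrations, not the actual LessWrong 2.0 schema.
interface Subscription {
  authorId: string;
  minKarma: number; // only show this author's posts once they reach this karma
}

interface FeedPost {
  authorId: string;
  karma: number;
  promotedToFrontpage: boolean; // set by a moderator or a high-karma user
}

function showOnMyFrontpage(post: FeedPost, mySubscriptions: Subscription[]): boolean {
  // Promoted content is visible to everyone by default.
  if (post.promotedToFrontpage) return true;
  // Otherwise only content from authors I follow, above my chosen threshold.
  return mySubscriptions.some(
    (sub) => sub.authorId === post.authorId && post.karma >= sub.minKarma
  );
}
```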

The karma system:

Another thing I've been working on to fix the signal-to-noise ratio is improving the karma system. It's important that the people having the most significant insights are able to shape the field more. If you're someone who regularly produces real insights, you're better able to notice and bring up other good ideas. To achieve this we've built a new karma system in which your upvotes and downvotes carry more weight if you already have a lot of karma. The current weighting is a very simple heuristic, whereby your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will include those in that post.
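As a minimal sketch of that heuristic (the floor of one point for brand-new users is my own assumption for illustration, not a decision we have made):

```typescript
// Sketch of the described weighting heuristic: a vote counts for log base 5 of
// the voter's total karma. The floor of 1 for low-karma users is an assumption
// to keep brand-new accounts from casting zero-weight votes.
function voteWeight(totalKarma: number): number {
  if (totalKarma <= 1) return 1;
  return Math.max(1, Math.log(totalKarma) / Math.log(5)); // log base 5
}

// For example, a user with 625 karma casts votes worth about 4 points each,
// since 5^4 = 625.
console.log(voteWeight(625).toFixed(2)); // "4.00"
console.log(voteWeight(10).toFixed(2));  // "1.43"
```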

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
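For those curious what that would look like mechanically, here is a rough sketch of a PageRank-style trust computation over the upvote graph; the damping factor and iteration count are generic PageRank defaults chosen for illustration, not parameters of the actual experiment:

```typescript
// Rough sketch of the eigendemocracy idea: treat upvotes as a trust graph and
// compute each user's trust as the stationary distribution of a PageRank-style
// iteration. Damping factor and iteration count are generic defaults, chosen
// only for illustration.
function eigenKarma(
  upvotes: number[][], // upvotes[i][j] = how often user i upvoted user j
  damping = 0.85,
  iterations = 50
): number[] {
  const n = upvotes.length;
  let trust = new Array(n).fill(1 / n);

  for (let step = 0; step < iterations; step++) {
    const next = new Array(n).fill((1 - damping) / n);
    for (let i = 0; i < n; i++) {
      const outTotal = upvotes[i].reduce((sum, v) => sum + v, 0);
      if (outTotal === 0) continue; // users who never vote pass on no trust
      for (let j = 0; j < n; j++) {
        // Trust flows from i to j in proportion to i's upvotes of j,
        // weighted by how trusted i already is.
        next[j] += damping * trust[i] * (upvotes[i][j] / outTotal);
      }
    }
    trust = next;
  }
  return trust; // higher value = more trusted user (their votes count for more)
}
```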

I am also interested in having some form of two-tiered voting, similarly to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined. 

III

The third and last bottleneck is an actually working moderation system that is fun for moderators to use, while also giving people whose content was moderated a sense of why, and of how they can improve.

The most common basic complaint on LessWrong currently pertains to trolls and sockpuppet accounts, which the reddit fork's mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools, which include the ability for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to know the IP address that an account comes from (this will be ready by the open beta).

Besides that, we are currently working on cultivating a moderation group we are calling the “Sunshine Regiment.” Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.


How you can help and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to have a vote open to those who had a thousand karma or more on 9/13 to determine whether we should move forward with the new site design, which would move to the lesswrong.com url from its temporary beta location, or leave LessWrong as it is now. (As this would represent the failure of the plan to revive LW, this would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where I think discussion would be particularly fruitful: 

  • What are your thoughts about the karma system? Does an eigendemocracy based system seem reasonable to you? How would you implement the details? Ben and I will post our current thoughts on this in a separate post in the next two weeks, but we would be interested in people’s unprimed ideas.
  • What are your experiences with the site so far? Is anything glaringly missing, or are there any bugs you think I should definitely fix? 
  • Do you have any complaints or thoughts about how work on LessWrong 2.0 has been proceeding so far? Are there any worries or issues you have with the people working on it? 
  • What would make you personally use the new LessWrong? Is there any specific feature that would make you want to use it? For reference, here is our current feature roadmap for LW 2.0.
  • And most importantly, do you think that the LessWrong 2.0 project is doomed to failure for some reason? Is there anything important I missed, or something that I misunderstood about the existing critiques?
The closed beta can be found at www.lesserwrong.com.

Ben, Vaniver, and I will be in the comments!

LW 2.0 Open Beta starts 9/20

24 Vaniver 15 September 2017 02:57AM

Two years ago, I wrote Lesswrong 2.0. It’s been quite the adventure since then; I took up the mantle of organizing work to improve the site but was missing some of the core skills, and also never quite had the time to make it my top priority. Earlier this year, I talked with Oliver Habryka and he joined the project and has done the lion’s share of the work since then, with help along the way from Eric Rogstad, Harmanas Chopra, Ben Pace, Raymond Arnold, and myself. Dedicated staff has led to serious progress, and we can now see the light at the end of the tunnel.

 

So what’s next? We’ve been running the closed beta for some time at lesserwrong.com with an import of the old LW database, and are now happy enough with it to show it to you all. On 9/20, next Wednesday, we’ll turn on account creation, making it an open beta. (This will involve making a new password, as the passwords are stored hashed and we’ve changed the hashing function from the old site.) If you don't have an email address set for your account (see here), I recommend adding it by the end of the open beta so we can merge accounts. For the open beta, just use the Intercom button in the lower right corner if you have any trouble. 

 

Once the open beta concludes, we’ll have a vote of veteran users (over 1k karma as of yesterday) on whether to change the code at lesswrong.com over to the new design or not. It seems important to look into the dark and have an escape valve in case this is the wrong direction for LW. If the vote goes through, we’ll import the new LW activity since the previous import to the new servers, merging the two, and point the url to the new servers. If it doesn’t, we’ll likely turn LW into an archive.

 

Oliver Habryka will be posting shortly with his views on LW and more details on our plans for how LW 2.0 will further intellectual progress in the community.


[Link] Understanding Policy Gradients

1 SquirrelInHell 13 September 2017 09:13PM

2017 LessWrong Survey

19 ingres 13 September 2017 06:26AM

The 2017 LessWrong Survey is here! This year we're interested in community response to the LessWrong 2.0 initiative. I've also gone through and fixed as many bugs as I could find reported on the last survey, and reintroduced items that were missing from the 2016 edition. Furthermore, new items have been introduced in multiple sections, and some cut in others to make room. You can now export your survey results after finishing by choosing the 'print my results' option on the page displayed after submission. The survey will run from today until the 15th of October.

You can take the survey below; thanks for your time. (It's back in single-page format; please allow a few seconds for it to load):

Click here to take the survey

Open thread, September 11 - September 17, 2017

1 Thomas 11 September 2017 07:46AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Is Feedback Suffering?

1 gworley 09 September 2017 10:20PM

Rational Feed

11 deluks917 09 September 2017 07:48PM

=== Updates:

I have been a little more selective about which articles make it onto the feed. I have not been overly selective, and all of the obviously general-interest rationalist articles still make it.

Unless people object, I am going to try a "weekly feed". The bi-weekly feed is pretty long. I currently post on the SSC reddit and LessWrong. Weekly seems fine for the SSC reddit, but LessWrong is a lower-activity forum. I will see how it goes. Obviously, on a weekly feed there will be about half as many recommended articles.

===Highly Recommended Articles:

Object, Subjects and Gender by The Baliocene Apocrypha - "Under modern post-industrial bureaucratized high-tech capitalism, it is less rewarding than ever before to be a subject. Under modern post-industrial bureaucratized high-tech capitalism, it is more rewarding than ever before to be an object. This alone accounts for a lot of the widespread weird stuff going on with gender these days."

Winning Is For Losers by Putanumonit (ribbonfarm) - Zero vs Positive Sum Games. The strong have room to cooperate. Rene Girard's theory of mimetics and competition. College Admissions. Tit for Tat. Spiked dicks in nature. Short and long term strategies in dating. Quirky dating profiles. Honesty on the first date. Beating Moloch with a transhuman God.

Premium Mediocre by Jacob Falkovich - Being 30% wrong is better than being 5% wrong. Consumption: Signaling vs genuine enjoyment. Dating other PM people. Venkat is wrong about impressing parents. He is more wrong, or joking, about cryptocurrencies. Fear of missing out.

Ten New 80000 Hours Articles Aimed At The by 80K Hours (EA forum) - Ten recent articles and descriptions from 80K hours. Over and underpaid jobs relative to their social impact, the most employable skills, learning ML, whether most social programs work and other topics.

Minimizing Motivated Beliefs by Entirely Useless - The tradeoffs between epistemic and instrumental rationality. Yudkowsky's argument that such tradeoffs are either very stupid or don't exist. Issues with Yudkowsky: denial that belief is voluntary, thinking that trading away the truth requires being blind to consequences. Horror victims and transcendent meaning. Interesting things are usually false.

===Scott:

How Do We Get Breasts Out Of Bayes Theorem by Scott Alexander - "But evolutionary psychologists make claims like 'Men have been evolutionarily programmed to like women with big breasts, because those are a sign of fertility.' Forget for a second whether this is politically correct, or cross-culturally replicable, or anything like that. From a neurological point of view, how could this possibly work?"

Predictive Processing And Perceptual Control by Scott Alexander - "predictive processing attributes movement to strong predictions about proprioceptive sensations. Because the brain tries to minimize predictive error, it moves the limbs into the positions needed to produce those sensations, fulfilling its own prophecy." Connections with William Powers' 'Behavior: The Control of Perception', which Scott already reviewed.

Book Review: Surfing Uncertainty by Scott Alexander - Scott finds a real theory of how the brain works. "The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary."

Links: Exsitement by Scott Alexander - Slatestarcodex links post. A Nootropics survey, gene editing, AI, social norms, Increasing profit margins, politics, and other topics.

Highlights From The Comments On My Irb Nightmare by Scott Alexander - Tons of hilarious IRB stories. A subreddit comment about getting around the IRB. Whether the headaches are largely institutional rather than dictated by government fiat. Comments argue in favor of the IRB and Scott responds.

My IRB Nightmare by Scott Alexander - Scott tries to run a study to test the Beck Depression Inventory. The institutional review board makes this impossible. They not only make tons of capricious demands, they also attempt to undermine the study's scientific validity.

Slippery Slopen Thread by Scott Alexander - Public open thread. The slippery slope to rationalist catgirl. Selected top comments. Update on Trump and crying wolf.

Contra Askell On Moral Offsets by Scott Alexander - Axiology is the study of what’s good. Morality is the study of what the right thing to do is. You can offset axiological effects but you can't offset moral transgressions.

===Rationalist:

MRE Futures To Not Starve by Robin Hanson - Emergency food sources as a way to mitigate catastrophic risk. The Army's 'Meals Ready to Eat'. Food insurance. Incentives for producers to deliver food in emergencies. Incentives for researchers to find new sources. Sharing information.

Book Reviews: Zoolitude And The Void by Jacob Falkovich - Seven Surrenders the sequel to 'Too like the Lightning' mercilessly cuts the bad parts and focuses on the politics, personalities, and philosophy that made TLTL great. The costs of adding too much magic to a setting, don't make the mundane irrelevant. One Hundred Years of Solitude: Shit just happens. Zoo City: Realistic Magic: "The Zoo part is the magic: some people who commit crimes mysteriously acquire an animal familiar and a low-key magical talent." The Mark and the Void: "Technically, there’s no magic in The Mark and the Void. But there’s investment banking, which takes the role of the mysterious force that decides the fate of individuals and nations but remains beyond the ken of mere mortals."

The World As If by Sarah Perry (ribbonfarm) - "This is an account of how magical thinking made us modern." Magical thinking as a confusing of subjective and objective. Useful fictions. Hypothetical thinking. Pre-modern concrete thinking and categorization schemes relative to modern abstract ones. As if thinking. Logic and magic.

To Save The World Make Sure To Go Beyond Academia by Kaj Sotala - Academic research often fails to achieve real change. Lots of economic research concerns the optimal size of a carbon tax but we currently lack any carbon tax. Academic research on x-risk from nuclear winter doesn't change the motivations of politicians very much.

Introducing Mindlevelup The Book by mindlevelup - MLU compiled and edited their work from 2017 into a 30K word, 150 page book. Most of the material appeared on the blog but some of it is new and the pre-existing posts have been edited for clarity.

Expanding Premium Mediocrity by Zvi Moshowitz - "This is (much of) what I think Rao is trying to say in the second section of his post, the part about Maya but before Molly and Max, translated into DWATV-speak. Proceed if and only if you want that."

Simple Affection And Deep Truth by Particular Virtue - "Simple Affection is treating someone like a child: they will forget about bad things, as long as you give them something good to think about instead. Deep Truth is treating someone like an elephant: they never forget, and they forgive only with deep deliberation."

Are People Innately Good by Sailor Vulcan - SV got into two arguments that went badly. One was on all lives matter. The other occurred when SV tried to defend Glen of Intentional Insights on the SSC discord. Terminal values aren't consistent. SV was abused as a child.

Metapost September 5th by sam[]zdat - Plans for the blog. Next series will be on epistemology and the 'internal' side of nihilism. Revised introduction. Sam will probably write fiction. Site reorganization. History section. Current reading list. Patreon.

Minimizing Motivated Beliefs by Entirely Useless - The tradeoffs between epistemic and instrumental rationality. Yudkowsky's argument that such tradeoffs are either very stupid or don't exist. Issues with Yudkowsky: denial that belief is voluntary, thinking that trading away the truth requires being blind to consequences. Horror victims and transcendent meaning. Interesting things are usually false.

Exploring Premium Mediocrity by Zvi Moshowitz - Defining premium mediocre. Easy and hard mode related to Rao's theories of losers, sociopaths and heroes. The Real Thing. A 2x2 ribbonfarm style graph. Restaurants.

Tegmarks Book Of Foom by Robin Hanson - Tegmark's recent book basically described Yudkowsky's intelligence explosion. Tegmark is worried the singularity might be soon and we need to have figured out big philosophical issues by then. Hanson thinks Tegmark overestimates the generality of intelligence. AI weapons and regulations.

The Doomsday Argument In Anthropic Decision Theory by Stuart Armstrong (lesswrong) - "In Anthropic Decision Theory (ADT), behaviors that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences. However, SSA implies the doomsday argument. This post shows there is a natural doomsday-like behavior for average utilitarian agents within ADT."

Forager Vs Farmer Elaborated by Robin Hanson - Early humans collapsed Machiavellian dynamics down to a reverse-dominance-hierarchy. Group norm enforcement and its failure modes. Safety leads to collective play and art, threat leads to a return to Machiavellianism and suspicion. Individuals greatly differ as to what level of threat causes the switch, often for self-serving reasons. Left vs right. "The first and primary political question is how much to try to resolve issues via a big talky collective, or to let smaller groups decide for themselves."

Critiquing Other Peoples Plans Politely by Katja Grace - Three failure modes: The attack, The polite sidestep, The inadvertent personal question. A plan to avoid these issues: debate beliefs, not actions.

Gleanings From Double Crux On The Craft Is Not The Community by Sarah Constantin - Results from Sarah's public double crux. Sarah initially did not think the rationalist intellectual project was worth preserving. She wants to see results, even though she concedes that formal results can be very difficult to get. What is the value of introspection and 'navel gazing'?

Intrinsic Properties And Eliezers Metaethics by Tyrrell_McAllister (lesswrong) - Intuitions of intrinsicness. Is goodness intrinsic? Seeing intrinsicness in simulations. Back to goodness.

Winning Is For Losers by Putanumonit (ribbonfarm) - Zero vs Positive Sum Games. The strong have room to cooperate. Rene Girard's theory of mimetics and competition. College Admissions. Tit for Tat. Spiked dicks in nature. Short and long term strategies in dating. Quirky dating profiles. Honesty on the first date. Beating Moloch with a transhuman God.

Dangers At Dilettante Point by Everything Studies - It's relatively easy to know a little about a lot of topics. But it's dangerous to find yourself playing the social role of the knowledgeable person too often. The percentage of people with a given level of knowledge goes to zero quickly.

Entrenchment Happens by Robin Hanson - Many systems degrade, collapse, and are replaced. However, other systems, even somewhat arbitrary ones, are very stable over time. Many current systems in programming, language and law are likely to remain in the future.

Premium Mediocre by Jacob Falkovich - Being 30% wrong is better than being 5% wrong. Consumption: Signaling vs genuine enjoyment. Dating other PM people. Venkat is wrong about impressing parents. He is more wrong, or joking, about cryptocurrencies. Fear of missing out.

===AI:

Ideological Engineering And Social Control by Geoffrey Miller (EA forum) - China is trying hard to develop advanced AI. A major goal is to use AI to monitor both physical space and social media. Suppressing wrong-think doesn't require radically advanced AI.

Incorrigibility In CIRL by The MIRI Blog - Paper. Goal: Incentivize a value learning system to follow shutdown instructions. Demonstration that some assumptions are not stable with respect to model mis-specification (e.g. programmer error). Weaker sets of assumptions: difficulties and simple strategies.

Nothing Wrong With AI Weapons by kbog (EA forum) - Death by AI is no more intrinsically bad than death by conventional weapons. Some consequentialist issues the author addresses: civilian deaths, AI arms race, vulnerability to hacking.

===EA:

Can Outsourcing Improve Liberias Schools Preliminary RCT Results by Innovations for Poverty - "Last summer, the Liberian government delegated management of 93 public elementary schools to eight different private contractors. After one year, public schools managed by private operators raised student learning by 60 percent compared to standard public schools. But costs were high, performance varied across operators, and contracts authorized the largest operator to push excess pupils and under-performing teachers into other government schools."

Ten New 80000 Hours Articles Aimed At The by 80K Hours (EA forum) - Ten recent articles and descriptions from 80K hours. Over and underpaid jobs relative to their social impact, the most employable skills, learning ML, whether most social programs work and other topics.

Is Ea Growing Some Ea Growth Metrics For 2017 by Peter Hurford (EA forum) - Activity metrics for EA website, donations data, additional Facebook data, commentary that EA seems to be growing but there is substantial uncertainty.

Ea Survey 2017 Series Cause Area Preferences by Tee (EA forum) - Top Cause Area, near-top areas, areas which should not have EA resources, cause area correlated with demographics, donations by cause area.

Looking At How Superforecasting Might Improve AI Predictions by Will Pearson (EA forum) - Good Judgement Project: What they did, results, relevance. Lessons: Focus on concrete issues, focus on AI with no intelligence augmentation, learn a diverse range of subjects, breakdown the open questions, publicly update.

Why Were Allocating Discretionary Funds To The Deworm The World Initiative by The GiveWell Blog - "Why Deworm the World has a pressing funding need. The benefits and risks of granting discretionary funds to Deworm the World today. Why we’re continuing to recommend that donors give 100% of their donation to AMF."

Ea Survey 2017 Series Community Demographics by Katie Gertsch (EA forum) - Some results: Mostly young and male, slight increase in female participation. Highest concentration cities. Atheism/Agnostic rate fell from 87% to 80%. Increase in the proportion of EA who see EA as a duty or opportunity as opposed to an obligation.

Effective Altruism Survey 2017 Distribution And by Ellen McGeoch and Peter Hurford (EA forum) - EA 2017 Survey results are in. Details about distribution and data analysis techniques. Discussion of whether the sample is representative of EA and its subpopulations.

Six Tips Disaster Relief Giving by The GiveWell Blog - Practical advice for effective disaster relief charity. Give Cash, give to proven effective charities and allow charities significant freedom in how they use your donation.

===Politics and Economics:

Harvard Admit Legacy Students by Marginal Revolution - Demand for Ivy League admissions far outstrips supply. The main constraint is that the Ivy League depends on donations. One way to scale up, while maintaining high donation rates, is to increase legacy admissions. Teaching quality is unlikely to suffer, since qualified students are easy to find.

Object, Subjects and Gender by The Baliocene Apocrypha - "Under modern post-industrial bureaucratized high-tech capitalism, it is less rewarding than ever before to be a subject. Under modern post-industrial bureaucratized high-tech capitalism, it is more rewarding than ever before to be an object. This alone accounts for a lot of the widespread weird stuff going on with gender these days."

Links 11 by Artir - Psychology, Politics, Economics, Philosophy, Other. Several links related to the Google memo.

Unpopular Ideas About Crime And Punishment by Julia Galef - Thirteen opinions on prison abolition, the death penalty, corporal punishment, rehabilitation, redistribution and more.

Intangible Investment and Monopoly Profits by Marginal Revolution - "Intangible capital used to be below 30 percent of the S&P 500 in the 70s, now it is about 84 percent. " Seven implications about profit, monopoly, spillover, etc.

What You Cant Say To A Sympathetic Ear by Katja Grace - Sharing socially unacceptable views with your friends is putting them in a bad situation, regardless of whether they agree with those ideas. If they don't punish you society will hold them complicit. Socially condemning views is worse than commonly thought "To successfully condemn a view socially is to lock that view in place with a coordination problem."

AI Bias Doesn't Mean What Journalists Want You To Think It Means by Chris Stucchio and Lisa Mahapatra (Jacobite) - What is data science and AI? What is bias? How do we identify bias? The fallout of the author's algorithm. Predicting Creditworthiness. Understanding Language. Predicting Criminal Behavior. Journalists and Wishful Thinking.

Four Decades of the Middle East by Bryan Caplan - "Almost all of the Middle East's disasters over the past four decades can be credibly traced back to a single highly specific major event: the Iranian Revolution. Let me chronicle the tragic trail of dominoes."

The Thresher by sam[]zdat - "Still, if what makes 'modernity' modernity is partially in technology, then the Uruk Machine will be updated and whirring at unfathomable speeds, the thresher to Gilgamesh’s sacred club."

The Uruk Machine by sam[]zdat - Sam's fundamental framework: Seeing like a State, The Great Transformation, The True Believer, The Culture of Narcissism.

===Misc:

Into The Gray Zone by Bayesian Investor - Book Review. A modest fraction of people diagnosed as being in a persistent vegetative state have locked-in syndrome. People misjudge when they would want to die. Alzheimer's.

===Podcast:

What You Need To Know About Climate Change by Waking Up with Sam Harris - "How the climate is changing and how we know that human behavior is the primary cause. They discuss why small changes in temperature matter so much, the threats of sea-level rise and desertification, the best and worst case scenarios, the Paris Climate Agreement, the politics surrounding climate science."

Dan Rather by The Ezra Klein Show - "Rather and I discuss the Trump presidency and what it means for the Republican Party's future, our fractured media landscape, and Rather's own evolving career in media."

Caplan Family by Bryan Caplan - "For the last two years, I homeschooled my elder sons, Aidan and Tristan, rather than send them to traditional middle school. Now they've been returned to traditional high school. We decided to mark our last day with a father-son/teacher-student podcast on how we homeschooled, why we homeschooled, and what we achieved in homeschool."

Rob Reich On Foundations by EconTalk - "The power and effectiveness of foundations--large collections of wealth typically created and funded by a wealthy donor. Is such a plutocratic institution consistent with democracy? Reich discusses the history of foundations in the United States and the costs and benefits of foundation expenditures in the present."

Jesse Singal On The Problems With Implicit Bias Tests by Rational Speaking - "The IAT has been massively overhyped, and that in fact there's little evidence that it's measuring real-life bias. Jesse and Julia discuss how to interpret the IAT, why it became so popular, and why it's still likely that implicit bias is real, even if the IAT isn't capturing it."

Emotionally Charged Discussion by The Bayesian Conspiracy - Conversations where one party thinks the other side's position is stupid/evil/etc. Debate vs truth seeking. Julia Galef's lists of unpopular ideas. Agenty Duck's thoughts on introspection. Double Crux.

The Future Of Intelligence by Waking Up with Sam Harris - "Max Tegmark. His new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI."

Benedict Evans by EconTalk - "Two important trends for the future of personal travel--the increasing number of electric cars and a world of autonomous vehicles. Evans talks about how these two trends are likely to continue and the implications for the economy, urban design, and how we live."

The Life Of A Quant Trader by 80,000 Hours - What do quant traders do? Compensation. Is quant trading harmful? Who is a good fit and how to break into quant trading. Work environment and motivation. Variety of available positions.

Instrumental Rationality Sequence Finished! (w/ caveats)

4 lifelonglearner 09 September 2017 01:49AM

Hey everyone,

Back in April, I said I was going to start writing an instrumental rationality sequence.

It's...sort of done.

I ended up collecting the essays into a sort of e-book. It's mainly content that I've put here (Starting Advice, Planning 101, Habits 101, etc.), but there's also quite a bit of new content.

It clocks in at about 150 pages and 30,000 words, about 15,000 of which I wrote after the April announcement post. (Which beats my estimate of 10,000 words before burnout!!!)

However, the LW 1.0 editor isn't making it easy to port the stuff here from my Google Drive.

As LW 2.0 enters actual open beta, I'll repost / edit the essays and host them there. 

In the meantime, if you want to read the whole compiled book, the direct Google Doc link is here. That's where the real-time updates will happen, so it's what I'd recommend using to read it for now.

(There's also an online version on my blog if for some reason you want to read it there.)

It's my hope that this sequence becomes a useful reference for newcomers looking to learn more about instrumental rationality; it's more specialized than The Sequences (which really focus more on epistemics).

Unfortunately, I didn't manage to write the book/sequence I set out to write. The actual book as it is now is about 10% as good as what I actually wanted. There's stuff I didn't get to write, more nuances I'd have liked to cover, more pictures I wanted to make, etc.

After putting in many hours of research and writing, I think I've learned more about the sort of effort that would need to go into making the actual project I'd outlined at the start.

There'll be a postmortem essay analyzing my expectations vs reality coming soon.

As a result of this project and a few other things, I'm feeling burned out. There probably won't be any major projects from me for a little bit, while I rest up.

Inconsistent Beliefs and Charitable Giving

3 Rossin 08 September 2017 07:33AM

There is a common tendency in human life to act in ways contrary to what we believe.

The classic example is the German people under Nazi rule, most of whom likely thought of themselves as good people, the kind of people who would help their neighbors even at risk to themselves, yet who did nothing about the rounding up of Jews, Gypsies, and homosexuals into concentration camps. They didn't want to give up their self-image as good people, but they also didn't want themselves and their families to potentially face the wrath of the SS. So, many convinced themselves that they didn't care about what was happening. That was far easier, less painful, than admitting that they were not quite as moral and upright as they thought, or having to put themselves in mortal danger.

 

I used to think that I would have been one of the few who did in fact shelter the "undesirables" from the Nazis. Now, I am less confident. But I want to be better. Just recently, I realized I have been similarly inconsistent by not donating to organizations that help people dying of preventable diseases and that can save a life for relatively small amounts of money.

If you had accused me of this up until a few days ago I would have given you all sorts of excuses for why this lack of action and my belief “the death and suffering of others is bad and I should prevent it if I can” were not inconsistent. I would have told you how I feel terrible about the dying children when I think about them, but I am prioritizing other problems. And besides, I’m a college student with very little disposable income and it’s really just financially prudent to save all my money in case of an unforeseen contingency. Once I start making more money later on in life, then I’ll start contributing to organizations that send people malaria nets.

 

But that's all a self-deception. The truth is that my beliefs and actions were inconsistent: I quite firmly believe that saving lives is more important than beer, yet I continually find money for beer and none for the Against Malaria Foundation.

I think the root cause of this kind of inconsistency is often a feeling of being overwhelmed. If you imagine a single child dying of malaria, feverish and convulsing weakly in her bed while her parents look on in helpless horror, you’ll probably wish you could do something to stop those people’s pain.

When you think about the thousands in the same position, when you think about the difficulty of doing something, how much money it would cost to actually save a life, the need to ensure that the organization you’re sending money to actually will use it effectively to help people in need…well, the whole thing just seems too complicated. Not only that, there are so many organizations claiming that donating money to them will save lives, and few of them are likely to admit other organizations are doing the same job better. Decision paralysis takes over and it’s very easy to decide that this is one of those things that’s better not to think about, at least for now.

On the other hand, grabbing drinks with friends is quite simple to execute, and it is very easy not to notice the opportunity cost (Note: I am not saying that I think I should or anyone should stop spending money on enjoying themselves, just that if I have enough disposable income for getting drinks with friends, I would consider that I have enough to spend on saving lives).

And that is the way I chose to be indifferent about something I would have cared about if my beliefs were consistent. I’d like to rationalize it as prioritizing other things, rather than just deciding not to care, but that is not the truth. The truth is I understand exactly how most of the German people under Nazi rule made themselves indifferent to the rounding up of their “undesirable” neighbors. When something bad is happening and we don’t quite know how to stop it, or the sacrifice needed to help stop it feels painful, choosing to be indifferent is frighteningly easy, even about truly horrific things.

Having noticed this inconsistency, the problem becomes obvious. I did not think about the true opportunity cost of non-essential purchases, which is that the same money could be used to help save lives. When I look at buying anything I do not strictly need from now on, I am going to try to remember that opportunity cost, so that, even if I do end up buying the thing anyway, at least I have not stopped caring.

www.givewell.org will help you estimate what that opportunity cost is and there are very good posts on here as well about effective giving, if you’re interested.

 

New business opportunities due to self-driving cars

8 chaosmage 06 September 2017 08:07PM

This is a slightly expanded version of a talk presented at the Less Wrong European Community Weekend 2017.

Predictions about self-driving cars in the popular press are pretty boring. Truck drivers are losing their jobs, self-driving cars will be more rented than owned, transport becomes cheaper, so what. The interesting thing is how these things change the culture and economy and what they make possible.

I have no idea about most of this. I don't know if self-driving cars accelerate or decelerate urbanization, I don't know how public transport responds, I don't even care which of the old companies survive. What I do think is somewhat predictable is that some business opportunities that previously weren't economical will become so. I disregard retail, which would continue moving online at the expense of brick and mortar stores even if FedEx trucks continued to be driven by people.

Diversification of vehicle types

A family car that you own has to be somewhat good at many different jobs. It has to get you places fast. It has to be a thing that can transport lots of groceries. It has to take your kid to school.

With self-driving cars that you rent for each separate job, you want very different cars. A very fast one to take you places. A roomy one with easy access for your groceries. And a tiny, cute, unicorn-themed one that takes your kid to school.

At the same time, the price of autonomy is dropping faster than the price of batteries, so you want the lowest mass car that can do the job. So a car that is very fast and roomy and unicorn-themed at the same time isn't economical.

So if you're an engineer or a designer, consider going into vehicle design. There's an explosion of creativity about to happen in that field that will make it very different from the subtle iterations in car design of the past couple of decades.

Who wins: Those who design useful new types of autonomous vehicles for needs that are not, or badly, met by general purpose cars.

Who loses: Owners of general purpose cars, which lose value rapidly.

Services at home

If you have a job where customers come to visit you, say you're a doctor or a hairdresser or a tattoo artist, your field of work is about to change completely. This is because services that go visit the customer outcompete ones that the customer has to go visit. They're more convenient and they can also easily service less mobile customers. This already exists for rich people: If you have a lot of money, you pay for your doctor's cab and have her come to your mansion. But with transport prices dropping sharply, this reaches the mass market.

This creates an interesting dynamic. In this kind of job, you have some vague territory - your customers are mostly from your surrounding area and your number of competitors inside this area is relatively small. With services coming to the home, everyone's territories become larger, so more of them overlap, creating competition and discomfort. I believe the typical solution, which reinstates a more stable business situation and requires no explicit coordination, is increased specialization within your profession. So a doctor might be less of her district's general practitioner and more of her city's leading specialist in one particular illness within one particular demographic. A hairdresser might be the city's expert for one particular type of haircut for one particular type of hair. And so on.

Who wins: Those who adapt quickly and steal customers from stationary services.

Who loses: Stationary services and their landlords.

Rent anything

You will not just rent cars, you will rent anything that a car can bring to your home and take away again. You don't go to the gym, you have a mobile gym visit you twice a week. You don't own a drill that sits unused 99.9% of the time, you have a little drone bring you one for an hour for like two dollars. You don't buy a huge sound system for your occasional party, you rent one that's even huger and on wheels.

Best of all, you can suddenly have all sort of absurd luxuries, stuff that previously only millionaires or billionaires would afford, provided you only need it for an hour and it fits in a truck. The possibilities for business here are dizzying.

Who wins: People who come up with clever business models and the vehicles to implement them.

Who loses: Owners and producers of infrequently used equipment.

Self-driving hotel rooms

This is a special case of the former but deserves its own category. Self-driving hotel rooms replace not just hotel rooms, but also tour guides and your holiday rental car. They drive you to all the tourist sites, they stop at affiliated restaurants, they occasionally stop at room service stations. And on the side, they do overnight trips from city to faraway city, competing with airlines.

Who wins: The first few companies who perfect this.

Who loses: Stationary hotels and motels.

Rise of alcoholism and drug abuse

Lots of people lack intrinsic motivation to be sober. They basically can't decide against taking something. Many of them currently make do with extrinsic motivation: They manage to at least not drink while driving. In other words, for a large number of people, driving is their only reason not to drink or do drugs. That reason is going away and consumption is sure to rise accordingly.

Hey, I didn't say all the business opportunities were particularly ethical. But if you're a nurse or doctor and you go into addiction treatment, you're probably doing good.

Who wins: Suppliers of mind-altering substances and rehab clinics.

Who loses: The people who lack intrinsic motivation to be sober, and their family and friends.

Autonomous boats and yachts

While there's a big cost advantage to vehicle autonomy in cars, it is arguably even bigger in boats. You don't need a sailing license, you don't need to hire skilled sailors, you don't need to carry all the room and food those sailors require. So the cost of going by boat drops a lot, and there's probably a lot more traffic in (mostly coastal) waters. Again very diverse vehicles, from the little skiff that transports a few divers or anglers to the personal yacht that you rent for your honeymoon. This blends into the self-driving hotel room, just on water.

Who wins: Shipyards, especially the ones that adapt early.

Who loses: Cruise ships and marine wildlife.

Mobile storage

The only reason we put goods in warehouses is that it is too expensive to just leave them in the truck all the way from the factory to the buyer. That goes away as well, although with the huge amounts of moved mass involved this transition is probably slower than the others. Shipping containers on wheels already exist.

Who wins: Manufacturers, and logistics companies that can provide even better just in time delivery.

Who loses: Intermediate traders, warehouses and warehouse workers.

That's all I got for now. And I'm surely missing the most important innovation that self-driving vehicles will permit. But until that one becomes clear, maybe work with the above. All of these are original ideas that I haven't seen written down anywhere. So if you like one of these and would like to turn it into a business, you're a step ahead of nearly everybody right now and I hope it makes you rich. If it does, you can buy me a beer. :-)

Come check out the Boulder Future Salon this Saturday!

1 fowlertm 06 September 2017 03:49PM
I'm giving a talk on the STEMpunk Project this Saturday at the Boulder Future Salon:
I love BFS and I would encourage you to come check it out if you're in the area.
Let me tell you a story which illustrates why I think they're a valuable group. Once upon a time I went to a presentation there given by a member who'd written a program that generates artificial music. As we were waiting around, one of the other guys (whose name I can't remember off the top of my head) just randomly handed me a book and said "you'd probably get a kick out of this."
It was Rudolf Carnap's "The Logical Structure of the World". I read the introduction, thumbed through it a bit, and we had a brief conversation about its relevance to philosophy and to recondite areas of software engineering like database design.

Reflecting on this episode later I realized how remarkable it was. It's not like this other person knew me very well, but by the mere fact that I'd walked through the door he assumed I'd be able to read a book like this and that I'd want to.
I have encountered precious few places like this anywhere.
There wound up being a guitar in the facility, and later in that same meetup I had a duel with the software my friend had created.
Any place where you can find robot music and logical positivism is a place worth exploring.

Heuristics for textbook selection

8 John_Maxwell_IV 06 September 2017 04:17AM

Back in 2011, lukeprog posted a textbook recommendation thread.  It's a nice thread, but not every topic has a textbook recommendation.  What are some other heuristics for selecting textbooks besides looking in that thread?

Amazon star rating is the obvious heuristic, but it occurred to me that Amazon sales rank might actually be more valuable: It's an indicator that profs are selecting the textbook for their classes.  And it's an indicator that the textbook has achieved mindshare, meaning you're more likely to learn the same terminology that others use.  (But there are also disadvantages of having the same set of mental models that everyone else is using.)

Somewhere I read that Elements of Statistical Learning was becoming the standard machine learning text partially because it's available for free online.  That creates a wrinkle in the sales rank heuristic, because people are less likely to buy a book if they can get it online for free.  (Though Elements of Statistical Learning appears to be a #1 bestseller on Amazon, in bioinformatics.)

Another heuristic is to read the biographies of the textbook authors and figure out who has the most credible claim to expertise, or who seems to be the most rigorous thinker (e.g. How Brands Grow is much more data-driven than a typical marketing book).  Or try to figure out what text the most expert professors are choosing for their classes.  (Oftentimes you can find the syllabi of their classes online.  I guess the naive path would probably look something like: go to US News to see what the top ranked universities are for the subject you're interested in.  Look at the university's course catalog until you find the course that covers the topic you want to learn.  Do site:youruniversity.edu course_id on Google in order to find the syllabus for the most recent time that course was taught.)

Online discussion is better than pre-publication peer review

12 Wei_Dai 05 September 2017 01:25PM

Related: Why Academic Papers Are A Terrible Discussion Forum, Four Layers of Intellectual Conversation

During a recent discussion about (in part) academic peer review, some people defended peer review as necessary in academia, despite its flaws, for time management. Without it, they said, researchers would be overwhelmed by "cranks and incompetents and time-card-punchers" and "semi-serious people post ideas that have already been addressed or refuted in papers already". I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas". I was prompted by Michael Arc and Stuart Armstrong to elaborate. Here's what I wrote in response:

My experience is with systems like LW. If an article is in my own specialty then I can judge it easily and make comments if it's interesting, otherwise I look at its votes and other people's comments to figure out whether it's something I should pay more attention to. One advantage over peer review is that each specialist can see all the unfiltered work in their own field, and it only takes one person from all the specialists in a field to recognize that a work may be promising, then comment on it and draw others' attention. Another advantage is that nobody can make ill-considered comments without suffering personal consequences since everything is public. This seems like an obvious improvement over standard pre-publication peer review, for the purpose of filtering out bad work and focusing attention on promising work, and in practice works reasonably well on LW.

Apparently some people in academia have come to similar conclusions about how peer review is currently done and are trying to reform it in various ways, including switching to post-publication peer review (which seems very similar to what we do on forums like LW). However it's troubling (in a "civilizational inadequacy" sense) that academia is moving so slowly in that direction, despite the necessary enabling technology having been invented a decade or more ago.

Open thread, September 4 - September 10, 2017

2 Thomas 04 September 2017 07:41AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Minimizing Motivated Beliefs

1 entirelyuseless 03 September 2017 03:56PM

[Link] Debiasing by rationalizing your own motives

1 Kaj_Sotala 03 September 2017 12:20PM

September 2017 Media Thread

1 ArisKatsaris 02 September 2017 09:17PM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

Simplified Anthropic Doomsday

1 Stuart_Armstrong 02 September 2017 08:37PM

Here is a simplified version of the Doomsday argument in Anthropic decision theory, to get easier intuitions.

Assume a single agent A exists, an average utilitarian, with utility linear in money. Their species survives with 50% probability; denote this event by S. If the species survives, there will be 100 people total; otherwise the average utilitarian is the only one of its kind. An independent coin lands heads with 50% probability; denote this event by H.

Agent A must price a coupon CS that pays out €1 on S, and a coupon CH that pays out €1 on H. The coupon CS pays out only on S, so the reward only exists in a world where there are a hundred people; thus, if S happens, the coupon CS is worth (€1)/100 to the average utilitarian. Hence its expected worth is (€1)/200 = (€2)/400.

But H is independent of S, so (H,S) and (H,¬S) both have probability 25%. In (H,S), there are a hundred people, so CH is worth (€1)/100. In (H,¬S), there is one person, so CH is worth (€1)/1=€1. Thus the expected value of CH is (€1)/4+(€1)/400 = (€101)/400. This is more than 50 times the value of CS.

Note that C¬S, the coupon that pays out on doom, has an even higher expected value of (€1)/2=(€200)/400.
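The arithmetic can be checked mechanically. Here is a minimal Python sketch (my own illustration, not part of the original post) that enumerates the four equally likely worlds and values each coupon the way the average utilitarian A would:

    from fractions import Fraction

    # The four equally likely worlds, indexed by (survival S, coin H).
    worlds = [(s, h) for s in (True, False) for h in (True, False)]
    prob = Fraction(1, 4)  # each world has probability 1/4

    def population(s):
        # 100 people if the species survives, otherwise A is alone.
        return 100 if s else 1

    def coupon_value(pays_out):
        # Average-utilitarian value: the €1 payout divided by the population
        # of that world, summed over the worlds where the coupon pays out.
        return sum(prob * Fraction(1, population(s))
                   for (s, h) in worlds if pays_out(s, h))

    print(coupon_value(lambda s, h: s))      # CS:  1/200 (= 2/400)
    print(coupon_value(lambda s, h: h))      # CH:  101/400
    print(coupon_value(lambda s, h: not s))  # C¬S: 1/2   (= 200/400)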

So, H and S have identical probability, but A assigns CS and CH different expected utilities, with a higher value to CH, simply because S is correlated with survival and H is independent of it (and A assigns an even higher value to C¬S, which is anti-correlated with survival). This is a phrasing of the Doomsday Argument in ADT.

The Doomsday argument in anthropic decision theory

5 Stuart_Armstrong 31 August 2017 01:44PM

EDIT: added a simplified version here.

Crossposted at the intelligent agents forum.

In Anthropic Decision Theory (ADT), behaviours that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences (and from certain specific selfish preferences).

However, SSA implies the doomsday argument, and, to date, I hadn't found a good way to express the doomsday argument within ADT.

This post will remedy that hole, by showing how there is a natural doomsday-like behaviour for average utilitarian agents within ADT.

continue reading »

Is life worth living?

5 philosophytorres 30 August 2017 10:42AM

Genuinely curious how folks on this website would answer the following question:

 

First, imagine the improbable: God exists. Now pretend that he descends from the clouds and visits you one night, saying the following: "I'm going to give you exactly two choices. (1) I'll murder you right now and annihilate your soul, meaning that you'll have no more conscious experiences ever again. [Theologians call this "annihilationism."] Alternatively, (2) I'll allow you to relive your life up to this moment exactly as it unfolded the first time -- that is, all the exact same experiences, life decisions, outcomes, etc. If you choose the second, once you reach the present moment -- this moment right now -- I'll then annihilate your soul."

 

Which would you choose, if you were forced to pick one or the other?

Intrinsic properties and Eliezer's metaethics

6 Tyrrell_McAllister 29 August 2017 11:26PM

Abstract

I give an account of why some properties seem intrinsic while others seem extrinsic. In light of this account, the property of moral goodness seems intrinsic in one way and extrinsic in another. Most properties do not suffer from this ambiguity. I suggest that this is why many people find Eliezer's metaethics to be confusing.

Section 1: Intuitions of intrinsicness

What makes a particular property seem more or less intrinsic, as opposed to extrinsic?

Consider the following three properties that a physical object X might have:

  1. The property of having the shape of a regular triangle. (I'll call this property "∆-ness" or "being ∆-shaped", for short.)
  2. The property of being hard, in the sense of resisting deformation.
  3. The property of being a key that can open a particular lock L (or L-opening-ness).

To me, intuitively, ∆-ness seems entirely intrinsic, and hardness seems somewhat less intrinsic, but still very intrinsic. However, the property of opening a particular lock seems very extrinsic. (If the notion of "intrinsic" seems meaningless to you, please keep reading. I believe that I ground these intuitions in something meaningful below.)

When I query my intuition on these examples, it elaborates as follows:

(1) If an object X is ∆-shaped, then X is ∆-shaped independently of any consideration of anything else. Object X could manifest its ∆-ness even in perfect isolation, in a universe that contained no other objects. In that sense, being ∆-shaped is intrinsic to X.

(2) If an object X is hard, then that fact does have a whiff of extrinsicness about it. After all, X's being hard is typically apparent only in an interaction between X and some other object Y, such as in a forceful collision after which the parts of X are still in nearly the same arrangement.

Nonetheless, X's hardness still feels to me to be primarily "in" X. Yes, something else has to be brought onto the scene for X's hardness to do anything. That is, X's hardness can be detected only with the help of some "test object" Y (to bounce off of X, for example). Nonetheless, the hardness detected is intrinsic to X. It is not, for example, primarily a fact about the system consisting of X and the test object Y together.

(3) Being an L-opening key (where L is a particular lock), on the other hand, feels very extrinsic to me. A thought experiment that pumps this intuition for me is this: Imagine a molten blob K of metal shifting through a range of key-shapes. The vast majority of such shapes do not open L. Now suppose that, in the course of these metamorphoses, K happens to pass through a shape that does open L. Just for that instant, K takes on the property of L-opening-ness. Nonetheless, and here is the point, an observer without detailed knowledge of L in particular wouldn't notice anything special about that instant.

Contrast this with the other two properties: An observer of three dots moving in space might notice when those three dots happen to fall into the configuration of a regular triangle. And an observer of an object passing through different conditions of hardness might notice when the object has become particularly hard. The observer can use a generic test object Y to check the hardness of X. The observer doesn't need anything in particular to notice that X has become hard.

But all that is just an elaboration of my intuitions. What is really going on here? I think that the answer sheds light on how people understand Eliezer's metaethics.

Section 2: Is goodness intrinsic?

I was led to this line of thinking while trying to understand why Eliezer's metaethics is consistently confusing.

The notion of an L-opening key has been my personal go-to analogy for thinking about how goodness (of a state of affairs) can be objective, as opposed to subjective. The analogy works like this: We are like locks, and states of affairs are like keys. Roughly, a state is good when it engages our moral sensibilities so that, upon reflection, we favor that state. Speaking metaphorically, a state is good just when it has the right shape to "open" us. (Here, "us" means normal human beings as we are in the actual world.) Being of the right shape to open a particular lock is an objective fact about a key. Analogously, being good is an objective fact about a state of affairs.

Objective in what sense? In this important sense, at least: The property of being L-opening picks out a particular point in key-shape space [1]. This space contains a point for every possible key-shape, even if no existing key has that shape. So we can say that a hypothetical key is "of an L-opening shape" even if the key is assumed to exist in a world that has no locks of type L. Analogously, a state can still be called good even if it is in a counterfactual world containing no agents who share our moral sensibilities.

But the discussion in Section 1 made "being L-opening" seem, while objective, very extrinsic, and not primarily about the key K itself. The analogy between "L-opening-ness" and goodness seems to work against Eliezer's purposes. It suggests that goodness is extrinsic, rather than intrinsic. For, one cannot properly call a key "opening" in general. One can only say that a key "opens this or that particular lock". But the analogous claim about goodness sounds like relativism: "There's no objective fact of the matter about whether a state of affairs is good. There's just an objective fact of the matter about whether it is good to you."

This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.

Section 3: Seeing intrinsicness in simulations

I think that we can account for the intuitions of intrinsicness in Section 1 by looking at them from the perspective simulations. Moreover, this account will explain why some of us (including perhaps Eliezer) judge goodness to be intrinsic.

The main idea is this: In our minds, a property P, among other things, "points to" the test for its presence. In particular, P evokes whatever would be involved in detecting the presence of P. Whether I consider a property P to be intrinsic depends on how I would test for the presence of P — NOT, however, on how I would test for P "in the real world", but rather on how I would test for P in a simulation that I'm observing from the outside.

Here is how this plays out in the cases above.

(1) In the case of being ∆-shaped, consider a simulation (on a computer, or in your mind's eye) consisting of three points connected by straight lines to make a triangle X floating in space. The points move around, and the straight lines stretch and change direction to keep the points connected. The simulation itself just keeps track of where the points and lines are. Nonetheless, when X becomes ∆-shaped, I notice this "directly", from outside the simulation. Nothing else within the simulation needs to react to the ∆-ness. Indeed, nothing else needs to be there at all, aside from the points and lines. The ∆-shape detector is in me, outside the simulation. To make the ∆-ness of an object X manifest, the simulation needs to contain only the object X itself.
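As a concrete toy version of this (my own sketch, not from the post; the function name and tolerance are made up for illustration), the "simulation" below tracks nothing but three point positions, while the ∆-ness detector sits entirely outside it, in the observer's code:

    import math

    def is_regular_triangle(p1, p2, p3, tol=1e-9):
        # The observer's ∆-ness detector: it needs only the three simulated points.
        dist = math.dist
        sides = sorted([dist(p1, p2), dist(p2, p3), dist(p3, p1)])
        return sides[0] > tol and (sides[2] - sides[0]) <= tol * sides[2]

    # The simulation itself only tracks where the points are.
    print(is_regular_triangle((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # True
    print(is_regular_triangle((0, 0), (1, 0), (0, 1)))                   # False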

In summary: A property will feel extremely intrinsic to X when my detecting the property requires only this: "Simulate just X."

(2) For the case of hardness, imagine a computer simulation that models matter and its motions as they follow from the laws of physics and my exogenous manipulations. The simulation keeps track of only fundamental forces, individual molecules, and their positions and momenta. But I can see on the computer display what the resulting clumps of matter look like. In particular, there is a clump X of matter in the simulation, and I can ask myself whether X is hard.

Now, on the one hand, I am not myself a hardness detector that can just look at X and see its hardness. In that sense, hardness is different from ∆-ness, which I can just look at and see. In this case, I need to build a hardness detector. Moreover, I need to build the detector inside the simulation. I need some other thing Y in the simulation to bounce off of X to see whether X is hard. Then I, outside the simulation, can say, "Yup, the way Y bounced off of X indicates that X is hard." (The simulation itself isn't generating statements like "X is hard", any more than the 3-points-and-lines simulation above was generating statements about whether the configuration was a regular triangle.)

On the other hand, crucially, I can detect hardness with practically anything at all in addition to X in the simulation. I can take practically any old chunk of molecules and bounce it off of X with sufficient force.

A property of an object X still feels intrinsic when detecting the property requires only this: "Simulate just X + practically any other arbitrary thing."

Indeed, perhaps I need only an arbitrarily small "epsilon" chunk of additional stuff inside the simulation. Given such a chunk, I can run the simulation to knock the chunk against X, perhaps from various directions. Then I can assess the results to conclude whether X is hard. The sense of intrinsicness comes, perhaps, from "taking the limit as epsilon goes to 0", seeing the hardness there the whole time, and interpreting this as saying that the hardness is "within" X itself.

In summary: A property will feel very intrinsic to X when its detection requires only this: "Simulate just X + epsilon."

(3) In this light, L-opening keys differ crucially from ∆-shaped things and from hard things.

An L-opening key differs from a ∆-shaped object because I myself do not encode lock L. Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let's suppose) for keys of the right shape to open lock L. So I cannot simulate a key K alone and see its L-opening-ness.

Moreover, I cannot add something merely arbitrary to the simulation to check K for L-opening-ness.  I need to build something very precise and complicated inside the simulation: an instance of the lock L. Then I can insert K in the lock and observe whether it opens.

I need, not just K, and not just K + epsilon: I need to simulate K + something complicated in particular.

Section 4: Back to goodness

So how does goodness as a property fit into this story?

There is an important sense in which goodness is more like being ∆-shaped than it is like being L-opening. Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it. Putting it another way, goodness is like L-opening would be if I happened myself to encode lock L. If that were the case, then, as soon as I saw K take on the right shape inside the simulation, that shape could "click" with me outside of the simulation.

That is why goodness seems to have the same ultimate kind of intrinsicness that ∆-ness has and which being L-opening lacks. We don't encode locks, but we do encode morality.

 

Footnote

1. Or, rather, a small region in key-shape space, since a lock will accept keys that vary slightly in shape.

Is there a flaw in the simulation argument?

2 philosophytorres 29 August 2017 02:34PM

Can anyone tell me what's wrong with the following "refutation" of the simulation argument? (I know this is a bit long -- my apologies! I also posted an earlier draft several months ago and got some excellent feedback. I don't see a flaw, but perhaps I'm missing something!)

Consider the following three scenarios:

Scenario 1: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you guess that you’re in room X—and consequently, you almost certainly win 1 million dollars. After all, since betting odds are a guide to rationality, if everyone in room X and Y were to bet that they’re in room X, just about everyone would win.

Scenario 2: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. You are also told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. The question here is: Does the extra information about the past histories of rooms X and Y change your mind about which room you’re in? It shouldn’t. After all, if everyone currently in rooms X and Y were to bet that they’re in room X, just about everyone would win.

Scenario 3: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then told that you’ll be escorted into room Z through one of two rooms, either X or Y, but you won’t know which one. At any given moment, or timeslice, there will always be exactly 1,000 people in room X and only a single person in room Y. (Thus, as one person enters each room another one exits into room Z.) Once you arrive in room Z at time T2, you are told that between T1 and T2 a total of 1 billion people passed through room Y whereas only 10,000 people in total passed through room X, where all of these people are now in room Z with you. There is no way of communicating with anyone else, so you must use the information given to guess which room, X or Y, you passed through on your way from Location A to room Z. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you now guess that you passed through room Y—and consequently, you almost certainly win 1 million dollars. After all, if everyone in room Z at T2 were to bet that they passed through room Y rather than room X, the large majority would win.
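The betting-odds claims in the three scenarios reduce to simple ratios. Here is a quick check (my own sketch; the figures are just the ones given in the scenarios above):

    from fractions import Fraction

    # Scenarios 1 and 2: right now there are 1,000 people in room X and 1 in room Y.
    # If everyone currently in a room bets "I am in X", the fraction who win:
    print(Fraction(1000, 1000 + 1))               # 1000/1001 -- almost everyone wins

    # Scenario 3: by T2, a billion people have passed through Y and ten thousand
    # through X, and all of them are now in room Z.  If everyone in Z bets
    # "I came through Y", the fraction who win:
    print(float(Fraction(10**9, 10**9 + 10**4)))  # ~0.99999 -- again almost everyone wins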

Let’s analyze these scenarios. In the first two, the only relevant information is synchronic information about the current distribution of people when you answer the question, “Which room am I in, X or Y?” (Thus, the historical knowledge offered in Scenario 2 doesn’t change your answer.) In contrast, the only relevant information in the third scenario is diachronic information about which of the two rooms had more people pass through them from T1 to T2. If these claims are correct, then the simulation argument proposed by Nick Bostrom (2003) is flawed. The remainder of this paper will (a) outline this argument, and (b) show how the ideas above falsify the argument’s conclusion.

According to the simulation argument, one or more of the following disjuncts must be true: (i) humanity goes extinct before reaching a stage of technological development that would enable us to run a large number of ancestral simulations; (ii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations but we decide not to; and (iii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do, in fact, run a large number of ancestral simulations. The third disjunct entails that we would almost certainly live in a computer simulation because (a) a sufficiently high-resolution simulation would be sensorily and phenomenologically indistinguishable from the “real” world, and (b) the indifference principle tells us to distribute our probabilities evenly among all the possibilities if we have no special reason to favor one over another. Since the population of sims would far outnumber the population of non-sims in scenario (iii), ex hypothesi, then we would almost certainly be sims. This is the simulation hypothesis.

But consider the following possible Posthuman Future: instead of running a huge number of ancestral simulations in parallel, as Bostrom seems to assume we would, future humans run a huge number of simulations sequentially, one after another. This could be done such that at any given moment the total number of extant non-sims far exceeds the total number of extant sims, yet over time the total number of sims who have existed far exceeds the total number of non-sims who also have existed. (This could be accomplished by running simulations at speeds much faster than realtime.) If the question is, “Where am I right now, in a simulation or not?,” then the principle of indifference instructs you to answer, “I am not a sim.” After all, if everyone were to bet at some timeslice Tx that they are not a sim, nearly everyone would win.
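To see how the synchronic and diachronic ratios can point in opposite directions, here is a toy calculation with made-up numbers (the post itself specifies no figures; these are purely illustrative):

    # Hypothetical sequential-simulation future; all numbers are illustrative only.
    non_sims_now = 10**10   # extant non-sims at any given timeslice
    sims_now     = 10**6    # extant sims at that same timeslice
    runs         = 10**7    # simulations run one after another, faster than realtime

    # Synchronic ratio -- what "am I a sim right now?" cares about:
    print(non_sims_now / (non_sims_now + sims_now))   # ~0.9999, mostly non-sims

    # Diachronic ratio -- everyone who has ever existed (ignoring non-sim turnover):
    sims_ever = sims_now * runs                       # 10**13 sims in total over time
    print(sims_ever / (sims_ever + non_sims_now))     # ~0.999, mostly sims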

Here the only information that matters is synchronic information; diachronic information about how many sims, non-sims, or “observer-moments” there have been has no bearing on one’s credence about one’s present ontological status (sim or non-sim?)—that is, no more than historical knowledge about rooms X and Y in Scenario 2 have any bearing on one’s response to the question, “Which room am I currently in?” This is problematic for the simulation argument because the Posthuman Future outlined above satisfies the condition of disjunct (iii) yet it doesn’t entail that one is almost certainly living in a simulation. Thus, Bostrom’s assertion that “at least one of the following propositions is true” is false.

One might wonder: but what if we run a huge number of simulations sequentially and then stop? Wouldn't this be analogous to Scenario 3, in which we would have reason for believing that we passed through room Y rather than room X, i.e., that we were (and thus still are) in a simulation rather than the "real" world? The answer is no, it's not analogous to Scenario 3, because in our case we would have some additional relevant information about our actual history: we would know that we were in "room X," which held more people at every given moment, since we would have control over the ratio of sims to non-sims (always making sure that the latter far outnumbers the former). Even more, if we were to stop all simulations, then the ratio of sims to non-sims would be zero to whatever the human population is at the time, thus making a bet that we are non-sims virtually certain. So far as I can tell, these conclusions follow whether one accepts the self-sampling assumption (SSA), strong self-sampling assumption (SSSA), or the self-indication assumption (SIA) (Bostrom 2002).

In sum, the simulation argument is missing a fourth disjunct: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do run a large number of ancestral simulations, yet the principle of indifference leads us to believe that we are not in a simulation. It will, of course, be up to future generations to decide whether to run a large number of ancestral simulations, and if so whether to run these sequentially or in parallel, given the ontological-epistemic implications of each.

Doing a big survey on work, stress, and productivity. Feedback / anything you're curious about?

1 lionhearted 29 August 2017 02:19PM

In September, I'm doing a big survey on work, stress, and productivity -- I'm going to gather a bunch of possibly germane data, and then see what correlations stand out.

Current version is around 90% complete here --

[done]

Any feedback? Any data you'd be very interested in getting? We're basically guaranteed to get basic statistical significance / sample size, and might have respondents in the mid-thousands if things break right. What would you like to know? Feedback? Thanks.

Edit 7 September: it's now live here -- https://form.jotform.com/71974198606368 -- I answered a few of the top questions and read all the rest and incorporated some of the feedback. Thanks so much.

Request For Collaboration

1 DragonGod 28 August 2017 11:05PM

I want to work on a paper: "The Information Theoretic Conception of Personhood". My philosophy is shit though, so I am interested in a coauthor: someone who has the relevant philosophical knowledge to let the paper stand the test of academic rigour.

DM me if you're willing to help.

 

One sentence thesis of the paper: "I am my information".

Some conclusions: A simulation of me is me.

 

I have no idea of the length, but I want to flesh out the paper into something that meets the standards of academia.
