Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.


Notes from the Hufflepuff Unconference (Part 1)

8 Raemon 23 May 2017 09:04PM

April 28th, we ran the Hufflepuff Unconference in Berkeley, at the MIRI/CFAR office common space.

There's room for improvement in how the Unconference was run, but it succeeded at the core things I wanted to accomplish:

 - We established common knowledge of what problems people were actually interested in working on
 - We had several extensive discussions of some of those problems, with an eye towards building solutions
 - Several people agreed to work together on concrete plans and experiments to make the community more friendly, and to build skills relevant to community growth (with deadlines, and one person acting as project manager to make sure real progress was made)
 - We agreed to have a followup unconference in roughly three months, to discuss how those plans and experiments were going

Rough notes are available here. (Thanks to Miranda, Maia, and Holden for taking really thorough notes.)

This post will summarize some of the key takeaways, some speeches that were given, and my retrospective thoughts on how to approach things going forward.

But first, I'd like to cover a question that a lot of people have been asking about:

What does this all mean for people outside of the Bay?

The answer depends.

I'd personally like it if the overall rationality community got better at social skills, empathy, and working together, and at sticking with things that need sticking with (and, in general, better at recognizing skills other than metacognition). In practice, individual communities can only change in the ways the people involved actually want to change, and there are other skills worth gaining that may be more important depending on your circumstances.

Does Project Hufflepuff make sense for your community?

If you're worried that your community doesn't have an interest in any of these things, my actual honest answer is that doing something "Project Hufflepuff-esque" probably does not make sense. I did not choose to do this because I thought it was the single-most-important thing in the abstract. I did it because it seemed important and I knew of a critical mass of people who I expected to want to work on it. 

If you're living in a sparsely populated area or haven't put a community together, the first steps do not look like this - they look more like putting yourself out there, posting a meetup on Less Wrong, and just *trying things*, any things, to get something moving.

If you have enough of a community to step back and take stock of what kind of community you want and how to strategically get there, I think this sort of project can be worth learning from. Maybe you'll decide to tackle something Project-Hufflepuff-like, maybe you'll find something else to focus on. I think the most important thing is to have some kind of vision for something your community can do that is worth working together, and leveling up, to accomplish.

Community Unconferences as One Possible Tool

Community unconferences are a useful tool to get everyone on the same page and spur them on to start working on projects, and you might consider doing something similar. 

They may not be the right tool for you and your group - I think they're most useful in places where there's enough people in your community that they don't all know each other, but do have enough existing trust to get together and brainstorm ideas. 

If you have a sense that Project Hufflepuff is worthwhile for your community but the above disclaimers point towards my current approach not making sense for you, I'm interested in talking about it with you, but the conversation will look less like "Ray has ideas for you to try" and more like "Ray is interested in helping you figure out what ideas to try, and the solution will probably look very different."

Online Spaces

Since I'm actually very uncertain about a lot of this and see it as an experiment, I don't think it makes sense to push for any of the ideas here to directly change Less Wrong itself (at least, yet). But I do think a lot of these concepts translate to online spaces in some fashion, and I think it'd make sense to try out some concepts inspired by this in various smaller online subcommunities.

Table of Contents:

I. Introduction Speech

 - Why are we here?
 - The Mission: Something To Protect
 - The Invisible Badger, or "What The Hell Is a Hufflepuff?"
 - Meta Meetups Usually Suck. Let's Try Not To.

II. Common Knowledge

 - What Do People Actually Want?
 - Lightning Talks

III. Discussing the Problem (Four breakout sessions)

 - Welcoming Newcomers
 - How to handle people who impose costs on others?
 - Styles of Leadership and Running Events
 - Making Helping Fun (or at least lower barrier-to-entry)

IV. Planning Solutions and Next Actions

V. Final Words

I. Introduction: It Takes A Village to Save a World

(A more polished version of my opening speech from the unconference)

[Epistemic Status: This is largely based on intuition, looking at what our community has done and what other communities seem to be able to do. I'm maybe 85% confident in it, but it is my best guess]

In 2012, I got super into the rationality community in New York. I was surrounded by people passionate about thinking better and using that thinking to tackle ambitious projects. And in 2012 we all decided to take on really hard projects that were pretty likely to fail, because the expected value seemed high, and it seemed like even if we failed we'd learn a lot in the process and grow stronger.

That happened - we learned and grew. We became adults together, founding companies and nonprofits and creating holidays from scratch.

But two years later, our projects were either actively failing, or burning us out. Many of us became depressed and demoralized.

There was nobody who was okay enough to actually provide anyone emotional support. Our core community withered.

I ended up making that the dominant theme of the 2014 NYC Solstice, with a call-to-action to get back to basics and take care of each other.

I also went to the Berkeley Solstice that year. And... I dunno. In the back of my mind I was assuming "Berkeley won't have that problem - the Bay area has so many people, I can't even imagine how awesome and thriving a community they must have." (Especially since the Bay kept stealing all the Movers and Shakers of NYC).

The theme of the Bay Solstice turned out to be "Hey guys, so people keep coming to the Bay, running on a dream and a promise of community, but that community is not actually there, there's a tiny number of well-connected people who everyone is trying to get time with, and everyone seems lonely and sad. And we don't even know what to do about this."

In 2015, that theme in the Berkeley Solstice was revisited.

So I think that was the initial seed of what would become Project Hufflepuff - noticing that it's not enough to take on cool projects, that it's not enough to just get a bunch of people together and call it a community. Community is something you actively tend to. Insofar as Maslow's hierarchy is real, it's a foundation you need before ambitious projects can be sustainable.

There are other pieces of the puzzle - different lenses that, I believe, point towards a Central Thing. Some examples:

Group houses, individualism and coordination.

I've seen several group houses where, when people decide it no longer makes sense to live in the house, they... just kinda leave. Even if they've literally signed a lease. And everyone involved (the person leaving and those who remain) instinctively acts as if it's the remaining people's job to fill the leaver's spot, to make rent.

And the first time, this is kind of okay. But then each subsequent person leaving adds to a stressful undertone of "OMG are we even going to be able to afford to live here?". It eventually becomes depressing, and snowballs into a pit that makes newcomers feel like they don't WANT to move into the house.

Nowadays I've seen some people explicitly build into the roommate agreement a clear expectation of how long you stay, and whose responsibility it is to find new roommates and pay rent in the meantime. But it's disappointing to me that this is something we needed - that we weren't instinctively paying attention to how we were imposing costs on each other in the first place. That when we *violated a written contract*, let alone a handshake agreement, we did not take it upon ourselves (or hold each other accountable) to ensure we filled our end of the bargain.

Friends, and Networking your way to the center

This community puts pressure on people to improve. It's easier to improve when you're surrounded by ambitious people who help or inspire each other level up. There's a sense that there's some cluster of cool-people-who-are-ambitious-and-smart who've been here for a while, and... it seems like everyone is trying to be friends with those people. 

It also seems like people just don't quite get that friendship is a skill, that adult friendships in City Culture can be hard, and it can require special effort to make them happen.

I'm not entirely sure what's going on here - it doesn't make sense to say anyone's obligated to hang out with any particular person (or obligated NOT to), but if 300 people aren't getting the connection they want it seems like *somewhere people are making a systematic mistake.* 

(Since the Unconference, Maia has tackled this particular issue in more detail)

 

The Mission - Something To Protect

 

As I see it, the Rationality Community has three things going on: Truth. Impact. And "Being People".

In some sense, our core focus is the practice of truthseeking. The thing that makes that truthseeking feel *important* is that it's connected to broader goals of impacting the world. And the thing that makes this actually fun and rewarding enough to stick with is a community that meets our needs, where we can both flourish as individuals and find the relationships we want.

I think we have made major strides in each of those areas over the past seven years. But we are nowhere near done.

Different people have different intuitions of which of the three are most important. Some see some of them as instrumental, or terminal. There are people for whom Truthseeking is *the point*, and they'd have been doing that even if there wasn't a community to help them with it, and there are people for whom it's just one tool of many that helps them live their life better or plan important projects.

I've observed a tendency to argue about which of these things is most important, or what tradeoffs are worth making. Inclusiveness versus high standards. Truth versus action. Personal happiness versus high achievement.

I think that kind of argument is a mistake.

We are falling woefully short on all of these things. 

We need something like 10x our current capacity for seeing, and thinking. 10x our capacity for doing. 10x our capacity for *being healthy people together.*

I say "10x" not because all these things are intrinsically equal. The point is not to make a politically neutral push to make all the things sound nice. I have no idea exactly how far short we're falling on each of these because the targets are so far away I can't even see the end, and we are doing a complicated thing that doesn't have clear instructions and might not even be possible.

The point is that all of these are incredibly important, and if we cannot find a way to improve *all* of these, in a way that is *synergistic* with each other, then we will fail.

There is a thing at the center of our community. Not all of us share the exact same perspective on it. For some of us it's not the most important thing. But it's been at the heart of the community since the beginning and I feel comfortable asserting that it is the thing that shapes our culture the most:

The purpose of our community is to make sure this place is okay.

The world isn't okay right now, on a number of levels. And a lot of us believe there is a strong chance it could become dramatically less okay. I've seen people make credible progress on taking responsibility for pieces of our home. But when all is said and done, none of our current projects really give me the confidence that things are going to turn out all right. 

Our community was brought together on a promise, a dream, and we have not yet actually proven ourselves worthy of that dream. And to make that dream a reality we need a lot of things.

We need to be able to criticize, because without criticism, we cannot improve.

If we cannot, I believe we will fail.

We need to be able to talk about ideas that are controversial, or uncomfortable - otherwise our creativity and insight will be crippled.

If we cannot, I believe we will fail.

We need to be able to do those things without alienating people. We need to be able to criticize without making people feel untrusted and discouraged from even taking action. We need to be able to discuss challenging things while earnestly respecting the notion that *talking about ideas gives those ideas power and has concrete effects on social reality*, and sometimes that can hurt people.

If we cannot figure out how to do that, I believe we will fail.

We need more people who are able and willing to try things that have never been done before. To stick with those things long enough to *get good at them*, to see if they can actually work. We need to help each other do impossible things. And we need to remember to check for and do the *possible*, boring, everyday things that are in fact straightforward and simple and not very inspiring. 

If we cannot manage to do that, I believe we will fail.

We need to be able to talk concretely about what the *highest leverage actions in the world are*. We need to prioritize those things, because the world is huge and broken and we are small. I believe we need to help each other through a long journey, building bigger and bigger levers, building connections with people outside our community who are undertaking the same journey through different perspectives.

And in the process, we need to not make it feel like if *you cannot personally work on those highest leverage things, that you are not important.* 

There's the kind of importance where we recognize that some people have scarce skills and drive, and the kind of importance where we remember that *every* person has intrinsic worth, and you owe *nobody* any special skills or prestigious sounding projects for your life to be worthwhile.

This isn't just a philosophical matter - I think it's damaging to our mental health and our collective capacity. 

We need to recognize that the distribution of skills we tend to reward or punish is NOT just about which ones are actually most valuable - sometimes it is simply founder effects and blind spots.

We cannot be a community for everyone - I believe trying to include anyone with a passing interest in us is a fool's errand. But there are many people who have valuable skills to contribute who have turned away, feeling frustrated and unvalued.

If we cannot find a way to accomplish all of these things at once, I believe we will fail.

The thesis of Project Hufflepuff is that it takes (at least) a village to save a world. 

It takes people doing experimental impossible things. It takes caretakers. It takes people helping out with unglorious tasks. It takes technical and emotional and physical skills. And while it does take some people who specialize in each of those things, I think it also needs many people who are at least a little bit good at each of them, to pitch in when needed.

Project Hufflepuff is not the only thing our community needs, nor the most important. But I believe it is one of the things our community needs, if we're to get to 10x our current Truthseeking, Impact and Human-ing.

If we're to make sure that our home is okay.

The Invisible Badger

"A lone hufflepuff surrounded by slytherins will surely wither as if being leeched dry by vampires."

- Duncan

[Epistemic Status: My evidence for this is largely based on discussions with a few people for whom the badger seems real and valuable, and who report things being different in other communities, as well as some of my general intuitions about society. I'm 75% sure the badger exists, 90% sure that it's worth leaning into the idea of the badger to see if it works for you, and maybe 55% sure that it's worth trying to see the badger if you can't already make out its edges.]


 

If I *had* to pick a clear thing that this conference is about without using Harry Potter jargon, I'd say "Interpersonal dynamics surrounding trust, and how those dynamics apply to each of the Impact/Truth/Human focuses of the rationality community."

I'm not super thrilled with that term because I think I'm grasping more for some kind of gestalt. An overall way of seeing and being that's hard to describe and that doesn't come naturally to the sort of person attracted to this community.

Much like the blind folk and the elephant, who each touched a different part of the animal and came away with a different impression (the trunk seems like a snake, the legs seem like a tree), I've been watching several people in the community try to describe things over the past few years. And maybe those things are separate but I feel like they're secretly a part of the same invisible badger.

Hufflepuff is about hard work, and loyalty, and camaraderie. It's about emotional intelligence. It's about seeing value in day to day things that don't directly tie into epic narratives. 

There's a bunch of skills that go into Hufflepuff. And part of what I want is for people to get better at those skills. But I think there's a mindset, an approach, fairly different from the typical rationalist mindset, that makes those skills easier. It's something that's harder when you're being rigorously utilitarian and building models of the world out of game theory and incentives.

Mindspace is deep and wide, and I don't expect that mindset to work for everyone. I don't think everyone should be a Hufflepuff. But I do think it'd be valuable to the community if more people at least had access to this mindset and more of these skills.

So what I'd like, for tonight, is for people to lean into this idea. Maybe in the end you'll find that this doesn't work for you. But I think many people's first instinct is going to be that this is alien and uncomfortable and I think it's worth trying to push past that.

The reason we're doing this conference together is that the Hufflepuff way doesn't really work if people are trying to do it alone - it requires trust and camaraderie and persistence. I don't think we can have all the required trust at once, but if there are multiple people trying to make this work, who can incrementally trust each other more, I think we can reach a place where things run more smoothly, where we have stronger emotional connections, and where we trust each other enough to take on more ambitious projects than we could if we were all optimizing as individuals.

Meta-Meetups Suck. Let's Not.

This unconference is pretty meta - we're talking about norms and vague community stuff we want to change.

Let me tell you, meta meetups are the worst. Typically you end up going around in circles complaining and wishing there were more things happening and that people were stepping up and maybe if you're lucky you get a wave of enthusiasm that lasts a month or so and a couple things happen but nothing really *changes*.

So. Let's not do that. Here's what I want to accomplish and which seems achievable:

1) Establish common knowledge of important ideas and behavior patterns. 

Sometimes you DON'T need to develop a whole new skill - you just need to notice that your actions are impacting people in a way you hadn't realized, and maybe that's enough for you to decide to change some things. Or maybe someone has a concept that makes it a lot easier for you to start gaining a new skill on your own.

2) Establish common knowledge of who's interested in trying which new norms, or which new skills. 

We don't actually *know* what the majority of people want here. I can sit here and tell you what *I* think you should want, but ultimately what matters is what things a critical mass of people want to talk about tonight.

Not everyone has to agree that an idea is good to try it out. But there's a lot of skills or norms that only really make sense when a critical mass of other people are trying them. So, maybe of the 40 people here, 25 people are interested in improving their empathy, and maybe another 20 are interested in actively working on friendship skills, or sticking to commitments. Maybe those people can help reinforce each other.

3) Explore ideas for social and skillbuilding experiments we can try, that might help. 

The failure mode of Ravenclaws is to think about things a lot and then not actually get around to doing them. A failure mode of ambitious Ravenclaws is to think about things a lot, then do them, then assume that because they're smart they've thought of everything - and then not listen to feedback when they get things subtly or majorly wrong.

I'd like us to end by thinking of experiments with new norms, or habits we'd like to cultivate. I want us to frame these as experiments, that we try on a smaller scale and maybe promote more if they seem to be working, while keeping in mind that they may not work for everyone.

4) Commit to actions to take.

Since the default outcome is for plans like these to peter out and fail, I'd like us to spend time bulletproofing them - brainstorming and coming up with trigger-action plans so that they actually have a chance to succeed.

Tabooing "Hufflepuff"

Having said all that talk about The Hufflepuff Way...

...the fact is, much of the reason I've used those words is to paint a rough picture to attract the sort of person I wanted to attract to this unconference.

It's important that there's a fuzzy, hard-to-define-but-probably-real concept that we're grasping towards, but it's also important not to be talking past each other. Early on in this project I realized that a few people who I thought were on the same page actually meant fairly different things. Some cared more about empathy and friendship. Some cared more about doing things together, and expected deep friendships to arise naturally from that.

So I'd like us to establish a trigger-action-plan right now - for the rest of this unconference, if someone says "Hufflepuff", y'all should say "What do you mean by that?" and then figure out whatever concrete thing you're actually trying to talk about.

II. Common Knowledge

The first part of the unconference was about sharing our current goals, concerns and background knowledge that seemed useful. Most of the specifics are covered in the notes. But I'll talk here about why I included the things I did and what my takeaways were afterwards on how it worked.

Time to Think

The first thing I did was have people sit and think about what they actually wanted to get out of the conference, and what obstacles they could imagine getting in the way of that. I did this because I think our culture (ostensibly about helping us think better) often doesn't give us time to think, and instead has people who are quick-witted and conversationally dominant end up doing most of the talking. (I wrote a post about this a year ago, the 12 Second Rule). In this case I gave everyone 5 minutes, which is something I've found helpful at small meetups in NYC.

This had mixed results - some people reported that while they can think well by themselves, in a group setting they find it intimidating and their mind starts wandering instead of getting anything done. They found it much more helpful when I eventually let people-who-preferred-to-talk-to-each-other go into another room to talk through their ideas out loud.

I think there's some benefit to both halves of this, and I'm not sure how common each set of preferences is. It's certainly true that it's not common for conferences to give people a full 5 minutes to think, so I'd expect it to feel somewhat uncomfortable regardless of whether it was useful.

But an overall outcome of the unconference was that it was somewhat lower energy than I'd wanted, and opening with 5 minutes of silent thinking seemed to contribute to that. So for the next unconference I run, I'm leaning towards a shorter period of private thinking (somewhere between 12 and 60 seconds), followed by "turn to your neighbors and talk through the ideas you have", followed by "each group shares their concepts with the room."

"What do you want to improve on? What is something you could use help with?"

I wanted people to feel like active participants rather than passive observers, and I didn't want people to just think "it'd be great if other people did X", but to keep an internal locus of control - what can *I* do to steer this community better? I also didn't want people to be thinking entirely individualistically.

I didn't collect feedback on this specific part and am not sure how valuable others found it (if you were at the conference, I'd be interested if you left any thoughts in the comments). Some anonymized things people described:

  • When I make social mistakes, consider it failure; this is unhelpful

  • Help point out what they need help with

  • Have severe akrasia, would like more “get things done” magic tools

  • Getting to know the bay area rationalist community

  • General bitterness/burned out

  • Reduce insecurity/fear around sharing

  • Avoiding spending most words signaling to have read a particular thing; want to communicate more clearly

  • Creating systems that reinforce unnoticed good behaviour

  • Would like to learn how to try at things

  • Find place in rationalist community

  • Staying connected with the group

  • Paying attention to what they want in the moment, in particular when it’s right to not be persistent

  • Would like to know the “landing points” to the community to meet & greet new people

  • Become more approachable, & be more willing to approach others for help; community cohesiveness

  • Have been lonely most of life; want to find a place in a really good healthy community

  • Re: prosocialness, being too low on Maslow’s hierarchy to help others

  • Abundance mindset & not stressing about how to pay rent

  • Cultivate stance of being able to do helpful things (action stance) but also be able to notice difference between laziness and mental health

  • Don’t know how to respect legit safety needs w/o getting overwhelmed by arbitrary preferences; would like to model people better to give them basic respect w/o having to do arbitrary amount of work

  • Starting conversations with new people

  • More rationalist group homes / baugruppe

  • Being able to provide emotional support rather than just logistics help

  • Reaching out to people at all without putting too much pressure on them

  • Cultivate lifelong friendships that aren’t limited to particular time and place

  • Have a block around asking for help bc doesn’t expect to reciprocate; would like to actually just pay people for help w stuff

  • Want to become more involved in the community

  • Learn how to teach other people “ops skills”

  • Connections to people they can teach and who can teach them

Lightning Talks

Lightning talks are a great way to give people an opportunity to not just share ideas, but get some practice at public presentation (which I've found can be a great gateway tool for overall confidence and ability to get things done in the community). Traditionally they are 5 minutes long. CFAR has found that 3.5 minute lightning talks are better than 5 minute talks, because it cuts out some rambling and tangents.

It turned out we had more people than I'd originally planned time for, so we ended up switching to two-minute talks. I actually think this was even better, and my plan for next time is to do 1-minute timeslots but allow people to sign up for multiple slots if they think their talk requires it, so people default to giving something short and sweet.

Rough summaries of the lightning talks can be found in the notes.

III. Discussing the Problem

The next section involved two "breakout sessions" - two 20-minute periods for people to split into smaller groups and talk through problems in detail. This was done in a somewhat impromptu fashion, with people writing the talks they wanted to give on the whiteboard and then arranging them so most people could go to a discussion that interested them.

The talks were:

 -  Welcoming Newcomers
 -  How to handle people who impose costs on others?
 -  Styles of Leadership and Running Events
 -  Making Helping Fun (or at least lower barrier-to-entry)
 -  Circling session 

There was a suggested discussion about outreach, which I asked to table for a future unconference. My reason was that outreach discussions tend to get extremely meta and seem to be an attractor (people end up focusing on how to bring more people into the community without actually making sure the community is good, and I wanted the unconference to focus on the latter.)

I spent some time drifting between sessions, and was generally impressed both with the practical focus each discussion had, as well as the way they were organically moderated.

Again, more details in the notes.

IV. Planning Solutions and Next Actions

After about an hour of discussion and mingling, we came back to the central common space to describe key highlights from each session, and begin making concrete plans. (Names are crediting people who suggested an idea and who volunteered to make it happen)

Creating Norms for Your Space (Jane Joyce, Tilia Bell)

The "How to handle people who impose costs on others" conversation ended up focusing on minor but repeated costs. One of the hardest things to moderate as an event host is not people who are actively disruptive, but people who are just a little bit awkward or annoying - they'd often be happy to change their behavior if they got feedback, but giving feedback feels uncomfortable and is hard to do tactfully. This presents two problems at once: parties/events/social-spaces end up more awkward and annoying than they need to be, and often, rather than giving feedback, hosts simply stop inviting the people doing those minor things - which means a lot of people still working on their social skills end up living in fear of being excluded.

Solving this fully requires a few different things at once, and I'm not sure I have a clear picture of what it looks like, but one stepping stone people came up with was creating explicit norms for a given space, and a practice of reminding people of those norms in a low-key, nonjudgmental way.

I think this will require a lot of deliberate effort and practice on the part of hosts, to avoid alternate bad outcomes like "the norms get disproportionately enforced on people the hosts like, and applied unfairly to people they aren't close with". But I do think it's a step in the right direction to showcase what kind of space you're creating and what the expectations are.

Different spaces can be tailored for different types of people with different needs or goals. (I'll have more to say about this in an upcoming post - doing this right is really hard, I don't actually know of any groups that have done an especially good job of it.)

I *was* impressed with the degree to which everyone in the conversation seemed to be taking into account a lot of different perspectives at once, and looking for solutions that benefited as many people as possible.

Welcoming Committee (Mandy Souza, Tessa Alexanian)

Oftentimes at events you'll see people who are new, or who don't seem comfortable getting involved with the conversation. Many successful communities do a good job of explicitly welcoming those people. Some people at the unconference decided to put together a formal group for making sure this happens more.

The exact details are still under development, but the basic idea is to have a network of people who go to different events, playing the role of the welcomer - sort of an "Uber for welcomers" (i.e. it both provides a place for people running events to ask for help with welcoming, and a way for people who are interested in welcoming to find events that need welcomers).

It also included some ideas for better infrastructure, such as reviving "bayrationality.org" to make it easier for newcomers to figure out what events are going on (possibly including links to the codes of conduct for different spaces as well). In the meanwhile, some simple changes were the introduction of a facebook group for Bay Area Rationalist Social Events.

Softskill-sharing Groups (Mike Plotz and Jonathan Wallis)

The leadership styles discussion led to the concept that in order to have a flourishing community, and to be a successful leader, it's valuable to make yourself legible to others, and others more legible to yourself. Even small improvements in an activity as frequent as communication can have huge effects over time, as we make it easier to see each other as we actually are and to clearly exchange our ideas. 

A number of people wanted to improve in this area together, and so we’re working towards establishing a series of workshops with a focus on practice and individual feedback. A longer post on why this is important is coming up, and there will be information on the structure of the event after our first teacher’s meeting. If you would like to help out or participate, please fill out this poll:

https://goo.gl/forms/MzkcsMvD2bKzXCQN2

Circling Explorations (Qiaochu and others)

Much of the discussion at the Unconference, while focused on community, ultimately was explored through an intellectual lens. By contrast, "Circling" is a practice developed by the Authentic Relating community which is focused explicitly on feelings. The basic premise is (sort of) simple: you sit in a circle in a secluded space, and you talk about how you're feeling in the moment. Exactly how this plays out is a bit hard to explain, but the intended result is to become better both at noticing your own feelings and the people around you.

Opinions were divided as to whether this was something that made sense for "rationalists to do on their own", or whether it made more sense to visit more explicitly Circling-focused communities, but several people expressed interest in trying it again.

Making Helping Fun and More Accessible (Suggested by Oliver Habryka)

Ultimately we want a lot of people who are able and excited to help out with challenging projects - to improve our collective group ambition. But to get there, it'd be really helpful to have "gateway helping" - things people can easily pitch in to do that are fun, rewarding, clearly useful but on the "warm fuzzies" side of helping. Oliver suggested this as a way to get people to start identifying as people-who-help.

There were two main sets of habits worth cultivating:

1) Making it clear to newcomers that they're encouraged to help out with events, and that this is actually a good way to make friends and get more involved. 

2) For hosts and event planners, look for opportunities to offer people things that they can help with, and make sure to publicly praise those who do help out.

Some of this might dovetail nicely with the Welcoming Committee, both as something people can easily get involved with, and, if there ends up being a public-facing website to introduce people to the community, using that to connect people with events that could use help.

Volunteering-as-Learning, and Big Event Specific Workshops

Sometimes volunteering just requires showing up. But sometimes it requires special skills, and some events might need people who are willing to practice beforehand or learn-by-doing with a commitment to help at multiple events.

A vague cluster of skills that's in high demand is "predict logistical snafus in advance to head them off, and notice logistical snafus happening in realtime so you can do something about them." Earlier this year there was an Ops Workshop that aimed to teach this sort of skill, which went reasonably but didn't really lead into a concrete use for the skills to help them solidify.

One idea was to do Ops workshops (or other specialized training) in the month before a major event like Solstice or EA Global, giving them an opportunity to practice skills and making that particular event run smoother.

(This specific idea is not currently planned for implementation as it was among the more ambitious ones, although Brent Dill's series of "practice setting up a giant dome" beach parties in preparation for Burning Man are pointing in a similar direction)

Making Sure All This Actually Happens (Sarah Spikes, and hopefully everyone!)

To avoid the trap of dreaming big and not actually getting anything done, Sarah Spikes volunteered as project manager, creating an Asana page. People who were interested in committing to a deadline could opt into getting pestered by her to make sure things got done.

V. Parting Words

To wrap up the event, I focused on some final concepts that underlie this whole endeavor. 

The thing we're aiming for looks something like this:

In a couple months (hopefully in July), there'll be a followup unconference. The theme will be "Innovation and Excellence", addressing the twofold question "how do we encourage more people to start cool projects?" and "how do we get to a place where longterm projects ultimately reach a high quality state?"

Both elements feel important to me, and they require somewhat different mindsets (both on the part of the people running the projects, and the part of the community members who respond to them). Starting new things is scary and having too high standards can be really intimidating, yet for longterm projects we may want to hold ourselves to increasingly high standards over time.

My current plan (subject to lots of revision) is for this to become a series of community unconferences that happen roughly every 3 months. The Bay area is large enough with different overlapping social groups that it seems worthwhile to get together every few months and have an open-structured event to see people you don't normally see, share ideas, and get on the same page about important things.

Current thoughts for upcoming unconference topics are:

Innovation and Excellence
Personal Epistemic Hygiene
Group Epistemology

An important piece of each unconference will be revisiting things at the previous one, to see if projects, ideas or experiments we talked about were actually carried out and what we learned from them (most likely with anonymous feedback collected beforehand so people who are less comfortable speaking publicly have a chance to express any concerns). I'd also like to build on topics from previous unconferences so they have more chance to sink in and percolate (for example, have at least one talk or discussion about "empathy and trust as related to epistemic hygiene").

Starting and Finishing Unconferences Together

My hope is to get other people involved sooner rather than later so this becomes a "thing we are doing together" rather than a "thing I am doing." One of my goals with this is also to provide a platform where people who are interested in getting more involved with community leadership can take a step further towards that, no matter where they currently stand (ranging anywhere from "give a 30 second lightning talk" to "run a discussion, or give a keynote talk" to "be the primary organizer for the unconference.")

I also hope this is able to percolate into online culture, and to other in-person communities where a critical mass of people think this'd be useful. That said, I want to caution that I consider this all an experiment, motivated by an intuitive sense that we're missing certain things as a culture. That intuitive sense has yet to be validated in any concrete fashion. I think "willingness to try things" is more important than epistemic caution, but epistemic caution is still really important - I recommend collecting lots of feedback and being willing to shift direction if you're trying anything like the stuff suggested here.

(I'll have an upcoming post on "Ways Project Hufflepuff could go horribly wrong")

Most importantly, I hope this provides a mechanism for us to collectively take more seriously the ideas we're ostensibly supposed to be taking seriously. I hope that this translates into the sort of culture that The Craft and The Community was trying to point us towards, and, ideally, eventually, a concrete sense that our community can play a more consistently useful role in making sure the world turns out okay.

If you have concerns, criticism, or feedback, I encourage you to comment here if you feel comfortable, or on the Unconference Feedback Form. So far I've been erring on the side of move forward and set things in motion, but I'll be shifting for the time being towards "getting feedback and making sure this thing is steering in the right direction."

-

In addition to the people listed throughout the post, I'd like to give particular thanks to Duncan Sabien for general inspiration and a lot of concrete help, Lahwran for giving the most consistent and useful feedback, and Robert Lecnik for hosting the space. 

[Link] Have We Been Interpreting Quantum Mechanics Wrong This Whole Time?

2 korin43 23 May 2017 04:38PM

Physical actions that improve psychological health

4 arunbharatula 23 May 2017 04:33AM

Physical health impacts well-being. However, existing preventative health guidelines are inaccessible to the public because they are highly technical and require specific medical equipment. These notes are not medical advice nor meant to treat any illness. This is a compilation of findings I have come across at one time or another in relation to physical things that relate back to psychological health. I have not systematically reviewed the literature on any of these topics, nor am I an expert nor even familiar with any of them. I am extremely uncertain about the whole thing. But, I figure better to write this up and look stupid than keep it inside and act stupid. The hyperlinks point to the best evidence I could find on the matter. I write to solicit feedback, corrections and advice.

 

Microwaves are safe, but cockroaches and even ants are dangerous, and finally: happiness is dietary. If you want the well-being boosts associated with fruit (careful about fruit juice sugar though!), coffee’s aroma [text] [science news], vanilla yoghurt [news], sufficient B vitamins and choline (alt), or binge drinking or drinking in general, however, I don’t have any easy answers for you. Don’t worry about the smart drugs; nootropics are probably a misnomer. On the other hand, probiotics can treat depression.

 

“There is growing evidence that a diet rich in fruits and vegetables is related to greater happiness, life satisfaction, and positive mood as well. This evidence cannot be entirely explained by demographic or health variables including socio-economic status, exercise, smoking, and body mass index, suggesting a causal link.[50] Further studies have found that fruit and vegetable consumption predicted improvements in positive mood the next day, not vice versa. On days when people ate more fruits and vegetables, they reported feeling calmer, happier, and more energetic than normal, and they also felt more positive the next day.”

- Wikipedia

 

If your diet is out of control: mental contrasting is useful for diabetes self-management, dieting, etc. Tangent: during a seminar I attended in Geneva, the World Health Organisation's chief dietary authority said that suggesting dietary patterns (e.g. the Mediterranean diet) rather than individual nutrient intakes (protein, creatine, carbs) is preferable. But I have yet to identify substantiating evidence. The broad consensus among lay skeptical scrutineers of the field of nutrition is that most claims, even broadly accepted ones, are still unclear. However, I have yet to analyse the literature myself.

 

Exercise and sport are good for subjective well-being, quality of life, depression, anxiety, stress and more. Plus, they are fun. You may not enjoy pleasant, wellbeing-related activities. Do those activities anyway. I seldom enjoy correcting my posture. I tend to slouch, and I have been specifically advised by a specialised physiotherapist to correct for that. But slouching typically doesn’t cause pain - posture correction is pseudoscience! So are many interventions related to posture correction, like standing desks. On the other hand, I love to get massages - but their benefits are short-lived - so get them regularly!

 

I particularly enjoy them after resistance training or 1 minute workouts (high intensity interval training). Be careful about stretching, passive stretching can cause injury, unlike active stretching: 'Passive stretching is when you use an outside force other than your own muscle to move a joint or limb beyond its active range of motion, to put your body into a position that you couldn’t do by yourself (such as when you lean into a wall, or have a partner push you into a deeper stretch). Unfortunately, this is the most common form of stretching used.'

 

However, if you aim to bodybuild, protein supplementation is pseudoscientific broscience. And on ‘form’, well, there’s broscience - like squatting with your knees outwards - but probably also lots of credible safety-related information one ought to heed. For weight loss, if you want a real cheat sheet, aspirants can get one with a couple-of-hundred-dollar SNP sequencing kit. But I would be cautious about gene-sequence-driven health prescriptions; some services running that business rely on weak evidence. There are other ‘fad’ fitness ideas that are not grounded in science. For instance: 20 seconds of foam rolling (just as effective as 60 seconds) enhances flexibility (...for no longer than 10 minutes, unless it is done regularly - then it improves long-term flexibility), but it is unclear whether foam rolling improves athletic performance or post-performance recovery.

 

Stretching prevents injuries and increases range of motion for runners, but not for other kinds of sports [wikipedia]. Shoe inserts don’t work reliably either [Wikipedia]. Martial arts therapy is a thing. Physical exercise is good for you. Tai chi, qigong, and meditation (other than mindfulness), such as transcendental meditation, are ineffective in treating depression and anxiety. If you are injured, try rehabilitation exercises. Exercise and performance-enhancing drugs are both cognitive enhancers. Exercise for chronic lower back pain is a good idea.

 

Environment: Avoid outdoor air pollution near residences due to dementia/other-health risks. And, avoid chimney smoke fireplaces.

 

Anecdotally, hygiene improves self-esteem and well-being. Wipe with wet wipes if you wipe hard enough to cause blood to form; cover the toilet seat with toilet paper or don’t - safety-wise it doesn’t matter unless the contaminant is less than about an hour old. Shower with soap, remove eye mucus, and remove earwax (but likely not the way you think). Brush twice a day, softly and with the correct technique, replacing your toothbrush every few months. Don't rinse with water straight after toothbrushing. Floss once a day (with a different piece of floss each flossing session), but do not brush immediately after drinking acidic substances. The effectiveness of Tooth Mousse is questionable. Visit the dentist for a check-up every now and then - I’d say about every year at least.

 

Consider sleeping with a face mask and earplugs for better sleep. Blow your nose, clean under your nails and trim them. Eye examinations should be conducted every 2-4 years for those under 40, and up to every 6 months for those 65+. There are health concerns around memory foam pillows/mattresses, so latex pillows may be preferable for those who prefer a sturdier option than traditional pillows/mattresses. Anecdotally, setting alarms to remind you to do things is a simple way to manage your time, not just for waking up. Light therapy is also helpful in treating delayed sleep phase disorder (being a night owl!). Oh, and don’t bother pre-washing dishes before loading the dishwasher (as long as you clean the filter regularly).

 

There are misconceptions around complementary therapies. The Australian Government reviewed the effectiveness of the Alexander technique, aromatherapy, Bowen therapy, Buteyko, Feldenkrais, herbalism, homeopathy, iridology, kinesiology, massage therapy, pilates, reflexology, rolfing, shiatsu, tai chi, and yoga. Only for the Alexander technique, Buteyko, massage therapy (esp. remedial massage?), tai chi and yoga was there credible (albeit low to moderate quality) evidence that they are useful for certain health conditions.

 

Stressed out reading all this? Gently pressing on your eyelids can temporarily stave off a headache. Traumatically stressed out? Video games can treat PTSD. Animal-assisted therapy, like service dogs and therapeutic animals, is also wonderful.

Thank you!

[Link] Probabilistic Programming and Bayesian Methods for Hackers

1 lifelonglearner 22 May 2017 09:15PM

[Link] Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them

2 Stefan_Schubert 22 May 2017 06:31PM

Open thread, May 22 - May 28, 2017

2 Thomas 22 May 2017 05:44AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Why Most Intentional Communities Fail (And Some Succeed)

3 AspiringRationalist 22 May 2017 03:04AM

[Link] Learning Deep Learning the EASY way, with Keras

2 morganism 21 May 2017 07:48PM

On-line google hangout on approaches to communication around agi risk (2017/5/27 time to be decided UTC)

1 whpearson 21 May 2017 12:32PM

We have a number of charities that are working on different aspects of AGI risk:

-  The theory of the alignment problem (MIRI/FHI/more)

-  How to think about problems well (CFAR)

However, we don't have a body dedicated to making and testing a coherent communication strategy to help postpone the development of dangerous AIs.

I'm organising an online discussion around what we should do about this issue next Saturday.

In order to find out when people can do it, I've created a doodle here. I'm trusting that Doodle handles timezones well. The time slots should be between 1200 and 2300 UTC; let me know if they are not.

We'll be using the optimal brainstorming methodology

Give me a message if you want an invite, once the time has been decided.

I will take notes and post them here again.

AGI and Mainstream Culture

4 madhatter 21 May 2017 08:35AM

Hi all,

So, as you may know, the first episode of Doctor Who, "Smile", was about a misaligned AI trying to maximize smiles (ish). And the latest, "Extremis", was about an alien race who instantiated conscious simulations to test battle strategies for invading the Earth, of which the Doctor was a subroutine. 

I thought the common thread of AGI was notable, although I'm guessing it's just a coincidence. More seriously, though, this ties in with an argument I thought of, and want to know your take on:

If we want to avoid an AI arms race, so that safety research has more time to catch up to AI progress, then we would want to prevent, if at all possible, these issues from becoming more mainstream. The reason is that if AGI in public perception becomes disassociated with Terminator (i.e. laughable, nerdy, and unrealistic) and more like a serious whoever-makes-this-first-can-take-over-the-world situation, then we will get an arms race faster. 

I'm not sure I believe this argument myself. For one thing, being more mainstream has the benefit of attracting more safety research talent, government funding, etc. But maybe we shouldn't be spreading awareness without thinking this through some more.

 

CFAR workshop with new instructors in Seattle, 6/7-6/11

7 Qiaochu_Yuan 20 May 2017 12:18AM

CFAR is running its first workshop in Seattle! 

Over the past several months, CFAR has been training a new batch of instructors, including me. We're now running a workshop, without the core instructors, in Seattle from June 7th to June 11th. You can apply here, and we have an FAQ here

AI safety: three human problems and one AI issue

5 Stuart_Armstrong 19 May 2017 10:48AM

Crossposted at the Intelligent Agent Foundations Forum.

There have been various attempts to classify the problems in AI safety research, from our old Oracle paper that classified then-theoretical methods of control, to more recent classifications that grow out of modern, more concrete problems.

These all serve their purpose, but I think a more enlightening classification of the AI safety problems is to look at the issues we are trying to solve or avoid. And most of these issues are problems about humans.

Specifically, I feel AI safety issues can be classified as three human problems and one central AI issue. The human problems are:

  • Humans don't know their own values (sub-issue: humans know their values better in retrospect than in prediction).
  • Humans are not agents and don't have stable values (sub-issue: humanity itself is even less of an agent).
  • Humans have poor predictions of an AI's behaviour.

And the central AI issue is:

  • AIs could become extremely powerful.

Obviously if humans were agents and knew their own values and could predict whether a given AI would follow those values or not, there would be no problem. Conversely, if AIs were weak, then the human failings wouldn't matter so much.

The point about human values is relatively straightforward, but what's the problem with humans not being agents? Essentially, humans can be threatened, tricked, seduced, exhausted, drugged, modified, and so on, in order to act seemingly against our interests and values.

If humans were clearly defined agents, then what counts as a trick or a modification would be easy to define and exclude. But since this is not the case, we're reduced to trying to figure out the extent to which something like a heroin injection is a valid way to influence human preferences. This makes both humans susceptible to manipulation, and human values hard to define.

Finally, the issue of humans having poor predictions of AI is more general than it seems. If you want to ensure that an AI has the same behaviour in the testing and training environment, then you're essentially trying to guarantee that you can predict that the testing environment behaviour will be the same as the (presumably safe) training environment behaviour.

 

How to classify methods and problems

That's well and good, but how do various traditional AI methods or problems fit into this framework? This should give us an idea as to whether the framework is useful.

It seems to me that:

 

  • Friendly AI is trying to solve the values problem directly.
  • IRL and Cooperative IRL are also trying to solve the values problem. The greatest weakness of these methods is the not agents problem.
  • Corrigibility/interruptibility are also addressing the issue of humans not knowing their own values, using the sub-issue that human values are clearer in retrospect. These methods also overlap with poor predictions.
  • AI transparency is aimed at getting round the poor predictions problem.
  • Laurent's work on carefully defining the properties of agents is mainly also about solving the poor predictions problem.
  • Low impact and Oracles are aimed squarely at preventing AIs from becoming powerful. Methods that restrict the Oracle's output implicitly accept that humans are not agents.
  • Robustness of the AI to changes between testing and training environment, degradation and corruption, etc... ensures that humans won't be making poor predictions about the AI.
  • Robustness to adversaries is dealing with the sub-issue that humanity is not an agent.
  • The modular approach of Eric Drexler is aimed at preventing AIs from becoming too powerful, while reducing our poor predictions.
  • Logical uncertainty, if solved, would reduce the scope for certain types of poor predictions about AIs.
  • Wireheading, when the AI takes control of reward channel, is a problem that humans don't know their values (and hence use an indirect reward) and that the humans make poor predictions about the AI's actions.
  • Wireheading, when the AI takes control of the human, is as above but also a problem that humans are not agents.
  • Incomplete specifications are either a problem of not knowing our own values (and hence missing something important in the reward/utility) or making poor predictions (when we though that a situation was covered by our specification, but it turned out not to be).
  • AIs modelling human knowledge seem to be mostly about getting round the fact that humans are not agents.

Putting this all in a table:

 

Method                         | Values | Not agents | Poor predictions | Powerful
-------------------------------|--------|------------|------------------|---------
Friendly AI                    |   X    |            |                  |
IRL and CIRL                   |   X    |            |                  |
Corrigibility/interruptibility |   X    |            |        X         |
AI transparency                |        |            |        X         |
Laurent's work                 |        |            |        X         |
Low impact and Oracles         |        |     X      |                  |    X
Robustness                     |        |            |        X         |
Robustness to adversaries      |        |     X      |                  |
Modular approach               |        |            |        X         |    X
Logical uncertainty            |        |            |        X         |
Wireheading (reward channel)   |   X    |            |        X         |
Wireheading (human)            |   X    |     X      |        X         |
Incomplete specifications      |   X    |            |        X         |
AIs modelling human knowledge  |        |     X      |                  |


Further refinements of the framework

It seems to me that the third category - poor predictions - is the most likely to be expandable. For the moment, it just incorporates all our lack of understanding about how AIs would behave, but it might be useful to subdivide this further.

Instrumental Rationality Sequence Update (Drive Link to Drafts)

2 lifelonglearner 19 May 2017 04:01AM

Hey all,

Following my post on my planned Instrumental Rationality sequence, I thought it'd be good to give the LW community an update of where I am.

1) Currently collecting papers on habits. Planning to go through a massive sprint of the papers tomorrow. The papers I'm using are available in the Drive folder linked below.

2) I have a publicly viewable Drive folder here of all relevant articles and drafts and things related to this project, if you're curious to see what I've been writing. Feel free to peek around everywhere, but the most relevant docs are this one which is an outline of where I want to go for the sequence and this one which is the compilation of currently sorta-decent posts in a book-like format (although it's quite short right now at only 16 pages).

Anyway, yep, that's where things are at right now.

 

[Link] How To Build A Community Full Of Lonely People

6 maia 17 May 2017 03:25PM

Reaching out to people with the problems of friendly AI

4 Val 16 May 2017 07:30PM

There have been a few attempts to reach out to broader audiences in the past, but mostly in very politically/ideologically loaded topics.

After seeing several examples of how little understanding people have about the difficulties in creating a friendly AI, I'm horrified. And I'm not even talking about a farmer on some hidden ranch, but about people who should know about these things, researchers, software developers meddling with AI research, and so on.

What made me write this post, was a highly voted answer on stackexchange.com, which claims that the danger of superhuman AI is a non-issue, and that the only way for an AI to wipe out humanity is if "some insane human wanted that, and told the AI to find a way to do it". And the poster claims to be working in the AI field.

I've also seen a TEDx talk about AIs. The talker didn't even hear about the paperclip maximizer, and the talk was about the dangers presented by the AIs as depicted in the movies, like the Terminator, where an AI "rebels", but we can hope that AIs would not rebel as they cannot feel emotion, so we should hope the events depicted in such movies will not happen, and all we have to do is for ourselves to be ethical and not deliberately write malicious AI, and then everything will be OK.

The sheer and mind-boggling stupidity of this makes me want to scream.

We should find a way to increase public awareness of the difficulty of the problem. The paperclip maximizer should become part of public consciousness, a part of pop culture. Whenever there is a relevant discussion about the topic, we should mention it. We should increase awareness of old fairy tales with a jinn who misinterprets wishes. Whatever it takes to ingrain the importance of these problems into public consciousness.

There are many people graduating every year who've never heard about these problems. Or if they did, they dismiss it as a non-issue, a contradictory thought experiment which can be dismissed without a second thought:

A nuclear bomb isn't smart enough to override its programming, either. If such an AI isn't smart enough to understand people do not want to be starved or killed, then it doesn't have a human level of intelligence at any point, does it? The thought experiment is contradictory.

We don't want our future AI researchers to start working with such a mentality.

 

What can we do to raise awareness? We don't have the funding to make a movie which becomes a cult classic. We might start downvoting and commenting on the aforementioned stackexchange post, but that would not solve much if anything.



[Link] Keeping up with deep reinforcement learning research: /r/reinforcementlearning

2 gwern 16 May 2017 07:12PM

The robust beauty of improper linear models

1 Stuart_Armstrong 16 May 2017 03:06PM

It should come as no surprise to people on this list that models often outperform experts. But these are generally finely calibrated models, integrating huge amounts of data, so this seems less surprising. How can the poor experts compete against that?

But sometimes the models are much simpler than that, and still perform better. For instance, the models could be linear, rather than having higher order complexities. These models can still outperform experts, because in practice, despite their beliefs that they are doing a non-linear task, expert decisions can often best be modelled as being entirely linear.

But surely the weights of the linear models are subtle and need to be set exactly? Not really. It seems that if you take a linear model, and weigh the variables by +1 or -1 depending on whether it has a positive or negative impact on the result, then you will get a model that still often outperforms experts. These models with ±1 weights are called improper linear models.

What's going on here? Well, there's been a bit of a dodge. I've been talking about "taking" a linear model, with "variables", and weighing the factors depending on a positive or negative "impact". And to do all that, you need experts. They are the ones that know which variables are important, and know the direction (positive or negative) in which they impact the result. They don't choose these variables by just taking random possibilities and then figuring out what the direction is. Instead, they understand the situation, to some extent, and choose important variables.

So that's the real role of the expert here: knowing what should go into the model, what really makes the underlying dependent variable change. "Selecting and coding the variables," as this step is often called.

But, just as experts can be very good at that task, they are human, and humans are terrible at integrating lots of information together. So, having selected the variables, they get regularly outperformed by proper linear models. And when you add the fact that the experts have selected variables of comparable importance, and that these variables are often correlated with each other, it's not surprising that they get outperformed by improper linear models as well.
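The effect is easy to demonstrate numerically. Here is a minimal sketch (the synthetic data-generating process, sample size, and weights are all illustrative assumptions, not from any of the cited studies): an ordinary least-squares model versus an improper model that weighs each expert-chosen variable by +1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: five expert-chosen predictors that share a
# common factor (so they are correlated) and all point the same way.
n, k = 200, 5
common = rng.normal(size=(n, 1))
X = common + rng.normal(size=(n, k))          # correlated predictors
true_w = np.array([1.0, 0.8, 0.6, 0.4, 0.2])  # comparable importance
y = X @ true_w + rng.normal(scale=2.0, size=n)

# Proper linear model: weights fitted by ordinary least squares.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Improper linear model: +1 for every variable the "expert" says
# has a positive impact -- no fitting at all.
w_unit = np.ones(k)

def corr(pred):
    return np.corrcoef(pred, y)[0, 1]

print(f"fitted weights: r = {corr(X @ w_ols):.3f}")
print(f"unit weights:   r = {corr(X @ w_unit):.3f}")
# The unit-weight model tracks the outcome nearly as well as the
# fitted one; the expert's real contribution was picking the variables.
```

With correlated predictors of comparable importance, the gap between the two correlations is small, which is exactly the regime the post describes.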

[Link] A social science without sacred values

1 ChristianKl 16 May 2017 12:26PM

Are causal decision theorists trying to outsmart conditional probabilities?

3 Caspar42 16 May 2017 08:01AM

Presumably, this has been discussed somewhere in the past, but I wonder to what extent causal decision theorists (and many other non-evidential decision theorists, too) are trying to make better predictions than (what they think to be) their own conditional probabilities.

 

To state this question more clearly, let’s look at the generic Newcomb-like problem with two actions a1 and a2 (e.g., one-boxing and two-boxing, cooperating or defecting, not smoking or smoking) and two states s1 and s2 (specifying, e.g., whether there is money in both boxes, whether the other agent cooperates, whether one has cancer). The Newcomb-ness is the result of two properties:

  • No matter the state, it is better to take action a2, i.e. u(a2,s1)>u(a1,s1) and u(a2,s2)>u(a1,s2). (There are also problems without dominance where CDT and EDT nonetheless disagree. For simplicity I will assume dominance, here.)

  • The action cannot causally affect the state, but somehow taking a1 gives us evidence that we’re in the preferable state s1. That is, P(s1|a1)>P(s1|a2) and u(a1,s1)>u(a2,s2).

Then, if the latter two differences are large enough, it may be that

E[u|a1] > E[u|a2].

I.e.

P(s1|a1) * u(s1,a1) + P(s2|a1) * u(s2,a1) > P(s1|a2) * u(s1,a2) + P(s2|a2) * u(s2,a2),

despite the dominance.
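To make the inequality concrete, here is a small sketch using the classic Newcomb payoffs and a 99%-accurate predictor (these specific numbers are an illustrative assumption, not part of the generic setup above):

```python
# The generic Newcomb-like problem, instantiated with the classic numbers
# (an illustrative assumption): a1 = one-box, a2 = two-box, s1 = the
# opaque box is full, s2 = it is empty. The predictor is right with
# probability 0.99, so P(s1|a1) = P(s2|a2) = 0.99.

u = {  # u[(action, state)]
    ("a1", "s1"): 1_000_000, ("a1", "s2"): 0,
    ("a2", "s1"): 1_001_000, ("a2", "s2"): 1_000,
}
p = {  # p[(state, action)] = P(state | action)
    ("s1", "a1"): 0.99, ("s2", "a1"): 0.01,
    ("s1", "a2"): 0.01, ("s2", "a2"): 0.99,
}

def expected_utility(a):
    return sum(p[(s, a)] * u[(a, s)] for s in ("s1", "s2"))

# Dominance: a2 is better in every state...
assert u[("a2", "s1")] > u[("a1", "s1")]
assert u[("a2", "s2")] > u[("a1", "s2")]

# ...yet conditioning on the action reverses the ranking.
print(expected_utility("a1"))  # 990000.0
print(expected_utility("a2"))  # 11000.0
```

So E[u|a1] > E[u|a2] by a wide margin, even though a2 dominates state-by-state.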

 

Now, my question is: After having taken one of the two actions, say a1, but before having observed the state, do causal decision theorists really assign the probability P(s1|a1) (specified in the problem description) to being in state s1?

 

I used to think that this was the case. E.g., the way I learned about Newcomb’s problem is that causal decision theorists understand that, once they have said the words “both boxes for me, please”, they assign very low probability to getting the million. So, if there were a period between saying those words and receiving the payoff, they would bet at odds that reveal that they assign a low probability (namely P(s1|a2)) to money being under both boxes.

 

But now I think that some of the disagreement might implicitly be based on a belief that the conditional probabilities stated in the problem description are wrong, i.e. that you shouldn’t bet on them.

 

The first data point was the discussion of CDT in Pearl’s Causality. In sections 1.3.1 and 4.1.1 he emphasizes that he thinks his do-calculus is the correct way of predicting what happens upon taking some actions. (Note that in non-Newcomb-like situations, P(s|do(a)) and P(s|a) yield the same result, see ch. 3.2.2 of Pearl’s Causality.)

 

The second data point is that the smoking intuition in smoking lesion-type problems may often be based on the intuition that the conditional probabilities get it wrong. (This point is also inspired by Pearl’s discussion, but also by the discussion of an FB post by Johannes Treutlein. Also see the paragraph starting with “Then the above formula for deciding whether to pet the cat suggests...” in the computer scientist intro to logical decision theory on Arbital.)

 

Let’s take a specific version of the smoking lesion as an example. Some have argued that an evidential decision theorist shouldn’t go to the doctor because people who go to the doctor are more likely to be sick. If a1 denotes staying at home (or, rather, going anywhere but a doctor) and s1 denotes being healthy, then, so the argument goes, P(s1|a1) > P(s1|a2). I believe that in all practically relevant versions of this problem this difference in probabilities disappears once we take into account all the evidence we already have. This is known as the tickle defense. A version of it that I agree with is given in section 4.3 of Arif Ahmed’s Evidence, Decision and Causality. Anyway, let’s assume that the tickle defense somehow doesn’t apply, such that even if taking into account our entire knowledge base K, P(s1|a1,K) > P(s1|a2,K).

 

I think the reason why many people think one should go to the doctor might be that while asserting P(s1|a1,K) > P(s1|a2,K), they don’t upshift the probability of being sick when they sit in the waiting room. That is, when offered a bet in the waiting room, they wouldn’t accept exactly the betting odds that P(s1|a1,K) and P(s1|a2,K) suggest they should accept.

 

Maybe what is going on here is that people have some intuitive knowledge that they don’t propagate into their stated conditional probability distribution. E.g., their stated probability distribution may represent observed frequencies among people who make their decision without thinking about CDT vs. EDT. However, intuitively they realize that the correlation in the data doesn’t hold up in this naive way.

 

This would also explain why people are more open to EDT’s recommendation in cases where the causal structure is analogous to that in the smoking lesion, but tickle defenses (or, more generally, ways in which a stated probability distribution could differ from the real/intuitive one) don’t apply, e.g. the psychopath button, betting on the past, or the coin flip creation problem.

 

I’d be interested in your opinions. I also wonder whether this has already been discussed elsewhere.

Acknowledgment

Discussions with Johannes Treutlein informed my view on this topic.

A Month's Worth of Rational Posts - Feedback on my Rationality Feed.

16 deluks917 15 May 2017 02:21PM

For the last two months I have been publishing a feed of rationalist articles. Originally the feed was only published on the SSC Discord server (be charitable, kind, and don't treat the place like 4chan). For the last few days I have also been publishing it on my blog, deluks917.wordpress.com. I categorize the links and include a brief excerpt, review, and/or teaser. If you would like to see an example in practice, just check today's post. The average number of links per day over the last month has been six, but this number has been higher recently. I have not missed a single day since I started, so I think it's likely I will continue doing this. The list of blogs I check is located here: List of Blogs

I am looking for some feedback. At the bottom of this post I am including a month's worth of posts categorized using the current system. Posts are not necessarily in any particular order, since my categorization system has not been constant over time. Lots of posts were moved around by hand.

1 - Should I share the feed somewhere other than the SSC Discord and my blog? Mindlevelup suggested I write up a weekly roundup. I could share such a roundup on lesswrong and SSC. I would estimate the expected number of links in such a post to be around 35. Links would be posted in chronological order within categories. Alternatively I could share such a post every two weeks. It's also possible to have a mailing list, but I currently find this less promising.

2 - Do the categories make a reasonable amount of sense? What tweaks would you make? I have considered merging some of the smaller categories (Math and CS, Amusement) into "misc".

3 - Are there any blogs I should include in, or drop from, the feed? For example I have been considering dropping ribbonfarm. The highest priority is to get the content that's directly about instrumental/epistemic rationality. The bar is higher for politics and culture_war. I should note I am not going to personally include any blog without an RSS feed.

4 - Is anyone willing to write a "Best of rationalist tumblr" post? If I write a weekly/bi-weekly roundup I could combine it with an equivalent "best of tumblr" post. The tumblr post would not have to be daily, just weekly or every other week. We could take turns posting the resulting combination to lesswrong/SSC and collecting the juicy karma. However it's worth noting that SSC-reddit has some controls on culture_war (outside of the CW thread). Since we want to post to r/SSC we need to keep the density of culture_war at reasonable levels. Lesswrong also has some anti-cw norms.

=== Last Month's Rationality Content === 

**Scott**

http://slatestarcodex.com/2017/05/11/silicon-valley-a-reality-check/ - What a person finds in Silicon Valley mirrors the seeker.

http://slatestarcodex.com/2017/05/09/links-517-rip-van-linkle/ - Links.

http://slatestarcodex.com/2017/04/11/sacred-principles-as-exhaustible-resources/ - Don't deplete the free speech commons.

http://slatestarcodex.com/2017/04/12/clarification-to-sacred-principles-as-exhaustible-resources/  - Clarifications and caveats on Scott's last article on free speech and sacred values.

http://slatestarcodex.com/2017/04/13/chametz/ - A Jewish Vampire Story

http://slatestarcodex.com/2017/04/17/learning-to-love-scientific-consensus/ - Scott Critiques a list of 10 maverick inventors. He then reconsiders his previous science skepticism.

http://slatestarcodex.com/2017/04/21/ssc-journal-club-childhood-trauma-and-cognition/ - A new study challenges the idea that child abuse reduces brain function.

http://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/ - Scott gives a favorable view of the "establishment" view of nutrition.

http://slatestarcodex.com/2017/04/26/anorexia-and-metabolic-set-point/ - Short Post (for Scott)

https://slatestarscratchpad.tumblr.com/post/160028275801/slatestarscratchpad-wayward-sidekick-you - Scott discusses engaging with ideas you find harmful. He also discusses his attitude toward making his blog as friendly as possible. [culture_war]

http://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/ - Formally neutral institutions have a liberal bias. Conservatives react by seceding and forming their own institutions. The end result is bad for society. [Culture War]

http://slatestarcodex.com/2017/05/04/getting-high-on-your-own-supply/ - "If you optimize for the epistemic culture that’s best for getting elected, but that culture isn’t also the best for running a party or governing a nation, then the fact that your culture affects your elites as well becomes a really big problem." Short for Scott.

http://slatestarcodex.com/2017/05/07/ot75-the-comment-king/ - bi-weekly visible open thread.

http://unsongbook.com/postscript-1-wrap-parties-fan-music/ - Final chapter of Unsong goes up approximately 8pm on Sunday. Unsong will have an epilogue, which will go up on Wednesday. Wrap party details. (I will be at the wrap party on Sunday.)

http://unsongbook.com/book-iv-kings/ - "Somebody had to, no one would / I tried to do the best I could / And now it’s done, and now they can’t ignore us / And even though it all went wrong / I’ll stand against the whole unsong / With nothing on my tongue but HaMephorash"

http://unsongbook.com/chapter-71-but-for-another-gives-its-ease/ - Penultimate chapter of Unsong.

http://unsongbook.com/chapter-70-nor-for-itself-hath-any-care/ - Newest Chapter.

http://unsongbook.com/authors-note-10-hamephorash-hamephorash-party/ - Final chapter goes up May 14. Bay Area reading party announced.

http://unsongbook.com/chapter-69-love-seeketh-not-itself-to-please/ - Newest Chapter.

http://unsongbook.com/chapter-68-puts-all-heaven-in-a-rage/ - Newest Chapter.

**Rationalism**

http://lesswrong.com/r/discussion/lw/ozz/gearsness_of_understanding/ - "I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap. This property is how deterministically interconnected the variables of the model are.". The theory is applied to multiple explicit examples.

https://thepdv.wordpress.com/2017/05/11/how-i-use-beeminder/ - Short but gives details. Beeminder is the only productivity system that worked for the author.

https://putanumonit.com/2017/05/09/time-well-spent/ - Akrasia and procrastination. A review of some of the rationalist thinking on the topic. Jacob's personal take and his system for tracking his productivity.

http://kajsotala.fi/2017/05/cognitive-core-systems-explaining-intuitions-behind-belief-in-souls-free-will-and-creation-myths/ - Description of four core systems humans and other animals are born with. An explanation of why these systems lead to belief in souls. Short.

https://mindlevelup.wordpress.com/2017/05/06/taking-criticism/ - Reframing criticism so that it makes sense to the author (who is bad at taking criticism). A Q&A between the author and himself.

http://lesswrong.com/r/discussion/lw/oz1/soft_skills_for_running_meetups_for_beginners/ - Concrete advice for running meetups. Not especially focused on beginning organizers. Written by the person who organized Solstice.

http://effective-altruism.com/ea/19t/mental_health_resource_for_ea_community/ - A breakdown of the most useful information about Mania and Psychosis. Extremely practical advice. Julia Wise.

http://bearlamp.com.au/working-with-multiple-problems-at-once - Problems add up and you run out of time. How do you get out? Very practical.

http://agentyduck.blogspot.com/2017/05/creativity-taps.html - Practical ideas for exercising creativity.

http://lesswrong.com/r/discussion/lw/oyk/acting_on_your_intended_preferences_what_does/ - What does it look like in practice to pursue your goals? A series of practical questions to ask yourself. Links to a previous series of blog posts are included.

https://thingofthings.wordpress.com/2017/05/03/why-do-all-the-rationalists-live-in-the-bay-area/ - Benefits of living in the Bay. The Bay is a top place for software engineers even accounting for cost of living, Rationalist institutions are in the Bay, there are social and economic benefits to being around other community members.

https://qualiacomputing.com/2017/05/04/the-most-important-philosophical-question/ - “Is happiness a spiritual trick, or is spirituality a happiness trick?”

http://particularvirtue.blogspot.com/2017/05/how-to-build-community-full-of-lonely.html - Why so many rationalists feel lonely and concrete suggestions for improving social groups. Advice is given to people who are popular, lonely or organizers. Very practical.

https://hivewired.wordpress.com/2017/05/07/announcing-entropycon-12017/ - We beat smallpox, we will beat death, we can try to beat entropy. A humorous mantra against nihilism.

https://mindlevelup.wordpress.com/2017/04/30/there-is-no-akrasia/ - The author argues that akrasia isn't a "thing" its a "sorta-coherent concept". He also argues that "akrasia" is not a useful concept and can be harmful.

http://bearlamp.com.au/experiments-iterations-and-the-scientific-method/ - A Graph of the scientific method in practice. The author works through his quantified self in practice and discusses his experiences.

https://everythingstudies.wordpress.com/2017/04/29/all-the-worlds-a-trading-zone/ - Cultures with different norms and languages can interact successfully.

http://kajsotala.fi/2017/04/relationship-compatibility-as-patterns-of-emotional-association/ - What is relationship "chemistry"?

http://lesswrong.com/lw/oyc/nate_soares_replacing_guilt_series_compiled_in/ - Ebook. 45 blog posts on replacing guilt and shame with a stronger motivation.

http://mindingourway.com/assuming-positive-intent/ - "If you're actively working hard to make the world a better place, then we're on the same team. If you're committed to letting evidence and reason guide your actions, then I consider you friends, comrades in arms, and kin."

http://bearlamp.com.au/quantified-self-tracking-with-a-form/ - Practical advice based on Elo's personal experience.

http://lesswrong.com/r/discussion/lw/ovc/background_reading_the_real_hufflepuff_sequence/ - Links and Descriptions of rationalist articles about group norms and dynamics.

https://everythingstudies.wordpress.com/2017/04/24/people-are-different/ - "We need to understand, accept and respect differences, that one size does not fit all, but to (and from) each their own."

http://bearlamp.com.au/yak-shaving-2/ - "A question worth asking is whether you are in your life at present causing a build up of problems, a decrease of problems, or roughly keeping them about the same level."

http://lesswrong.com/r/discussion/lw/oxk/i_updated_the_list_of_rationalist_blogs_on_the/ - Up to date list of rationalist blogs.

https://aellagirl.com/2017/05/02/internet-communities-otters-vs-possums/ - Possums: people who like a specific culture. Otters are people who like most cultures. What happens when the percentage of otters in a community increases?

https://aellagirl.com/2017/04/24/how-i-lost-my-faith/ - "People sometimes ask the question of why it took so long. Really I’m amazed that it happened at all. Before we even approach the aspect of “good arguments against religion”, you have to understand exactly how much is sacrificed by the loss of religion."

http://particularvirtue.blogspot.com/2017/04/on-social-spaces.html - Twitter, Tumblr, Facebook etc. PV responds to Zvi's articles about facebook. PV defends tumblr and facebook and has some criticisms of twitter. Several examples are given where rationalist groups tried to change platforms.

http://www.overcomingbias.com/2017/04/superhumans-live-among-us.html - Some human polymaths really are superhuman. But they don't have the track record to prove it.

https://thezvi.wordpress.com/2017/04/22/against-facebook/ - Sections: 1. A model breaking down how Facebook actually works. 2. An experiment with my News Feed. 3. Living with the Algorithm. 4. See First, Facebook’s most friendly feature. 5. Facebook is an evil monopolistic pariah Moloch. 6. Facebook is bad for you and Facebook is ruining your life. 7. Facebook is destroying discourse and the public record. 8. Facebook is out to get you.

https://thezvi.wordpress.com/2017/04/22/against-facebook-comparison-to-alternatives-and-call-to-action/ - Zvi's advice for managing your information streams and discussion platforms. Facebook can mostly be replaced.

https://rationalconspiracy.com/2017/04/22/moving-to-the-bay-area/ - Downsides of the Bay. Extensively sourced. Cost of living, traffic, public transit, crime, cleanliness.

https://nintil.com/2017/04/18/still-not-a-zombie-replies-to-commenters/ - Thoughts on consciousness and identity.

http://bearlamp.com.au/an-inquiry-into-memory-of-humans/ - The reader is asked to try various interesting memory exercises.

https://www.jefftk.com/p/how-to-make-housing-cheaper - 9 ways to make housing cheaper.

http://lesswrong.com/r/discussion/lw/owb/straw_hufflepuffs_and_lone_heroes/ - Should Harry have joined Hufflepuff in HPMOR? Harry had reasons to be a lone hero, do you?

http://lesswrong.com/lw/owa/lesswrong_analytics_february_2009_to_january_2017/ - Activity graphs of lesswrong over time, which posts had the most views, links to source code and further reading.

https://thezvi.wordpress.com/2017/04/23/help-us-find-your-blog-and-others/ - Zvi will read a post from your blog and consider adding you to his RSS feed.

https://thingofthings.wordpress.com/2017/04/11/book-post-for-march/ - Books on parenting.

https://boardgamesandrationality.wordpress.com/2017/04/24/first-blog-post/ - Dealing With Secret Information in boardgames and real life.

http://www.overcomingbias.com/2017/04/mormon-transhumanists.html - The relationship between religious community and technological change. Long for Overcoming Bias.

https://putanumonit.com/2017/04/15/bad-religion/ - "Rationality is a really unsatisfactory religion. But it’s a great life hack."

https://thezvi.wordpress.com/2017/04/12/escalator-action/ - Should we walk on escalators?

https://putanumonit.com/2017/04/21/book-review-too-like-the-lightning/ - The world of Jacob's dreams, thought on AI, a book review.

**EA**

http://effective-altruism.com/ea/19y/understanding_charity_evaluation/ - A detailed breakdown of how charity evaluation works in practice. Openly somewhat speculative.

http://blog.givewell.org/2017/05/11/update-on-our-views-on-cataract-surgery/ - Previously GiveWell had unsuccessfully tried to find recommendable cataract surgery charities. The biggest issues were “room for funding” and “lack of high quality monitoring data”. However they believe that cataract surgery is a promising intervention and they are doing more analysis.

https://80000hours.org/2017/05/how-much-do-hedge-fund-traders-earn/ - Detailed report on career trajectories and earnings. "We found that junior traders typically earn $300k – $3m per year, and it’s possible to reach these roles in 4 – 8 years."

https://www.givedirectly.org/blog-post?id=7612753271623522521 - 8 News links about GiveDirectly, Basic Income and cash transfer.

https://80000hours.org/2017/05/most-people-report-believing-its-incredibly-cheap-to-save-lives-in-the-developing-world/ - Details of a study on how Much Americans think it costs to save a life. Discussion of why people gave such optimistic answers. "It turns out that most Americans believe a child can be prevented from dying of preventable diseases for very little – less than $100."

https://www.thelifeyoucansave.org/Blog/ID/1355/Are-Giving-Games-a-Better-Way-to-Teach-Philanthropy - Literature review on "philanthropy games". Covers both traditional student philanthropy courses and the much shorter "giving game".

https://www.givedirectly.org/blog-post?id=8255610968755843534 - Links to news stories about Effective Altruism

http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ - " In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project."

https://www.givedirectly.org/blog-post?id=5010525406506746433 - Links to News Articles about Give Directly, Basic Income and Cash Transfer.

https://www.givedirectly.org/blog-post?id=121797500310578692 - Report on a program to give cash to coffee farmers in eastern Uganda.

http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ - Details from the first round of funding, community feedback, Mistakes and Updates.

http://lesswrong.com/r/discussion/lw/ox4/effective_altruism_is_selfrecommending/ - Open Philanthropy project has a closed validation loop. A detailed timeline of GiveWell/Open-Philanthropy is given and many untested assumptions are pointed out. A conceptual connection is made to confidence games.

http://lesswrong.com/r/discussion/lw/oxd/the_2017_effective_altruism_survey_please_take/ - Take the survey :)

https://www.givingwhatwecan.org/post/2017/04/career-of-professor-alan-fenwick/ - Retrospective on the career of the director of the Schistosomiasis Institute.

http://www.openphilanthropy.org/blog/new-report-early-field-growth - The history of attempts to grow new fields of research or advocacy.

https://www.givedirectly.org/blog-post?id=4406309858976986548 - news links about GiveDirectly, Basic Income and Cash Transfers

https://intelligence.org/2017/04/30/2017-updates-and-strategy/ - Outreach, expansion, detailed research plan, state of the AI-risk community.

http://blog.givewell.org/2017/05/04/why-givewell-is-partnering-with-idinsight/ - IDinsight is an international NGO that aims to help its clients develop and use rigorous evidence to improve social impact. Summary, Background, goals, initial plans.

https://www.thelifeyoucansave.org/Blog/ID/1354/A-Shift-in-Priorities-at-the-Giving-Game-Project - Finding sustainable funding, Providing measurable outcomes, improving follow ups with participants.

http://www.openphilanthropy.org/blog/why-are-us-corporate-cage-free-campaigns-succeeding - The article contains a timeline of cage-free reform. Some background reasons given are: Undercover investigations, College engagement, Corporate engagement, Ballot measures, Gestation crate pledges, European precedent.

https://www.givingwhatwecan.org/post/2017/04/a-successor-to-the-giving-what-we-can-trust/ - The Giving What we Can Trust has joined with the "Effective Altruism Funds" (run by the Center for Effective Altruism).

http://lesswrong.com/r/discussion/lw/oyf/bad_intent_is_a_behavior_not_a_feeling/ - Response to Nate Soares, application to EA. "If you try to control others’ actions, and don’t limit yourself to doing that by honestly informing them, then you’ll end up with a strategy that distorts the truth, whether or not you meant to."

**Ai_risk**

http://effective-altruism.com/ea/19c/intro_to_caring_about_ai_alignment_as_an_ea_cause/ - By Nate Soares. A modified transcript of the talk he gave at Google on the problem of AI alignment.

http://lukemuehlhauser.com/monkey-classification-errors/ , http://lukemuehlhauser.com/adversarial-examples-for-pigeons/ - Adversarial examples for monkeys and pigeons respectively.

https://intelligence.org/2017/05/10/may-2017-newsletter/ - Research updates, MIRI hiring, General news links about AI

https://intelligence.org/2017/04/12/ensuring/ - Nate Soares gives a talk at Google about "Ensuring smarter-than-human intelligence has a positive outcome". An outline of the talk is included.

https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/ - An extended discussion of Soares's latest paper "Cheating Death in Damascus".

**Research**

https://everythingstudies.wordpress.com/2017/05/12/the-eurovision-song-contest-taste-landscape/ - Analysis of Voting patterns in the Eurovision Contest. Alliances and voting Blocs are analyzed in depth.

https://srconstantin.wordpress.com/2017/05/12/do-pineal-gland-extracts-promote-longevity-well-maybe/ - Analysis of hormonal systems and their effect on metabolism and longevity.

https://acesounderglass.com/2017/05/11/an-opportunity-to-throw-money-at-the-problem-of-medical-science/ - Help crowdfund a randomized controlled trial. A promising Sepsis treatment needs a RCT but the method is very cheap and unpatentable. So there is no financial incentive for a company to fund the study.

https://randomcriticalanalysis.wordpress.com/2017/05/09/towards-a-general-factor-of-consumption/ - Factor Analysis leads to a general factor of consumption. Discussion of the data and analysis of the model. Very thorough.

https://randomcriticalanalysis.wordpress.com/2017/04/13/disposable-income-also-explains-us-health-expenditures-quite-well/ - Long Article, lots of graphs. "I argued consumption, specifically Actual Individual Consumption, is an exceptionally strong predictor of national health expenditures (NHE) and largely explains high US health expenditures.  I found AIC to be a much more robust predictor of NHE than GDP... I think it useful to also demonstrate these patterns as it relates to household disposable income"

https://randomcriticalanalysis.wordpress.com/2017/04/15/some-useful-data-on-the-dispersion-characteristics-of-us-health-expenditures/ - US Health spending is highly concentrated in a small fraction of the population. Is this true for other countries?

https://randomcriticalanalysis.wordpress.com/2017/04/17/on-popular-health-utilization-metrics/ - An extremely graph dense article responding to a widely cited paper claiming that "high utilization cannot explain high US health expenditures."

https://randomcriticalanalysis.wordpress.com/2017/04/28/health-consumption-and-household-disposable-income-outside-of-the-oecd/ - Another part in the series on healthcare expenses. Extending the analysis to non-OECD countries. Lots of graphs.

https://srconstantin.wordpress.com/2017/04/12/parenting-and-heritability-overview/ - Detailed literature review on heritability and what parenting can affect. A significant number of references are included.

https://nintil.com/2017/04/23/links-7/ - Psychology, Economics, Philosophy, AI

http://lesswrong.com/r/discussion/lw/ox8/unstaging_developmental_psychology/ - A mathematical model of stages of psychological development. The linked technical paper is very impressive. Starting from an abstract theory the authors managed to create a psychological theory that was concrete enough to apply in practice.

**Math and CS**

http://andrewgelman.com/2017/05/10/everybody-lies-seth-stevens-davidowitz/ - A fairly positive review of Seth's book on learning from data.

http://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-4-in-python/ - Writing a JIT compiler in Python. Discusses both using native python code and the PeachPy library. Performance considerations are explicitly not discussed.

http://eli.thegreenplace.net/2017/book-review-essentials-of-programming-languages-by-d-friedman-and-m-wand/ - Short review. "This book is a detailed overview of some fundamental ideas in the design of programming languages. It teaches by presenting toy languages that demonstrate these ideas, with a full interpreter for every language"

http://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-3-llvm/ - LLVM can dramatically speed up straightforward source code.

http://www.scottaaronson.com/blog/?p=3221 - Machine Learning, Quantum Mechanics, Google Calendar

**Politics and Economics**

http://noahpinionblog.blogspot.com/2017/04/ricardo-reis-defends-macro_13.html - Macro is defended from a number of common criticisms. A large number of modern papers are cited (including 8 job market papers). Some addressed criticisms include: Macro relies on representative agents, Macro ignores inequality, Macro ignores finance and Macro ignores data and focuses mainly on theory.

http://econlog.econlib.org/archives/2017/04/economic_system.html - What are the fundamental questions an economic system must answer?

http://andrewgelman.com/2017/04/18/reputational-incentives-post-publication-review-two-partial-solutions-misinformation-problem/ - Gelman gives a list of important erroneous analyses in the news and scientific journals. He then considers whether negative reputational incentives or post-publication peer review will solve the problem.

https://srconstantin.wordpress.com/2017/05/09/how-much-work-is-real/ - What fraction of jobs are genuinely productive?

https://hivewired.wordpress.com/2017/05/06/yes-this-is-a-hill-worth-dying-on/ - The Nazis were human too. Even if a hill is worth dying on its probably not worth killing for. Discussion of good universal norms. [Culture War]

https://srconstantin.wordpress.com/2017/05/09/chronic-fatigue-syndrome/ - Literature Analysis on Chronic Fatigue Syndrome. Extremely thorough.

https://www.gwern.net/newsletter/2017/04 - A month's worth of links: AI, recent evolution, heritability, and other topics.

https://thingofthings.wordpress.com/2017/05/05/the-cluster-structure-of-genderspace/ - For many traits the bell curves for men and women are quite close. Visualizations of Cohen's D. Discussion of trans specific medical interventions.
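The "close bell curves" point can be made concrete with Cohen's d and the overlap it implies. A minimal sketch with illustrative numbers (not figures from the post), using the standard pooled-SD formula and the overlapping coefficient 2·Φ(−|d|/2) for two equal-variance normals:

```python
import math
from statistics import NormalDist

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    # Pooled standard deviation across the two groups.
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

def overlap(d):
    # Overlapping coefficient of two unit-variance normals whose
    # means differ by d: 2 * Phi(-|d| / 2).
    return 2 * NormalDist().cdf(-abs(d) / 2)

# Illustrative numbers: adult heights in cm, two groups of 1000.
d = cohens_d(175.0, 7.0, 1000, 162.0, 6.5, 1000)
print(round(d, 2), round(overlap(d), 2))
```

Even this unusually large d (heights are among the most dimorphic traits) leaves roughly a third of the two distributions overlapping; for most psychological traits d is far smaller and the curves nearly coincide.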

https://www.jefftk.com/p/replace-infrastructure-wholesale - Can you just dig up a city and replace all the infrastructure in a week?

https://thingofthings.wordpress.com/2017/04/19/deradicalizing-the-romanceless/ - Ozy discusses the problem of (male) involuntarily celibacy.

http://noahpinionblog.blogspot.com/2017/04/the-siren-song-of-homogeneity.html - The alt-right is about racial homogeneity. Smith reviews the data studying whether a homogeneous society increases trust and social capital. Smith discusses Japanese culture and his time in Japan. Smith considers the arbitrariness of racial categories despite admitting that race has a biological reality. Smith flips around some alt-right slogans. [Extremely high-quality engagement with opposing ideas. Culture War]

https://thezvi.wordpress.com/2017/04/16/united-we-blame/ - A list of articles about United, Zvi's thoughts on United, general ideas about airlines.

http://noahpinionblog.blogspot.com/2017/04/why-101-model-doesnt-work-for-labor.html - Noah Smith gives many reasons why the simple supply/demand model can't work for labor economics.

https://thingofthings.wordpress.com/2017/04/14/concerning-archive-of-our-own/ - Ozy defends the moderation policy of the fanfiction archive AO3. [Culture War]

https://thingofthings.wordpress.com/2017/04/13/fantasies-are-okay/ - When are fantasies ok? What about sexual fantasies? [Culture War]

https://srconstantin.wordpress.com/2017/04/25/on-drama/ - Ritual, The Psychology of Adolf Hitler, the dangerous emotion of High Drama, The Rite of Spring.

https://qualiacomputing.com/2017/04/26/psychedelic-science-2017-take-aways-impressions-and-whats-next/ - Notes on the 2017 Psychedelic Science conference.

**Amusement**

http://kajsotala.fi/2017/04/fixing-the-4x-end-game-boringness-by-simulating-legibility/ - "4X games (e.g. Civilization, Master of Orion) have a well-known problem where, once you get sufficiently far ahead, you’ve basically already won and the game stops being very interesting."

https://putanumonit.com/2017/05/12/dark-fiction/ - Jacob does some Kabbalahistic Analysis on the Story of Jacob, Unsong Style.

https://protokol2020.wordpress.com/2017/04/30/several-big-numbers-to-sort/ - 12 Amusing definitions of big numbers.

http://existentialcomics.com/comic/183 - The Life of Francis

http://existentialcomics.com/comic/181 - A Presocratic Get Together.

https://protokol2020.wordpress.com/2017/05/07/problem-with-perspective/ - A 3D geometry problem.

http://existentialcomics.com/comic/184 - Wittgenstein in the Great War

http://existentialcomics.com/comic/182 - Captain Metaphysics and the Postmodern Peril

**Adjacent**

https://medium.com/@freddiedeboer/conservatives-are-wrong-about-everything-except-predicting-their-own-place-in-the-culture-e5c036fdcdc5 - Conservatives correctly predicted the effects of gay acceptance and no fault divorce. They have also been proven correct about liberal bias in academia and the media. [Culture War]

https://medium.com/@freddiedeboer/franchises-that-are-appropriate-for-children-are-inherently-limited-in-scope-8170e76a16e2 - Superhero movies have an intended audience that includes children. This drastically limits what techniques they can use and themes they can explore. Freddie goes into the details.

https://fredrikdeboer.com/2017/05/11/study-of-the-week-rebutting-academically-adrift-with-its-own-mechanism/ - Freddie wrote his dissertation on the Collegiate Learning Assessment, the primary source in "Academically Adrift".

https://medium.com/@freddiedeboer/politics-as-politics-12ab43429e64 - Politics as “group affiliation” vs politics as politics. Annoying atheists aren’t as bad as fundamentalist Christians even if more annoying atheists exist in educated leftwing spaces. Freddie’s clash with the identitarian left despite huge agreement on the object level. Freddie is a socialist not a liberal. [Culture War]

https://www.ribbonfarm.com/2017/05/09/priest-guru-nerd-king/ - Facebook, Governance, Doctrine, Strategy, Tactics and Operations. Fairly short post for Ribbonfarm.

https://fredrikdeboer.com/2017/05/09/lets-take-a-deep-dive-into-that-times-article-on-school-choice/ - A critique of the problems in the Time's well cited article on school choice. Points out issues with selection bias, lack of theory and the fact that "not everyone can be average".

http://marginalrevolution.com/marginalrevolution/2017/05/conversation-garry-kasparov.html - "We talked about AI, his new book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, why he has become more optimistic, how education will have to adjust to smart software, Russian history and Putin, his favorites in Russian and American literature, Tarkovsky..."

http://econlog.econlib.org/archives/2017/04/iq_with_conscie.html - "My fellow IQ realists are, on average, a scary bunch. People who vocally defend the power of IQ are vastly more likely than normal people to advocate extreme human rights violations." There are interesting comments here: https://redd.it/6697sh.

http://econlog.econlib.org/archives/2017/04/iq_with_conscie_1.html - Short follow-up to the above article.

http://marginalrevolution.com/marginalrevolution/2017/04/what-would-people-do-if-they-had-superpowers.html - Link to a paper showing 94% of people said they would use superpowers selfishly.

http://waitbutwhy.com/2017/04/neuralink.html - Elon Musk Wants to Build a wizard hat for the brain. Lots of details on the science behind Neuralink.

http://marginalrevolution.com/marginalrevolution/2017/04/dont-people-care-economic-inequality.html - Most Americans don’t mind inequality nearly as much as pundits and academics suggest.

http://marginalrevolution.com/marginalrevolution/2017/04/two-rationality-tests.html - What would you ask to determine if someone is rational? What would Tyler ask?

http://tim.blog/2017/05/04/exploring-smart-drugs-fasting-and-fat-loss-dr-rhonda-patrick/ - “Avoiding all stress isn’t the answer to fighting aging; it’s about building resiliency to environmental stress.”

http://wakingup.libsyn.com/what-should-we-eat - "Sam Harris speaks with Gary Taubes about his career as a science journalist, the difficulty of studying nutrition and public health scientifically, the growing epidemics of obesity and diabetes, the role of hormones in weight gain, the controversies surrounding his work, and other topics."

http://www.econtalk.org/archives/2017/05/jennifer_pahlka.html - Code for America. Bringing technology into the government sector.

http://heterodoxacademy.org/resources/viewpoint-diversity-experience/ - A six-step process to appreciating viewpoint diversity. I am not sure this site will be the most useful to rationalists, on the object level, but it's interesting to see what Haidt came up with.

http://www.econtalk.org/archives/2017/04/elizabeth_pape.html - Elizabeth Pape on Manufacturing and Selling Women's Clothing and Elizabeth Suzann

http://www.mrmoneymustache.com/2017/04/25/there-are-no-guarantees/ - Avoid Contracts. Don't work another year "just in case".

http://marginalrevolution.com/marginalrevolution/2017/04/saturday-assorted-links-109.html - Assorted Links on politics, Derrida, Shaolin Monks.

http://econlog.econlib.org/archives/2017/04/earth_20.html - Bryan Caplan was a guest on Freakonomics Radio. The topic was "Earth 2.0: Is Income Inequality Inevitable?".

https://www.ribbonfarm.com/2017/04/18/entrepreneurship-is-metaphysical-labor/ - Metaphysics as Intellectual Ergonomics. Entrepreneurship is Applied Metaphysics.

https://www.ribbonfarm.com/2017/04/13/idiots-scaring-themselves-in-the-dark/ - Getting Lost. "The uncanny. This is the emotion of eeriness, spookiness, creepiness"

**Podcast**

http://rationallyspeakingpodcast.org/show/rs-182-spencer-greenberg-on-how-online-research-can-be-faste.html - Podcast. Spencer Greenberg on "How online research can be faster, better, and more useful".

https://medium.com/conversations-with-tyler/patrick-collison-stripe-podcast-tyler-cowen-books-3e43cfe42d10 - Patrick Collison, co-founder of Stripe, interviews Tyler.

http://tim.blog/2017/04/11/cory-booker/ - Podcast with US Senator Cory Booker. "Street Fights, 10-Day Hunger Strikes, and Creative Problem-Solving"

http://econlog.econlib.org/archives/2017/04/the_undermotiva_1.html - Two case studies on libertarians who changed their views for bad reasons.

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/death-sex-and-moneys-anna-sale-on-bringing-empathy-to-politics-50101701 - Interview with the host of the WNYC podcast Death, Sex, and Money.

http://marginalrevolution.com/marginalrevolution/2017/05/econtalk-podcast-russ-roberts-complacent-class.html - "Cowen argues that the United States has become complacent and the result is a loss of dynamism in the economy and in American life, generally. Cowen provides a rich mix of data, speculation, and creativity in support of his claims."

http://tim.blog/2017/04/16/marie-kondo/ - Podcast. "Marie Kondo is a Japanese organizing consultant, author, and entrepreneur."

http://www.econtalk.org/archives/2017/04/rana_foroohar_o.html - Podcast. Rana Foroohar on the Financial Sector and Makers and Takers

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/cal-newport-on-doing-deep-work-and-escaping-social-media-49878016 - Cal Newport on doing Deep Work and escaping social media.

https://www.samharris.org/podcast/item/forbidden-knowledge - Podcast with Charles Murray. Controversy over The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump. [Culture War]

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/elizabeth-warren-on-what-barack-obama-got-wrong-49949167 - Ezra Klein Podcast with Elizabeth Warren.

http://marginalrevolution.com/marginalrevolution/2017/04/stubborn-attachments-podcast-ft-alphaville.html - Podcast with Tyler Cowen on Stubborn Attachments. "I outline a true and objectively valid case for a free and prosperous society, and consider the importance of economic growth for political philosophy, how and why the political spectrum should be reconfigured, how we should think about existential risk, what is right and wrong in Parfit and Nozick and Singer and effective altruism, how to get around the Arrow Impossibility Theorem, to what extent individual rights can be absolute, how much to discount the future, when redistribution is justified, whether we must be agnostic about the distant future, and most of all why we need to “think big.”"

http://www.themoneyillusion.com/?p=32435 - Notes on three podcasts. Faster RGDP growth, Monetary Policy, Tyler Cowen's philosophical views.

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/vc-bill-gurley-on-transforming-health-care-50030526 - A conversation about which healthcare systems are possible in the USA and the future of Obamacare.

https://www.currentaffairs.org/2017/05/campus-politics-and-the-administrative-mind - The nature of College Bureaucracy. Focuses on protests and Title 9. [Culture war]

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/cory-booker-returns-live-to-talk-trust-trump-and-basic-incomes-50054271 - "Booker and I dig into America’s crisis of trust. Faith in both political figures and political institutions has plummeted in recent decades, and the product is, among other things, Trump’s presidency. So what does Booker think can be done about it?"

http://tim.blog/2017/04/22/dorian-yates/ - Bodybuilding Champion. High Intensity Training, Injury Prevention, and Building Maximum Muscle.

Open thread, May 15 - May 21, 2017

1 Elo 15 May 2017 07:06AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] Anthropic uncertainty in the Counterfactual Blackmail problem

2 Johannes_Treutlein 14 May 2017 04:43PM

Making decisions in a real computer - an argument for meta-decision theory research

2 whpearson 13 May 2017 11:18PM

Decision theory is being used as the basis for AI safety work. This currently involves maximising the expected utility of specific actions. Maximising expected utility is woefully inefficient for the rapid-fire, unimportant decisions that occur frequently in computing. But these fast-paced decisions will still need to be made in a purpose-oriented way in an AI.

This article presents an argument that we should explore meta-decision theories to allow the efficient solution of these problems. Meta-decision theories are also more human-like and could have different central problems from first-order decision theories.
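One toy way to picture the trade-off (an illustrative sketch, not the article's formalism; all names here are made up): a meta-decision is a decision about whether a decision is worth deliberating over at all, falling back on a cheap cached policy when the stakes are low.

```python
def expected_utility_choice(actions, outcomes, utility):
    # Object-level deliberation: score every action by its expected
    # utility (thorough, but slow for rapid-fire decisions).
    return max(actions,
               key=lambda a: sum(p * utility(o) for o, p in outcomes(a)))

def meta_decide(actions, outcomes, utility, stakes, cheap_policy, threshold):
    # Meta-level decision: is this decision worth deliberating about?
    if stakes >= threshold:
        return expected_utility_choice(actions, outcomes, utility)
    return cheap_policy(actions)

# Toy usage: each action deterministically yields itself as the outcome.
outcomes = lambda a: [(a, 1.0)]
utility = {"safe": 1.0, "risky": 5.0}.get
print(meta_decide(["safe", "risky"], outcomes, utility, stakes=10.0,
                  cheap_policy=lambda acts: acts[0], threshold=1.0))
print(meta_decide(["safe", "risky"], outcomes, utility, stakes=0.1,
                  cheap_policy=lambda acts: acts[0], threshold=1.0))
```

The high-stakes call triggers full deliberation; the low-stakes one returns the cached default without evaluating any utilities, which is the efficiency the article is after.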

continue reading »

[Link] Surfing Uncertainty: Prediction, Action, and the Embodied Mind - The Future of Prediction

0 morganism 13 May 2017 08:59PM

[Link] Reality has a surprising amount of detail

14 jsalvatier 13 May 2017 08:02PM

Why do we think most AIs unintentionally created by humans would create a worse world, when the human mind was designed by random mutations and natural selection, and created a better world?

1 contravariant 13 May 2017 08:23AM

As far as AI designers go, evolution has to be one of the worst. It randomly changes the genetic code, and then selects on the criterion of ingroup reproductive fitness - in other words, how well a being can reproduce and stay alive - which says nothing about the goals of that being while it's alive.

To survive, and increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.

And yet, it created humans and not paperclip maximizers. True, humans rebelled against and overpowered evolution, but in the end we ended up creating amazing things and not a universe tiled with paperclips (or DNA, for that matter).

Considering how neural network training and genetic algorithms are considered some of the most dangerous ways of creating an AI, the fact that natural evolution managed to create us with all our goals of curiosity and empathy and love and science would be a very unlikely coincidence, given our assumption that most AIs we could create are worthless in terms of their goals and what they will do with the universe. Did it happen by chance? The p-value is pretty small on this one.

Careless evolution managed to create humans on her first attempt at intelligence, but humans, given foresight and intelligence, have an extreme challenge making sure an AI is friendly? How can we explain this contradiction? 

 

[Link] Decision Theories in Real Life

2 lifelonglearner 13 May 2017 01:47AM

SlateStarCodex Meetups Everywhere: Analysis

10 mingyuan 13 May 2017 12:29AM

The first round of SlateStarCodex meetups took place from April 4th through May 20th, 2017 in 65 cities, in 16 countries around the world. Of the 69 cities originally listed as having 10 or more people interested, 9 did not hold meetups, and 5 cities that were not on the original list did hold meetups.

We collected information from 43 of these events. Since we are missing data for 1/3 of the cities, there is probably some selection bias in the statistics; I would speculate that we are less likely to have data from less successful meetups.

Of the 43 cities, 25 have at least tentative plans for future meetups. Information about these events will be posted at the SSC Meetups GitHub.

 

Turnout

Attendance ranged from 3 to approximately 50 people, with a mean of 16.7. Turnout averaged about 50% of those who expressed interest on the survey (range: 12% to 100%), twice what Scott expected. This average does not appear to have been skewed by high turnout at a few events – mean: 48%, median: 45%, mode: 53%.

On average, gender ratio seemed to be roughly representative of SSC readership overall, ranging from 78% to 100% male (for the 5 meetups that provided gender data). The majority of attendees were approximately 20-35 years old, consistent with the survey mean age of 30.6.

 

Existing vs new meetups

Approximately one fifth of the SSC meetups were hosted by existing rationality or LessWrong groups. Some of these got up to 20 new attendees from the SSC announcement, while others saw no new faces at all. The two established meetups that included data about follow-up meetings reported that retention rates for new members were very low, at best 17% for the next meeting.

Here, it seems important to make a distinction between the needs of SSC meetups specifically and rationality meetups more generally. On the 2017 survey, 50% of readers explicitly did not identify with LW and 54% explicitly did not identify with EA. In addition, one organizer expressed the concern that, “Going forward, I think there is a concern of “rationalists” with a shared background outnumbering the non-lesswrong group, and dominating the SSC conversation, making new SSC fans less likely to engage.”

This raises the question of whether SSC groups should try to exist separately from local EA/LW/rationalist/skeptic groups – this is of particular concern in locations where the community is small and it’s difficult for any of these groups to function on their own due to low membership.

Along the same lines, one organizer wondered how often it made sense to hold events, since “If meetups happen very frequently, they will be attended mostly by hardcore fans (and a certain type of person), while if they are scheduled less frequently, they are likely to be attended by a larger, more diverse group. My fear is the hardcore fans who go bi-weekly will build a shared community that is less welcoming/appealing to outsiders/less involved people, and these people will be less willing to get involved going forward.”

Suggestions on how to address these concerns are welcome.

 

Advice for initial meetings

Bring name tags, and collect everyone’s email addresses. It’s best to do this on a computer or tablet, since some people have illegible handwriting, and you don’t want their orthographic deficiencies to mean you lose contact with them forever.

Don’t try to impose too much structure on the initial meeting, since people will mostly just want to get to know each other and talk about shared interests. If possible, it’s also good to not have a hard time limit - meetups in this round lasted between 1.5 and 6 hours, and you don’t want to have to make people leave before they’re ready. However, both structure and time limits are things you will most likely want if you have regularly recurring meetups.

 

Content

Most meetups consisted of unstructured discussion in smallish groups (~7 people). At least one organizer had people pair up and ask each other scripted questions, while another used lightning talks as an ice-breaker. Other activities included origami, Rationality Cardinality, and playing with magnadoodles and diffraction glasses, but mostly people just wanted to talk.

Topics, predictably, mostly centered around shared interests, and included: SSC and other rationalist blogs, rationalist fiction, the rationality community, AI, existential risk, politics and meta-politics, book recommendations, and programming (according to the survey, 30% of readers are programmers), as well as normal small talk and getting-to-know-each-other topics.

Common ice-breakers included first SSC post read, how people found SSC, favorite SSC post, and SSC vs LessWrong (aka, is Eliezer or Scott the rightful caliph).

Though a few meetups had a little difficulty getting conversation started and relied on ice-breakers and other predetermined topics, no organizer reported prolonged awkwardness; people had a lot to talk about and conversation flowed quite easily for the most part.

One area where several organizers encountered difficulties was large discrepancies in knowledge of rationalist-sphere topics among attendees, since some people had only recently discovered SSC or were even non-readers brought along by friends, while many others were long-time members of the community. Suggestions for quickly and painlessly bridging inferential gaps on central concepts in the community would be appreciated.

 

Locations 

Meetups occurred in diverse locations, including restaurants, cafés, pubs/bars, private residences, parks, and meeting rooms in coworking spaces or on university campuses.

Considerations for choosing a venue:

  • Capacity – Some meetups found that their original venues couldn’t accommodate the number of people who attended. This happened at a private residence and at a restaurant. Be flexible about moving locations if necessary.
  • Arrangement – For social meetups, you will probably want a more flexible format. For this purpose, it’s best to have the run of the space, which you have in private residences, parks, meeting rooms, and bars and restaurants if you reserve a whole room or floor.
  • Noise – Since the main activity is talking, this is an important consideration. An ideal venue is quiet enough that you can all hear each other, but (if public) not so quiet that you will be disrupting others with your conversation.
  • Visibility – If meeting in a public place, have a somewhat large sign that says ‘SSC’ on it, placed somewhere easily visible. If the location is large or hard to find, consider including your specific location (e.g. ‘we’re at the big table in the northwest corner’) or GPS coordinates in the meetup information.
  • Permission – Check with the manager first if you plan to hold a large meetup in a private building, such as a mall, market, or café. Also consider whether you’ll be disturbing other patrons.
  • Time restrictions – If you are reserving a space, or if you are meeting somewhere that has a closing time, be aware that people may want to continue their discussions for longer than the space is available. Have a contingency plan for this, a second location to move to in case you run overtime.
  • Availability of food – Some meetups lasted as long as six hours, so it’s good to either bring food, meet somewhere with easy access to food, or be prepared to go to a restaurant.
  • Privacy – People at some meetups were understandably hesitant to have controversial / culture war discussions in public. If you anticipate this being a problem, you should try to find a more private venue, or a more secluded area.

Conclusion

Overall most meetups went smoothly, and many had unexpectedly high turnout. Almost every single organizer, even for the tiny meetups, reported that attendees showed interest in future meetings, but few had concrete plans.

These events have been an important first step, but it remains to be seen whether they will lead to lasting local communities. The answer is largely up to you.

If you attended a meetup, seek out the people you had a good time talking to, and make sure you don’t lose contact with them. If you want there to be more events, just set a time and place and tell people. You can share details on local Facebook groups, Google groups, and email lists, and on LessWrong and the SSC meetups repository. If you feel nervous about organizing a meetup, don’t worry, there are plenty of resources just for that. And if you think you couldn’t possibly be an organizer because you’re somehow ‘not qualified’ or something, well, I once felt that way too. In Scott’s words, “it would be dumb if nobody got to go to meetups because everyone felt too awkward and low-status to volunteer.”

Finally, we’d like to thank Scott for making all of this possible. One of the most difficult things about organizing meetups is that it’s hard to know where to look for members, even if you know there must be dozens of interested people in your area. This was an invaluable opportunity to overcome that initial hurdle, and we hope that you all make the most of it.

 

Thanks to deluks917 for providing feedback on drafts of this report, and for having the idea to collect data in the first place :)

Gears in understanding

23 Valentine 12 May 2017 12:36AM

Some (literal, physical) roadmaps are more useful than others. Sometimes this is because of how well the map corresponds to the territory, but sometimes it's because of features of the map that are irrespective of the territory. E.g., maybe the lines are fat and smudged such that you can't tell how far a road is from a river, or maybe it's unclear which road a name is trying to indicate.

In the same way, I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap.

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don't know if this list is exhaustive and would be a little surprised if it were:

  1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
  2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
  3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

I think this is a really important idea that ties together a lot of different topics that appear here on Less Wrong. It also acts as a prerequisite frame for a bunch of ideas and tools that I'll want to talk about later.

I'll start by giving a bunch of examples. At the end I'll summarize and gesture toward where this is going as I see it.

continue reading »

That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

11 Stuart_Armstrong 11 May 2017 09:16AM

That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

Anders Sandberg, Stuart Armstrong, Milan M. Cirkovic

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
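The size of the multiplier follows from Landauer's principle; a sketch of the standard argument (not the paper's full derivation): erasing one bit of information at temperature $T$ costs at least

```latex
E_{\min} = k_B T \ln 2
```

so the number of irreversible bit operations a fixed energy budget buys scales as $1/T$. Waiting for the ambient temperature to fall from today's $T_{\text{now}} \approx 3\,\mathrm{K}$ toward a far-future $T_{\text{future}}$ multiplies achievable computation by roughly $T_{\text{now}} / T_{\text{future}}$, which is where an enormous factor like $10^{30}$ can come from.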

As far as I can tell, the paper's physics is correct (most of the energy comes not from burning stars but from the universe's mass).

However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.

The paper is still worth publishing, though, because there may be other, more plausible ideas in the vicinity of this one. And it describes how future civilizations may choose to use their energy.

Hidden universal expansion: stopping runaways

5 Stuart_Armstrong 11 May 2017 09:01AM

We have a new paper out, presenting the 'aestivation hypothesis'. It's another attempt to reconcile the fact that cosmic expansion seems very easy, yet we see no trace of any alien group doing it.

The idea is that civilizations expand rapidly, but then 'go to sleep', while they wait for the temperature to drop and it becomes possible to do computations with maximal efficiency.

There are a few problems with the theory, though - mainly, why would the civilizations conceal themselves? Even if they were sleeping, they should have some automated processes rounding up intergalactic gases, preventing stars from drifting out of galaxies, and so on.

But though it's hard to justify a civilization permanently hiding, there are reasons why a civilization might hide temporarily.

Consider the following diagram:

Here, a civilization is expanding from the red point, and will eventually reach Earth (drawn not entirely to scale). It's expanding at a decent fraction of light-speed. The red sphere is their physical expansion front, while the yellow sphere is the light expansion front. When that yellow reaches Earth, we will generally be able to notice their expansion, and have some time to react to it - unless they conceal themselves as they expand.

Why would they want to do that? It's not as if we could counter their expansion, or have any chance of resisting. But there is one thing that we might be able to do: flee. Imagine that we got a hundred years warning; we might be able to rush AI, Dyson the sun, build escape ships and launch them at a significant fraction of light speed, etc. They might never be able to catch us, and, as we or our AIs fled, we could develop technologies to reduce or damage our pursuers.
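The warning time implied by the two fronts is simple arithmetic; a sketch with illustrative numbers (assuming a constant expansion speed):

```python
def warning_years(distance_ly, speed_frac_c):
    # Light from the expansion front covers distance_ly light-years in
    # distance_ly years; the physical front, moving at a fraction
    # speed_frac_c of light-speed, takes distance_ly / speed_frac_c years.
    light_arrival = distance_ly
    front_arrival = distance_ly / speed_frac_c
    return front_arrival - light_arrival

print(warning_years(100, 0.5))  # 100 ly away at 0.5c: 100 years of warning
```

The faster the expansion, the smaller this gap, which is exactly why concealment matters most to slow expanders.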

Therefore, it makes sense for the expanding civilization to conceal itself until it has any other civilizations completely surrounded. That means that Dyson swarms and other major feats of stellar engineering might be delayed by many years or decades by the red civilization. So that the 'noticeability front' - the distance at which other civilizations can see clear evidence of red's expansion - lags a bit behind their actual expansion front.

WMDs in Iraq and Syria

9 ChristianKl 10 May 2017 09:03PM

Tetlock wrote in Superforecasters that the US intelligence establishment was likely justified in believing that Iraq was probably hiding WMDs. According to Tetlock, their sin was asserting that it was certain that Iraq had WMDs.

When first reading Superforecasters I didn't quite understand the situation. After reading https://theintercept.com/2015/04/10/twelve-years-later-u-s-media-still-cant-get-iraqi-wmd-story-right/ I did.

The core problem was that Saddam lost track of some of his chemical weapons. His military didn't do perfect accounting of them and they looked the same as conventional weapons. It takes an x-ray to tell his chemical weapons apart from the normal ones.

The US intercepted communications where Saddam told his units to ensure that they had no chemical weapons that inspectors could find. Of course, that communication didn't happen in English. That communication seems to have been misinterpreted by the US intelligence community as evidence that Saddam is hiding WMDs.

Nearly nobody understood that Iraq having chemical weapons and hiding them are two different things, because you need to know where your chemical weapons are in order to hide them. By the same token, nobody publicly argues that pure incompetence might be the cause of chemical weapon usage in Syria. We want to see human agency: if a chemical weapon exploded, we want to know that someone is guilty of having made the decision to use it.
In a recent Facebook discussion about Iraq and the value of IARPA, a person asserted that the US intelligence community only thought Iraq had WMDs because it was subject to political pressure.
We have to get better at understanding that bad events can happen without people intending them to happen.

After understanding Iraq, it's interesting to look at Syria. Maybe the chemical weapons that exploded in Syria didn't explode because Assad's troops or the opposition wanted to use chemical weapons. They might have simply exploded because someone did bad accounting and mislabeled a chemical weapon as a conventional one.

The idea that WMDs might explode by accident may be too horrible to contemplate. But we have to get better at seeing incompetence as a possible explanation when we would rather pin the guilt for an evil decision on another person.

[Link] From data to decisions: Processing information, biases, and beliefs for improved management of natural resources and environments

1 Gunnar_Zarncke 08 May 2017 09:47PM

Open thread, May 8 - May 14, 2017

3 Thomas 08 May 2017 08:10AM

If it's worth saying, but not worth its own post, then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

On-line google hangout on approaches to the control problem (2017/6/13 7PM UTC)

3 whpearson 07 May 2017 07:13PM

I'd like to get more discussion on-line about approaches to the control problem, so I'm hosting a hangout.

I'll run it as a lean-coffee-style meeting with the broad theme of what to do about the control problem. People propose topics and we vote on which to discuss. Then we discuss the most popular topics for a set period of time, with each topic's proposer going first.

Message me with your email address for an invite.

Voting on whether to continue a topic will be done via Slack, so video won't be mandatory. Topic write-ups will be on a Trello board.
