All of Drake Morrison's Comments + Replies

This is my favorite guide to confronting doom yet

This reminds me of Justin Skycak's thoughts on Deliberate Practice with Math Academy. I think his ~400-page document about skill building and pedagogy would be useful to you if you haven't seen it yet.

1Raemon
Ah thanks. I think I might have seen that a long time ago, but when I was in a different headspace.

I think this post was important, and pointing out a very real dynamic. It also seems to have sparked some conversations about moderation on the site, and so feels important as a historical artifact. I don't know if it should be in the Best Of, but I think something in this reference class should be. 

Completed the survey. I really appreciate the work you do @Screwtape to make the census better!

1Screwtape
Thank you, both for taking the survey and for the appreciation!

I like this! Especially the Past, Present, Future framing. I usually split along epistemic and instrumental lines. So my fundamental questions were:
1. Epistemic: What do you think you know and how do you think you know it?
2. Instrumental: What are you trying to protect, and how are you trying to protect it?

I've had some notion of a third thing, but now I've got a better handle on it, thanks!

I'm fond of saying, "your ethics are only opinions until it costs you to uphold them"

The reason I think this is important is because "[t]o argue against an idea honestly, you should argue against the best arguments of the strongest advocates": if you write 3000 words inveighing against people who think comparative advantage means that horses can't get sent to glue factories, that doesn't license the conclusion that superintelligence Will Definitely Kill You if there are other reasons why superintelligence Might Not Kill You that don't stop being real just because very few people have the expertise to formulate them carefully.

 

There's ... (read more)

I believe DaystarEld was talking about this in various places at LessOnline. They've got a sequence going in more depth here: Procedural Executive Function, Part 1 

2Sable
Yup. That's where I learned about it. I was looking for the link too and couldn't find it. Thanks!

If they don't tell you how to hold them accountable, it's a Chaotic intention, not a Lawful commitment.

What do you mean by "necessary truth" and "epistemic truth"? I'm sorta confused about what you are asking.

I can be uncertain about the 1000th digit of pi. That doesn't make the digit being 9 any less valid. (Perhaps what you mean by necessary?) Put another way, the 1000th digit of pi is "necessarily" 9, but my knowledge of this fact is "epistemic". Does this help?
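Tangent, but the digit claim can be checked mechanically. A quick sketch using Jeremy Gibbons' unbounded spigot algorithm (the function name is mine): the generator's first yield is the leading 3, so its 1001st digit is the 1000th decimal place.

```python
from itertools import islice

def pi_digits():
    """Yield the decimal digits of pi one at a time (3, 1, 4, 1, 5, ...),
    using Jeremy Gibbons' unbounded spigot algorithm."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = list(islice(pi_digits(), 1001))  # "3" plus the first 1000 decimals
print(digits[1000])  # the 1000th decimal digit — 9, as claimed above
```

Your knowledge of the digit is epistemic, but the digit itself falls out of the computation the same way every time.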

For what it's worth, I find the Dath Ilan song to be one of my favorites. Upon listening I immediately wanted this song to be played at my funeral. 

There's something powerful there, which can be dangerous, but it's a kind of feeling that I draw strength and comfort from. I specifically like the phrasing around sins and forgiveness, and expect it to be difficult to engender the same comfort or strength in me without it. Among my friends I'm considered a bit weird in how much I think about grief and death and loss. So maybe it's a weird psychology thing. 

Answer by Drake Morrison43

If you can code, build a small AI with the fast.ai course. This will (hopefully) be fun while also showing you particular holes in your knowledge to improve, rather than a vague feeling of "learn more". 

If you want to follow along with more technical papers, you need to know the math of machine learning: linear algebra, multivariable calculus, and probability theory. For Agent Foundations work, you'll need more logic and set theory type stuff. 

MIRI has some recommendations for textbooks here. There's also the Study Guide and this sequence on leve... (read more)

4metachirality
Vanessa Kosoy has a list specifically for her alignment agenda, but it is probably applicable to agent foundations in general: https://www.alignmentforum.org/posts/fsGEyCYhqs7AWwdCe/learning-theoretic-agenda-reading-list

Feature Suggestion: add a number to the hidden author names.

I enjoy keeping the author names hidden when reading the site, but find it difficult to follow comment threads when there isn't a persistent id for each poster. I think a number would suffice while keeping the hiddenness.
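To make the suggestion concrete, here's a minimal sketch (names and structure are mine, not anything LessWrong actually implements): assign each hidden author a persistent number in order of first appearance within a thread.

```python
def number_hidden_authors(comment_authors):
    """Map each author to a stable per-thread label ("Hidden User 1", ...),
    assigned in order of first appearance, so readers can follow who is
    speaking without revealing names."""
    ids = {}
    labels = []
    for author in comment_authors:
        if author not in ids:
            ids[author] = len(ids) + 1
        labels.append(f"Hidden User {ids[author]}")
    return labels

# A given author always gets the same number within the thread.
print(number_hidden_authors(["alice", "bob", "alice", "carol", "bob"]))
```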

This has unironically increased the levels of fun in my life

If you already have the concept, you only need a pointer. If you don't have the concept, you need the whole construction. [1]

  1. ^

Yay! I've always been a big fan of the art you guys did on the books. The Least Wrong page has a sort of official magazine feel I like due to the extra design. 

Completed the survey. I liked the additional questions you added, and the overall work put into this. Thanks!

2Screwtape
You are welcome! Thank you for taking it :)

Oh, got it. 

I mean, that still sounds fine to me? I'd rather know about a cool article because it's highly upvoted (and the submitter getting money for that) than not know about the article at all. 

If the money starts being significant I can imagine authors migrating to the sites where they can get money for their writing. (I imagine this has already happened a bit with things like substack)

2mako yass
I don't think people are going to be motivated by the monetary incentive to post much more than they already do. People seem to already like sharing stuff they think is good. Maybe. But that transition could be accelerated by having a credit assignment system (where the author of the post gets money set aside even before they're aware of the site and collecting it), and you're going to need a credit assignment system later anyway when people start reposting things and trying to claim credit for them.

You get money for writing posts that people like. Upvoting posts doesn't get you money. I imagine that creates an incentive to write posts. Maybe I'm misunderstanding you?

2mako yass
Uh, you get money for having your submissions upvoted, right? And most of the articles that are upvoted won't be written on the site, they'll be linked, so the submitter will get the money instead of the author. Submission is curator work.

non.io is a reddit clone that costs $1 to subscribe, and then splits that money among the users whose posts you upvote most. I think it's an interesting idea worth watching.
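The proportional split is simple arithmetic; a sketch under my own naming (not non.io's actual code), working in integer cents so the payout always sums to the subscription:

```python
def split_subscription(subscription_cents, upvotes_by_creator):
    """Split a subscriber's monthly fee across creators in proportion to
    how many of their posts the subscriber upvoted. Leftover cents from
    integer rounding go to the most-upvoted creator."""
    total = sum(upvotes_by_creator.values())
    if total == 0:
        return {}
    payouts = {c: subscription_cents * v // total
               for c, v in upvotes_by_creator.items()}
    remainder = subscription_cents - sum(payouts.values())
    top = max(upvotes_by_creator, key=upvotes_by_creator.get)
    payouts[top] += remainder
    return payouts

print(split_subscription(100, {"ann": 3, "ben": 1}))  # $1.00 split 3:1
```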

3Sinclair Chen
there's also Hive (formerly Steemit), which tries to reward posters of highly upvoted things, and early upvoters who correctly predict what will become big. I think empirically money-based social media hasn't really taken off, but I suspect it's mostly due to transaction costs, bad UI, and the public goods problem (as information is freely copied). These are all solvable!
4mako yass
I got excited about this briefly. I think it's too simple to be interesting, today. Incentivizing curation won't have much impact at these scales. Incentivizing production would, but it makes no attempt to identify and credit creators.

Maybe? I've not played it all that much, honestly. I was simply struck by the neat way it interacted with multiple players. 

I think it could easily be tweaked or house-ruled into a Peacewager game by just revealing all the hidden information. Next time I play I'll probably try it out this way. 

War of Whispers is a semi-cooperative game where you play as cults directing nations in their wars. It's cooperative because each player's cult can change which nation it supports. So you can end up negotiating and cooperating with other players to boost a particular nation, because you both get points for it. 

Both times I've played people started on opposite sides, then ended up on the same or nearly the same side. In one of the games two players tied. 

There is still the counting of points so it doesn't quite fit what you are going for here, but it is the closest game I know of where multiple players can start negotiating for mutual aid and both win. 

5dr_s
In the same vein, I'm wondering if you could modify Carcassonne to work as a Peacewager. The game seems to lend itself to it (after all it's about cooperatively shaping a landscape) so having a slight redefinition of individual goals might work.
2mako yass
Hmm if there's a lot of high quality negotiation that would count for something. If you removed MostPointsWins, would it still function?

I think this is pointing at something real. Have you looked at any of the research with the MDA Framework used in video game development?

There are lots of reasons a group (or individual) goes to play a game. This framework found that the reasons cluster into these 8 categories: 

  1. the tactile senses (enjoying the shiny coins, or the clacking of dice)
  2. Challenge (the usual "playing to win" but also things like speedrunners)
  3. Narratives (playing for the story, the characters and their actions)
  4. Fantasy (enjoyment of a make-believe world. Escapism)
  5. Fellowship (hangi
... (read more)

oh, that's right. I keep forgetting that LessWrong karma does the weighting thing. 

Has anyone tried experimenting with EigenKarma? It seems like it or something like it could be a good answer for some of this.
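For reference, the core of an EigenKarma-style scheme is trust propagation over the upvote graph — an eigenvector computation in the PageRank family. A toy sketch (my own simplification, not the actual EigenKarma codebase):

```python
def eigen_trust(upvotes, iterations=50, damping=0.85):
    """Toy EigenKarma: each user's trust is the damped sum of trust flowing
    in from their upvoters, with each upvoter splitting their trust evenly
    across everyone they upvoted. `upvotes` maps voter -> list of users
    they upvoted. Returns a PageRank-style trust score per user."""
    users = set(upvotes) | {u for vs in upvotes.values() for u in vs}
    trust = {u: 1.0 / len(users) for u in users}
    for _ in range(iterations):
        new = {u: (1 - damping) / len(users) for u in users}
        for voter, voted in upvotes.items():
            if voted:
                share = damping * trust[voter] / len(voted)
                for u in voted:
                    new[u] += share
        trust = new
    return trust

# "alice" is upvoted by both other users, so she ends up most trusted.
scores = eigen_trust({"alice": ["bob"], "bob": ["alice"], "carol": ["alice"]})
print(max(scores, key=scores.get))
```

The interesting design questions are upstream of this loop: whose trust seeds the system, and how to stop sock puppets from upvoting each other into relevance.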

6Raemon
One thing to note is that the LessWrong vote-weighting system is (in some ways) intended to be a poor man's eigenkarma (i.e. it does a somewhat similar thing of weighting karma by trust). There are a few different ways that "canonical EigenKarma" differs from LW upvote/strong-upvote power. What are the things you're particularly interested in here?
2philh
There is/was such a project. I don't remember details but @plex probably knows what's going on. (That was meant to auto link and give him a notification but I've never tried it before and it apparently didn't work.)

I think this elucidates the "everyone has motives" issue nicely. Regarding the responses, I feel uneasy about the second one. Sticking to the object level makes sense to me. I'm confused about how psychoanalysis is supposed to work without devolving. 

For example, let's say someone thinks my motivation for writing this comment is [negative-valence trait or behavior]. How exactly am I supposed to verify my intentions?

In the simple case, I know what my intentions are and they either trust me when I tell them or they don't. 

It's the cases when people can't... (read more)

Someone did a lot of this already here. Might be worth checking their script to use yourself.

Answer by Drake Morrison10

I think what you are looking for is prediction markets. The ones I know of are:

  1. Manifold Markets - play-money that's easy and simple to use
  2. Metaculus - more serious one with more complex tools (maybe real money somehow?)
  3. PredictIt - just for US politics? But looks like real money?
1M. Y. Zuo
There's no guarantee that they will actually pay out on a long-term bet, however, since there's no guarantee they will continue existing or honour their obligations, even if they accepted real money. Unless you know of some possibility?

I don't see all comments as criticism. Many comments are of the building-up variety! It's that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times. 

Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations.

The benefits of building-comments are easy to get within 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by allowing only 3 comments per day per post. 

I think we have very different models of things, so I will try to clarify mine. My best babble-site example is not in English, so I will give another one: the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. Just look at the sheer LENGTH of this page!

https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor

There are many more than 3 comments per person there.

From my point of view, this rule creates a hard ceiling that forbids the best discussions from happening, because the best discussions are creative back-and-forth. my best discussi... (read more)

Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking. 

Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind... (read more)

8Jasnah Kholin
I find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what I see as problematic. Good comments should, most of the time, not be criticism; they should be part of the building. The dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and how those relate to the post. Counting all comments as prune instead of babble disincentivizes babble-comments. Is this what you want?
5Raemon
Yeah this is the sort of solution I'm thinking of (although it sounds like you're maybe making a more sweeping assumption than me?) My current rough sense is that a rate limit of 3 comments per post per day (maybe with an additional wordcount based limit per post per day), would actually be pretty reasonable at curbing the things I'm worried about (for users that seem particularly prone to causing demon threads)
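Mechanically, that rate limit is easy to state in code. A sketch under my own naming (not actual LessWrong moderation code), with the day passed in as a simple key rather than computed from timestamps:

```python
from collections import defaultdict

class CommentRateLimiter:
    """Allow at most `max_per_day` comments per (user, post) per day,
    as in the proposed 3-comments-per-post-per-day rule."""
    def __init__(self, max_per_day=3):
        self.max_per_day = max_per_day
        self.counts = defaultdict(int)  # (user, post, day) -> comments made

    def try_comment(self, user, post, day):
        key = (user, post, day)
        if self.counts[key] >= self.max_per_day:
            return False  # over the limit; comment rejected
        self.counts[key] += 1
        return True

limiter = CommentRateLimiter()
results = [limiter.try_comment("said", "post-1", "2023-04-14") for _ in range(4)]
print(results)  # the fourth comment on the same post that day is rejected
```

A wordcount-based variant would track cumulative words per key instead of a comment count.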

If you feel like it should be written differently, then write it differently! Nobody is stopping you. Write a thousand roads to Rome

Could Eliezer have written it differently? Maybe, maybe not. I don't have access to his internal writing cognition any more than you do. Maybe this is the only way Eliezer could write it. Maybe he prefers it this way, I certainly do.

Light a candle, don't curse the darkness. Build, don't burn. 

I used this link to make my own, and it seems to work nicely for me thus far. 

This sequence has been a favorite of mine for finding little drills or exercises to practice overcoming biases.

https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness

Cult and not-cult aren't two separate categories; they're the ends of a spectrum that all human groups live on. 

I agree wholeheartedly that the intent of the guidelines isn't enough. Do you have examples in mind where following a given guideline leads to worse outcomes than not following the guideline?

If so, we can talk about that particular guideline itself, without throwing away the whole concept of guidelines to try to do better. 

An analogy I keep thinking of is the typescript vs javascript tradeoffs when programming with a team. Unless you have a weird special-case, it's just straight up more useful to work with other people's code where the type signatures... (read more)

2Said Achmiz
Yes, sure, we shouldn’t throw away the concept; but that’s not at all a reason to start with the presumption that these particular guidelines are any good! As far as examples go… well, quite frankly, that’s what the OP is all about, right? Apologies, but I am deliberately not responding to this analogy and inferences from it, because adding an argument about programming languages to this discussion seems like the diametric opposite of productive.

Whether you are building an engine for a tractor or a race car, there are certain principles and guidelines that will help you get there. Things like:

  • Measure twice before you cut the steel
  • Double-check your fittings before you test the engine
  • Keep track of which direction the axle is supposed to be turning for the type of engine you are making
  • etc.

The point of the guidelines isn't to enforce a norm of making a particular type of engine. They exist to help groups of engineers make any kind of engine at all. People building engines make consistent, predictable m... (read more)

4Said Achmiz
Well, for one thing, we might reasonably ask whether these guidelines (or anything sufficiently similar to these guidelines to identifiably be “the same idea”, and not just “generic stuff that many other people have said before”) are, in fact, needed in order for a group of people to “stay connected to reality at all”. Indeed we might go further and ask whether these guidelines do, in fact, help a group of people “stay connected to reality at all”. In other words, you say: “The guidelines are for helping people avoid [consistent, predictable mistakes]” (emphasis mine). Yes, the guidelines are “for” that—in the sense that they are intended to fulfill the stated function. But are the guidelines good for that purpose? It’s an open question, surely! And it’s one that merely asserting the guidelines’ intent does not do much to answer. But, perhaps even more importantly, we might, even more reasonably, ask whether any particular guideline is any good for helping a group of people “stay connected to reality at all”. Surely we can imagine a scenario where some of the guidelines are good for that, but others of the guidelines aren’t—yes? Indeed, it’s not out of the question that some of the guidelines are good for that purpose, but others of the guidelines are actively bad for it! Surely we can’t reject that possibility a priori, simply because the guidelines are merely labeled “guidelines for rationalist discourse, which are necessary in order to avoid consistent, predictable mistakes, and stay connected to reality at all”—right?

As always, the hard part is not saying "Boo! conspiracy theory!" and "Yay! scientific theory!"

The hard part is deciding which is which

Wow, this hit home in a way I wasn't expecting. I ... don't know what else to say. Thanks for writing this up, seriously. 

see the disconnect—the reason I think X is better than Y is because as far as I can tell X causes more suffering than Y, and I think that suffering is bad."

I think the X's and Y's got mixed up here. 

Otherwise, this is one of my favorite posts. Some of the guidelines are things I had already figured out and try to follow but most of them were things I could only vaguely grasp at. I've been thinking about a post regarding robust communication and internet protocols. But this covers most of what I wanted to say, better than I could say it. So thanks!

2Duncan Sabien (Deactivated)
Oh, thanks; fixed

The Georgism series was my first interaction with a piece of economic theory that tried to make sense by building a different model than anything I had seen before. It was clear and engaging. It has been a primary motivator in my learning more about economics. 

I'm not sure how the whole series would work in the books, but the review of Progress and Poverty was a great introduction to all the main ideas. 

Related:  Wisdom cannot be unzipped

Reading Worth the Candle with a friend gave us a few weird words that are sazen in and of themselves. Being able to put a word to something lets you get a handle on it so much better. Thanks for writing this up. 

3PoignardAzur
I'd be super interested in specifics, if you can think of them.
2Sable
That this concept is related to my thoughts was on my mind the whole time I was reading. Thanks for linking! More generally, I think I was trying to convey the subjective experience of a subset of sazen - specifically the pithy saying - and the frustration in both sides resulting from an inability to communicate.

If the Highlights are too long, then print off a single post from each section. If that's too long, print off your top three. If that's too long, print off one post. 

Summarizing the post usually doesn't help, as you've discovered. So I'm not really sure what else to tell you. You have a lot of curated options to choose from to start. The Highlights, the Best of LessWrong, the Curated Sequences, Codex. Find stuff you like, and print it off for your friend. 

Or, alternatively, tell them about HPMOR. That's how I introduced myself to the concepts in a fashion where the protagonist had need of them. So the techniques stuck with me. 

Answer by Drake Morrison10

If you have some of the LessWrong books, I would recommend those. They are small little books that you can easily lend out. That's what I've thought of doing before. 

Really, starting is the hard part. Once I saw the value I was getting out of the sequences and other essays, I wanted to read more. So share a single essay, or lend a small book. Start small, and then if you are getting value out of it, continue. 

You don't have to commit to reading the whole Sequences before you start. Just start with one essay from the highlights, when you feel like... (read more)

1trevor
I know I set myself up for this by sounding like the classic "I'm not asking for me, I'm asking for a friend" thing, but I actually am asking for a friend. Specifically, I want to make and print my own compilation of high-impact material, which I personally curate so that it optimizes for making people butterfly-effect/diverge from their former selves. I'm asking for help because I need it to be net-positive, and because people here are more familiar with good stuff in all sorts of places.
  • Robust communication requires feedback. Knowing you received all the packets of information, and checking whether what you received matches what they sent. 
  • Building ideas vs breaking ideas. Related to Babble and Prune, but for communities. Shortform seems like a good place for ideas to develop, or babble. For ideas to be built together, before you critique things. You can destroy a half built idea, even if it's a good idea. 

I wrote a bunch of reviews before I realized I wasn't eligible. Oops. Maybe the review button could be disabled for folks like me?

(I don't care whether my reviews are kept or discarded, either way is fine with me)

3Raemon
Anyone can write reviews (and is encouraged to)

Writing up your thoughts is useful. Both for communication and for clarification to oneself. Not writing for fear of poor epistemics is an easy failure mode to fall into, and this post clearly lays out how to write anyway. More writing equals more learning, sharing, and opportunities for coordination and cooperation. This directly addresses a key point of failure when it comes to groups of people being more rational. 

This post felt like a great counterpoint to the drowning child thought experiment, and as such I found it a useful insight. A reminder that it's okay to take care of yourself is important, especially in these times and in a community of people dedicated to things like EA and the Alignment Problem. 

A great example of taking the initiative and actually trying something that looks useful, even when it would be weird or frowned upon in normal society. I would like to see a post-review, but I'm not even sure if that matters. Going ahead and trying something that seems obviously useful, but weird and no one else is doing is already hard enough. This post was inspiring. 

This was a useful and concrete example of a social technique I plan on using as soon as possible. Being able to explain why is super useful to me, and this post helped me do that. Explaining explicitly the intuitions behind communication cultures is useful for cooperation. This post feels like a step in the right direction in that regard.
