I'm brand new to LW. It's refreshing to be able to discuss things intelligently; I haven't come across many places on the internet where that happens. I'm excited to hear all your references to some of my favorite, inspiring, interesting people. I stumbled on the site while trying to Google a scarcely answered question, and I don't even remember what it was, because I went down the rabbit hole, reading post after post on so many different subjects. I was even recently discussing the AI that won multiplayer poker with someone, talking about what it means for the future of AI (exponential learning) and the problems it could potentially solve in x amount of years.
I've been considering another run at Anki or similar, both because I found a new segment of a field to learn about and because I'm going to have to pivot my technical learning at work soon.
Reading Michael Nielsen's essay on the subject, I noticed he makes frequent references to Feynman, and I am wondering about the utility of using Anki to remember complex problems in better detail. The motivation is the famous story about Feynman: he always kept a bunch of his favorite open problems in mind, and whenever he encountered a new technique he would test it against each problem. In this way, allegedly, he made several breakthroughs regarded as brilliant.
It feels to me like the point might be more broad and fundamental than mathematical techniques; I suspect if I could better articulate and memorize an important problem, I could make it more a part of my perspective rather than something I periodically take a crack at. If I can accomplish this, I expect I will be more likely to even notice relevant information in the first place.
Hi, I'm Bruno from Brazil. I have been involved with stuff in the Lesswrongosphere since 2016. While I was in the US, I participated in the New Hampshire and Boston LW meetup groups, with occasional presence in SSC and EA meetups. I volunteered at EAG Boston 2017 and attended EAG London later that year. I did the CFAR workshop of February 2017 and hung out at the subsequent alumni reunion. After having to move back to Brazil I joined the São Paulo LW and EA groups and tried, unsuccessfully, to host a book club to read RAZ over the course of 2018. (We made it as far as mid-February, I think.)
I became convinced of the need to sort out the AI alignment problem after first reading RAZ. I knew I needed to level up on lots of basic subjects before I could venture into doing AI safety research. Because doing so could also have instrumental value to my goal of leaving Brazil for good, I studied at a Web development bootcamp and have been teaching there for a year now; I feel this has given me the confidence to acquire new tech skills.
I intend to start posting here in order to clarify my ideas, resolve my confusion, and eventually join the ranks of AI safety researchers. My more immediate goal is to be able to live somewhere other than Brazil while doing some sort of relevant work (even if it is just self-study, or something not directly related to AI safety that still allows me to study on the side, like my current gig here does).
In order to combat publication bias, I should probably tell the Open Thread about a post idea that I started drafting tonight but can't finish because it looks like my idea was wrong. Working title: "Information Theory Against Politeness." I had drafted this much—
...Suppose the Quality of a Blog Post is an integer between 0 and 15 inclusive, and furthermore that the Quality of Posts is uniformly distributed. Commenters can roughly assess the Quality of a Post (with some error in either direction) and express their assessment in the form of a Comment, which is also an integer between 0 and 15 inclusive. If the True Quality of a Post is Q, then the assessment expressed in a Comment on that Post follows a probability distribution concentrated around Q.
(Notice the "wraparound" between 15 and 0: it can be hard for a humble Commenter to tell the difference between brilliance-beyond-their-ken, and utter madness!)
The entropy of the Quality distribution is log₂(16) = 4 bits: in order to inform someone about the Quality of a Post, you need to transmit 4 bits of information. Comments can be thought of as a noisy "channel" conveying information about the post.
The mutual inf
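For concreteness, here's a minimal sketch of the computation the draft is heading toward. The draft's actual noise distribution didn't survive above, so the channel below (a Comment equals the true Quality Q with probability 1/2, or Q ± 1 mod 16 with probability 1/4 each) is a stand-in of my own, purely for illustration:

```python
import numpy as np

N = 16  # Quality and Comments are integers 0..15

# Hypothetical noise channel (stand-in for the draft's missing formula):
# a Comment equals the true Quality Q with prob 1/2, or Q +/- 1 (mod 16)
# with prob 1/4 each -- note the "wraparound" between 15 and 0.
def channel(q):
    p = np.zeros(N)
    p[q] = 0.5
    p[(q - 1) % N] = 0.25
    p[(q + 1) % N] = 0.25
    return p

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

p_q = np.full(N, 1 / N)                                    # uniform prior over Quality
p_qc = np.array([p_q[q] * channel(q) for q in range(N)])   # joint P(Q, C)
p_c = p_qc.sum(axis=0)                                     # marginal over Comments

H_q = entropy(p_q)                                                   # 4.0 bits
H_c_given_q = sum(p_q[q] * entropy(channel(q)) for q in range(N))    # 1.5 bits
mutual_info = entropy(p_c) - H_c_given_q                             # 2.5 bits

print(H_q, mutual_info)
```

With that stand-in channel, the mutual information between Quality and Comment comes out to 2.5 bits, i.e., a single Comment conveys less than the full 4 bits needed to pin down the Quality.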
There's no official, endorsed CFAR handbook that's publicly available for download. The CFAR handbook from summer 2016, which I found on libgen, warns
While you may be tempted to read ahead, be forewarned - we've often found that participants have a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding. Many of the explanations here are intentionally approximate or incomplete, because we believe this content is best transmitted in person. It helps to think of this handbook as a companion to the workshop, rather than as a standalone resource.
which I think is still their view on the matter.
I have heard that they would be more comfortable with people learning rationality techniques in-person from a friend, so if you know any CFAR alumni you could ask them (they'd probably also have a better answer to your question).
This is, however, a position which has always made me extremely suspicious of CFAR’s output. I wish they would, at the very least, acknowledge what a huge red flag this sort of thing is.
Noting that I do agree with this particular claim.
I see the situation as:
On the "is there something worth teaching there" front, I think you're just wrong, and obviously so from my perspective (since I have, in fact, learned things. Sunset at Noon is probably the best writeup of what CFAR-descended things I've learned and why they're valuable to me).
This doesn't mean you're obligated to believe me. I put moderate probability on "There is variation on what techniques are useful for what people, and Said's mind is shaped such that the CFAR paradigm isn't useful, and it will never be legible to Said that the CFAR paradigm is useful." But, enough words have been spent trying to demonstrate things to you that seem obvious to me that it doesn't seem worth further time on it.
The Multi-Agent Model of Mind is the best current writeup of (one of) the important elements of what I think of as the CFAR paradigm. I think it'd be more useful for you to critique that than to continue this conversation.
I think that your past criticisms have been useful, and I've explicitly tried to take them into account in the sequence. E.g. the way I defined subagents in the first post of the sequence, was IIRC in part copy-pasted from an earlier response to you, and it was your previous comment that helped/forced me to clarify what exactly I meant. I'd in fact been hoping to see more comments from you on the posts, and expect them to be useful regardless of the tone.
I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts. But:
a) if I had written the posts, I would see them as "yes, now these are actually at the stage where the sort of critique Said does is more relevant." I still think it'd be most useful if you came at it from the frame of "What product is Kaj trying to build, and if I think that product isn't useful, are there different products that would better solve the problem that Kaj's product is trying to solve?"
b) relatedly, if you have criticism of the Sunset at Noon content I'd be interested in that. (this is not a general rule about whether I want critiques of that sort. Most of my work is downstream of CFAR paradigm stuff, and I don't want most of my work to turn into a debate about CFAR. But it does seem interesting to revisit SaN through the "how content that Raemon attributes to CFAR holds up to Said" lens)
c) Even if Kaj prefers you not to engage with them (or to engage only in particular ways), it would be fine under the meta-rules for you to start a separate post and/or discussion thread for the purpose of critiquing. I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.
I think I should actually punt this question to Kaj_Sotala, since they are his posts, and the meta rule is that authors get to set the norms on their posts.
Sure.
I still think it’d be most useful if you came at it from the frame of “What product is Kaj trying to build, and if I think that product isn’t useful, are there different products that would better solve the problem that Kaj’s product is trying to solve?”
Sure, but what if (as seems likely enough) I think there aren’t any different products that better solve the problem…?
I actually think the most useful thing you might do is write a more extensive post that critiques the sequence as a whole.
So, just as a general point (and this is related to the previous paragraph)…
The problem with the norm of writing critiques as separate posts, is that it biases (or, if you like, nudges) critiques toward the sort that constitute points or theses in their own right.
In other words, if you write a post, and I comment to say that your post is dumb and you are dumb for thinking and writing this and the whole thing is wrong and bad (except, you know, in a tactful way), well, that is, at least in some sense, appropriate (or we might say,
...And the thing is, many of my critiques (of CFAR stuff, yes, and of many other things that are discussed in rationalist spaces) boil down to just “what you are saying is wrong”. If you ask me what I think the right answer is, in such cases, I will have nothing to offer you. I don’t know what the right answer is. I don’t think you know what the right answer is, either; I don’t think anyone has the right answer. Beyond saying that (hypothetical) you are wrong, I often really don’t have much to add.
If all you have to say is "this seems wrong", that... basically just seems fine. [edit to clarify: I mean making a comment, not a post].
I don't expect most LessWrong users would get annoyed at that. The specific complaint we've gotten about you has more to do with the way you Socratically draw people into lengthy conversations that don't acknowledge the difference in frame, and leave people feeling like it was a waste of time. (This has more to do with implicitly demanding asymmetric effort between you and the author than with criticism per se.)
I'm fairly uncertain here. But I don't currently share the intuition.
Note that the order of events I'm suggesting is:
1. Author posts.
2. Commenter says "this seems wrong / bad". Disagreement ensues.
3. Author says "this is annoying enough that I'd prefer you not to comment on my posts anymore." [Hopefully, although not necessarily, the author does this knowing that they are basically opting into you now being encouraged by LessWrong moderators to post your criticism elsewhere if you think it's important. This might not currently be communicated that well but I think it should be]
4. Then you go and write a post titled 'My thoughts on X' or 'Alternative Conversation about X' or whatever, that says 'the author seems wrong / bad.'
By that point, sure, it might be annoying, but it's presumably an improvement from the author's perspective. (I know that if I wanted to write a post about some high-level Weird Introspection Stuff that took a bunch of Weird Introspection Paradigm stuff for granted, I'd personally probably be annoyed if you made the discussion about whether the Weird Introspection Paradigm was eve...
Addendum: my Strategies of Personal Growth post is also particularly downstream of CFAR. (I realize that much of it is something you can find elsewhere. My perspective is that the main product CFAR provides is a culture that makes it easier to orient toward this sort of thing, and stick with it. CFAR iterates on "what combination of techniques can you present to a person in 4 days that best helps jump-start them into that culture?", and they chose that feedback-loop cycle after exploring others and finding them less effective.)
One salient thing from the Strategies of Personal Growth perspective (which I attribute to exploration by CFAR researchers) is that many of the biggest improvements you can gain come from healing and removing psychological blockers.
The reason is simple: the kind of thing that CFAR (claimed to have) set out to look for, is the kind of thing that should be quite legible even to very skeptical third parties.
What is your current model of what CFAR "claimed to have set out to look for"? I don't actually know of much in the way of an explicit statement of what CFAR was trying to look for, beyond the basic concept of "applied rationality".
a harder time grasping a given technique if they've already anchored themselves on an incomplete understanding
This is certainly theoretically possible, but I'm very suspicious of it on reversal test grounds: if additional prior reading is bad, then why isn't less prior reading even better? Should aspiring rationalists not read the Sequences for fear of an incomplete understanding spoiling themselves for some future $3,900 CfAR workshop? (And is it bad that I know about the reversal test without having attended a CfAR workshop?)
I feel the same way about schoolteachers who discourage their students from studying textbooks on their own (because they "should" be learning that material by enrolling in the appropriate school course). Yes, when trying to learn from a book, there is some risk of making mistakes that you wouldn't make with the help of a sufficiently attentive personal tutor (which, realistically, you're not going to get from attending lecture classes in school anyway). But given the alternative of placing my intellectual trajectory at the mercy of an institution that has no particular reason to care about my welfare, I think I'll take my chances.
Note that I'm specificall
...You use math as an example, but that's highly focused on System 2 learning. That suggests that you have false assumptions about what CFAR is trying to teach.
There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc. Most of those analogies are fairly imperfect, and some have partially useful written instructions (in the case of meditation, the written version might have lagged in-person instruction by many centuries). Circling is the example that I'd consider most apt, but it won't mean much to people who haven't taken a good circling workshop.
A different analogy, which more emphasizes the costs of false assumptions: people often imagine that economics teaches something like how to run a good business or how to predict the stock market, because there isn't any slot in their worldview for what a good economics course actually teaches. There are plenty of mediocre executive summaries of economics, which fail to convey to most people that economics requires a pervasive worldview shift (integrating utilitarianism, empiricism about preferences, and some counterintuitive empirical p
...Sometimes, we don’t know how to teach a subject in writing because the subject matter is inherently about action (rather than concepts, analysis, explanation, prediction, numbers, words, etc.).
But sometimes, we don’t know how to teach a subject in writing because there is, in fact, nothing (or, at best, nothing much) to be taught. Sometimes, a subject is actually empty (or mostly empty) of content.
In the latter case, attempting to write it down reveals this (and opens the alleged “content” to criticism)—whereas in person, the charisma of the instructors, the social pressure of being in a group of others who are there to receive the instruction, possibly the various biases associated with having made some costly sacrifice (time, money, etc.) to be there, possibly the various biases associated with the status dynamics at play (e.g. if the instructors are respected, or at least if those around you act as if they are), all serve to mask the fundamental emptiness of what is being “taught”.
I leave it to the reader to discern which of the given examples fall into which category. I will only note that while the subjects found in the former category are often difficult to teach, nevertheless one’s mastery of them, and their effectiveness, is usually quite easy to verify—because action can be demonstrated.
There are many subjects where written instructions are much less valuable than instruction that includes direct practice: circling, karate, meditation, dancing, etc.
Yes, I agree: for these subjects, the "there's a lot of stuff we don't know how to teach in writing" disclaimer I suggested in the grandparent would be a big understatement.
a syllabus is useless (possibly harmful) for teaching economics to people who have bad assumptions about what kind of questions economics answers
Useless, I can believe. (The extreme limiting case of "there's a lot of stuff we don't know how to teach in this format" is "there is literally nothing we know how to teach in this format.") But harmful? How? Won't the unexpected syllabus section titles at least disabuse them of their bad assumptions?
Reading the sequences [...] are unlikely to have much relevance to what CFAR teaches.
Really? The tagline on the website says, "Developing clear thinking for the sake of humanity’s future." I guess I'm having trouble imagining a developing-clear-thinking-for-the-sake-of-humanity's-future curriculum for which the things we write about on this website would be irrelevant. The "comfort zone expansion" exe
...The idea that CFAR would be superfluous is fairly close to the kind of harm that CFAR worries about. (You might have been right to believe that it would have been superfluous in 2012, but CFAR has changed since then in ways that it hasn't managed to make very legible.)
I think meditation provides the best example for illustrating the harm. It's fairly easy to confuse simple meditation instructions (e.g. focus on your breath, sit still with a straight spine) with the most important features of meditation. It's fairly easy to underestimate the additional goals of meditation, because they're hard to observe and don't fit well with more widely accepted worldviews.
My experience suggests that getting value out of meditation is heavily dependent on a feeling (mostly at a system 1 level) that I'm trying something new, and there were times when I wasn't able to learn from meditation, because I mistakenly thought that focusing on my breath was a much more central part of meditation than it actually is.
The times when I got more value out of meditation were times when I tried new variations on the instructions, or new environments (e.g. on a meditation retreat). I can't see any signs that the n
...I'm off from university (3rd year physics undergrad) for the summer and hence have a lot of free time, and I want to use this to make as much progress as possible towards the goal of getting a job in AI safety technical research. I have found that I don't really know how to do this.
Some things that I can do:
Thus far I've worked through the first half of Hutton's Programming in Haskell on the grounds that functional programming maybe teaches a style of thought that's useful and opens doors to more theoretical CS stuff.
I'm optimising for something slightly different from purely becoming good at AI safety, in that at the end I'd like to have some legible things to point to or list on a CV or something (or become better-placed to later acquire such legible things).
I'd be interested to hear from people who know more about what would be helpful for this.
Hi! I've known LW for quite a while, but only now decided to join. I remember reading a comment here and thinking "I like how this person thinks". Needless to say, this is not a common experience I have on the internet. What I hope to get from this site are fruitful intellectual discussions that trip me over and reveal the flaws in my reasoning :)
Hypothesis: there are fewer comments per user on LW 2.0 than on the old LW, because the user base is more educated as to where they have a valuable opinion vs where they don't.
The question is fuzzier than it might seem at first. The issue is that the size of the commenter population changes too. You can have a world where the number of very frequent commenters has gone up, but the average per commenter has gone down because the number of infrequent commenters has grown even faster than the number of frequent commenters.
There are also multiple possible causes for growth/decline, changes in frequency, etc., so I don't think you could really link them to a mechanism as specific as being more educated about where your opinion is valuable. Though I'd definitely link the number of comments per person to the number of other active commenters and the number of conversations going on, a network-effects kind of thing.
Anyhow, some graphs:
Indeed, the average (both mean and median) comments per active commenter each week has gone down.
But it's generally the case that the number of comments and commenters went way down, recovering only in late 2017, around the time of Inadequate Equilibria and LessWrong 2.0.
We can also look at the composition of commenters by commenting frequency. Here I've placed the commenters for each week into bins/buckets and looked at how those have changed. The top graph is overall volume; the bottom graph is the percentage of the commenting population in each frequency bucket:
I admit that we must conclude that high-frequency commenters (4+ comments/week) have diminished both in absolute numbers and as a percentage over time, though with a slight upward trend in the last six months.
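For reference, here's a minimal sketch (in Python/pandas, with hypothetical column names and toy data standing in for the real comment dump) of the kind of weekly binning described above. The 4+ bucket matches the one mentioned above; the other bucket edges are arbitrary choices of mine:

```python
import pandas as pd

# Toy data: one row per comment, with the commenter's id and timestamp.
comments = pd.DataFrame({
    "user_id": ["a", "a", "a", "a", "b", "b", "c", "a", "b"],
    "posted_at": pd.to_datetime([
        "2019-06-03", "2019-06-04", "2019-06-05", "2019-06-06",
        "2019-06-03", "2019-06-07", "2019-06-05",
        "2019-06-12", "2019-06-13",
    ]),
})

# Comments per commenter per week.
weekly = (
    comments.groupby([pd.Grouper(key="posted_at", freq="W"), "user_id"])
    .size()
    .rename("n_comments")
    .reset_index()
)

# Bucket commenters by their weekly comment count.
weekly["bucket"] = pd.cut(
    weekly["n_comments"],
    bins=[0, 1, 3, float("inf")],
    labels=["1/week", "2-3/week", "4+/week"],
)

# Commenters per bucket per week, in absolute numbers and as a share.
counts = weekly.groupby(["posted_at", "bucket"]).size().unstack(fill_value=0)
shares = counts.div(counts.sum(axis=1), axis=0)
print(counts, shares, sep="\n\n")
```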
I noticed I was confused about how humans can learn novel concepts from verbal explanations without running into the symbol grounding problem. After some contemplation, I came up with this:
To the extent language relies on learned associations between linguistic structures and mental content, a verbal explanation can only work with what's already there. Instead of directly inserting new mental content, the explanation must leverage the receiving mind's established content in a way that lets the mind generate its own version of the new content.
The...
Constructivist learning theory is a relevant keyword; its premise is pretty much directly the same as in your quote (my emphasis added):
An important restriction of education is that teachers cannot simply transmit knowledge to students, but students need to actively construct knowledge in their own minds. That is, they discover and transform information, check new information against old, and revise rules when they no longer apply. This constructivist view of learning considers the learner as an active agent in the process of knowledge acquisition. Constructivist conceptions of learning have their historical roots in the work of Dewey (1929), Bruner (1961), Vygotsky (1962), and Piaget (1980). Bednar, Cunningham, Duffy, and Perry (1992) and von Glasersfeld (1995) have proposed several implications of constructivist theory for instructional developers stressing that learning outcomes should focus on the knowledge construction process and that learning goals should be determined from authentic tasks with specific objectives. Similarly, von Glasersfeld (1995) states that learning is not a stimulus-response phenomenon, but a process that requires self-regulation and the development...
I was there before it was fully done. As a person with a strong interest in UX I found it quite exciting.
It definitely tries to be a modern Xerox PARC or something like that, and it does really feel like it's doing a lot of really interesting things in the UI space. I have a really hard time telling whether any of the UI ideas they are experimenting with will actually turn out to be useful and widely adopted, but it definitely helped me think about UX in a better way.
Recently out: "The Transhumanism Handbook" ed. Newton Lee (Springer, 2019). Costs money, of course, but you can see the table of contents, the abstracts, and the references for each paper for free. It contains:
5 chapters on yay, transhumanism!
10 on AI
12 on longevity
5 on biohacking
3 on cryptocurrency
5 on art
16 on society and ethics
10 on philosophy and religion
Is there exactly one RSS feed for lesswrong.com, i.e. https://www.lesswrong.com/feed.xml? Since I know too little about the technical side to tell: would it be easy for you to add different RSS feeds?
Reading about Julia libraries for geometric algebra, I found Grassmann.jl. Using it effectively will require more knowledge of advanced algebra than I have, but while reading about it I noticed the author describing how they can achieve very high numbers of dimensions. They claim ~4.6e18 dimensions.
That's a lotta dimensions!
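For what it's worth, that figure looks like 2^62, which would be the number of basis blades of a geometric algebra built from 62 generators; that reading is my inference, not something stated above. A quick check:

```python
# A geometric algebra over an n-dimensional vector space has 2**n basis blades,
# so the ~4.6e18 figure presumably corresponds to n = 62 generators (assumption).
n = 62
print(2 ** n)           # 4611686018427387904
print(f"{2 ** n:.1e}")  # 4.6e+18
```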
The American National Institute of Standards and Technology has a draft plan for AI standards up. There is an announcement on their website; an announcement post on LessWrong; the plan itself on NIST's website; an outline of said plan on LessWrong.
Edit: changed style of links in response to the Please Give Your Links Speaking Names post.
Hey, what's up everybody, I'm Jason. I found LessWrong as I was researching questions for a quiz app that I'm working on called Wisdom of the Crowd, and I got started doing that because I started a Facebook group called Wisdom of the Crowd. I noticed right away that you are vibrating at my same frequency, so to speak. The rationality philosophy is what motivated me to start that FB group. It's only days old, but I got the impression that the idea I had is pretty similar to what you guys are doing here. I love your library lol, I'm...
Kind of a stupid question, actually. I Googled clothes for one-armed children (I tried knitting some, it didn't go as planned, and I thought I'd donate it), and there were far fewer search results than I'd expected. Is it because one-armed people just have their clothes re-sewn from ordinary stuff, or what? Or are there different keywords for it?
It seems like there are some intrinsic connections between the clusters of concepts known as "EA", "LW-style rationality", and "HRAD research"; is this a worrying sign?
Specifically, it seems like the core premise of EA relies largely on a good understanding of the world, in a systemic and explicit manner (because existing heuristics aren't selected for "maximizing altruism"[1]), linking closely to LW, which tries to answer the same question. At the same time, my understanding of HRAD research is that it aims to...
If it’s worth saying, but not worth its own post, you can put it here.
Also, if you are new to LessWrong and want to introduce yourself, this is the place to do it. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome. If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, and seeing if there are any meetups in your area.
The Open Thread sequence is here.