Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like it was quite helpful in making me grok overconfidence bias and the internal process of down-adjusting one's confidence in propositions. I've seen the idea of a Double Crux app mentioned multiple times; that would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).
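For anyone who hasn't played it, the core mechanic is a proper scoring rule: you state a confidence, get scored, and watch overconfidence bleed points. Here's a rough Python sketch of that loop (my own toy illustration with made-up scaling, not the game's actual scoring code), using a logarithmic rule:

```python
import math

def log_score(confidence: float, correct: bool) -> float:
    """Logarithmic scoring rule, scaled so a 50% (coin-flip) answer
    scores 0. Overconfident wrong answers are punished steeply."""
    p = confidence if correct else 1.0 - confidence
    return 100 * math.log2(2 * p)  # capped at +100, unboundedly negative

# Suppose you actually get this class of question right 70% of the time.
# A proper rule makes 70% the expected-score-maximizing claim:
for claimed in (0.6, 0.7, 0.9, 0.99):
    expected = 0.7 * log_score(claimed, True) + 0.3 * log_score(claimed, False)
    print(f"claimed {claimed:.0%}: expected score {expected:+.1f}")
```

Running this shows claiming 99% on questions you only get right 70% of the time loses points in expectation, which is exactly the "down-adjusting" the game trains.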
Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I can do to prepare, and they said "you don't have to do anything". This doesn't make sense to me. I would be quite willing to spend hours learning marginal CFAR concepts, even if it were delivered at a lower pace/information density/quality. I think the argument is something like "you must empty your cup so you can learn the material", but I'm not sure.
I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be more readily served by developing scaffolding to help train rationality amongst a broader base of people online (and perhaps using that as a pipeline for the more in-depth workshops).
Some of what a CFAR workshop does is convince our System 1s that it's socially safe to be honest about having some unflattering motives.
Most attempts at doing that in written form would at best convince only our System 2. The benefits of CFAR workshops depend heavily on changing System 1.
Your question about prepping for CFAR sounds focused on preparing System 2. CFAR usually gives advice on preparing for workshops that focuses more on preparing System 1: minimize outside distractions, and have a list of problems in your life that you might want to solve at the workshop. That's different from "you don't have to do anything".
Most of the difficulties I've had with applying CFAR techniques involve my mind refusing to come up with ideas about where in my life I can apply them. E.g. I had felt some "learned helplessness" about my writing style. The CFAR workshop somehow got me to re-examine that attitude, and to learn how to improve it. That probably required some influence on my mood that I've only experienced in reaction to observing people around me being in appropriate moods.
Sorry if this is too vague to help, but much of the relevant stuff happens at subconscious levels where introspection works poorly.
I think it comes down to a combination of 1) not being very confident that CFAR has the True Material yet, and 2) not being very confident in CFAR's ability to correct misconceptions in any format other than teaching in-person workshops. That is, you might imagine that right now CFAR has some Okay Material, but that teaching it in any format other than in person risks misunderstandings where people come away with some Bad Material, and that neither of these is what we really want, which is for people to come away with the True Material, whatever that is. There's at least historically been a sense that one of the only ways to get people to approximately come away with the True Material is for them to actually talk in person to the instructors, who maybe have some of it in their heads but not quite in the CFAR curriculum yet.
(This is based on a combination of talking to CFAR instructors and volunteering at workshops.)
It's also worth pointing out that CFAR is incredibly talent-constrained as an organization. There are lots of things CFAR could do that would plausibly be a good idea, and that CFAR might even endorse as such, but they just don't have the person-hours to prioritize them.
I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of) for the lack of these things is so they can more readily indoctrinate AI Safety as a concern.
At the mainstream workshops there is no mention of any topic in the neighborhood of AI safety anywhere in the curriculum. If it comes up at all, it's in informal conversations.
Could a possible solution be to teach new teachers?
How far is a person who "knows X" from a person who "can teach X"? I imagine that being able to teach X has essentially two requirements: First, understand X deeply -- which is what we want to achieve anyway. Second, general teaching skills, independent of X -- these could be taught as a separate package, which could already be interesting for people who teach. What you need then is written material covering everything known to matter when teaching X, and a short lesson explaining the details.
The plan could be approximately this:
1) We already have lessons for X, Y, and Z -- what CFAR currently offers to participants.
2) Make lessons for teaching in general -- and offer them to participants, too, because that is a separately valuable product.
3) Make lessons on "how to teach X" etc., each of them requiring lessons for "X" and for "general teaching" as prerequisites. These will be for volunteers wanting to help CFAR. After the lessons, have the volunteers teach X to some random audience (for a huge discount or even for free). If the volunteer does it well, let them teach X at CFAR workshops; first with some supervision and feedback, later alone.
Yep, CFAR is training new instructors (I'm one of them).
I imagine that being able to teach X has essentially two requirements: First, understand X deeply -- which is what we want to achieve anyway. Second, general teaching skills, independent of X -- these could be taught as a separate package, which could already be interesting for people who teach.
In the education literature, these are called content knowledge and pedagogical knowledge respectively. There is an important third class of thing called pedagogical content knowledge, which refers to specific knowledge about how to teach X.
The example Val likes to use is that if you want to teach elementary school students about division, it's really important to know that there are two conceptually distinct kinds of division, namely equal sharing (you have 12 apples, you want to share them with 4 friends, how many apples per friend) and repeated subtraction (you have 12 apples, you have gift bags that fit 4 apples, how many bags can you make). This is not quite a fact about division, nor is it general teaching skill; it is specifically part of what you need to know to teach division.
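To make the distinction vivid, here's a toy code version (my own illustration, not Val's): the two word problems call for genuinely different procedures, even though both happen to compute 12 / 4 = 3.

```python
def equal_sharing(apples: int, friends: int) -> int:
    """12 apples shared among 4 friends -> 3 apples per friend."""
    return apples // friends

def repeated_subtraction(apples: int, per_bag: int) -> int:
    """12 apples, gift bags holding 4 each -> 3 bags."""
    bags = 0
    while apples >= per_bag:  # keep filling bags until you run short
        apples -= per_bag
        bags += 1
    return bags

assert equal_sharing(12, 4) == repeated_subtraction(12, 4) == 3
```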
Could a possible solution be to teach new teachers?
There's a difference between "knowing X" and having X be a default behavior. There's also a difference between knowing X and being able to teach it to people who think differently than oneself or have different preconceptions.
Relevant info: I've volunteered at one CFAR workshop and hang out in the CFAR office periodically. My views here are my own models of how I think CFAR is thinking.
For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.
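To seed that discussion, here's one hypothetical data model such an interface could sit on (a sketch of mine; the names and structure are illustrative, not what the mockup implements). The key idea of Double Crux is that a sub-claim matters when both parties mark it as a crux while disagreeing on it:

```python
from dataclasses import dataclass, field

@dataclass
class Stance:
    believes: bool  # the party's current position on this claim
    is_crux: bool   # "if I changed my mind here, I'd change my mind upstream"

@dataclass
class Claim:
    text: str
    stances: dict[str, Stance] = field(default_factory=dict)  # party -> stance
    children: list["Claim"] = field(default_factory=list)     # candidate cruxes

def double_cruxes(root: Claim, a: str, b: str) -> list[Claim]:
    """Sub-claims that are cruxes for both parties and on which they
    disagree -- the places where new evidence would actually move things."""
    found = []
    for c in root.children:
        sa, sb = c.stances.get(a), c.stances.get(b)
        if sa and sb and sa.is_crux and sb.is_crux and sa.believes != sb.believes:
            found.append(c)
        found.extend(double_cruxes(c, a, b))
    return found
```

On this framing, the UI's main job is eliciting those stances and then surfacing exactly the claims this function returns.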
Related to the idea of a prep course, I'll be making a LW post in the next few days about my attempt to create a new sequence on instrumental rationality that is complementary to the sort of self-improvement CFAR does. That may be of interest to you.
Otherwise, I can say that at least at the workshop I was at, there was zero mention of AI safety from the staff. (You can read my review here). It's my impression that there's a lot of cool stuff CFAR could be doing in tandem w/ their workshops, but they're time constrained. Hopefully this becomes less so w/ their new hires.
I do agree that having additional scaffolding would be very good, and that's part of my motivation to start on a new sequence.
Happy to talk more on this as I also think this is an important thing to focus on.
For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.
Yep, you were one of the parties I was thinking of. Nice work! :D
I'm also interested in developing my instrumental rationality, and I think many of us are. Some may not have noticed CFAR's resource pages: Reading List, Rationality Videos, Blog Updates, and Rationality Checklist.
They do update these from time to time.
I can't really speak for CFAR's plans or motives though. Last I heard they were still in an experimental phase and weren't confident enough in their material to go public with it in a big way yet. Has this changed?
The first reason why CFAR isn't doing X is that CFAR thinks other things besides X are more important targets for its effort.
At the beginning, CFAR considered probability calibration very important. As far as I understand, today they consider it less important and a variety of other mental skills more important. As a result, I think they decided against spending more resources on the Credence game.
As far as a Double Crux app goes, it's a project that somebody could do, but I'm not sure that CFAR is the best actor to do it. If Arbital does it and tries to build a community around it, that might have a higher return.
As far as I understand, CFAR chooses to spend effort on optimizing the post-workshop experience with weekly exercises. I can understand that they might believe that's more likely to provide good returns than focusing on the pre-workshop experience.
CFAR staff did publish http://lesswrong.com/lw/o6p/double_crux_a_strategy_for_resolving_disagreement/ and http://lesswrong.com/lw/o2k/flinching_away_from_truth_is_often_about/ but I guess writing concepts down in that way takes a lot of effort.
If I don't get up, I won't wind up in the office. If I'm consistently not in the office, I don't get paid. If I don't get paid, I won't be able to buy books or take attractive people out on dates. Running through that chain of logic takes roughly five to ten minutes, which is about how long the snooze on my alarm is, so by the third time it goes off I'm usually out of bed.
A useful hack for me is setting the alarm an hour earlier than I need, and letting myself do whatever I want for that hour. Since this is the only timespan in my day I'm guaranteed to be alone and to have no expectations, it tends to be an opportunity worth getting up for.
Weekend mornings either have an expectation similar to (though usually lesser than) work, or offer a whole day to goof off. Accordingly, there are some weekends where I don't get out of bed until I either get hungry or get a wave of energy that makes me want to run around.
There's a genuine value misalignment there. Sleeping(me) genuinely wants to stay in bed for as long as possible, and doesn't give a shit about the amount of time it's wasting, nor the fact that oversleeping is correlated with dementia and heart disease. Waking(me) has no desire to get back into bed and really wishes Sleeping(me) had given in sooner. Sometimes Waking(me) will set in motion devices to undermine Sleeping(me) the next morning: a thing called an "alarm clock", plus techniques such as moving the alarm clock away from the bed to force a transition. It's a neverending war.
I have four roomies and one bathroom.
I set my first alarm half an hour before I NEED to get up, which also happens to be right before anyone else gets up. If I get up with my first alarm (or within a minute or two), then I am very likely to get the bathroom. (And if someone is already in there, I am guaranteed that they will be out before I need to leave.) I tell myself that if I get up and do everything I need to do in the morning besides getting dressed, I can go back to bed and turn off all my other alarms except for the one 5-10 minutes before I have to leave.
If I don't get up with my first alarm, there's a possibility that I don't get to use the bathroom before I need to leave for work.
Ahh, the joys of NYC life.
What exactly does "in the process of installing a TAP" mean? As far as I understand the idea of a TAP (trigger-action plan), creating one isn't a process that takes multiple days.
I guess I should say I'm practicing the TAP. The reason I said "installing" is that I don't feel like a TAP has been fully installed until I'm extremely confident that it will basically always fire.
I get an awful headache if I stay in bed for more than a few minutes after waking up. Very motivating. Such a blessing!
Suppose it were discovered, with a high degree of confidence, that insects can suffer a significant amount, and that almost all insect lives are worse than not having lived. What (if anything) would/should the response of the EA community be?
Are you familiar with the work of either Brian Tomasik or the Foundational Research Institute? Both take mass suffering very seriously. (Including that of insects, video game characters, and electrons. Well, sort of. I think the last two are just weird EV-things that result when you follow certain things to their logical conclusion, but I'm definitely not an expert.)
There's a lot of uncertainty in this field. I would hope to see a lot of people very quickly shift a lot of effort into researching:
Effective interventions for reducing the number of insects in the environment
Currently it seems like Brian Tomasik & the Foundational Research Institute, and Sentience Politics, are paying some attention to considerations like this.
A partial solution is often better than no solution.
But this seems really dangerous, even assuming some kind of negative utilitarian philosophy (meaning you're more interested in reducing suffering than producing value). If we kill ourselves, then we can't help at all. I would be very reluctant to support any such intervention. Ecological collapse is already an existential threat of some concern to EA. Increasing the risk of any existential threats, even for short-term reduction of total suffering strikes me as a massively bad idea.
Given our starting assumption that insects are suffering, they've likely been doing so for millions of years. A well-aimed intelligence explosion is our best option to permanently solve the problem. Let's not be tempted to cut the corner of a few more decades of suffering with a partial solution and lose the option of a total solution.
My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.
Don't farm crickets.
Seriously, that's about all we can do in the short term. We can try to not make the problem worse. Fixing this completely is likely a post-singularity problem. Thus, EA should invest in MIRI.
We can't feasibly eradicate all the insects now--it's been said that cockroaches would survive a global nuclear war. And even if we could, it would mean the extinction of the human species. We're too dependent on them ecologically. If we tried, we'd likely kill ourselves before we got all of them, then the suffering wouldn't end until the sun eventually heats up enough to burn up the biosphere. Patience now is the better strategy. It ends the suffering sooner.
Someone might suggest gene drives, so I'll address that too. We can't use them for eradication of all insect species. Some of them would likely develop resistance first, so we'd have to be very persistent. But we humans wouldn't last that long.
What might work is to alter insects genetically so they don't suffer. If we can figure out how to do this we could then try to put the modification on the gene drive, but this is also very risky. Messing with the pain systems might inadvertently make suffering worse, but also make it less obvious. Nature invented pain for reasons. Turning it off would likely put those insects affected at a selective disadvantage. Suffering might evolve again after we get rid of it. Scaling the drive could unbalance the ecology and thereby damage human populations enough that we couldn't continue the project. It would take a great deal of research to pull this off.
Short of an intelligence explosion, we'd have to genetically engineer an artificial ecology that can sustainably support human life in outer space, but doesn't suffer. We'd then have the capability to move human civilization off-planet (very expensive), and then use giant space mirrors to start a runaway greenhouse effect that makes Earth look like Venus, finally eradicating the old miserable biosphere. This would require at minimum, a world government. I think an intelligence explosion is easier. Maybe not safer, but easier.
I think that for any sensible actions to be designed, you should also show whether suffering is additive or not.
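To make that concrete (my own framing of the distinction, not the commenter's formalism): under an additive view, total suffering scales with the number of sufferers, so reducing insect numbers helps in proportion; under an average view, it may not help at all.

```latex
% Additive aggregation: with N insects each suffering s_i,
% halving N (at uniform s) halves total suffering, so
% population-reduction interventions pay off linearly.
S_{\mathrm{add}} = \sum_{i=1}^{N} s_i
% Average-based aggregation: if every s_i = s, then S_avg = s
% no matter how large N is, so reducing insect numbers changes nothing.
S_{\mathrm{avg}} = \frac{1}{N}\sum_{i=1}^{N} s_i
```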
Every atom will be used for computronium anyway, so there will be no (insect) pain anymore.
We should be very careful about what we upload, then.
But it's EA you are asking about. What should their response be?
I have no idea. I don't see any use for this movement in this context. Or in almost any other context, either.
Any useful book recommendations (or just dump your recommendations here)? I have a lot of nonfiction books (John Keegan, for example), but none of them seem worth reading - nothing in them is worth remembering 10 years from now.
Any useful book recommendations (or just dump your recommendations here)?
That's too vague to answer directly, so I'll start by going meta. Books vary a great deal in quality. Some are so deceptive that reading them is probably a net negative. Most of the rest are probably not worth your time compared to what else you could be reading, but don't let the perfect be the enemy of the good. Take some risks. The more knowledge you have, the better judge you are. People don't usually regret reading too many books.
Don't be too afraid to start and don't be too afraid to stop reading a book. Remember the sunk-cost fallacy. It's okay to stop reading a book if you judge that it's not worth your time. This should make you more inclined to start reading any book that looks interesting. You can limit your risk. It's okay to judge a book based on incomplete information, because you can't just read them all. Reviews and recommendations are a good starting place. There are plenty of these online. Then you can read the table of contents and skim the start of each chapter, just to see if it's worth reading for real. And even once you start in earnest, you can still quit if it's not worth your time.
Now for the recommendations. It's hard to recommend something to someone I know almost nothing about, but CFAR and MIRI have reading lists. These books have a high probability of being interesting for the kind of people who hang out on LW. (I'm gradually working through them myself.) You're here for some reason. How did you find LessWrong? Why are you asking us for books? Can you ask a more specific question?
Note that Rationality: From AI to Zombies is on both lists. It's probably a good starting point if you haven't read the Sequences already. Many of us here (myself included) consider the Sequences life changing. It reached that point for me long before I finished. I think some parts were better than others. There were boring parts and parts I couldn't follow, but the average density of insights was so high compared to everything else I'd found thus far, that I eventually read the whole thing.
Still, not everyone is ready for the Sequences. I got here via search engine after reading Drexler's Engines of Creation and Kurzweil's The Singularity Is Near, after stumbling upon the "singularity" meme--the idea that the near future could be radically different from the present. I'd still recommend Engines of Creation (though it's a bit dated), but I now think Kurzweil is too optimistic and have come around to Yudkowsky's view.
nothing in them is worth remembering 10 years from now.
Have you read the classics? Some of these are centuries old and still considered important. These are all books worth remembering for at least 10 years. But is this the best we can do? If you've had a good liberal arts education, you already know most of this stuff.
Most human knowledge was developed very recently. (e.g. a modern Physics major knows more about relativity than Einstein, because their teachers used his life's work as their starting point.) We have a higher population than ever before and Internet access. If rationalists designed a general education curriculum, knowing what we know now, what would be different? Study that.
See also: http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/
First off, thank you for taking the time to reply to my message. I understand that not many people are helpful, even on LW, so I appreciate what you are doing.
Thank you for your suggestions.
I don't think the classics are helpful for me because I cannot afford to take the time to understand them right now.
I read most of the Sequences. I planned to convert them to Anki cards but am unable to summarize most concepts. So I have given up on that.
I always try to keep a buffer of Anki cards to learn, plus a book that I read and convert into Anki cards.
I read a lot, but I am restricted to reading relatively straightforward books - things you don't have to think about to understand. This is because I aim to spend the majority of my time studying to get into college.
So I have been searching for books that fit my rather idiosyncratic criteria:
1) Reading it will contribute to improving my life. E.g. 48 Laws of Power, or that social psychology textbook lukeprog recommended in his epic dating post.
2) The book must give straightforward advice, suggestions, or facts. Some textbooks are better than others in this sense. Popular psychology books also work, but I find many don't pass the 3rd criterion.
3) It has to have a minimum 3.9 rating on Goodreads, and the top review should show the book isn't all hype. (Economics in One Lesson, for example: I am not reading it because I haven't found a good intro to economics and the top review points out a hell of a lot of supposed problems (I don't get what the review said).)
It takes me an hour or two to find books worth reading.
Let me tell you of my recent reads to give you an idea.
George Ainslie - Breakdown of Will. Bloody brilliant. Using personal rules, I cured most of the akrasia that has almost destroyed my life (I am taking a gap year to study now). I didn't Anki anything, because I reread the damn book enough times.
Nate Soares - that epub currently being linked in the discussion section. Not bad, but his blog posts on motivation seem more useful. The first essay in this epub was Anki-worthy, though.
Algorithms to Live By - Amazing book, but hard to Anki. I will spend more time reading it though. It is worth it.
Cormen - Algorithms Unlocked. I aim to get into the CS field in college. So this is sort of an intro, a preliminary reading or whatever. It should be fun.
I am also trying to read Epictetus and to reread Marcus Aurelius, when I get around to it.
Thanks for the LW link. Awesome rabbit hole to fall into.
You don't really need to reply with recs actually. You have helped me.
But I would still enjoy reading your recs.
I read a lot, but I am restricted to reading relatively straightforward books - things you don't have to think about to understand.
No such thing. Reading is thinking. I'll assume you mean that it doesn't take too much effort, but effort is relative to your ability. Reading some enjoyable fiction to help you unwind would be a terrible chore for most kindergartners, for example. This is true even for your past self. What will your future self consider easy? That might depend on what kind of books you read.
Since you're interested in CS and algorithms, and aren't looking for anything too difficult, I recommend Petzold's Code: The Hidden Language of Computer Hardware and Software. This is a pop book, not a textbook. I found it to be a pretty easy read, though there were small parts in the middle that you may gloss over. That's fine. One of my better computer science classes in college was Computer Architecture. Code does an excellent job of covering most of the same ground. You'll feel like you could engineer a computer from scratch (if only you had a multimillion-dollar chip factory).
There are a number of other CS books I'd recommend, but they take more effort.
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.