
Comment author: cousin_it 12 June 2017 08:49:06AM *  4 points [-]

Note that I played a part in convincing MIRI to create IAF, and wrote the only comment on the IAF post you linked, so rest assured that I'm watching you folks :-) My thinking has changed over time though, and probably diverged from yours. I'll lay it out here, hopefully it won't sound too harsh.

First of all, if your goal is explaining math using simpler math, I think there's a better way to do it. In a good math explanation, you formulate an interesting problem at level n whose solution requires level n+1. (Ideally n should be as low as possible.) In a bad math explanation, you assume the reader understands level n, then write out the basic definitions of level n+1 and formulate a problem using those. That loses the reader, unless they are already interested in level n+1.

But that's still underestimating the problem by a couple orders of magnitude. To jumpstart engagement, you need something as powerful as this old post by Eliezer. That's a much more complicated beast. The technical content is pretty much readable to schoolchildren, yet somehow readers are convinced that something magical is going on and they can contribute, not just read and learn. Coming back to that post now, I'm still in awe of how the little gears work, from the opening sentence to the "win" mantra to the hint that he knows the solution but ain't telling. It hits a tiny target in manipulation-space that people don't see clearly even now, after living for a decade inside the research program that it created.

Apart from finding the right problem and distilling it in the right manner, I think the next hardest part is plain old writing style. For example, Eliezer uses lots of poetic language and sounds slightly overconfident, staying mostly in control but leaving dozens of openings for readers to react. But you can't reuse his style today, the audience has changed and you'll sound phony. You need to be in tune with readers in your own way. If I knew how to do it, I'd be doing it already. These comments of mine are more like meta-manipulation aimed at people like you, so I can avoid learning to write :-)

Comment author: endoself 13 June 2017 03:41:30AM 1 point [-]

Note that I ... wrote the only comment on the IAF post you linked

Yes, I replied to it :)

Unfortunately, I don't expect to have more Eliezer-level explanations of these specific lines of work any time soon. Eliezer has a fairly large amount of content on Arbital that hasn't seen LW levels of engagement either, though I know some people who are reading it and benefiting from it. I'm not sure how LW 2.0 is coming along, but it might be good to have a subreddit for content similar to your recent post on betting. There is an audience for it, as that post demonstrated.

Comment author: whpearson 12 June 2017 12:26:14PM 2 points [-]

I lack motivation myself. I'm interested in AI risk, but I think exploring abstract decision theories where the costs of doing the computation to make the decision are ignored is like trying to build a vehicle while ignoring drag entirely.

I may well be wrong so I still skim the agent foundations stuff, but I am unconvinced of its practicality. So I'm unlikely to be commenting on it or participating in that.

Comment author: endoself 12 June 2017 09:49:46PM 1 point [-]

Maybe you've heard this before, but the usual story is that the goal is to clarify conceptual questions that exist in both the abstract and more practical settings. We are moving towards considering such things though - the point of the post I linked was to reexamine old philosophical questions using logical inductors, which are computable.

Further, my intuition from studying logical induction is that practical systems will be "close enough" to satisfying the logical induction criterion that many things will carry over (much of this is just intuitions one could also get from online learning theory). E.g. in the logical induction decision theory post, I expect most or all of the individual points made using logical inductors to apply to practical systems, and you can use the fact that logical inductors are well-defined to test further ideas building on these.
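To gesture at the kind of online-learning intuition I mean, here's a toy sketch of my own (not anything from the logical induction paper, and the names and numbers are made up): an exponential-weights learner that aggregates a few predictors and ends up doing nearly as well as the best of them. That low-regret flavour of guarantee is the sort of property I'd expect rough, practical approximations of logical inductors to retain.

```python
import math

def exponential_weights(expert_losses, eta=0.5):
    """Aggregate experts with multiplicative weights.

    expert_losses: list of rounds, each a list of per-expert losses in [0, 1].
    Returns the learner's cumulative loss and its final weighting over experts.
    """
    n_experts = len(expert_losses[0])
    weights = [1.0] * n_experts
    probs = [1.0 / n_experts] * n_experts
    learner_loss = 0.0
    for losses in expert_losses:
        total = sum(weights)
        probs = [w / total for w in weights]
        # The learner's loss this round is its weighted average of the expert losses.
        learner_loss += sum(p * l for p, l in zip(probs, losses))
        # Downweight each expert in proportion to how badly it did.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return learner_loss, probs

# Two made-up "experts": one loses 0.1 per round, the other 0.5.
rounds = [[0.1, 0.5]] * 100
loss, final_probs = exponential_weights(rounds)
print(loss, final_probs)  # cumulative loss ends up near the better expert's total of 10.0
```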

Comment author: cousin_it 11 June 2017 01:28:15PM *  13 points [-]

I think that's a worthy ideal to strive for, and the bottleneck is simply bringing together enough different people doing intellectual work on the same topic. Then the niceties of academic freedom will happen mostly by themselves. But the premise is much harder than it seems.

LW approached that ideal for a short while, when Eliezer's writings created a diverse flow of people and the mention of Newcomb's problem channelled some of them into decision theory. It was a fun time and I'm happy to have been part of it. Then Eliezer stopped posting fun stuff for a wide audience, the flow of people started drying up, new ideas became scarce due to lack of outsiders, and the work became more intensely mathematical and shrank to a small core group (MIRI workshops and agentfoundations.org). Now it's mostly met with crickets, and the opportunity for outsiders to make LWish philosophical progress and be rewarded with attention is pretty much gone, even though there's plenty of low hanging fruit. I'm sorry to say I also contributed to this "professionalization", which might have been a mistake in retrospect.

A couple days ago, after two years of silence, I wrote a short LW post about probabilities to test the waters. It got a very good reception, showing that people are still interested in this stuff. But to jumpstart such an effort properly, we need to sell amateurs on some way to do important yet accessible intellectual work. I don't know how to do that.

Comment author: endoself 11 June 2017 11:42:45PM 2 points [-]

Scott Garrabrant and I would be happy to see more engagement with the content on Agent Foundations (IAF). I guess you're right that the math is a barrier. My own recent experiment of linking to Two Major Obstacles for Logical Inductor Decision Theory on IAF was much less successful than your post about betting, but I think there's something inessential about the inaccessibility.

In that post, for example, I think the math used is mostly within reach for a technical lay audience, except that an understanding of logical induction is assumed, though I may have missed some complexity in looking it over just now. Even for that, it should be possible to explain enough about logical inductors briefly and accessibly enough to let someone understand a version of that post, though I'm not sure if that has been done. People recommend this talk as the best existing introduction.

[Link] Two Major Obstacles for Logical Inductor Decision Theory

1 endoself 10 June 2017 05:48AM
Comment author: diegocaleiro 29 November 2015 11:04:21AM 4 points [-]

Yes I am.

Step 1: Learn Bayes

Step 2: Learn reference class

Step 3: Read 0 to 1

Step 4: Read The Cook and the Chef

Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically

Step 6: Find the connection between that and reasoning from first principles, or the gear hypothesis, or whichever other term you have for when you use the inside view, and actually think technically about a problem, from scratch, without looking at how anyone else did it.

Step 7: Talk to Michael Valentine, who has recently been reasoning about this and about how to impart it at CFAR workshops.

Step 8: Find someone who can give you a recording of Geoff Anders' presentation at EAGlobal.

Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!

Comment author: endoself 30 November 2015 07:06:53AM *  2 points [-]

I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn't have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It's a gears model of cognition rather than the object-level phenomenon.
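As a toy illustration of what I mean by "the mathematical description of part of what your brain is doing" (my own example, with made-up numbers), here is a single Bayesian update written out as code:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) via Bayes' rule, from P(H), P(E | H) and P(E | not-H)."""
    joint_true = prior * p_evidence_if_true
    joint_false = (1.0 - prior) * p_evidence_if_false
    return joint_true / (joint_true + joint_false)

# A 1% prior, and evidence ten times likelier if the hypothesis is true:
print(posterior(0.01, 0.50, 0.05))  # ~0.092 -- a real update, but far from certainty
```

You already do something informally shaped like this when you weigh evidence; the math just makes the moving parts explicit.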

If you don't have gears models at all, then yes, it's just another way to spout nonsense. This isn't because it's useless, it's because people cargo-cult it. Why do people cargo-cult Bayesianism so much? It's not the only thing in the sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the sequences because, of the two, it is the skill that came less naturally to him, and that stuck.

What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.

Comment author: Yaacov 26 July 2015 04:57:04AM *  13 points [-]

Hi LW! My name is Yaacov, I've been lurking here for maybe 6 months but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic now but I'll learn more quickly.

Specific questions:

What can I do to reduce existential risk, especially that posed by AI? I don't have an income as of yet. What are the best investments I can make now in my future ability to reduce existential risk?

Comment author: endoself 27 July 2015 09:48:37PM 4 points [-]

Hi Yaacov!

The most active MIRIx group is at UCLA. Scott Garrabrant would be happy to talk to you if you are considering research aimed at reducing x-risk. Alternatively, some generic advice for improving your future abilities is to talk to interesting people, try to do hard things, and learn about things that people with similar goals do not know about.

Comment author: joaolkf 08 April 2015 05:48:50PM *  3 points [-]

Worth mentioning that some parts of Superintelligence are already a less contrarian version of many arguments made here in the past.

Also note that although some people do believe that FHI is in some sense "contrarian", when you look at the actual hard data, FHI has been able to publish in mainstream journals (within philosophy at least) and reach important mainstream researchers (within AI at least) at rates comparable to, if not higher than, those of excellent "non-contrarian" institutes.

Comment author: endoself 09 April 2015 12:24:36AM *  2 points [-]

Yeah, I didn't mean to contradict any of this. I wonder how much of a role previous arguments from MIRI and FHI played in changing the zeitgeist and contributing to the way Superintelligence was received. There was a slow increase in uninformed fear-of-AI sentiment over the preceding years, which may have put people in more of a position to consider the arguments in Superintelligence. I think that much of this ultimately traces back to MIRI and FHI; for example, many anonymous internet commenters refer to them or use phrasing inspired by them, though many others don't. I'm more sceptical that this change in zeitgeist was helpful, though.

Of course, specific people who interacted with MIRI/FHI more strongly, such as Jaan Tallinn and Peter Thiel, were helpful in bringing the discourse to where it is today.

Comment author: IlyaShpitser 08 April 2015 06:38:53AM 4 points [-]

At least Ng's career though can be credited to Hawkins.

'At least a part'? Also,

???

Comment author: endoself 08 April 2015 08:36:57AM *  1 point [-]

The quote from Ng is

The big AI dreams of making machines that could someday evolve to do intelligent things like humans could, I was turned off by that. I didn’t really think that was feasible, when I first joined Stanford. It was seeing the evidence that a lot of human intelligence might be due to one learning algorithm that I thought maybe we could mimic the human brain and build intelligence that’s a bit more like the human brain and make rapid progress. That particular set of ideas has been around for a long time, but [AI expert and Numenta cofounder] Jeff Hawkins helped popularize it.

I think it's pretty clear that he would have worked on different things if not for Hawkins. He has done a lot of work in robotics, for example, so he could have continued working on robotics if he hadn't become interested in general AI. Maybe he would have moved into deep learning later in his career, as it started to show big results.

Even when contrarians win, they lose: Jeff Hawkins

13 endoself 08 April 2015 04:54AM

Related: Even When Contrarians Win, They Lose

I had long thought that Jeff Hawkins (and the Redwood Center, and Numenta) were pursuing an idea that didn't work, and had simply failed to give it up for a prolonged period of time. I formed this belief because I had not heard of any impressive results or endorsements of their research. However, I recently read an interview with Andrew Ng, a leading machine learning researcher, in which he credits Jeff Hawkins with publicizing the "one learning algorithm" hypothesis - the idea that most of the cognitive work of the brain is done by one algorithm. Ng says that, as a young researcher, this pushed him into areas that could lead to general AI. He still believes that AGI is far off, though.

I found out about Hawkins' influence on Ng after reading an old SL4 post by Eliezer and looking for further information about Jeff Hawkins. It seems that the "one learning algorithm" hypothesis was widely known in neuroscience, but not within AI until Hawkins' work. Based on Eliezer's citation of Mountcastle and his known familiarity with cognitive science, it seems that he learned of this hypothesis independently of Hawkins. The "one learning algorithm" hypothesis is important in the context of intelligence explosion forecasting, since hard takeoff is vastly more likely if it is true. I have been told that further evidence for this hypothesis has been found recently, but I don't know the details.

This all fits well with Robin Hanson's model. Hawkins had good evidence that better machine learning should be possible, but the particular approaches that he took didn't perform as well as less biologically-inspired ones, so he's not really recognized today. Deep learning would definitely have happened without him; there were already many people working in the field, and they started to attract attention because of improved performance due to a few tricks and better hardware. At least Ng's career though can be credited to Hawkins.

I've been thinking about Robin's hypothesis a lot recently, since many researchers in AI are starting to think about the impacts of their work (most still only think about the near-term societal impacts rather than thinking about superintelligence though). They recognize that this shift towards thinking about societal impacts is recent, but they have no idea why it is occurring. They know that many people, such as Elon Musk, have been outspoken about AI safety in the media recently, but few have heard of Superintelligence, or attribute the recent change to FHI or MIRI.

Comment author: Sherincall 18 February 2015 08:51:28PM *  11 points [-]

Reddit is giving away 10% of their ad revenue to 10 charities that receive the most votes from the community. You can vote for as many charities as you want, with any account that has been created before 10AM PST today.

You can vote for your favorite charities here. I've had problems with the search by name, so if you don't find something, try searching by EIN instead.

Quick links: CFAR, MIRI

Comment author: endoself 18 February 2015 10:09:01PM *  9 points [-]

GiveWell, GiveDirectly, Evidence Action/Deworm the World. You can vote for multiple charities.
