Jocko Podcast

9 moridinamael 06 September 2016 03:38PM

I've recently been extracting extraordinary value from the Jocko Podcast.

Jocko Willink is a retired Navy SEAL commander, jiu-jitsu black belt, management consultant and, in my opinion, master rationalist. His podcast typically consists of a detailed analysis of some book on military history or strategy, followed by a hands-on Q&A session. Last week's episode (#38) was particularly good, and if you want to just dive in, I would start there.

As a sales pitch, I'll briefly describe some of his recurring talking points:

  • Extreme ownership. Take ownership of all outcomes. If your superior gave you "bad orders", you should have challenged the orders or adapted them better to the situation; if your subordinates failed to carry out a task, then it is your own instructions to them that were insufficient. If the failure is entirely your own, admit your mistake and humbly open yourself to feedback. By taking on this attitude you become a better leader and through modeling you promote greater ownership throughout your organization. I don't think I have to point out the similarities between this and "Heroic Morality" we talk about around here.
  • Mental toughness and discipline. Jocko's language around this topic is particularly refreshing, speaking as someone who has spent too much time around "self help" literature, in which I would partly include Less Wrong. His ideas are not particularly new, but it is valuable to have an example of somebody who reliably executes on the philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.
  • Decentralized command. This refers specifically to his leadership philosophy. Every subordinate needs to truly understand the leader's intent in order to execute instructions in a creative and adaptable way. Individuals within a structure need to understand the high-level goals well enough to be able to act in almost all situations without consulting their superiors. This tightens the OODA loop at the organizational level.
  • Leadership as manipulation. Perhaps the greatest surprise to me was the subtlety of Jocko's thinking about leadership, probably because I brought in many erroneous assumptions about the nature of a SEAL commander. Jocko talks constantly about using self-awareness, detachment from one's ideas, control of one's own emotions, awareness of how one is perceived, and perspective-taking of one's subordinates and superiors. He comes off more as HPMOR!Quirrell than as a "drill sergeant".

The Q&A sessions, in which he answers questions asked by his fans on Twitter, tend to be very valuable. It's one thing to read the bullet points above, nod your head and say, "That sounds good." It's another to have Jocko walk through the tactical implementation of these ideas in a wide variety of daily situations, ranging from parenting difficulties to office misunderstandings.

For a taste of Jocko, maybe start with his appearance on the Tim Ferriss podcast or the Sam Harris podcast.

Deepmind Plans for Rat-Level AI

20 moridinamael 18 August 2016 04:26PM

Demis Hassabis gives a great presentation on the state of Deepmind's work as of April 20, 2016. Skip to 23:12 for the statement of the goal of creating a rat-level AI -- "An AI that can do everything a rat can do," in his words. From his tone, it sounds like this is a short-term goal rather than a long-term one.

I don't think Hassabis is prone to making unrealistic plans or stating overly bold predictions. I strongly encourage you to scan through Deepmind's publication list to get a sense of how quickly they're making progress. (In fact, I encourage you to bookmark that page, because it seems like they add a new paper about twice a month.) The outfit seems to be systematically knocking down all the "Holy Grail" milestones on the way to AGI, and this is just Deepmind. The papers they've put out in just the last year or so concern successful one-shot learning, continuous control, actor-critic architectures, novel memory architectures, policy learning, and bootstrapped gradient learning, and these are just the most stand-out achievements. There's even a paper co-authored by Stuart Armstrong concerning Friendliness concepts on that list.

If we really do have a genuinely rat-level AI within the next couple of years, I think that would justify radically moving AI development timelines forward. Speaking very naively, if we can go from "sub-nematode" to "mammal that can solve puzzles" in that timeframe, I would view it as a form of proof that "general" intelligence does not require some mysterious ingredient we haven't discovered yet.

Flowsheet Logic and Notecard Logic

25 moridinamael 09 September 2015 04:42PM

(Disclaimer: The following perspectives are based in my experience with policy debate which is fifteen years out of date. The meta-level point should stand regardless.)

If you are not familiar with U.S. high school debate club ("policy debate" or "cross-examination debate"), here is the gist of it: two teams argue over a topic, and a judge determines who has won.

When we get into the details, there are a lot of problems with the format. Almost everything wrong with policy debate appears in this image:

Flowsheet

This is a "flowsheet", and it is used to track threads of argument across the successive epochs of the debate round. The judge and the debaters keep their own flowsheets to make sense of what's going on.

I am sure that there is a skillful, positive way of using flowsheets, but I have never seen them used in any way other than the following:

After the Affirmative side lays out their proposal, the Negative throws out a shotgun blast of more-or-less applicable arguments drawn from their giant plastic tote containing pre-prepared arguments. The Affirmative then counters the Negative's arguments using their own set of pre-prepared counter-arguments. Crucially, all of the Negative arguments must be met. Look at the Flowsheet image again, and notice how each "argument" has an arrow which carries it rightward. If any of these arrows make it to the right side of the page - the end of the round - without being addressed, then the judge will typically consider the round to be won by the side who originated that arrow.
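The scoring rule described above can be captured in a toy model (my own construction, purely illustrative, not any real debate tool): each argument is a thread, and any thread left unaddressed at the end of the round scores for whichever side originated it, regardless of the quality of the responses.

```python
# Toy model of flowsheet scoring: a thread that reaches the end of the
# round unaddressed scores for the side that originated it. Whether the
# responses were any good never enters into the calculation.

def score_flowsheet(threads):
    """threads: list of (originating_side, was_addressed) pairs."""
    unanswered = {"Affirmative": 0, "Negative": 0}
    for side, addressed in threads:
        if not addressed:
            unanswered[side] += 1
    return unanswered

# The Negative shotguns three arguments; the Affirmative answers two.
round_threads = [
    ("Negative", True),
    ("Negative", True),
    ("Negative", False),   # this arrow reaches the right side of the page
    ("Affirmative", True),
]
print(score_flowsheet(round_threads))  # {'Affirmative': 0, 'Negative': 1}
```

One surviving arrow, and the round goes to the Negative, no matter how weak that argument was on the merits.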

So it doesn't actually matter whether an argument receives a good counterargument. It only matters that the other team has addressed it.

Furthermore, merely addressing the argument with ad hoc counterargument is usually not sufficient. If the Negative makes an argument which contains five separate logical fallacies, and the Affirmative points all of these out and then moves on, the judge may not actually consider the Negative argument to have been refuted - because the Affirmative did not cite any Evidence.

Evidence, in policy debate, is a term of art, and it means "something printed out from a reputable media source and taped onto a notecard." You can't say "water is wet" in a policy debate round without backing it up with a notecard quoting a news source corroborating the wetness of water. So, skillfully pointing out those logical fallacies is meaningless if you don't have the Evidence to back up your claims.

Skilled policy debaters can be very good - impressively good - at the mental operations of juggling all these argument threads in their minds and pulling out the appropriate notecard evidence. My entire social circle in high school was composed of serious debaters, many of whom were brilliant at it.

Having observed some of these people for the ensuing decade, I sometimes suspect that policy debate damaged their reasoning ability. If I were entirely simplistic about it, I would say that policy debate has destroyed their ability to think and argue rationally. These people essentially still argue the same way, by mental flowsheet, acting as though argument can proceed only via notecard exchange. If they have addressed an argument, they consider it to be refuted. If they question an argument's source ("Wikipedia? Really?"), they consider it to be refuted. If their opponent ignores one of their inconsequential points, they consider themselves to have won. They do not seem to possess any faculty for discerning whether or not one argument actually defeats another. It is the equivalent of a child whose vision of sword fighting is focused on the clicking together of the blades, with no consideration for the intent of cutting the enemy.

Policy debate is to actual healthy argumentation as checkers is to actual warfare. Key components of the object being gamified are ignored or abstracted away until the remaining simulacrum no longer represents the original.


I actually see Notecard Logic and Flowsheet Logic everywhere. That's why I have to back off from my assertion that policy debate destroyed anybody's reasoning ability - I think it may have simply reinforced and hypertrophied the default human argumentation algorithm.

Flowsheet Logic is the tendency to think that you have defeated an argument because you have addressed it. It is the overall sense that you can't lose an argument as long as none of your opponent's statements go unchallenged, even if none of your challenges are substantial/meaningful/logical. It is the belief that if you can originate more threads of argument against your opponent than they can fend off, you have won, even if none of your arguments actually matters individually. I see Flowsheet Logic tendencies expressed all the time.

Notecard Logic is the tendency to treat evidence as binary. Either you have evidence to back up your assertion - even if that evidence takes the form of an article from [insert partisan rag] - or else you are just "making things up to defend your point of view". There is no concession to Bayesian updating, credibility, or degrees of belief in Notecard Logic. "Bob is a flobnostic. I can prove this because I can link you to an article that says it. So what if I can't explain what a flobnostic is." I see Notecard Logic tendencies expressed all the time.

Once you have developed a mental paintbrush handle for these tendencies, you may notice them more as well. This awareness should allow you to discern more clearly whether you - or your interlocutor - or someone else entirely - is engaging in these practices. Hopefully this awareness paints a "negative space" of superior argumentation for you.

Less Wrong Business Networking Google Group

7 moridinamael 24 April 2014 02:45PM

Following on JoshuaFox's thread polling for interest in business networking between Less Wrong community members, a Less Wrong Networking Google group has been created.  If you're interested in discovering potential business opportunities with other Less Wrong users, with whom you may reasonably assume you share some philosophical and ideological alignment, please consider joining the group.

As Gunnar_Zarncke proposed, please consider modifying your user page to indicate your participation.

Please tell me if there's a best practice I should be following with regard to this Google group that I'm not.

Bad Concepts Repository

20 moridinamael 27 June 2013 03:16AM

We recently established a successful Useful Concepts Repository.  It got me thinking about all the useless or actively harmful concepts I had carried around, in some cases for most of my life, before seeing them for what they were.  Then it occurred to me that I probably still have some poisonous concepts lurking in my mind, and I thought creating this thread might be one way to discover what they are.

I'll start us off with one simple example:  The Bohr model of the atom as it is taught in school is a dangerous thing to keep in your head for too long.  I graduated from high school believing that it was basically a correct physical representation of atoms.  (And I went to a *good* high school.)  Some may say that the Bohr model serves a useful role as a lie-to-children to bridge understanding to the true physics, but if so, why do so many adults still think atoms look like concentric circular orbits of electrons around a nucleus?  

There's one hallmark of truly bad concepts: they actively work against correct induction.  Thinking in terms of the Bohr model actively prevents you from understanding molecular bonding and, really, everything about how an atom can serve as a functional piece of a real thing like a protein or a diamond.

Bad concepts don't have to be scientific.  Religion is held to be a pretty harmful concept around here.  There are certain political theories which might qualify, except I expect that one man's harmful political concept is another man's core value system, so as usual we should probably stay away from politics.  But I welcome input as fuzzy as folk advice you received that turned out to be really costly.

Towards an Algorithm for (Human) Self-Modification

29 moridinamael 29 March 2011 11:40PM

LessWrong is wonderful.  Life-changing.  Best thing that ever happened to me.

But it's not really enough to make one a rationalist, is it?  I don't assimilate or even remember all of the knowledge contained in what I read, and I certainly don't dynamically incorporate it into my life-strategy.

Say you want your computer to be able to open Microsoft Word files.  In order to do this, you do not upload a PDF which contains a description of how Microsoft Word works.  No, you install the program and then you run the program.

Over several months of reading LessWrong I found myself wishing I had (a) computer program(s) that could train me to be a rationalist instead of a website that told me about how to be a rationalist.  I would read an article with a tremendous sense of excitement, thinking to myself, "This is it, I have to implement this insight into my life.  This is a change that I must realize."  But I would inevitably hit a mental wall when I saw that just knowing that something was a good idea didn't actually rewire my brain toward better cognitive habits.

I wanted a rationality installer.

I found myself in the midst of a personal crisis.  I came to suspect that the reason for my unhappiness and akrasia was that my goals and my actions had become decoupled - I just couldn't figure out where, or how.

So I set out to make a program that would help me organize what my actual terminal goals and values are, and then help me causally connect my day-to-day activities with those goals and values.  The idea was to create a kind of tree with end-goals as the parents and daily tasks as the children.  The resulting application was not very user-friendly, but it still worked.
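A minimal sketch of such a goal tree (my own reconstruction, not the author's actual program; all the goal and task names are invented examples): terminal goals sit at the top, daily tasks are the leaves, and every task can be traced back up through the goals it serves.

```python
class Node:
    """One node in the goal tree: a goal if it has children, a daily task if not."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, name):
        """Attach a sub-goal or task and return it for further nesting."""
        child = Node(name)
        self.children.append(child)
        return child

    def tasks(self):
        """The leaves of the tree are the concrete day-to-day activities."""
        if not self.children:
            return [self.name]
        return [t for c in self.children for t in c.tasks()]

    def path_to(self, task, trail=()):
        """Trace a daily task back up through the goals it serves."""
        trail = trail + (self.name,)
        if self.name == task:
            return list(trail)
        for c in self.children:
            found = c.path_to(task, trail)
            if found:
                return found
        return None

goals = Node("Terminal goals")
health = goals.add("Stay healthy")
health.add("Run three times a week")
degree = goals.add("Finish my degree")
degree.add("Study two hours daily")

print(goals.tasks())
print(goals.path_to("Run three times a week"))
```

A daily activity with no path back to any terminal goal is exactly the decoupling between goals and actions that the post describes.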

With the help of my program, I saw that a year ago, I was very happy with my life because all the activities I pursued on a daily basis were very high-utility and directly connected to the achievement of my goals.  I saw that I had recently formed a new long-term goal, the existence of which altered my utility function, but I had not altered my life to sufficiently accommodate this new goal.  I made some changes in my life which I thought were going to be painful sacrifices, but ended up feeling exactly right once I crossed the threshold.  It shocked me how quickly I felt better, how completely I returned to "normal."

And I thought to myself, hey, why do our cognitive algorithms have to actually be inside our heads?  I implemented this one in C++, and it helped me sort out something which was just frustrating and painful and confusing when I tried to manage it on my own.

What other rationality techniques deserve to be coded into "rationality assistant applications?"

(And how much of a desire would there be for such products?)