Comment author: lukeprog 09 April 2013 03:06:24AM 0 points [-]

Yeah, Leverage Research does tons of work on "graphically represented plans" and "preference inference based on the structure of goals", and I think they all use yEd. There are a few settings you should tweak at the beginning and then it's pretty smooth sailing.

Comment author: Geoff_Anders 12 April 2013 09:09:12PM *  10 points [-]

Here are instructions for setting up the defaults the way some people have found helpful:

  1. Open yEd.
  2. Create a new document.
  3. Click the white background; a small yellow square should appear on the canvas.
  4. Click the small yellow square so as to select it.
  5. Click and drag one of the corners of the yellow square to resize it. Make it the default size you'd like your text boxes to be. You will be able to change this later.
  6. Make sure the yellow square is still selected.
  7. Look at the menu in the lower right. It is called "Properties View". It will show you information about the yellow square.
  8. Click the small yellow square in the menu next to the words "Fill Color".
  9. Select the color white for the Fill Color.
  10. Lower in the menu, under "Label", there is an item called "Placement". Find it. Change Placement to "Internal" and "Center".
  11. Right below Placement in the menu is "Size". Find it. Change Size to "Fit Node Width".
  12. Right below Size is "Configuration". Find it. Change Configuration to "Cropping".
  13. Right below Configuration is "Alignment". Find it. Ensure that Alignment is "Center".
  14. In the upper toolbar, click "File" then "Preferences".
  15. A menu will come up. Click the "Editor" tab.
  16. You will see a list of checkboxes. "Edit Label on Create Node" will be unchecked. Check it.
  17. Click Apply.
  18. In the upper toolbar, click "Edit" then "Manage Palette".
  19. A menu will come up. In the upper left there will be a button called "New Section". Click it.
  20. Name the new section after yourself.
  21. Verify that the new section has been created by locating it in the righthand list of "Displayed Palette Sections".
  22. Close the Palette Manager menu.
  23. Double-click your white textbox to edit its label.
  24. Put in something suitably generic to indicate a default textbox. I use "[text]" (without the quotes).
  25. Select your white textbox. Be sure that you have selected it, but are not now editing the label.
  26. Right click the white textbox. A menu will appear.
  27. On the menu, mouse over "Add to Palette", then select the palette you named after yourself.
  28. On the righthand side of the screen, there will be a menu at the top called "Palette". Find it.
  29. Scroll through the palettes in the Palette menu until you find the palette you named after yourself. Expand it.
  30. You will see your white textbox in the palette you have named after yourself. Click it to select it.
  31. Right click the white textbox in the palette. Select "Use as Default".
  32. To check that you have done everything properly, click on the white background canvas. Did it create a white textbox like your original, and then automatically allow you to edit the label? If so, you're done.

Then:

  a. Click the white background to create a box.
  b. Click a box and drag to create an arrow.
  c. Click an already existing box to select it. Once selected, click and drag to move it.
  d. Double-click an already existing box to edit its label.
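For anyone who would rather script this than click through it: yEd saves graphs as GraphML, and a white textbox configured as above corresponds to a node element roughly like the one built below. This is a minimal sketch in Python; the yFiles element and attribute names (ShapeNode, Fill, NodeLabel and their values) are my assumptions about yEd's GraphML extensions, not something stated in the steps above.

```python
# Sketch of the GraphML that yEd saves for a node like the white
# textbox configured above. Element/attribute names follow yEd's
# yFiles GraphML extensions; treat the exact values as assumptions.
import xml.etree.ElementTree as ET

GRAPHML_NS = "http://graphml.graphdrawing.org/xmlns"
Y_NS = "http://www.yworks.com/xml/graphml"
ET.register_namespace("", GRAPHML_NS)
ET.register_namespace("y", Y_NS)

def white_textbox(node_id: str, label: str = "[text]") -> ET.Element:
    """Build one yEd-style node: white fill, internal centered label."""
    node = ET.Element(f"{{{GRAPHML_NS}}}node", id=node_id)
    data = ET.SubElement(node, f"{{{GRAPHML_NS}}}data", key="d0")
    shape = ET.SubElement(data, f"{{{Y_NS}}}ShapeNode")
    ET.SubElement(shape, f"{{{Y_NS}}}Fill",
                  color="#FFFFFF", transparent="false")   # step 9: white fill
    lbl = ET.SubElement(shape, f"{{{Y_NS}}}NodeLabel",
                        modelName="internal",             # step 10: Internal
                        modelPosition="c",                # step 10: Center
                        autoSizePolicy="node_width",      # step 11: Fit Node Width
                        configuration="CroppingLabel",    # step 12: Cropping
                        alignment="center")               # step 13: Center
    lbl.text = label
    return node

# Serialize one node to see the XML yEd would read back.
xml_text = ET.tostring(white_textbox("n0"), encoding="unicode")
print(xml_text)
```

If the assumed names are right, a .graphml file containing nodes like this should open in yEd as white boxes with internal, centered, cropped labels, matching steps 9-13 above.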

Enjoy!

Comment author: Eliezer_Yudkowsky 23 September 2012 03:55:01AM 12 points [-]

These predictions frequently do not overlap with what existing cognitive science would have one expect.

What is an example of a case you've actually observed where CT made a falsifiable, bold, successful prediction? ("Falsifiable" - say what would have made the prediction fail. "Bold" - explain what a cogsci guy or random good human psychologist would have falsifiably predicted differently.)

Comment author: Geoff_Anders 23 September 2012 12:48:37PM *  6 points [-]

For at least 2 years prior to January 2009, I procrastinated for 1-3 hours a day reading random internet news sites. After I created my first CT chart, I made the following prediction: "If I design a way to gain information about the world that does not involve reading internet news sites that also does not alter my way of achieving my other intrinsic goods, then I will stop spending time reading these internet news sites." The "does not alter my way of achieving my other intrinsic goods" was unpacked. It included: "does not alter my way of gaining social acceptance", "does not alter my relationships with my family members", etc. The specifics were unpacked there as well.

This prediction was falsifiable - it would have failed if I had kept reading internet news sites. It was also bold - cogsci folk and good random human psychologists would have predicted no change in my internet news reading behavior. And it was successful - after implementing the recommendation in January 2009, I stopped procrastinating as predicted. Now, of course there are multiple explanations for the success of the prediction, including "CT is true" and "you just used your willpower". Nevertheless, this is an example of a falsifiable, bold, successful prediction.

Comment author: Eliezer_Yudkowsky 22 September 2012 02:01:53PM 20 points [-]

My recollection of my conversation with Geoff, at a Berkeley LW meetup, in Berkeley University in a building that had a statue of a dinosaur in it, is that it went like this. Disclaimer: My episodic memory for non-repeated conversations is terrible and it is entirely possible that there are major inaccuracies here. Disclaimer 2: This is not detailed enough to count as an engaged critique, and Geoff is not obliged to respond to it since I put very little effort into it myself (it is logically rude to demand ever-more conversational effort from other people while putting in very little yourself).

Geoff: I've been working on an incredible new mental theory that explains everything.

Eliezer (internally): That's not a good sign, but he seems earnest and intelligent. Maybe it's something innocuous or even actually interesting.

Eliezer (out loud): And what does it say?

Geoff: Well, I haven't really practiced explaining it, and I don't expect you to believe it, but (explains CT)

Eliezer (internally): Well this is obviously wrong. Minds just don't work by those sorts of bright-line psychoanalytic rules written out in English, and proposing them doesn't get you anywhere near the level of an interesting cognitive algorithm. Maybe if he's read enough of the Sequences and hasn't invested too many sunk costs / bound up too many hopes in it, I can snap him out of it in fairly short order?

Eliezer (out loud): Where does CT make a different prediction from the cognitive science I already know that I couldn't get without CT?

Geoff: It predicts that people will change their belief to believe that their desires will be fulfilled...

Eliezer (internally): Which sounds a lot like standard cognitive dissonance theory, which itself has been modified in various ways, but we aren't even at the point of talking about that until we get out of the abstract-belief trap.

Eliezer (out loud): No, I mean some sort of sensory experience. Like your eyes seeing an apple fall from a tree, or something like that. What does CT say I should experience seeing, that existing cognitive science wouldn't tell me to expect?

Geoff: (Something along the lines of "CT isn't there yet", I forget the exact reply.)

Eliezer (internally): This is exactly the sort of blind alley that the Sequences are supposed to prevent smart people from wasting their emotional investments on. I wish I'd gotten this person to read the Belief and Anticipation sequence before CT popped into his head, but there's no way I can rescue him from the outside at this point.

Eliezer (out loud): Okay, then I don't believe in CT because without evidence there's no way you could know it even if it was true.

I think there might've also been something about me trying to provide a counterexample like "It is psychologically possible for mothers to believe their children have cancer" but I don't recall what Geoff said to that. I'm not sure whether or not I gave him any advice along the lines of, "Try to explain one thing before explaining everything."

Comment author: Geoff_Anders 22 September 2012 03:08:25PM 9 points [-]

If I recall correctly, I was saying that I didn't know how to use CT to predict simple things of the form "Xs will always Y" or "Xs will Y at rate Z", where X and Y refer to simple observables like "human", "blush", etc. It would be great if I could do this, but unfortunately I can't.

Instead, what I can do is use the CT charting procedure to generate a CT chart for someone and then use CT to derive predictions from the chart. This yields predictions of the form "if a person with chart X does Y, Z will occur". These predictions frequently do not overlap with what existing cognitive science would have one expect.

The way I could have evidence in favor of CT would be if I had created CT charts using the CT procedure, used CT to derive predictions from the charts, and then tested the predictions. And I've done this.

Comment author: Curiouskid 11 January 2012 10:39:50PM *  2 points [-]

I think everybody is getting hung up on Connection Theory, which is not the only thing that Leverage Research does. I'm not completely sure, but I'm pretty sure it's not even the main thing they do. EDIT: Why is this tagged politics? Does it have to do with the mind-killing comment thread about meta-trolling?

Comment author: Geoff_Anders 11 January 2012 11:57:14PM 2 points [-]

Connection Theory is not the main thing that we do. It's one of seven main projects. I would estimate that about 15% of our current effort goes directly into CT right now. It's true that having a superior understanding of the human mind is an important part of our plan, and it's true that CT is the main theory we're currently looking at. So that is one reason people are focusing on it. But it's also one of the better-developed parts of our website right now. So that's probably another reason.

Comment author: Incorrect 11 January 2012 05:37:23AM 1 point [-]

For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.

But what quality of work? Organizing my closet is very different than reading a dense academic paper with full concentration.

Comment author: Geoff_Anders 11 January 2012 03:25:40PM 1 point [-]

I can usually do any type of work. Sometimes it becomes harder for me to write detailed documents in the last couple of hours of my day.

Comment author: Bugmaster 11 January 2012 02:02:14AM *  4 points [-]

Leverage Research seems, at first glance at least, to be similar to SIAI. Their plans have similar shapes:

1). Grow the organization (donations welcome).
2). Use the now-grown organization to grow even faster.
3). ???
4). Profit! Which is to say, solve all the world's problems / avoid global catastrophe / usher in a new age of peace and understanding / etc.

I think they need to be a bit more specific there in step 3.

Comment author: Geoff_Anders 11 January 2012 03:22:19PM *  1 point [-]

We've tried to fill in step 3 quite a bit. Check out the plan and also our backup plan. We're definitely open to suggestions for ways to improve, especially places where the connection between the steps is the most tenuous.

Comment author: shminux 10 January 2012 05:04:23PM 2 points [-]

Maybe you are not aware of them?

Your denial would be more convincing if you compared and contrasted CT ideas and objectivist ideas.

Comment author: Geoff_Anders 10 January 2012 05:10:22PM 3 points [-]

Unfortunately, I'm not familiar with Ayn Rand's ideas on psychology.

Comment author: moridinamael 10 January 2012 04:23:46PM *  13 points [-]

On a first pass, the Leverage Research website feels like Objectivism. I say this because it is full of dubious claims about morality and psychology that are presented as basic premises and facts. The explanations of "Connection Theory" are full of the same type of opaque reasoning and fiat statements about human nature, which perhaps I am particularly sensitive to as a former Objectivist. Knowing nothing more than this first impression, I am going to make a prediction that there are Objectivist influences present here. That seems at least somewhat testable.

Comment author: Geoff_Anders 10 January 2012 04:46:27PM 0 points [-]

There are no Objectivist influences that I am aware of.

Comment author: CronoDAS 10 January 2012 10:45:37AM 7 points [-]

::follows various links::

Is CT falsifiable? There's no obvious way to determine a person's intrinsic goods except by observing their behavior, but a person's behavior is what CT is supposed to predict in the first place. If a person appears to be acting in a way that contradicts the Action Rule, then "CT is wrong" and "CT is fine; the person had different intrinsic goods than I thought they did" are both consistent with the evidence.

Comment author: Geoff_Anders 10 January 2012 03:45:21PM 8 points [-]

Short answer: Yes, CT is falsifiable. Here's how to see this. Take a look at the example CT chart. By following the procedures stated in the Theory and Practice document, you can produce and check a CT chart like the example chart. Once you've checked the chart, you can make predictions using CT and the CT chart. From the example chart, for instance, we can see that the person sometimes plays video games and tries to improve and sometimes plays video games while not trying to improve. From the chart and CT, we can predict: "If the person comes to believe that he stably has the ability to be cool, as he conceives of coolness, then he will stop playing video games while not trying to improve." We would measure belief here primarily by the person's belief reports. So we have a concrete procedure that yields specific predictions. In this case, if the person followed various recommendations designed to increase his ability to be cool, ended up reporting that he stably had the ability to be cool, but still reported playing video games while not trying to improve, CT would be falsified.

Longer answer: In practice, almost any specific theory can be rendered consistent with the data by adding epicycles, positing hidden entities, and so forth. Instead of falsifying most theories, then, what happens is this: You encounter some recalcitrant data. You add some epicycles to your theory. You encounter more recalcitrant data. You posit some hidden entities. Eventually, though, the global theory that includes your theory becomes less elegant than the global theory that rejects your theory. So, you switch to the global theory that rejects your theory and you discard your specific theory. In practice with CT, so far we haven't had to add many epicycles or posit many hidden entities. In particular, we haven't had the experience of having to frequently change what we think a person's intrinsic goods are. If we found that we kept having to revise our views about a person's intrinsic goods (especially if the old posited intrinsic goods were not instrumentally useful for achieving the new posited intrinsic goods), this would be a serious warning sign.

Speaking more generally, we're following particular procedures, as described in the CT Theory and Practice document. We expect to achieve particular results. If in a relatively short time frame we find that we can't, that will provide evidence against the claim "CT is useful for achieving result X". For example, I've been able to work for more than 13 hours a day, with only occasional days off, for more than two years. I attribute this to CT and I expect we'll be able to replicate this. If we end up not being able to, that'll be obvious to us and everyone else.

Thanks for raising the issue of falsifiability. I'm going to add it to our CT FAQ.

Comment author: Geoff_Anders 10 January 2012 04:29:38AM 21 points [-]

Hi Luke,

I'm happy to talk about these things.

First, in answer to your third question, Leverage is methodologically pluralistic. Different members of Leverage have different views on scientific methodology and philosophical methodology. We have ongoing discussions about these things. My guess is that probably two or three of our more than twenty members share my views on scientific and philosophical methodology.

If there’s anything methodological we tend to agree on, it’s a process. Writing drafts, getting feedback, paying close attention to detail, being systematic, putting in many, many hours of effort. When you imagine Leverage, don’t imagine a bunch of people thinking with a single mind. Imagine a large number of interacting parallel processes, aimed at a single goal.

Now, I’m happy to discuss my personal views on method. In a nutshell: my philosophical method is essentially Cartesian; in science, I judge theories on the basis of elegance and fit with the evidence. (“Elegance”, in my lingo, is like Occam’s razor, so in practice you and I actually both take Occam’s razor seriously.) My views aren’t the views of Leverage, though, so I’m not sure I should try to give an extended defense here. I’m going to write up some philosophical material for a blog soon, though, so people who are interested in my personal views should check that out.

As for Connection Theory, I could say a bit about where it came from. But the important thing here is why I use it. The primary reason I use CT is because I’ve used it to predict a number of antecedently unlikely phenomena, and the predictions appear to have come true at a very high rate. Of course, I recognize that I might have made some errors somewhere in collecting or assessing the evidence. This is one reason I’m continuing to test CT.

Just as with methodology, people in Leverage have different views on CT. Some people believe it is true. (Not me, actually. I believe it is false; my concern is with how useful it is.) Others believe it is useful in particular contexts. Some think it’s worth investigating, others think it’s unlikely to be useful and not worth examining. A person who thought CT was not useful and who wanted to change the world by figuring out how the mind really works would be welcome at Leverage.

So, in sum, there are many views at Leverage on methodology and CT. We discuss these topics, but no one insists on any particular view and we’re all happy to work together.

I'm glad you like that we're producing public-facing documents. Actually, we're going to be posting a lot more stuff in the relatively near future.

Comment author: Geoff_Anders 10 January 2012 05:33:10AM 4 points [-]

Oops, I forgot to answer your question about how central Connection Theory is to what we're doing.

The answer is that CT is one part of what some of us believe is our best current answer to the question of how the human mind works. I say "one part" because CT does not cover emotions. In all contexts pertaining to emotions, everyone uses something other than CT. I say "some of us" because not everyone in Leverage uses CT. And I say "best current answer" because all of us are happy to throw CT away if we come up with something better.

In terms of our projects, some people use CT and others don't. Some parts of some training programs are designed with CT in mind; other parts aren't. In some contexts, it is very hard to do anything at all without relying on some background psychological framework. In those contexts, some people rely on CT and others don't.

In terms of our overall plan, CT is potentially extremely useful. That said, CT itself is inessential. If it ends up breaking, we can find new psychological tools. And we actually have a backup plan in case we ultimately can't figure out much at all about how the mind works.
