Comment author: Steve_Rayhawk 02 March 2015 10:20:31AM *  6 points [-]

Pessimistic Assumptions Thread

"Excuse me, I should not have asked that of you, Mr. Potter, I forgot that you are blessed with an unusually pessimistic imagination -"

Ch. 15

Sometimes people called Moody 'paranoid'.

Moody always told them to survive a hundred years of hunting Dark Wizards and then get back to him about that.

Mad-Eye Moody had once worked out how long it had taken him, in retrospect, to achieve what he now considered a decent level of caution - weighed up how much experience it had taken him to get good instead of lucky - and had begun to suspect that most people died before they got there. Moody had once expressed this thought to Lyall, who had done some ciphering and figuring, and told him that a typical Dark Wizard hunter would die, on average, eight and a half times along the way to becoming 'paranoid'. This explained a great deal, assuming Lyall wasn't lying.

Yesterday, Albus Dumbledore had told Mad-Eye Moody that the Dark Lord had used unspeakable dark arts to survive the death of his body, and was now awake and abroad, seeking to regain his power and begin the Wizarding War anew.

Someone else might have reacted with incredulity.

Ch. 63

Under standard literary convention... the enemy wasn't supposed to look over what you'd done, sabotage the magic items you'd handed out, and then send out a troll rendered undetectable by some means the heroes couldn't figure out even after the fact, so that you might as well have not defended yourself at all. In a book, the point-of-view usually stayed on the main characters. Having the enemy just bypass all the protagonists' work, as a result of planning and actions taken out of literary sight, would be a diabolus ex machina, and dramatically unsatisfying.

But in real life the enemy would think that they were the main character, and they would also be clever, and think things through in advance, even if you didn't see them do it. That was why everything about this felt so disjointed, with parts unexplained and seemingly inexplicable.

Ch. 94

"You may think that a grade of Dreadful... is not fair. That Miss Granger was faced with a test... for which her lessons... had not prepared her. That she was not told... that the exam was coming on that day."

The Defense Professor drew in a shaking breath.

"Such is realism," said Professor Quirrell.

Ch. 103

Recalling finewbs's coordinated saturation bombing strategy: if the goal is to maximize the total best-guess probability of the set of scenarios covered by at least one solution, this means crafting and posting solutions which, between them, handle as wide a range of conjunctions of pessimistic assumptions as possible. This would be helped by having a list of pessimistic assumptions.

(It may also be helped by having a reasonable source of scenario probabilities, such as HPMOR predictions on PredictionBook. Also: in an adversarial context, the truth-values of pessimistic assumptions are correlated.)
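
A minimal sketch of the coverage goal (my illustration, not anything specified in the thread; the solution sets, scenario ids, and probabilities below are made-up inputs). Greedy weighted max-coverage picks, at each step, whichever solution adds the most not-yet-covered scenario probability:

```python
# Greedy weighted max-coverage over scenarios. All inputs are hypothetical
# stand-ins for "which pessimistic-assumption conjunctions each posted
# solution handles" and "best-guess scenario probabilities".

def pick_solutions(solutions, p, budget):
    """solutions: dict name -> set of scenario ids it handles.
    p: dict scenario id -> best-guess probability.
    budget: how many solutions can be posted."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(solutions,
                   key=lambda s: sum(p[x] for x in solutions[s] - covered))
        gain = sum(p[x] for x in solutions[best] - covered)
        if gain == 0:
            break  # nothing left to gain
        chosen.append(best)
        covered |= solutions[best]
    return chosen, sum(p[x] for x in covered)

solutions = {"A": {"s1", "s2"}, "B": {"s2", "s3"}, "C": {"s3"}}
p = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
print(pick_solutions(solutions, p, budget=2))  # (['A', 'B'], 1.0)
```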

Comment author: Steve_Rayhawk 02 March 2015 11:12:05AM 1 point [-]

Pessimistic assumption: The effect of the Unbreakable Vow depends crucially on the order in which Harry lets himself become aware of arguments about its logical consequences.

Comment author: Steve_Rayhawk 02 March 2015 10:54:07AM *  6 points [-]

Pessimistic assumption: Voldemort has made advance preparations which will thwart every potential plan of Harry's based on favorable tactical features or potential features of the situation which might reasonably be obvious to him. These include Harry's access to his wand, the Death Eaters' lack of armor enchantments or prepared shields, the destructive magic resonance, the Time-Turner, Harry's other possessions, Harry's glasses, the London portkey, a concealed Patronus from Hermione's revival, or Hermione's potential purposeful assistance. Any attempt to use these things will fail at least once and will, absent an appropriate counter-strategy, immediately trigger lethal force against Harry.

Comment author: Steve_Rayhawk 02 March 2015 10:39:07AM *  0 points [-]

Pessimistic assumption: There are more than two endings. A solution meeting the stated criteria is a necessary but not sufficient condition for the least sad ending.

If a viable solution is posted [...] the story will continue to Ch. 121.

Otherwise you will get a shorter and sadder ending.

Note that the referent of "Ch. 121" is not necessarily fixed in advance.

Counterargument: "I expect that the collective effect of 'everyone with more urgent life issues stays out of the effort' shifts the probabilities very little" suggests that reasonable prior odds of getting each ending are all close to 0 or 1, so any possible hidden difficulty thresholds are either very high or very low.

Counterargument: The challenge in Three Worlds Collide only had two endings.

Counterargument: A third ending would have taken additional writing effort, to no immediately obvious didactic purpose.

Comment author: Steve_Rayhawk 24 November 2014 07:55:30AM *  10 points [-]

there very likely exist misrepresentations. There are many reasons for this, but I can assure you that I never deliberately lied and that I never deliberately tried to misrepresent anyone. The main reason might be that I feel very easily overwhelmed

I think the thing to remember is that, when you've run into contexts where you feel like someone might not care that they're setting you up to be judged unfairly, you've been too overwhelmed to keep track of whether or not your self-defense involves doing things that you'd normally be able to see would set them up to be judged unfairly.

You've been trying to defend a truth about a question -- about what actions you could reasonably be expected to have been sure you should have taken, after having been exposed to existential-risk arguments -- that's made up of many complex implicit emotional and social associations, like the sort of "is X vs. Y the side everyone should be on?" that Scott Alexander discusses in "Ethnic Tension and Meaningless Arguments". But you've never really developed the necessary emotional perspective to fully realize that the only language you've had access to, to do that with, is a different language: that of explicit factual truths. If you try to state truths in one language using the other without accounting for the difference, blinded by pain and driven by the intuitive impulse to escape the pain, you're going to say false things. It only makes sense that you would have screwed up.

written in a tearing hurry, akin to a reflexive retraction from the painful stimulus

Try to progress to having a conscious awareness of your desperation, I mean a conscious understanding of how the desperation works and what it's tied to emotionally. Once you've done that, you should be able to consciously keep in mind better the other ways that the idea of "justice" might also relate to your situation, and so do a lot less unjust damage. (Contrariwise, if you do choose to do damage, a significantly greater fraction of it will be just.)

It might also help to have a stronger deontological proscription against misrepresenting anyone in a way that would cause them to be judged unfairly. That proscription would put you under more pressure to develop this kind of emotional perspective and conscious awareness, although it would do this at the cost of adding extra deontological hoops you have to jump through to escape the pain when it comes. If this leaves you too bound-up to say anything, you can usually go meta and explain how you're too bound-up, at least once you have enough practice at explaining things like that.

I'm sorry. I claim to have some idea what it's like.

(Also, on reflection, I should admit that mostly I'm saying this because I'm afraid of third parties keeping mistakenly unfavorable impressions about your motives; so it's slightly dishonest of me to word some of the above comments as simply directed to you, the way I have. And in the process I've converted an emotional truth, "I think it's important for other people not to believe as-bad things about your motives, because I can see how that amount of badness is likely mistaken", into a factual claim, "your better-looking motives are exactly X".)

In response to Causal Universes
Comment author: Eliezer_Yudkowsky 28 November 2012 06:13:09AM 22 points [-]

Mainstream status:

I haven't yet particularly seen anyone else point out that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it. (In fact I hadn't yet thought of how to do it at the time I wrote Harry's panic attack in Ch. 14 of HPMOR, though a primary literary goal of that scene was to promise my readers that Harry would not turn out to be living in a computer simulation. I think there might have been an LW comment somewhere that put me on that track or maybe even outright suggested it, but I'm not sure.)

The requisite behavior of the Time Turner is known as Stable Time Loops on the wiki that will ruin your life, and known as the Novikov self-consistency principle to physicists discussing "closed timelike curve" solutions to General Relativity. Scott Aaronson showed that time loop logic collapses PSPACE to polynomial time.

I haven't yet seen anyone else point out that space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination, with c being the generalization of locality. This strikes me as important, so any precedent for it or pointer to related work would be much appreciated.

Comment author: Steve_Rayhawk 28 November 2012 07:16:54PM *  4 points [-]

I know that the idea of "different systems of local consistency constraints on full spacetimes might or might not happen to yield forward-sampleable causality or things close to it" shows up in Wolfram's "A New Kind of Science", for all that he usually refuses to admit the possible relevance of probability or nondeterminism whenever he can avoid doing so; the idea might also be in earlier literature.

that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it.

I'd thought about that a long time previously (not about Time-Turners; this was before I'd heard of Harry Potter). I remember noting that it only really works if multiple transitions are allowed from some states, because otherwise there's a much higher chance that the consistency constraints would not leave any histories permitted. ("Histories", because I didn't know model theory at the time. I was using cellular automata as the example system, though.) (I later concluded that Markov graphical models with weights other than 1 and 0 were a less brittle way to formulate that sort of intuition (although, once you start thinking about configuration weights, you notice that you have problems about how to update if different weight schemes would lead to different partition function values).)
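
A minimal sketch of that enumeration idea (my construction, not anything from the original exchange; the toy transition rules are invented). Candidate histories of a small discrete system are enumerated and filtered by the consistency constraints; with one allowed successor per state the closed loop below admits no history at all, while allowing multiple transitions leaves some permitted:

```python
from itertools import product

def consistent_histories(states, step, length):
    """Enumerate closed histories h[0..length-1] in which every transition
    h[t] -> h[t+1] is allowed and the loop closes (h[0] follows h[-1])."""
    good = []
    for h in product(states, repeat=length):
        transitions = zip(h, h[1:] + (h[0],))  # wrap around to close the loop
        if all(b in step(a) for a, b in transitions):
            good.append(h)
    return good

# Deterministic rule (exactly one successor per state): the length-2 loop
# admits no consistent history -- the brittleness noted above.
det = {0: {1}, 1: {2}, 2: {0}, 3: {0}}
print(consistent_histories(range(4), det.get, 2))     # []

# Multiple allowed transitions from each state leave some histories
# permitted by the same consistency constraints.
nondet = {0: {0, 1}, 1: {1, 2}, 2: {0, 2}, 3: {0}}
print(consistent_histories(range(4), nondet.get, 2))  # [(0, 0), (1, 1), (2, 2)]
```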

I think there might have been an LW comment somewhere that put me on that track

I know we argued briefly at one point about whether Harry could take the existence of his subjective experience as valid anthropic evidence about whether or not he was in a simulation. I think I was trying to make the argument specifically about whether or not Harry could be sure he wasn't in a simulation of a trial timeline that was going to be ruled inconsistent. (Or, implicitly, a timeline where he might be able to control whether or not it would be ruled inconsistent. Or maybe it was about whether or not he could be sure that there hadn't been such simulations.) But I don't remember you agreeing that my position was plausible, and it's possible that that means I didn't convey the information about which scenario I was trying to argue about. In that case, you wouldn't have heard of the idea from me. Or I might have only had enough time to figure out how to halfway defensibly express a lesser idea: that of "trial simulated timelines being iterated until a fixed point".
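
That last idea, trial timelines iterated until a fixed point, has an obvious minimal form (a sketch under the assumption that "re-simulation" is just some function on whole histories; `resimulate` is a hypothetical stand-in):

```python
def settle_timeline(initial, resimulate, max_iters=10_000):
    """Iterate a trial timeline until re-simulating it changes nothing.
    A fixed point is a self-consistent history; hitting the iteration
    cap models a timeline that would be 'ruled inconsistent'."""
    history = initial
    for _ in range(max_iters):
        nxt = resimulate(history)
        if nxt == history:
            return history  # self-consistent
        history = nxt
    return None

# Toy run: histories are ints, re-simulation nudges them toward 5.
print(settle_timeline(0, lambda h: min(h + 1, 5)))  # 5
```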

Comment author: AlphaOmega 17 November 2012 01:37:18AM 1 point [-]

Just a gut reaction, but this whole scenario sounds preposterous. Do you guys seriously believe that you can create something as complex as a superhuman AI, and prove that it is completely safe before turning it on? Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos, quantum mechanics, etc.? And I would also like to know who these "good guys" are, and what will prevent them from becoming "bad guys" when they wield this much power. This all sounds incredibly naive and lacking in common sense!

Comment author: Steve_Rayhawk 17 November 2012 04:27:37AM *  37 points [-]

The main way complexity of this sort would be addressable is if the intellectual artifact that you tried to prove things about were simpler than the process that you meant the artifact to unfold into. For example, the mathematical specification of AIXI is pretty simple, even though the hypotheses that AIXI would (in principle) invent upon exposure to any given environment would mostly be complex. Or for a more concrete example, the Gallina kernel of the Coq proof engine is small and was verified to be correct using other proof tools, while most of the complexity of Coq is in built-up layers of proof search strategies which don't need to themselves be verified, as the proofs they generate are checked by Gallina.
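The kernel-plus-untrusted-search pattern can be shown with a deliberately tiny stand-in (my illustration, not Coq itself): a few trusted lines validate results produced by an arbitrarily complicated, unverified generator.

```python
from collections import Counter

def trusted_check(xs, ys):
    """The 'Gallina kernel' stand-in: the only lines needing verification.
    Accept ys only if it is in order and a permutation of xs."""
    in_order = all(a <= b for a, b in zip(ys, ys[1:]))
    return in_order and Counter(xs) == Counter(ys)

def untrusted_sort(xs):
    """The 'proof search' stand-in: may be arbitrarily complex and
    unverified; its output is safe only because the kernel checks it."""
    return sorted(xs)  # imagine layers of heuristics here

xs = [3, 1, 2, 2]
ys = untrusted_sort(xs)
assert trusted_check(xs, ys), "kernel rejects the untrusted result"
```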

Isn't that as unbelievable as the idea that you can prove that a particular zygote will never grow up to be an evil dictator? Surely this violates some principles of complexity, chaos [...]

Yes, any physical system could be subverted with a sufficiently unfavorable environment. You wouldn't want to prove perfection. The thing you would want to prove would be more along the lines of, "will this system become at least somewhere around as capable of recovering from any disturbances, and of going on to achieve a good result, as it would be if its designers had thought specifically about what to do in case of each possible disturbance?". (Ideally, this category of "designers" would also sort of bleed over in a principled way into the category of "moral constituency", as in CEV.) Which, in turn, would require a proof of something along the lines of "the process is highly likely to make it to the point where it knows enough about its designers to be able to mostly duplicate their hypothetical reasoning about what it should do, without anything going terribly wrong".

We don't know what an appropriate formalization of something like that would look like. But there is reason for considerable hope that such a formalization could be found, and that this formalization would be sufficiently simple that an implementation of it could be checked. This is because a few other aspects of decision-making which were previously mysterious, and which could only be discussed qualitatively, have had powerful and simple core mathematical descriptions discovered for cases where simplifying modeling assumptions perfectly apply. Shannon information was discovered for the informal notion of surprise (with the assumption of independent identically distributed symbols from a known distribution). Bayesian decision theory was discovered for the informal notion of rationality (with assumptions like perfect deliberation and side-effect-free cognition). And Solomonoff induction was discovered for the informal notion of Occam's razor (with assumptions like a halting oracle and a taken-for-granted choice of universal machine). These simple conceptual cores can then be used to motivate and evaluate less-simple approximations for situations where the assumptions about the decision-maker don't perfectly apply. For the AI safety problem, the informal notions (for which the mathematical core descriptions would need to be discovered) would be a bit more complex -- like the "how to figure out what my designers would want to do in this case" idea above. Also, you'd have to formalize something like our informal notion of how to generate and evaluate approximations, because approximations are more complex than the ideals they approximate, and you wouldn't want to need to directly verify the safety of any more approximations than you had to. (But note that, for reasons related to Rice's theorem, you can't (and therefore shouldn't want to) lay down universally perfect rules for approximation in any finite system.)
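
For reference, the three discovered cores named above have standard one-line textbook forms (added here for concreteness; they are not part of the original comment):

```latex
% Shannon information: surprise, for i.i.d. symbols from a known p
H(X) = -\sum_x p(x)\,\log_2 p(x)

% Bayesian decision theory: rationality, given perfect deliberation
a^{*} = \arg\max_a \sum_s P(s \mid e)\, U(a, s)

% Solomonoff induction: Occam's razor, relative to universal machine U
M(x) = \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}
```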

Two other related points are discussed in this presentation: the idea that a digital computer is a nearly deterministic environment, which makes safety engineering easier for the stages before the AI is trying to influence the environment outside the computer, and the idea that you can design an AI in such a way that you can tell what goal it will at least try to achieve even if you don't know what it will do to achieve that goal. Presumably, the better your formal understanding of what it would mean to "at least try to achieve a goal", the better you would be at spotting and designing to handle situations that might make a given AI start trying to do something else.

(Also: Can you offer some feedback as to what features of the site would have helped you sooner be aware that there were arguments behind the positions that you felt were being asserted blindly in a vacuum? The "things can be surprisingly formalizable, here are some examples" argument can be found in lukeprog's "Open Problems Related to the Singularity" draft and the later "So You Want to Save the World", though the argument is very short and hard to recognize the significance of if you don't already know most of the mathematical formalisms mentioned. A backup "you shouldn't just assume that there's no way to make this work" argument is in "Artificial Intelligence as a Positive and Negative Factor in Global Risk", pp 12-13.)

what will prevent them from becoming "bad guys" when they wield this much power

That's a problem where successful/practically applicable formalizations are harder to hope for, so it's been harder for people to find things to say about it that pass the threshold of being plausible conceptual progress instead of being noisy verbal flailing. See the related "How can we ensure that a Friendly AI team will be sane enough?". But it's not like people aren't thinking about the problem.

Comment author: michaelcurzi 16 November 2012 10:58:23PM 1 point [-]

People pursuing a positive Singularity, with the right intentions, who understand the gravity of the problem, take it seriously, and do it on behalf of humanity rather than some smaller group.

I haven't offered a rigorous definition, and I'm not going to, but I think you know what I mean.

Comment author: Steve_Rayhawk 17 November 2012 01:40:37AM *  17 points [-]

you know what I mean.

Right, but this is a public-facing post. A lot of readers might not know why you could think it was obvious that "good guys" would imply things like information security, concern for Friendliness so-named, etc., and they might think that the intuition you mean to evoke with a vague affect-laden term like "good guys" is just the same argument-disdaining groupthink that would be implied if they saw it on any other site.

To prevent this impression, if you're going to use the term "good guys", then at or before the place where you first use it, you should probably put an explanation, like

(I.e. people who are familiar with the kind of thinking that can generate arguments like those in "The Detached Lever Fallacy", "Fake Utility Functions" and the posts leading up to it, "Anthropomorphic Optimism" and "Contaminated by Optimism", "Value is Fragile" and the posts leading up to it, and the "Envisioning perfection" and "Beyond the adversarial attitude" discussions in Creating Friendly AI or most of the philosophical discussion in Coherent Extrapolated Volition, and who understand what it means to be dealing with a technology that might be able to bootstrap to the singleton level of power that could truly engineer a "forever" of the "a boot stamping on a human face — forever" kind.)

In response to Value Loading
Comment author: Steve_Rayhawk 23 October 2012 12:32:22PM 5 points [-]

See also "Acting Rationally with Incomplete Utility Information" by Urszula Chajewska, 2002.

Comment author: lukeprog 14 May 2012 10:07:06AM 65 points [-]

I don't think this response supports your claim that these improvements "would not and could not have happened without more funding than the level of previous years."

I know your comment is very brief because you're busy at minicamp, but I'll reply to what you wrote, anyway: Someone of decent rationality doesn't just "try things until something works." Moreover, many of the things on the list of recent improvements don't require an Amy, a Luke, or a Louie.

I don't even have past management experience. As you may recall, I had significant ambiguity aversion about the prospect of being made Executive Director, but as it turned out, the solution to almost every problem X has been (1) read what the experts say about how to solve X, (2) consult with people who care about your mission and have solved X before, and (3) do what they say.

When I was made Executive Director and phoned our Advisors, most of them said "Oh, how nice to hear from you! Nobody from SingInst has ever asked me for advice before!"

That is the kind of thing that makes me want to say that SingInst has "tested every method except the method of trying."

Donor database, strategic plan, staff worklogs, bringing staff together, expenses tracking, funds monitoring, basic management, best-practices accounting/bookkeeping... these are all literally from the Nonprofits for Dummies book.

Maybe these things weren't done for 11 years because SI's decision-makers did make good plans but failed to execute them due to the usual defeaters. But that's not the history I've heard, except that some funds monitoring was insisted upon after the large theft, and a donor database was sorta-kinda-not-really attempted at one point. The history I've heard is that SI failed to make these kinds of plans in the first place, failed to ask advisors for advice, failed to read Nonprofits for Dummies, and so on.

Money wasn't the barrier to doing many of those things, it was a gap in general rationality.

I will agree, however, that what is needed now is more money. We are rapidly becoming a more robust and efficient and rational organization, stepping up our FAI team recruiting efforts, stepping up our transparency and accountability efforts, and stepping up our research efforts, and all those things cost money.

At the risk of being too harsh… When I began to intern with the Singularity Institute in April 2011, I felt uncomfortable suggesting that people donate to SingInst, because I could see it from the inside and it wasn't pretty. (And I'm not the only SIer who felt this way at the time.)

But now I do feel comfortable asking people to donate to SingInst. I'm excited about our trajectory and our team, and if we can raise enough support then we might just have a shot at winning after all.

Comment author: Steve_Rayhawk 21 October 2012 10:10:58AM *  13 points [-]

these are all literally from the Nonprofits for Dummies book. [...] The history I've heard is that SI [...] failed to read Nonprofits for Dummies,

I remember that, when Anna was managing the fellows program, she was reading books of the "for dummies" genre and trying to apply them... it's just that, as it happened, the conceptual labels she accidentally happened to give to the skill deficits she was aware of were "what it takes to manage well" (i.e. "basic management") and "what it takes to be productive", rather than "what it takes to (help) operate a nonprofit according to best practices". So those were the subjects of the books she got. (And read, and practiced.) And then, given everything else the program and the organization was trying to do, there wasn't really any cognitive space left over to effectively notice the possibility that those wouldn't be the skills that other people afterwards would complain that nobody acquired and obviously should have known to acquire. The rest of her budgeted self-improvement effort mostly went toward overcoming self-defeating emotional/social blind spots and motivated cognition. (And I remember Jasen's skill learning focus was similar, except with more of the emphasis on emotional self-awareness and less on management.)

failed to ask advisors for advice,

I remember Anna went out of her way to get advice from people who she already knew, who she knew to be better than her at various aspects of personal or professional functioning. And she had long conversations with supporters who she came into contact with for some other reasons; for those who had executive experience, I expect she would have discussed her understanding of SIAI's current strategies with them and listened to their suggestions. But I don't know how much she went out of her way to find people she didn't already have reasonably reliable positive contact with, to get advice from them.

I don't know much about the reasoning of most people not connected with the fellows program about the skills or knowledge they needed. I think Vassar was mostly relying on skills tested during earlier business experience, and otherwise was mostly preoccupied with the general crisis of figuring out how to quickly-enough get around the various hugely-saliently-discrepant-seeming-to-him psychological barriers that were causing everyone inside and outside the organization to continue unthinkingly shooting themselves in the feet with respect to this outside-evolutionary-context-problem of existential risk mitigation. For the "everyone outside's psychological barriers" side of that, he was at least successful enough to keep SIAI's public image on track to trigger people like David Chalmers and Marcus Hutter into meaningful contributions to and participation in a nascent Singularity-studies academic discourse. I don't have a good idea what else was on his mind as something he needed to put effort into figuring out how to do, in what proportions occupying what kinds of subjective effort budgets, except that in total it was enough to put him on the threshold of burnout. Non-profit best practices apparently wasn't one of those things though.

But the proper approach to retrospective judgement is generally a confusing question.

the kind of thing that makes me want to say [...]

The general pattern, at least post-2008, may have been one where the people who could have been aware of problems felt too metacognitively exhausted and distracted by other problems to think about learning what to do about them, and hoped that someone else with more comparative advantage would catch them, or that the consequences wouldn't be bigger than those of the other fires they were trying to put out.

strategic plan [...] SI failed to make these kinds of plans in the first place,

There were also several attempts at building parts of a strategy document or strategic plan, which together took probably 400-1800 hours. In each case, the people involved ended up determining, from how long it was taking, that, despite reasonable-seeming initial expectations, it wasn't on track to possibly become a finished presentable product soon enough to justify the effort. The practical effect of these efforts was instead mostly just a hard-to-communicate cultural shared understanding of the strategic situation and options -- how different immediate projects, forms of investment, or conditions in the world might feed into each other on different timescales.

expenses tracking, funds monitoring [...] some funds monitoring was insisted upon after the large theft

There was an accountant (who herself already cost like $33k/yr as the CFO, despite being split three ways with two other nonprofits) who would have been the one informally expected to have been monitoring for that sort of thing, and to have told someone about it if she saw something, out of the like three paid administrative slots at the time... well, yeah, that didn't happen.

I agree with a paraphrase of John Maxwell's characterization: "I'd rather hear Eliezer say 'thanks for funding us until we stumbled across some employees who are good at defeating their akrasia and [had one of the names of the things they were aware they were supposed to] care about [happen to be "]organizational best practices["]', because this seems like a better depiction of what actually happened." Note that this was most of the purpose of the Fellows program in the first place -- to create an environment where people could be introduced to the necessary arguments/ideas/culture and to help sort/develop those people into useful roles, including replacing existing management, since everyone knew there were people who would be better at their job than they were and wished such a person could be convinced to do it instead.
