lukeprog comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 points. Post author: HoldenKarnofsky 11 May 2012 04:31AM

Comment author: lukeprog 11 May 2012 08:13:20PM *  18 points [-]

This topic is something I've been thinking about lately. Do SIers tend to have superior general rationality, or do we merely escape a few particular biases? Are we good at rationality, or just good at "far mode" rationality (aka philosophy)? Are we good at epistemic but not instrumental rationality? (Keep in mind, though, that rationality is only a ceteris paribus predictor of success.)

Or, pick a more specific comparison. Do SIers tend to be better at general rationality than someone who can keep a small business running for 5 years? Maybe the tight feedback loops of running a small business are better rationality training than "debiasing interventions" can hope to be.

Of course, different people are more or less rational in different domains, at different times, in different environments.

This isn't an idle question about labels. My estimate of the scope and level of people's rationality in part determines how much I update from their stated opinion on something. How much evidence for Hypothesis X (about organizational development) is it when Eliezer gives me his opinion on the matter, as opposed to when Louie gives me his opinion on the matter? When Person B proposes to take on a totally new kind of project, I think their general rationality is a predictor of success — so, what is their level of general rationality?

Comment author: Bugmaster 11 May 2012 10:49:28PM 2 points [-]

Are we good at epistemic but not instrumental rationality?

Holden implies (and I agree with him) that there's very little evidence at the moment to suggest that SI is good at instrumental rationality. As for epistemic rationality, how would we know? Is there some objective way to measure it? I personally happen to believe that if a person seems to take it as a given that he's great at epistemic rationality, this fact should count as evidence (however circumstantial) against him being great at epistemic rationality... but that's just me.

Comment author: TheOtherDave 11 May 2012 09:10:55PM 1 point [-]

If you accept that your estimate of someone's "rationality" should depend on the domain, the environment, the time, the context, etc... and what you want to do is make reliable estimates of the reliability of their opinion, their chances of success, etc... it seems to follow that you should be looking for comparisons within a relevant domain, environment, etc.

That is, if you want to get opinions about hypothesis X about organizational development that serve as significant evidence, it seems the thing to do is to find someone who knows a lot about organizational development -- ideally, someone who has been successful at developing organizations -- and consult their opinions. How generally rational they are might be very relevant causally, or it might not, but is in either case screened off by their domain competence... and their domain competence is easier to measure than their general rationality.

So is their general rationality worth devoting resources to determining?

It seems this only makes sense if you have already (e.g.) decided to ask Eliezer and Louie for their advice, whether it's good evidence or not, and now you need to know how much evidence it is, and you expect the correct answer is different from the answer you'd get by applying the metrics you know about (e.g., domain familiarity and previously demonstrated relevant expertise).

Comment author: lukeprog 11 May 2012 09:55:52PM 3 points [-]

I do spend a fair amount of time talking to domain experts outside of SI. The trouble is that the question of what we should do about thing X doesn't just depend on domain competence but also on thousands of details about the inner workings of SI and our mission that I cannot communicate to domain experts outside SI, but which Eliezer and Louie already possess.

Comment author: TheOtherDave 11 May 2012 10:14:49PM 4 points [-]

So it seems you have a problem in two domains (organizational development + SI internals) and different domain experts in both domains (outside domain experts + Eliezer/Louie), and need some way of cross-linking the two groups' expertise to get a coherent recommendation, and the brute-force solutions (e.g. get them all in a room together, or bring one group up to speed on the other's domain) are too expensive to be worth it. (Well, assuming the obstacle isn't that the details need to be kept secret, but simply that expecting an outsider to come up to speed on all of SI's local potentially relevant trivia isn't practical.)

Yes?

Yeah, that can be a problem.

In that position, for serious questions I would probably ask E/L for their recommendations and a list of the most relevant details that informed that decision, then go to outside experts with a summary of the competing recommendations and an expanded version of that list and ask for their input. If there's convergence, great. If there's divergence, iterate.

This is still an expensive approach, though, so I can see where a cheaper approximation for less important questions is worth having.

Comment author: lukeprog 11 May 2012 10:18:53PM 2 points [-]

Yes to all this.

Comment author: siodine 11 May 2012 11:08:47PM -1 points [-]

In the world in which a varied group of intelligent and especially rational people are organizing to literally save humanity, I don't see the relatively trivial, but important, improvements you've made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you've made.

I mean, the question this group should be asking themselves is "how can we best alter the future so as to navigate towards FAI?" So, how did they apparently miss something like opportunity cost? Why, for instance, have their salaries increased when they could've been using it to improve the foundation of their cause from which everything else follows?

(Granted, I don't know the history and inner workings of the SI, and so I could be missing some very significant and immovable hurdles, but I don't see that as very likely; at least, not as likely as Holden's scenario.)

Comment author: lukeprog 11 May 2012 11:18:25PM 4 points [-]

I don't see the relatively trivial, but important, improvements you've made in a short period of time being made because they were made years ago. And I thought that already accounting for the points you've made.

I don't know what these sentences mean.

So, how did they apparently miss something like opportunity cost? Why, for instance, have their salaries increased when they could've been using it to improve the foundation of their cause from which everything else follows?

Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts. Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.

Comment author: siodine 12 May 2012 12:01:50AM *  3 points [-]

I don't know what these sentences mean.

  • People are more rational in different domains, environments, and so on.
  • The people at SI may have poor instrumental rationality while being adept at epistemic rationality.
  • Being rational doesn't necessarily mean being successful.

I accept all those points, and yet I still see the Singularity Institute having made the improvements that you've made since being hired before you were hired if they have superior general rationality. That is, you wouldn't have that list of relatively trivial things to brag about because someone else would have recognized the items on that list as important and got them done somehow (ignore any negative connotations--they're not intended).

For instance, I don't see a varied group of people with superior general rationality not discovering or just not outsourcing work they don't have a comparative advantage in (i.e., what you've done). That doesn't look like just a failure in instrumental rationality, or just rationality operating on a different kind of utility function, or just a lack of domain specific knowledge.

The excuses available to a person acting in a way that's non-traditionally rational are less convincing when you apply them to a group.

Actually, salary increases help with opportunity cost. At very low salaries, SI staff ends up spending lots of time and energy on general life cost-saving measures that distract us from working on x-risk reduction. And our salaries are generally still pretty low. I have less than $6k in my bank accounts.

No, I get that. But that still doesn't explain away the higher salaries like EY's 80k/year and its past upwards trend. I mean, these higher paid people are the most committed to the cause, right? I don't see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being superior in general rationality. It's like a homeless person desperately in want of shelter trying to save enough for an apartment and yet buying meals at some restaurant.

Outsourcing most tasks to remote collaborators also helps a lot with opportunity cost.

That's the point I was making, why wasn't that done earlier? How did these people apparently miss out on opportunity cost? (And I'm just using outsourcing as an example because it was one of the most glaring changes you made that I think should have probably been made much earlier.)

Comment author: lukeprog 12 May 2012 12:20:39AM 4 points [-]

Right, I think we're saying the same thing, here: the availability of so much low-hanging fruit in organizational development as late as Sept. 2011 is some evidence against the general rationality of SIers. Eliezer seems to want to say it was all a matter of funding, but that doesn't make sense to me.

Now, on this:

I don't see those people taking a higher salary when they could use that money for more outsourcing, or another employee, or better employees, if they want to literally save humanity while being super in general rationality.

For some reason I'm having a hard time parsing your sentences for unambiguous meaning, but if I may attempt to rephrase: "SIers wouldn't take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things." Is that what you're saying?

Comment author: Rain 12 May 2012 12:29:53AM *  3 points [-]

I've heard the Bay Area is expensive, and previously pointed out that Eliezer earns more than I do, despite me being in the top 10 SI donors.

I don't mind, though, <joke> as has been pointed out, even thinking about muffins might be a question invoking existential risk calculations. </joke>

Comment author: lukeprog 12 May 2012 12:39:54AM *  4 points [-]

despite me being in the top 10 SI donors

...and much beloved for it.

Yes, the Bay Area is expensive. We've considered relocating, but on the other hand the (by far) best two places for meeting our needs in HR and in physically meeting with VIPs are SF and NYC, and if anything NYC is more expensive than the Bay Area. We cut living expenses where we can: most of us are just renting individual rooms.

Also, of course, it's not like the Board could decide we should relocate to a charter city in Honduras and then all our staff would be able to just up and relocate. :)

(Rain may know all this; I'm posting it for others' benefit.)

Comment author: komponisto 12 May 2012 06:58:03PM 12 points [-]

I think it's crucial that SI stay in the Bay Area. Being in a high-status place signals that the cause is important. If you think you're not taken seriously enough now, imagine if you were in Honduras...

Not to mention that HR is without doubt the single most important asset for SI. (Which is why it would probably be a good idea to pay more than the minimum cost of living.)

Comment author: TheOtherDave 12 May 2012 01:31:59AM 2 points [-]

Out of curiosity only: what were the most significant factors that led you to reject telepresence options?

Comment author: David_Gerard 12 May 2012 06:02:44PM *  6 points [-]

FWIW, Wikimedia moved from Florida to San Francisco precisely for the immense value of being at the centre of things instead of the middle of nowhere (and yes, Tampa is the middle of nowhere for these purposes, even though it still has the primary data centre). Even paying local charity scale rather than commercial scale (there's a sort of cycle where WMF hires brilliant kids, they do a few years working at charity scale then go to Facebook/Google/etc for gobs of cash), being in the centre of things gets them staff and contacts they just couldn't get if they were still in Tampa. And yes, the question came up there pretty much the same as it's coming up here: why be there instead of remote? Because so much comes with being where things are actually happening, even if it doesn't look directly related to your mission (educational charity, AI research institute).

Comment author: komponisto 12 May 2012 07:00:32PM 0 points [-]

FWIW, Wikimedia moved from Florida to San Francisco

I didn't know this, but I'm happy to hear it.

Comment author: lukeprog 12 May 2012 01:56:50AM 6 points [-]

In our experience, monkeys don't work that way. It sounds like it should work, and then it just... doesn't. Of course we do lots of Skyping, but regular human contact turns out to be pretty important.

Comment author: TheOtherDave 12 May 2012 02:04:01AM 8 points [-]

(nods) Yeah, that's been my experience too, though I've often suspected that companies like Google probably have a lot of research on the subject lying around that might be informative.

Some friends of mine did some experimenting along these lines when doing distributed software development (in both senses) and were somewhat startled to realize that Dark Age of Camelot worked better for them as a professional conferencing tool than any of the professional conferencing tools their company had. They didn't mention this to their management.

Comment author: HoverHell 13 May 2012 08:16:45AM *  1 point [-]

It does something to the motivation, yes (and you can probably speculate on “what exactly” just as well as I can). Which means that for those not in need of any additional motivation, remote conferencing works just as well; but that's a very rare occasion in the first place.

Comment author: siodine 12 May 2012 12:34:35AM 0 points [-]

some evidence

Enough for you to agree with Holden on that point?

"SIers wouldn't take any salaries higher than (say) $70k/yr if they were truly committed to the cause and good in general rationality, because they would instead use that money to accomplish other things." Is that what you're saying?

Yes, but I wouldn't set a limit at a specific salary range; I'd expect them to give as much as they optimally could, because I assume they're more concerned with the cause than the money. (re the 70k/yr mention: I'd be surprised if that was anywhere near optimal)

Comment author: lukeprog 12 May 2012 12:46:18AM 2 points [-]

Enough for you to agree with Holden on that point?

Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.

Yes, but I wouldn't set a limit at a specific salary range; I'd expect them to give as much as they optimally could, because I assume they're more concerned with the cause than the money. (re the 70k/yr mention: I'd be surprised if that was anywhere near optimal)

I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.

Comment author: siodine 12 May 2012 01:37:39AM 4 points [-]

Probably not. He and I continue to dialogue in private about the point, in part to find the source of our disagreement.

So, if you disagree with Holden, I assume you think SIers have superior general rationality: why?

And I'm confident SIers will score well on rationality tests, but that looks like specialized rationality. I.e., you can avoid a bias but still fail to achieve your goals. To me, the SI approach seems poorly leveraged. I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? YOU WANT TO WIN?! Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.

I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.

That isn't as bad as I thought it was; I don't know if that's optimal, but it seems at least reasonable.

Comment author: lukeprog 12 May 2012 01:47:10AM 0 points [-]

I assume you think SIers have superior general rationality: why?

I'll avoid double-labor on this and wait to reply until my conversation with Holden is done.

I expect more significant returns from simple knowledge acquisition. E.g., you want to become successful? ...Great, read these textbooks on microeconomics, finance, and business. I think this is more the approach you take anyway.

Right. Exercise the neglected virtue of scholarship and all that.

Comment author: siodine 12 May 2012 01:52:51AM 1 point [-]

Right. Exercise the neglected virtue of scholarship and all that.

It's not that easy to dismiss; if it's as poorly leveraged as it looks relative to other approaches, then you have little reason to be spreading and teaching SI's brand of specialized rationality (except perhaps for income).

Comment author: komponisto 12 May 2012 02:04:06AM *  2 points [-]

(Disclaimer: the following comment should not be taken to imply that I myself have concluded that SI staff salaries should be reduced.)

I believe everyone except Eliezer currently makes between $42k/yr and $48k/yr — pretty low for the cost of living in the Bay Area.

I'll grant you that it's pretty low relative to other Bay Area salaries. But as for the actual cost of living, I'm less sure.

I'm not fortunate enough to be a Bay Area resident myself, but here is what the internet tells me:

  • After taxes, a $48,000/yr gross salary in California equates to a net of around $3000/month.

  • A 1-bedroom apartment in Berkeley and nearby places can be rented for around $1500/month. (Presumably, this is the category of expense where most of the geography-dependent high cost of living is contained.)

  • If one assumes an average spending of $20/day on food (typically enough to have at least one of one's daily meals at a restaurant), that comes out to about $600/month.

  • That leaves around $900/month for miscellaneous expenses, which seems pretty comfortable for a young person with no dependents.

So, if these numbers are right, it seems that this salary range is actually right about what the cost of living is. Of course, this calculation specifically does not include costs relating to signaling (via things such as choices of housing, clothing, transportation, etc.) that one has more money than necessary to live (and therefore isn't low-status). Depending on the nature of their job, certain SI employees may need, or at least find it distinctly advantageous for their particular duties, to engage in such signaling.
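The back-of-the-envelope budget above reduces to a few lines of arithmetic. As a sketch only (all inputs are the comment's own internet-sourced estimates, not verified data):

```python
# Rough check of the monthly budget figures quoted above.
# All numbers are the comment's own estimates, not verified data.

monthly_net = 3000   # approx. net from a $48k/yr gross salary in California
rent = 1500          # 1-bedroom apartment in or near Berkeley
food = 20 * 30       # $20/day over a ~30-day month, i.e. $600

# What remains each month for miscellaneous expenses
misc = monthly_net - rent - food
print(misc)  # 900
```

Which matches the ~$900/month figure for miscellaneous expenses.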

Comment author: Rain 12 May 2012 12:10:54AM *  2 points [-]

To summarize and rephrase: in a "counterfactual" world where SI was actually rational, they would have found all these solutions and done all these things long ago.

Comment author: komponisto 12 May 2012 12:47:08AM *  2 points [-]

Many of your sentences are confusing because you repeatedly use the locution "I see X" / "I don't see X" in a nonstandard way, apparently to mean "X would have happened" / "X would not have happened".

This is not the way that phrase is usually understood. Normally, "I see X" is taken to mean either "I observe X" or "I predict X". For example I might say (if I were so inclined):

Unlike you, I see a lot of rationality being demonstrated by SI employees.

meaning that I believe (from my observation) they are in fact being rational. Or, I might say:

I don't see Luke quitting his job at SI tomorrow to become a punk rocker.

meaning that I don't predict that will happen. But I would not generally say:

* I don't see these people taking a higher salary.

if what I mean is "these people should/would not have taken a higher salary [if such-and-such were true]".

Comment author: siodine 12 May 2012 01:04:35AM *  2 points [-]

Oh, I see ;) Thanks. I'll definitely act on your comment, but I was using "I see X" as "I predict X"--just in the context of a possible world. E.g., I predict in the possible world in which SIers are superior in general rationality and committed to their cause, Luke wouldn't have that list of accomplishments. Or, "yet I still see the Singularity Institute having made the improvements..."

I now see that I've been using 'see' as syntactic sugar for counterfactual talk... but no more!

Comment author: komponisto 12 May 2012 01:21:01AM *  2 points [-]

I was using "I see X" as "I predict X"--just in the context of a possible world.

To get away with this, you really need, at minimum, an explicit counterfactual clause ("if", "unless", etc.) to introduce it: "In a world where SIers are superior in general rationality, I don't see Luke having that list of accomplishments."

The problem was not so much that your usage itself was logically inconceivable, but rather that it collided with the other interpretations of "I see X" in the particular contexts in which it occurred. E.g. "I don't see them taking higher salaries" sounded like you were saying that they weren't taking higher salaries. (There was an "if" clause, but it came way too late!)

Comment author: [deleted] 12 May 2012 07:19:52AM *  -2 points [-]

And our salaries are generally still pretty low.

By what measure do you figure that?

I have less than $6k in my bank accounts.

That might be informative if we knew anything about your budget, but without any sort of context it sounds purely obfuscatory. (Also, your bank account is pretty close to my annual salary, so you might want to consider what you're actually signalling here and to whom.)

Comment author: [deleted] 16 May 2012 07:23:21PM 1 point [-]

Have you considered the possibility that even higher salaries might raise productivity further?

I think we should search systematically for ways to convert money into increased productivity.