mike_hawke

Comments

I am agnostic about various dragons. Sometimes I find myself wondering how I would express my dragon agnosticism in a world where belief in dragons was prevalent and high status. I am often disturbed by the result of this exercise. It turns out that what feels like agnosticism is often sneakily biased in favor of what will make me sound better or let me avoid arguments.

This effect is strong enough and frequent enough that I don't think the agnosticism described by this post is a safe epistemic fallback for me. However, it might still be my best option in situations where I want to look good or avoid arguments.


Possibly related: 

Selective Reporting and the Tragedy of the Green Rationalists by Zack M. Davis

Kolmogorov Complicity and the Parable of Lightning by Scott Alexander

Yeah, given that Eliezer mentioned Georgism no less than 3 times in his Dath Ilan AMA, I'm pretty surprised it didn't come up even once in this post about UBI.

Personally, I wouldn't be surprised to find we already have most or all of the pieces of the true story.

  • Ricardo's law of rent + lack of LVT
  • Supply and demand for low-skill labor
  • Legal restrictions on jobs that disproportionately harm low-wage workers. For example, every single low-wage job I have had has been part-time, presumably because it wasn't worth it to give me health benefits.
  • Baumol effect?
  • People really want to eat restaurant food, and seem to underestimate (or just avoid thinking about) how much this adds up.
  • A lot of factors that today cause poverty would have simply caused death in the distant past.

That's just off the top of my head.

EDIT: Also the hedonic treadmill is such a huge effect that I would be surprised if it wasn't part of the picture. How much worse is it for your kid's tooth to get knocked out at school than to get a 1920's wisdom tooth extraction?

mike_hawke

That part is in a paragraph that starts with "My impression is...".

Fair. 

And yet I felt the discomfort before reading that particular paragraph, and I still feel it now. For me personally, the separators you included were not enough: I did indeed have to apply extra effort throughout the post to avoid over-updating on the interpretations as opposed to the hard facts.

Maybe I'm unusual and few other readers have this problem. I suspect that's not the case, but given that I don't know, I'll just say that I find this writing style to be a little too Dark Artsy and symmetrical for my comfort.

I still think this post was net good to publish, and I might end up linking it to someone if I think they're being too credulous toward Gerard. But if I do, it might be with some disclaimer along the lines of, "I think the author got a little carried away with a particular psychological story. I recommend putting in the effort to mentally separate the facts from the fun narrative."

Also, to give credit where it's due, the narrative style really was entertaining.

 

(EDIT: typos)

mike_hawke

I read as far as this part:

Because Gerard was on LessWrong when the internet splintered and polarized, he saw the whole story through the lens of LessWrong, and on an instinctive level the site became his go-to scapegoat for all that was going wrong for his vision of the internet.

And I want to make a comment before continuing to read.

I'm uncomfortable with the psychologizing here. I feel like your style is inviting me to suspend disbelief for the sake of a clean and entertaining narrative. Not that you should never do such a thing, but I think it maybe warrants some kind of disclaimer or something. If you had written this specifically for LW, instead of as a linkpost to your blog, I would be suggesting major rewrites in order to meet the standards I'm used to around here.

I wouldn't be surprised if the true psychological story was significantly different from the picture you paint here, especially if it involved real-life events, e.g. some tragedy in his family, or problems with his friends or job. Would those things have even been visible in your research?

I'll keep reading, but I'm now going to spend extra effort to maintain the right level of skepticism. None of what I've read so far contradicts my priors, but I'm going to avoid updating too hard on your interpretations (as opposed to the citations & other hard facts).

 

I am bothered that no other commenters have brought this up yet.

Point well taken that technological development and global dominance were achieved by human cultures, not individual humans. But I claim that it is obviously a case of motivated reasoning to treat this as a powerful blow against the arguments for fast takeoff. A human-level AI (able to complete any cognitive task at least as well as you) is a foom risk unless it has specific additional handicaps. These might include:
- For some reason it needs to sleep for a long time every night.
- Its progress gets periodically erased due to random misfortune or enemy action.
- It is locked into a bad strategic position, such as having no cognitive privacy from overseers.
- It can't copy itself.
- It can't gain more compute.
- It can't reliably modify itself.

I'll be pretty surprised if we get AI systems that can do any cognitive task that I can do (such as make long-term plans and spontaneously correct my own mistakes without them being pointed out to me) but that can only improve themselves very slowly. It really seems like, if I were able to easily edit my own brain, then I would be able to increase my abilities across the board, including my ability to increase my abilities.

The part about airports reminds me of "If All Stories were Written Like Science Fiction Stories" by Mark Rosenfelder: 
https://www.bzpower.com/blogs/entry/58514-if-all-stories-were-written-like-science-fiction-stories/
 

No one else has mentioned The Case Against Education by Bryan Caplan. He says that after reading and arithmetic, schooling is mostly for signaling employable traits like conscientiousness, not for learning. I think Zvi Mowshowitz and Noah Smith had some interesting discussion about this years ago. Scott Alexander supposes that another secret purpose of school is daycare. Whatever the real purposes are, they will tend to be locked into place by laws. Richard Hanania has written a bit about what he thinks families might choose instead of standard schooling if the laws were relaxed.

Without passing judgment on this, I think it should be noted that it would have seemed less out of place when the Sequences were fresh. At that time, the concept of immaterial souls and the surrounding religious memeplexes seemed to be genuinely interfering with serious discussion about minds.

However, and relatedly, there was not a lot of cooking discussion on LW in 2009, and this tag was created in 2020.

I'm out of the loop. Did Daniel Kokotajlo lose his equity or not? If the NDA is not being enforced, are there now some disclosures being made?

Thanks for the source.

I've intentionally made it difficult for myself to log into Twitter. For the benefit of others who avoid Twitter, here is the text of Kelsey's tweet thread:

I'm getting two reactions to my piece about OpenAI's departure agreements: "that's normal!" (it is not; the other leading AI labs do not have similar policies) and "how is that legal?" It may not hold up in court, but here's how it works:

OpenAI like most tech companies does salaries as a mix of equity and base salary. The equity is in the form of PPUs, 'Profit Participation Units'. You can look at a recent OpenAI offer and an explanation of PPUs here: https://t.co/t2J78V8ee4

Many people at OpenAI get more of their compensation from PPUs than from base salary. PPUs can only be sold at tender offers hosted by the company. When you join OpenAI, you sign onboarding paperwork laying all of this out.

And that onboarding paperwork says you have to sign termination paperwork with a 'general release' within sixty days of departing the company. If you don't do it within 60 days, your units are cancelled. No one I spoke to at OpenAI gave this little line much thought.

And yes this is talking about vested units, because a separate clause clarifies that unvested units just transfer back to the control of OpenAI when an employee undergoes a termination event (which is normal).

There's a common legal definition of a general release, and it's just a waiver of claims against each other. Even someone who read the contract closely might be assuming they will only have to sign such a waiver of claims.

But when you actually quit, the 'general release'? It's a long, hardnosed, legally aggressive contract that includes a confidentiality agreement which covers the release itself, as well as arbitration, nonsolicitation and nondisparagement and broad 'noninterference' agreement.

And if you don't sign within sixty days your units are gone. And it gets worse - because OpenAI can also deny you access to the annual events that are the only way to sell your vested PPUs at their discretion, making ex-employees constantly worried they'll be shut out.

Finally, I want to make it clear that I contacted OpenAI in the course of reporting this story. So did my colleague Sigal Samuel. They had every opportunity to reach out to the ex-employees they'd pressured into silence and say this was a misunderstanding. I hope they do.

Even acknowledging that the NDA exists is a violation of it.

This sticks out pretty sharply to me.

Was this explained to the employees during the hiring process? What kind of precedent is there for this kind of NDA? 
