LessWrong profile: Mass_Driver

Sequences: An Activist View of AI Governance

Comments (sorted by newest)
Warning Aliens About the Dangerous AI We Might Create
Mass_Driver · 5d

Yeah, but have you done a back-of-the-envelope calculation here, or has anyone else? What size target could we hit in the Andromeda galaxy using, e.g., $50 million at our current tech levels, and how long could we transmit for? How large a receiver would that target need to have pointing toward us in order to receive the message with anything like reasonable fidelity? If our message is focused no more tightly than on "a star," would the receivers need an antenna the size of a solar system to pick it up? If not, why not?
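To make that concrete, here is a rough sketch of the kind of link-budget arithmetic I have in mind. Every number below (transmitter power, dish size, frequency, receiver temperature) is an illustrative guess rather than a claim about any actual proposal, so treat the output as an order-of-magnitude gut check, not an answer.

```python
# Back-of-the-envelope link budget for a narrowband radio message to Andromeda.
# All hardware numbers are illustrative assumptions, not a real design.
import math

k_B = 1.381e-23           # Boltzmann constant, J/K
LY = 9.461e15             # one light-year in meters

# Transmitter: an Arecibo-class dish (assumed)
P_tx = 1e6                # transmit power, W
D_tx = 300.0              # dish diameter, m
freq = 2.4e9              # carrier frequency, Hz
lam = 3e8 / freq          # wavelength, m
eta = 0.5                 # aperture efficiency (guess)
G_tx = eta * (math.pi * D_tx / lam) ** 2      # transmit antenna gain

# Path: distance to the Andromeda Galaxy
d = 2.5e6 * LY            # ~2.5 million light-years, in meters
flux = P_tx * G_tx / (4 * math.pi * d ** 2)   # W per m^2 arriving at the receiver

# Receiver: how much collecting area for SNR = 1 in a 1 Hz channel?
T_sys = 20.0              # receiver system temperature, K (optimistic)
B = 1.0                   # channel bandwidth, Hz
noise = k_B * T_sys * B   # noise power in that channel, W
A_req = noise / flux      # required effective collecting area, m^2
D_req = 2 * math.sqrt(A_req / math.pi)        # equivalent dish diameter, m

print(f"flux at Andromeda: {flux:.2e} W/m^2")
print(f"area for SNR=1 in a 1 Hz channel: {A_req:.2e} m^2 (roughly a {D_req/1e3:.0f} km dish)")
```

Longer integration times and a bigger transmit dish shrink the required receiver, and someone who actually knows radio astronomy should redo this properly; I mainly want to know whether the answer comes out "kilometers" or "solar-system-sized."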

I'm not sure codebreaking is a reasonable test of a supposedly universal language. A coded message has some content that you know would make sense if the code can be broken. By contrast, a CosmicOS message might or might not have any content that anyone else would be able to absorb. Consider the difference between, e.g., a Chinese transmission sent in the clear, and an English transmission sent using sophisticated encryption. If you're an English speaker who's never been exposed to even the concept of a logographic writing system, then it's not obvious to me that it will be easier to make sense of the plaintext Chinese message than the encrypted English message. I think we should test that hypothesis before we invest in an enormous transmitter.

I'm not sure what your comment "if we will start discussing it, we will not reach consensus for many years" implies about your interest in this conversation. If you don't see a discussion on this topic as valuable, that's fine, and I won't take up any more of your time.

Warning Aliens About the Dangerous AI We Might Create
Mass_Driver · 5d

I think this is a promising strategy that deserves more investigation. Your game theory analysis of dark forest-type situations is particularly compelling; thank you for sharing it. I have two main questions: (1) to what extent is this technically feasible, and (2) how politically costly would the weirdness of the proposal be?

For technical feasibility, I was very surprised to hear you suggest targeting the Andromeda Galaxy. I agree that in principle the nearest stars are more likely to already have whatever data they might want about Earth, but I think of "the nearest stars" as being within 50 light-years or so, not as including the entire Milky Way. Can you explain why you think we'd be able to send any message at all to the Andromeda Galaxy in the next few years, or why an alien civilization 1,000 light-years away in a different part of the Milky Way would most likely be able to passively gather enough data on Earth to draw their own conclusions about us without the need for a warning?

The other part of the technical feasibility question is whether constructed languages like CosmicOS actually work. Has anyone done testing to see whether, e.g., physicists with no prior exposure to the language and no reference guides are able to successfully decipher messages in CosmicOS?

Politically, I'd like to see focus groups and polling on the proposal. Does the general American public approve or disapprove of such warnings? Do they think it's important or unimportant? What about astronomers, or Congressional staffers, or NASA employees? Yes, this is a weird idea, but the details could matter for whether it's so weird that it risks burning significant credibility for the AI safety movement as a whole.

Four Questions to Refine Your Policy Proposal
Mass_Driver · 1mo

I am literally a tort litigator in the United States! I worked for several years as a personal injury and product safety litigator. 

Although the American legal system holds out "use reasonable care" as its official standard, in practice this gets further defined and specified by rules found in OSHA regulations, IEEE standards, or whatever the professional code of practice is for the relevant industry. As a plaintiff's attorney, if you can't point to a specific rule or norm that the defendant broke, you're extremely unlikely to recover any damages. Violation of a rule or norm is taken as very persuasive evidence that the defendant didn't use reasonable care -- and, conversely, if you can't point to any crisp rule violation, that's usually taken as very persuasive evidence that the defendant did use reasonable care.

MAGA speakers at NatCon were mostly against AI
Mass_Driver · 2mo

Agreed; well said.

MAGA speakers at NatCon were mostly against AI
Mass_Driver · 2mo

Yes, that's exactly right, we do. That's what it means to be an ally rather than a friend. America allied with the Soviet Union in World War 2; this is no different. When someone earnestly offers to help you literally save the world, you hold your nose and shake their hand.

Mainstream Grantmaking Expertise (Post 7 of 7 on AI Governance)
Mass_Driver · 4mo

Fair enough; at CAIP, we certainly paid much less than people could make in the private sector, for essentially that reason. It's good for nonprofit staff to have some skin in the game.

My suggestion to consider more competitive wages is mostly a response to Oliver suggesting that LTFF has had a serious and long-term challenge in hiring as many people as they would need to fully accomplish their mission.

Mikhail Samin's Shortform
Mass_Driver · 4mo

Part of the distinction I try to draw in my sequence is that the median person at CSET or RAND is not "in politics" at all. They're mostly researchers at think tanks, writing academic-style papers about what kinds of policies would be theoretically good for someone to adopt. Their work is somewhat more applied/concrete than the work of, e.g., a median political science professor at a state university, but not by a wide margin.

If you want political experts -- and you should -- you have to go talk to people who have worked on political campaigns, served in the government, or led advocacy organizations whose mission is to convince specific politicians to do specific things. That is not the same thing as being a policy expert.

For what it's worth, I do think OpenPhil and other large EA grantmakers should be hiring many more people. Hiring any one person too quickly is usually a mistake, but making sure that you have several job openings posted at any given time (each of which you vet carefully) is not.

Mikhail Samin's Shortform
Mass_Driver · 4mo

I'm the author of the LW post being signal-boosted. I sincerely appreciate Oliver's engagement with these critiques, and I also firmly disagree with his blanket dismissal of the value of "standard practices." 

As I argue in the 7th post in the linked sequence, I think OpenPhil and others are leaving serious value on the table by not adopting some of the standard grant evaluation practices used at other philanthropies, and I don't think they can reasonably claim to have considered and rejected them -- instead the evidence strongly suggests that they're (a) mostly unaware of these practices due to not having brought in enough people with mainstream expertise, and (b) quickly deciding that anything that seems unfamiliar or uncomfortable "doesn't make sense" and can therefore be safely ignored. 

We have a lot of very smart people in the movement, as Oliver correctly points out, and general intelligence can get you pretty far in life, but Washington, DC is an intensely competitive environment that's full of other very smart people. If you try to compete here with your wits alone while not understanding how politics works, you're almost certainly going to lose.

Mainstream Grantmaking Expertise (Post 7 of 7 on AI Governance)
Mass_Driver · 5mo

These are important points, and I'm glad you're bringing them up!

  1. Is spending a lot of time to assess new grantmakers merely distressing (but still net positive in terms of extending your total grantmaking ability), or is it actually causing you to lose time in expectation? In other words, if you spend 40 hours recruiting and assessing candidates, does one of those candidates then go on to do 100+ hours of useful grantmaking work? Or is it more like 20 hours of useful grantmaking work?
  2. How closely connected is the shortage of people willing to be full-time grantmakers with an expectation that grantmakers will already be fluent in technical AI safety when they start work? I could imagine that people who could otherwise be working for ARC or Anthropic would be very difficult to lure away full-time, but there's an entire field of mainstream philanthropic foundations that mostly have full-time staff working on their grants. Could we hire some of those grantmakers full time to lend their general grantmaking expertise, evaluating things like budgets and org charts and performance targets, while relying on part-time advisors to provide technical expertise about the details of AI safety research? If not, why not?
  3. What do you see as the most likely or most important negative consequence if grantmakers try to offer highly competitive salaries? Is this something that your funders have literally refused to pay for, or are you worried about being criticized for it (by whom? what consequences would follow from that criticism?), or does it just generally increase team members' anxiety levels, or what exactly is the downside? I have definitely seen some of this paranoia you're talking about, so it's a real problem, but I wonder if it's worth accepting the costs associated with paying highly competitive salaries in order to attract more and better people. It's also worth noting that 'highly competitive nonprofit salaries' are still lower than 'highly competitive tech salaries,' probably by a factor of about 3. You can get top-notch grantmaking talent for much less than the price of top-notch computer engineering talent.
I'm scared.
Mass_Driver · 5mo

I mostly just got older and therefore calmer. I've crossed off most of the highest-priority items from my bucket list, so while I would prefer to continue living for a good long while, my personal death and/or defeat no longer seems so catastrophically bad. To cope with the prospect of losing civilization or humanity, I read a lot of history, sci-fi, anthropology, and other works that help me zoom out and see that there has already been great loss. While I do want to spend my resources fighting to reduce the risk of that loss, it's not something I need to spend a lot of time or energy personally suffering over, especially not in advance. Worry is interest paid on trouble before it's due.

Posts

Four Questions to Refine Your Policy Proposal (10 points, 1mo, 2 comments)
Mainstream Grantmaking Expertise (Post 7 of 7 on AI Governance) (56 points, 5mo, 7 comments)
Political Funding Expertise (Post 6 of 7 on AI Governance) (59 points, 5mo, 4 comments)
Orphaned Policies (Post 5 of 7 on AI Governance) (70 points, 6mo, 5 comments)
Shift Resources to Advocacy Now (Post 4 of 7 on AI Governance) (60 points, 6mo, 18 comments)
We're Not Advertising Enough (Post 3 of 7 on AI Governance) (110 points, 6mo, 10 comments)
The Need for Political Advertising (Post 2 of 7 on AI Governance) (59 points, 6mo, 2 comments)
Please Donate to CAIP (Post 1 of 7 on AI Governance) (119 points, 6mo, 20 comments)
LINK: Quora brainstorms strategies for containing AI risk (10 points, 9y, 1 comment)
Help Build a Landing Page for Existential Risk? (18 points, 10y, 32 comments)