All of tlevin's Comments + Replies

Agreed, I think people should apply a pretty strong penalty when evaluating a potential donation that has or worsens these dynamics. There are some donation opportunities that still have the "major donors won't [fully] fund it" and "I'm advantaged to evaluate it as an AIS professional" features without the "I'm personal friends with the recipient" weirdness, though -- e.g. alignment approaches or policy research/advocacy directions you find promising that Open Phil isn't currently funding and that would be executed thousands of miles away.

Depends on the direction/magnitude of the shift!

I'm currently feeling very uncertain about the relative costs and benefits of centralization in general. I used to be more into the idea of a national project that centralized domestic projects and thus reduced domestic racing dynamics (and arguably better aligned incentives), but now I'm nervous about the secrecy that this would likely entail, and think it's less clear that a non-centralized situation inevitably leads to a decisive strategic advantage for the leading project. Which is to say, even under pretty op... (read more)

4[anonymous]
Can you say more about what has contributed to this update?

It's not super clear whether from a racing perspective having an equal number of nukes is bad. I think it's genuinely messy (and depends quite sensitively on how much actors are scared of losing vs. happy about winning vs. scared of racing). 


Importantly though, once you have several thousand nukes the strategic returns to more nukes drop pretty close to zero, regardless of how many your opponents have, while if you get the scary model's weights and then don't use them to push capabilities even more, your opponent maybe gets a huge strategic advantage ... (read more)

Yeah, doing it again it works fine, but before it was just creating a long list of empty bullet points (I also have this issue in GDocs sometimes).

2habryka
Yeah, weird. I will see whether I can reproduce it somehow. It is quite annoying when it happens.

Gotcha. A few disanalogies though -- the first two specifically relate to the model theft/shared access point, while the third is true even if you had verifiable API access:

  1. Me verifying how many nukes you have doesn't mean I suddenly have that many nukes, unlike AI model capabilities (though due to compute differences, getting the weights does not mean we suddenly have the same time distance to superintelligence). 
  2. Me having more nukes only weakly enables me to develop more nukes faster, unlike AI that can automate a lot of AI R&D.
  3. This model seems to assume you have
... (read more)
2habryka
It's not super clear whether from a racing perspective having an equal number of nukes is bad. I think it's genuinely messy (and depends quite sensitively on how much actors are scared of losing vs. happy about winning vs. scared of racing). I do also currently think that the compute-component will likely be a bigger deal than the algorithmic/weights dimension, making the situation more analogous to nukes, but I do think there is a lot of uncertainty on this dimension. Yeah, totally agree that this is an argument against proliferation, and an important one. While you might not end up with additional racing dynamics, the fact that more global resources can now use the cutting edge AI system to do AI R&D is very scary. In general I think it's very hard to predict whether people will overestimate or underestimate things. I agree that literally right now countries are probably underestimating it, but an overreaction in the future also wouldn't surprise me very much (in the same way that COVID started with an underreaction, and then was followed by a massive overreaction).

In general, we should be wary of this sort of ‘make things worse in order to make things better.’ You are making all conversations of all sizes worse in order to override people’s decisions.

Glad to be included in the roundup, but two issues here.

First, it's not about overriding people's decisions; it's a collective action problem. When the room is silent and there's a single group of 8, I don't actually face a choice of a 2- or 3-person conversation; it doesn't exist! The music lowers the costs for people to split into smaller conversations, so the people ... (read more)

Also - I'm not sure I'm getting the thing where verifying that your competitor has a potentially pivotal model reduces racing?

2habryka
Same reason as knowing how many nukes your opponent has reduces racing. If you are conservative, the uncertainty in how far ahead your opponent is causes escalating races, even if you would both rather not escalate (as long as your mean is well-calibrated). E.g. let's assume you and your opponent are de-facto equally matched in the capabilities of your systems, but both have substantial uncertainty, e.g. assign 30% probability to your opponent being substantially ahead of you. Then if you think those 30% of worlds are really bad, you probably will invest a bunch more into developing your systems (which of course your opponent will observe, increase their own investment, and then you repeat). However, if you can both verify how many nukes you have, you can reach a more stable equilibrium even under more conservative assumptions. 
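A minimal toy sketch of the escalation loop described above, assuming a made-up response rule where each side scales its next investment by how much probability it puts on the rival being ahead (the `respond` rule, `p_rival_ahead`, and `caution` are all illustrative assumptions, not anything from the comment itself):

```python
# Toy model of racing under uncertainty: each actor matches the rival's
# observed investment, scaled up by the probability it assigns to the
# rival being substantially ahead and by how costly it finds losing.

def respond(observed_rival, p_rival_ahead, caution=1.5):
    # Conservative policy: hedge against the "rival is ahead" worlds.
    return observed_rival * (1 + caution * p_rival_ahead)

def simulate(p_rival_ahead, rounds=6, start=1.0):
    a = b = start
    history = []
    for _ in range(rounds):
        a, b = respond(b, p_rival_ahead), respond(a, p_rival_ahead)
        history.append((round(a, 2), round(b, 2)))
    return history

print(simulate(0.30))  # 30% credence the rival is ahead -> compounding escalation
print(simulate(0.00))  # verified parity -> investments stay flat
```

Under these assumptions, the 30%-uncertainty case compounds by roughly 45% per round while the verified-parity case stays flat, which is the qualitative point about verification stabilizing the equilibrium.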

The "how do we know if this is the most powerful model" issue is one reason I'm excited by OpenMined, who I think are working on this among other features of external access tools

2habryka
Interesting. I would have to think harder about whether this is a tractable problem. My gut says it's pretty hard to build confidence here without leaking information, but I might be wrong. 

If probability of misalignment is low, probability of human+AI coups (including e.g. countries invading each other) is high, and/or there aren't huge offense-dominant advantages to being somewhat ahead, you probably want more AGI projects, not fewer. And if you need a ton of compute to go from an AI that can do 99% of AI R&D tasks to an AI that can cause global catastrophe, then model theft is less of a factor. But the thing I'm worried about re: model theft is a scenario like this, which doesn't seem that crazy:

  • Company/country X has an AI agent that c
... (read more)
1Bogdan Ionut Cirstea
Spicy take: it might be more realistic to subtract 1 or even 2 from the numbers for the GPT generations, and also to consider that the intelligence explosion might be quite widely-distributed: https://www.lesswrong.com/posts/wr2SxQuRvcXeDBbNZ/bogdan-ionut-cirstea-s-shortform?commentId=6EFv8PAvELkFopLHy
3Nathan Helm-Burger
I expect that having a nearly-AGI-level AI, something capable of mostly automating further ML research, means the ability to rapidly find algorithmic improvements that result in:

1. Drastic reductions in training cost for an equivalently strong AI.
   - Making it seem highly likely that a new AI trained using this new architecture/method and a similar amount of compute as the current AI would be substantially more powerful (thus giving an estimate of time-to-AGI).
   - Making it possible to train a much smaller, cheaper model than the current AI with the same capabilities.
2. Speed-ups and compute-efficiency gains for inference on the current AI, and for the future cheaper versions.
3. Ability to create and deploy more capable narrow tool-AIs which seem likely to substantially shift military power when deployed to existing military hardware (e.g. better drone piloting models).
4. Ability to create and deploy more capable narrow tool-AIs which seem likely to substantially increase the economic productivity of the receiving factories.
5. Ability to rapidly innovate in non-ML technology, and thereby achieve military and economic benefits.
6. Ability to create and destroy self-replicating weapons which would kill most of humanity (e.g. bioweapons), and also to create targeted ones which would wipe out just the population of a specific country.

If I were the government of a country in which such a tech were being developed, I would really not want other countries able to steal this tech. It would not seem like a worthwhile trade-off that the thieves would then have a more accurate estimate of how far from AGI my country's company was.
2habryka
Just pressing enter twice seems to work well enough for me, though I feel like I vaguely remember some bugged state where that didn't work.

[reposting from Twitter, lightly edited/reformatted] Sometimes I think the whole policy framework for reducing catastrophic risks from AI boils down to two core requirements -- transparency and security -- for models capable of dramatically accelerating R&D.

If you have a model that could lead to general capabilities much stronger than human-level within, say, 12 months, by significantly improving subsequent training runs, the public and scientific community have a right to know this exists and to see at least a redacted safety case; and external resear... (read more)

4habryka
Is it? My sense is the race dynamics get worse if you are worried that your competitor has access to a potentially pivotal model but you can't verify that because you can't steal it. My guess is the best equilibrium is major nations being able to access competing models. Also, at least given present compute requirements, a smaller actor stealing a model is not that dangerous, since you need to invest hundreds of millions into compute to use the model for dangerous actions, which is hard to do secretly (though to what degree dangerous inference will cost a lot is something I am quite confused about). In general I am not super confident here, but I at least really don't know what the sign of hardening models against exfiltration is, with regards to race dynamics.

Seems cheap to get the info value, especially for quieter music? Can be expensive to set up a multi-room sound system, but it's probably most valuable in the room that is largest/most prone to large group formation, so maybe worth experimenting with a speaker playing some instrumental jazz or something. I do think the architecture does a fair bit of work already.

I'm confident enough in this take to write it as a PSA: playing music at medium-size-or-larger gatherings is a Chesterton's Fence situation.

It serves the very important function of reducing average conversation size: the louder the music, the more groups naturally split into smaller groups, as people on the far end develop (usually unconscious) common knowledge that it's too much effort to keep participating in the big one and that they can start a new conversation without being unduly disruptive. 

If you've ever been at a party with no music where ... (read more)

1Sinclair Chen
this is an incredible insight! from this I think we can design better nightclublike social spaces for people who don't like loud sounds (such as people in this community with signal processing issues due to autism).

One idea I have is to do it in the digital. like, VR chat silent nightclub where the sound falloff is super high. (perhaps this exists?) Or a 2D top down equivalent. I will note that Gather Town is backwards - the sound radius is so large that there is still lots of lemurs, but at the same time you can't read people's body language from across the room - instead the emotive radius from webcam / face-tracking needs to be larger than the sound radius. Or you can have a trad UI with "rooms" of very small size that you have to join to talk. tricky to get that kind of app right though since irl there's a fluid boundary between in and out of a convo and a binary demarcation would be subtly unpleasant.

Another idea is to find alternative ways to sound isolate in meatspace. Other people have talked about architectural approaches like in Lighthaven. Or imagine a party where everyone had to wear earplugs. sound falls off with the square of distance and you can calculate out how many decibels you need to deafen everyone by to get the group sizes you want. Or a party with a rule that you have to plug your ears when you aren't actively in a conversation.

Or you could lay out some hula hoops with space between them and the rule is you can only talk within the hula hoop with other people in it, and you can't listen in on someone else's hula hoop convo. have to plug your ears as you walk around. Better get real comfortable with your friends! Maybe secretly you can move the hoops around to combine into bigger groups if you are really motivated. Or with way more effort, you could similarly do a bed fort building competition.

These are very cheap experiments!
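A rough back-of-the-envelope sketch of the inverse-square point above -- all the specific numbers (speech level at 1 m, the intelligibility floor) are made-up assumptions, so treat it as illustrative only:

```python
import math

# Free-field falloff: sound intensity ~ 1/r^2, i.e. -20*log10(r) dB relative
# to the level at 1 m (about -6 dB per doubling of distance).
def speech_level_db(distance_m, level_at_1m_db=60.0):
    return level_at_1m_db - 20.0 * math.log10(distance_m)

# Attenuation (e.g. from earplugs) needed so that speech heard from beyond
# target_radius_m lands below an assumed intelligibility floor.
def required_attenuation_db(target_radius_m, floor_db=35.0, level_at_1m_db=60.0):
    return max(0.0, speech_level_db(target_radius_m, level_at_1m_db) - floor_db)

for r in (1.0, 1.5, 2.0, 3.0):
    print(f"{r} m conversation radius -> ~{required_attenuation_db(r):.0f} dB of earplug")
```

The main takeaway under these assumptions is that the required attenuation falls only slowly as the allowed conversation radius grows, since decibels are logarithmic in distance.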
2jbash
It seems to me that this claim has a lot to overcome, given that the observers could walk away at any time. Is that a goal? I've never been much of a partygoer, but if I want to have a one-on-one conversation with somebody and get to know them, a party is about the last place I'd think about going. Too many annoying interruptions. It may do that, but that doesn't necessarily mean that that's the function. You could equally well guess that its function was to exclude people who don't like loud music, since it also does that.
6Nathan Helm-Burger
Other factors also to consider:

1. Gatherings with generous alcohol drinking tend to have louder music because alcohol relaxes the inner ear muscles, resulting in less vibration being conveyed, resulting in sound dampening. So anyone drinking alcohol experiences lower sound volumes. This means that a comfortable volume for a drunk person is quite a bit higher than for a sober person. Which is a fact that can be quite unpleasant if you are the designated driver! I always try to remember to bring earplugs if I'm going to be a designated driver for a group going out drinking. If you are drinking less than the average amount of alcohol at a social gathering, chances are your opinion of the music will be that it is too loud.
2. The intent of the social gathering in some cases is to facilitate good conversations. In such a case the person managing the music (host or DJ) should be thoughtful of this, and aim for a 'coffee shop' vibe with quiet background music and places to go in the venue where the music dwindles away. In the alternate case, where the intent of the party is to facilitate social connection and/or flirtation and/or fun dancing... then the host / DJ may be actively pushing the music loud to discourage any but the most minimal conversation, trying to get people to drink alcohol and dance rather than talk, and at most have brief simple 1-1 conversations. A dance club is an example of a place deliberately aiming for this end of the spectrum.

So, in designing a social gathering, these factors are definitely something to keep in mind. What are the goals of the gathering? How much, if any, alcohol will the guests be drinking? If you have put someone in charge of controlling the music, are they on the same page about this? Or are they someone who is used to controlling music in a way appropriate to dance hall style scenarios and will default to that?

In regards to intellectual discussion focused gatherings, I do actually think that there can be a pl
5whestler
Unfortunately different people have different levels of hearing ability, so you're not setting the conversation size at the same level for all participants. If you set the volume too high, you may well be excluding some people from the space entirely. I think that people mostly put music on in these settings as a way to avoid awkward silences and to create the impression that the room is more active than it is, whilst people are arriving. If this is true, then it serves no great purpose once people have arrived and are engaged in conversation. Another important consideration is sound-damping. I've been in venues where there's no music playing and the conversations are happening between 3-5 people but everyone is shouting to be heard above the crowd, and it's incredibly difficult for someone with hearing damage to participate at all. This is primarily a result of hard, echoey walls and very few soft furnishings. I think there's something to be said for having different areas with different noise levels, allowing people to choose what they're comfortable with, and observing where they go.
8Adam Scholl
I agree music has this effect, but I think the Fence is mostly because it also hugely influences the mood of the gathering, i.e. of the type and correlatedness of people's emotional states. (Music also has some costs, although I think most of these aren't actually due to the music itself and can be avoided with proper acoustical treatment. E.g. people sometimes perceive music as too loud because the emitted volume is literally too high, but ime people often say this when the noise is actually overwhelming for other reasons, like echo (insofar as walls/floor/ceiling are near/hard/parallel), or bass traps/standing waves (such that the peak amplitude of the perceived wave is above the painfully loud limit, even though the average amplitude is fine; in the worst cases, this can result in barely being able to hear the music while simultaneously perceiving it as painfully loud!))

As having gone to Lighthaven, does this still feel marginally worth it at Lighthaven where we mostly tried to make it architecturally difficult to have larger conversations? I can see the case for music here, but like, I do think music makes it harder to talk to people (especially on the louder end) and that does seem like a substantial cost to me.

I agree that that's the most important change and that there's reason to think people in Constellation/the Bay Area in general might systematically under-attend to policy developments, but I think the most likely explanation for the responses concentrating on other things is that I explicitly asked about technical developments that I missed because I wasn't in the Bay, and the respondents generally have the additional context that I work in policy and live in DC, so responses that centered policy change would have been off-target.

Kelsey Piper now reports: "I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it."

Quick reactions:

  1. Re: over-emphasis on "how radical is my ask" vs. "what my target audience might find helpful," and generally the importance of making your case well regardless of how radical it is -- that makes sense. Though notably, the more radical your proposal is (or the more unfamiliar your threat models are), the higher the bar for explaining it well, so these do seem related.
  2. Re: more effective actors looking for small wins, I agree that it's not clear, but yeah, seems like we are likely to get into some reference class tennis here. "A lot of successful o
... (read more)
1Noosphere89
It's not just that problem, though: they will likely be biased to think that their policy is helpful for AI safety at all, and this is a point that sometimes gets forgotten. But you're correct that Akash's argument is fully general.

I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable (edit to add: whereas successfully proposing minor changes achieves hard-to-reverse progress, making ideal policy look more reasonable).

I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more... (read more)

2MP
I'm not a decel, but the way this stuff often is resolved is that there are crazy people that aren't taken seriously by the managerial class but that are very loud and make obnoxious asks. Think the evangelicals against abortion or the Columbia protestors. Then there is some elite, part of the managerial class, that makes reasonable policy claims. For abortion, this is Mitch McConnell, being disciplined over a long period of time in choosing the correct judges. For Palestine, this is Blinken and his State Department bureaucracy. The problem with decels is that theoretically they are part of the managerial class themselves. Or at least, they act like they are. They call themselves rationalists, read Eliezer and Scott Alexander, and what not. But the problem is that it's very hard for an uninterested third party to take seriously these Overton Window bogus claims from people that were supposed to be measured, part of the managerial class. You need to split. There are the crazy ones that people don't take seriously, but will move the managerial class. And there are the serious people that EA Money will send to D.C. to work at Blumenthal's office. This person needs to make small policy requests that will sabotage AI, without looking so. And slowly, you get policy wins and you can sabotage the whole effort.
[anonymous]115

Agree with lots of this– a few misc thoughts [hastily written]:

  1. I think the Overton Window frame ends up getting people to focus too much on the dimension "how radical is my ask"– in practice, things are usually much more complicated than this. In my opinion, a preferable frame is something like "who is my target audience and what might they find helpful." If you're talking to someone who makes it clear that they will not support X, it's silly to keep on talking about X. But I think the "target audience first" approach ends up helping people reason in a mor
... (read more)
4trevor
Recently, John Wentworth wrote: And I think this makes sense (e.g. Simler's Social Status: Down the Rabbit Hole which you've probably read), if you define "AI Safety" as "people who think that superintelligence is serious business or will be some day". The psych dynamic that I find helpful to point out here is Yud's Is That Your True Rejection post from ~16 years ago. A person who hears about superintelligence for the first time will often respond to their double-take at the concept by spamming random justifications for why that's not a problem (which, notably, feels like legitimate reasoning to that person, even though it's not). An AI-safety-minded person becomes wary of being effectively attacked by high-status people immediately turning into what is basically a weaponized justification machine, and develops a deep drive wanting that not to happen. Then justifications ensue for wanting that to happen less frequently in the world, because deep down humans really don't want their social status to be put at risk (via denunciation) on a regular basis like that. These sorts of deep drives are pretty opaque to us humans but their real world consequences are very strong. Something that seems more helpful than playing whack-a-mole whenever this issue comes up is having more people in AI policy putting more time into improving perspective. I don't see shorter paths to increasing the number of people-prepared-to-handle-unexpected-complexity than giving people a broader and more general thinking capacity for thoughtfully reacting to the sorts of complex curveballs that you get in the real world. Rationalist fiction like HPMOR is great for this, as well as others e.g. Three Worlds Collide, Unsong, Worth the Candle, Worm (list of top rated ones here). With the caveat, of course, that doing well in the real world is less like the bite-sized easy-to-understand events in ratfic, and more like spotting errors in the methodology section of a study or making money playing poker.

These are plausible concerns, but I don't think they match what I see as a longtime DC person.  

We know that the legislative branch is less productive in the US than it has been in any modern period, and fewer bills get passed (many different metrics for this, but one is https://www.reuters.com/graphics/USA-CONGRESS/PRODUCTIVITY/egpbabmkwvq/). Those bills that do get passed tend to be bigger swings as a result -- either a) transformative legislation (e.g., Obamacare, Trump tax cuts and COVID super-relief, Biden Inflation Reduction Act and CHIPS... (read more)

The "highly concentrated elite" issue seems like it makes it more, rather than less, surprising and noteworthy that a lack of structural checks and balances has resulted in a highly stable and (relatively) individual-rights-respecting set of policy outcomes. That is, it seems like there would thus be an especially strong case for various non-elite groups to have explicit veto power.

1Arjun Panickssery
Do non-elite groups factor into OP's analysis? I interpreted it as inter-elite veto, e.g. between the regional factions of the U.S. or between religious factions, and less about any "people who didn't go to Oxbridge and don't live in London"-type factions. I can't think of examples where a movement that wasn't elite-led destabilized and successfully destroyed a regime, but I might be cheating in the way I define "elites" or "led."

One other thought on Green in rationality: you mention the yin of scout mindset in the Deep Atheism post, and scout mindset and indeed correct Bayesianism involve a Green passivity and maybe the "respect for the Other" described here. While Blue is agnostic, in theory, between yin and yang -- whichever gives me more knowledge! -- Blue as evoked in Duncan's post and as I commonly think of it tends to lean yang: "truth-seeking," "diving down research rabbit holes," "running experiments," etc. A common failure mode of Blue-according-to-Blue is a yang that pr... (read more)

I think this post aims at an important and true thing and misses in a subtle and interesting but important way.

Namely: I don't think the important thing is that one faction gets a veto. I think it's that you just need limitations on what the government can do that ensure that it isn't too exploitative/extractive. One way of creating these kinds of limitations is creating lots of veto points and coming up with various ways to make sure that different factions hold the different veto points. But, as other commenters have noted, the UK government does not hav... (read more)

3Arjun Panickssery
The UK is also a small country, both literally, having a 4-5x smaller population than e.g. France during several centuries of Parliamentary rule before the Second Industrial Revolution, and figuratively, since they have an unusually concentrated elite that mostly goes to the same university and lives in London (whose metro area has 20% of the country's population). https://www.youtube.com/watch?app=desktop&v=dkhcNoMNHA0

(An extra-heavy “personal capacity” disclaimer for the following opinions.) Yeah, I hear you that OP doesn’t have as much public writing about our thinking here as would be ideal for this purpose, though I think the increasingly adversarial environment we’re finding ourselves in limits how transparent we can be without undermining our partners’ work (as we’ve written about previously).

The set of comms/advocacy efforts that I’m personally excited about is definitely larger than the set of comms/advocacy efforts that I think OP should fund, si... (read more)

Just being "on board with AGI worry" is so far from sufficient to taking useful actions to reduce the risk that I think epistemics and judgment is more important, especially since we're likely to get lots of evidence (one way or another) about the timelines and risks posed by AI during the term of the next president.

He has also broadly indicated that he would be hostile to the nonpartisan federal bureaucracy, e.g. by designating way more of them as presidential appointees, allowing him personally to fire and replace them. I think creating new offices that are effectively set up to regulate AI looks much more challenging in a Trump (and to some extent DeSantis) presidency than the other candidates.

Thanks for these thoughts! I agree that advocacy and communications is an important part of the story here, and I'm glad for you to have added some detail on that with your comment. I’m also sympathetic to the claim that serious thought about “ambitious comms/advocacy” is especially neglected within the community, though I think it’s far from clear that the effort that went into the policy research that identified these solutions or work on the ground in Brussels should have been shifted at the margin to the kinds of public communications you mention.

I als... (read more)

2[anonymous]
I appreciate the comment, though I think there's a lack of specificity that makes it hard to figure out where we agree/disagree (or more generally what you believe). If you want to engage further, here are some things I'd be excited to hear from you:

* What are a few specific comms/advocacy opportunities you're excited about//have funded?
* What are a few specific comms/advocacy opportunities you view as net negative//have actively decided not to fund?
* What are a few examples of hypothetical comms/advocacy opportunities you've been excited about?
* What do you think about EG Max Tegmark/FLI, Andrea Miotti/Control AI, The Future Society, the Center for AI Policy, Holly Elmore, PauseAI, and other specific individuals or groups that are engaging in AI comms or advocacy?

I think if you (and others at OP) are interested in receiving more critiques or overall feedback on your approach, one thing that would be helpful is writing up your current models/reasoning on comms/advocacy topics. In the absence of this, people simply notice that OP doesn't seem to be funding some of the main existing examples of comms/advocacy efforts, but they don't really know why, and they don't really know what kinds of comms/advocacy efforts you'd be excited about.

Thank you! Classic American mistake on my part to round these institutions to their closest US analogies.

I broadly share your prioritization of public policy over lab policy, but the more I've learned about liability, the more it seems like one or a few labs having solid RSPs/evals commitments/infosec practices/etc. would significantly shift how courts make judgments about how much of this kind of work a "reasonable person" would do to mitigate the foreseeable risks. Legal and policy teams in labs will anticipate this and thus really push for compliance with whatever the perceived industry best practice is. (Getting good liability rulings or legislation would multiply this effect.)

"We should be devoting almost all of global production..." and "we must help them increase" are only the case if:

  1. There are no other species whose product of [moral weight] * [population] is higher than bees, and
  2. Our actions only have moral relevance for beings that are currently alive.

(And, you know, total utilitarianism and such.)

4JBlack
True. For (1): Given the criteria outlined in their documents, ants likely outweigh everything else. There are tens of quadrillions of them, with a weight adjusted by credence of sentience on the order of 0.001 estimated from their evaluations of a few other types of insects. So the current ant population would account for thousands of times more moral weight than current humanity, instead of only the few dozen times more for bees.

Regarding (2): Extending moral weight to potential future populations presumably would mean that we ought to colonize the universe with something like immortal ants - or better yet, some synthetic entities that require fewer resources per unit of sentience. As it is unlikely that we are the most resource-efficient way to maintain and extend this system, we should extinguish ourselves as the project nears completion to make room for more efficient entities.
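For concreteness, here is the arithmetic behind the ant claim above, using the comment's own rough figures (the population and per-ant weight are assumptions taken from the comment, not independently sourced):

```python
ants = 2e16          # "tens of quadrillions" of ants
ant_weight = 1e-3    # sentience-credence-adjusted moral weight per ant
humans = 8e9         # current human population
human_weight = 1.0   # humans as the reference unit

ratio = (ants * ant_weight) / (humans * human_weight)
print(f"ant-to-human moral-weight ratio ~ {ratio:,.0f}x")  # ~ 2,500x, i.e. "thousands of times"
```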

Just want to plug Josh Greene's great book Moral Tribes here (disclosure: he's my former boss). Moral Tribes basically makes the same argument in different/more words: we evolved moral instincts that usually serve us pretty well, and the tricky part is realizing when we're in a situation that requires us to pull out the heavy-duty philosophical machinery.

I think the main thing stopping the accelerationists and open source enthusiasts from protesting with 10x as many people is that, whether for good reasons or not, there is much more opposition to AI progress and proliferation than support among the general public. (Admittedly this is probably less true in the Bay Area, but I would be surprised if it was even close to parity there and very surprised if it were 10x.)

2MondSemmel
Thanks, that's very helpful context. In principle, I wouldn't put too much stock in the specific numbers of a single poll, since those results depend too much on specific wording etc. But the trend in this poll is consistent enough over all questions that I'd be surprised if the questions could be massaged to get the opposite results, let alone ones 10x in favor of the accelerationist side. (That said, I didn't like the long multi-paragraph questions further down in the poll. I felt like many were phrased to favor the cautiousness side somewhat, which biases the corresponding answers. Fortunately there were also plenty of short questions without this problem.)

My response to both paragraphs is that the relevant counterfactual is "not looking into/talking about AI risks." I claim that there is at least as much social pressure from the community to take AI risk seriously and to talk about it as there is to reach a pessimistic conclusion, and that people are very unlikely to lose "all their current friends" by arriving at an "incorrect" conclusion if their current friends are already fine with the person not having any view at all on AI risks.

I think it's admirable to say things like "I don't want to [do the thing that this community holds as near-gospel as a good thing to do.]" I also think the community should take it seriously that anyone feels like they're punished for being intellectually honest, and in general I'm sad that it seems like your interactions with EAs/rats about AI have been unpleasant.

That said... I do want to push back on basically everything in this post and encourage you and others in this position to spend some time seeing if you agree or disagree with the AI stuff.

  • Assumin
... (read more)
3Viliam
This is too optimistic an assumption. On one hand, we have Kirsten's ability to do AI research. On the other hand, we have all the social pressure that Kirsten complains about. You seem to assume that the former is greater than the latter, which may or may not be true (no offense meant). An analogy with religion is telling someone to do independent research about the historical truth about Jesus. In theory, that should work. In practice... maybe that person has no special talent for doing historical research; plus there is always the knowledge in the background that arriving at the incorrect answer would cost them all their current friends anyway (which I hope does not work the same with EAs, but the people who can't stop talking about the doom now probably won't be able to stop talking about it even if Kirsten tells them "I have done my research, and I disagree").
2KirstenH
Thanks, this is pretty persuasive and worth thinking about (so I will think about it!)

It seems to me like government-enforced standards are just another case of this tradeoff - they are quite a bit more useful, in the sense of carrying the force of law and applying to all players on a non-voluntary basis, and harder to implement, due to the attention of legislators being elsewhere, the likelihood that a good proposal gets turned into something bad during the legislative process, and the opportunity cost of the political capital.

This post has already helped me admit that I needed to accept defeat and let go of a large project in a way that I think might lead to its salvaging by others - thanks for writing.

Answer by tlevin21

First, congratulations - what a relief to get in (and pleasant update on how other selective processes will go, including the rest of college admissions)!

I lead HAIST and MAIA's governance/strategy programming and co-founded CBAI, which is both a source of conflict of interest and insider knowledge, and my take is that you should almost certainly apply to MIT. MIT is a much denser pool of technical talent, but MAIA is currently smaller and less well-organized than HAIST. Just by being an enthusiastic participant, you could help make it a more robust group,... (read more)

I don't think this is the right axis on which to evaluate posts. Posts that suggest donating more of your money to charities that save the most lives, causing less animal suffering via your purchases, and considering that AGI might soon end humanity are also "harmful to an average reader" in a similar sense: they inspire some guilt, discomfort, and uncertainty, possibly leading to changes that could easily reduce the reader's own hedonic welfare.

However -- hopefully, at least -- the "average reader" on LW/EAF is trying to believe true things and achieve go... (read more)

Quick note on 2: CBAI is pretty concerned about our winter ML bootcamp attracting bad-faith applicants and plans to use a combo of AGISF and references to filter pretty aggressively for alignment interest. Somewhat problematic in the medium term if people find out they can get free ML upskilling by successfully feigning interest in alignment, though...

Great write-up. Righteous Mind was the first in a series of books that really usefully transformed how I think about moral cognition (including Hidden Games, Moral Tribes, Secret of Our Success, Elephant in the Brain). I think its moral philosophy, however, is pretty bad. In a mostly positive (and less thorough) review I wrote a few years ago (that I don't 100% endorse today), I write:

Though Haidt explicitly tries to avoid the naturalistic fallacy, one of the book’s most serious problems is its tendency to assume that people finding something disgusting im

... (read more)
2ErnestScribbler
Thanks for the thoughtful comment! I agree that the normative parts were the weakest in the Book. There were other parts that I found weak, like how I think he caught the Moral Foundations and their ubiquitous presence well, but then made the error of thinking liberals don't use them (when in fact they use them a lot, certainly in today's climate, just with different in-groups, sanctified objects, etc.). An initial draft had a section about this. But in the spirit of Ruling Thinkers In, Not Out, I decided to let go of these in the review and focus on the parts I got a lot out of. I'll take a look at Greene, sounds very interesting. About what to do about disagreements with conservatives, I'd say if you understand where others are coming from, perhaps you can compromise in a way that's positive-sum. It doesn't mean you have to concede they're right, only that in a democracy they are entitled to affect policy, but that doesn't mean you should be fighting over it instead of discussing in good faith. I liked the final paragraph, about how reason slowly erodes emotional objections over a long time. Maybe that's an optimistic note to finish on.