All of Brendan Long's Comments + Replies

I’ve had emails ignored, responses that amount to “this didn’t come from the right person,” and the occasional reply like this one, from a very prominent member of AI safety:

“Without reading the paper, and just going on your brief description…”

That’s the level of seriousness these ideas are treated with.

I only had time to look at your first post, and then only skimmed it because it's really long. Asking people you don't know to read something of this length is more than you can really expect. People are busy and you're not the only one with demands on thei... (read more)

1funnyfranco
It’s absolutely reasonable to not read my essays. They’re long, and no one owes me their time. But to not read them and still dismiss them - that’s not rigorous. That’s kneejerk. And unfortunately, that’s been the dominant pattern, both here and elsewhere. I’m not asking for every random person to read a long essay. I’m pointing out that the very people whose job it is to think about existential risk have either (a) refused to engage on ideological grounds, (b) dismissed the ideas based on superficial impressions, or (c) admitted they haven’t read the arguments, then responded anyway. You just did version (c). You say some of my bullet points were “unsupported assertions,” but you also say you only skimmed. That’s exactly the kind of shallow engagement I’m pointing to. It lets people react without ever having to actually wrestle with the ideas. If the conclusions are wrong, point to why. If not, the votes shouldn’t be doing the work that reasoning is supposed to. As for tractability: I’m not claiming to offer a solution. I’m explaining why the outcome - human extinction via AGI driven by capitalism - looks inevitable. “That’s probably true, but we can’t do anything about it” is a valid reaction. “That’s too hard to think about, so I’ll downvote and move on” isn’t. I thought LessWrong was about thinking, not feeling. That hasn’t been my experience here. And that’s exactly what this essay is addressing.

This seems to explain a lot about why Altman is trying so hard both to make OpenAI for-profit (to more easily raise money with that burn rate) and why he wants so much bigger data centers (to keep going on "just make it bigger").

Due to an apparently ravenous hunger among our donor base for having benches with plaques dedicated to them, and us not actually having that many benches, the threshold for this is increased to $2,000.

Given the clear mandate from the community, when do you plan to expand Lighthaven with a new Hall of Benches, and how many benches do you think you can fit in it?

I think it's more that learning to prioritize effectiveness over aesthetics will make you a more effective software engineer. Sometimes terrible languages are the right tool for the job, and I find it gives me satisfaction to pick the right tool even if I wish we lived in a world where the right tool was also the objectively best language (OCaml, obviously).

Answer by Brendan Long20

This economist thinks the reason is that inputs were up in January and the calculation is treating that as less domestic production rather than increased inventories:

OK, so what can we say about the current forecast of -2.8% for Q1 of 2025? First, almost all of the data in the model right now are for January 2025 only. We still have 2 full months in the quarter to go (in terms of data collection). Second, the biggest contributor to the negative reading is a massive increase in imports in January 2025.

[...]

The Atlanta Fed GDPNow model is doing exactly that,

... (read more)

I updated this after some more experimentation. I now bake them uncovered for 50 minutes rather than doing anything more complicated, and I added some explicit notes about additional seasonings. I also usually do a step where I salt and drain the potatoes, so I mentioned that in the variations.

During our evaluations we noticed that Claude 3.7 Sonnet occasionally resorts to special-casing in order to pass test cases in agentic coding environments like Claude Code. Most often this takes the form of directly returning expected test values rather than implementing general solutions, but also includes modifying the problematic tests themselves to match the code’s output.

Claude officially passes the junior engineer Turing Test?

But if we are merely mathematical objects, from whence arises the feelings of pleasure and pain that are so fundamental?

My understanding is that these feelings are physical things that exist in your brain (chemical, electrical, structural features, whatever). I think of this like how bits (in a computer sense) are an abstract thing, but if you ask "How does the computer know this bit is a 1?", the answer is that it's a structural feature of a hard drive or an electrical signal in a memory chip.

Allowing for charitable donations as an alternative to simple taxation does shift the needle a bit but not enough to substantially alter the argument IMO.

Not to mention that allowing for charitable donations as an alternative would likely lead to everyone setting up charities for their parents to donate to.

2localdeity
I personally recommend that all parents donate to the Localdeity Enrichment Fund, an important yet frequently overlooked cause area.

The resistance to such a policy is largely about ideology rather than about feasibility. It is about the quiet but pervasive belief that those born into privilege should remain there.

I don't think this is true at all. There is an ideological argument for inheritance, but it's not the one you're giving.

The ideological argument is that in a system with private property, people should be able to spend the money they earn in the ways they want, and one of the things people most want is to spend money on their children. The important person served by inheritance law is the person who made the money, not their inheritors (who you rightly point out didn't do anything).

Answer by Brendan Long52

Sam Altman is almost certainly aware of the arguments and just doesn't agree with them. The OpenAI emails are helpful for background on this, but at least back when OpenAI was founded, Elon Musk seemed to take AI safety relatively seriously.

Elon Musk to Sam Teller - Apr 27, 2016 12:24 PM

History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.

The recent example of Microsoft's AI chatbot shows how quickly it can turn inc

... (read more)
1KvmanThinking
Has Musk tried to convince the other AI companies to also worry about safety?

I don't understand why perfect substitution matters. If I'm considering two products, I only care which one provides what I want cheapest, not the exact factor between them.

For example, if I want to buy a power source for my car and have two options:

Engine: 100x horsepower, 100x torque Horse: 1x horsepower, 10x torque

If I care most about horsepower, I'll buy the engine, and if I care most about torque, I'll also buy the engine. The engine isn't a "perfect substitute" for the horse, but I still won't buy any horses.

Maybe this has something to do with prices, but it seems like that just makes things worse since engines are cheaper than horses (and AI's are likely to be cheaper than humans).

Location: Remote. Timaeus will likely be located in either Berkeley or London in the next 6 months, and we intend to sponsor visas for these roles in the future.

Will all employees be required to move to Berkeley or London, or will they have the option to continue working remotely?

5Jesse Hoogland
We won’t strictly require it, but we will probably strongly encourage it. It’s not disqualifying, but it could make the difference between two similar candidates. 

I think the biggest tech companies collude to fix wages so that they are sufficiently higher than every other company's salaries to stifle competition

The NYT article you cite says the exact opposite, that Big Tech companies were sued for colluding to fix wages downward, not upward. Why would engineers sue if they were being overpaid?

1purple fire
Sorry, I can elaborate better on the situation. The big tech companies know that they can pay way more than smaller competitors, so they do. But then that group of megacorp tech (Google, Amazon, Meta, etc.) collude with each other to prevent runaway race dynamics. This is how they're able to optimize their costs with the constraint of salaries being high enough to stifle competition. Here, I was just offering evidence for my claim that big tech is a monopsonistic cartel in the SWE labor market, it isn't really evidence one way or another for the claims I make in the original post.

It seems like the big players already have plans to cut Nvidia out of the loop though.

And while they seem to have the best general purpose hardware, they're limited by competition with AMD, Apple, and Qualcomm.

3winstonBosan
I think that is just true. Now in hindsight, my mistake is that I haven't really updated sufficiently towards how the major players are shifting towards their own chip design capacity. (Apple comes to mind but I am definitely caught a bit off guard on how even Meta and Amazon had moved forward.) I had the impression that Amazon had a bad time in their previous generation of chips - and that new generation of their chips is focused on inference anyways.  But now with the blending of inference and training regime, maybe the "intermediaries" like Nvidia now gets to capture less and less of upside. And it seems more and more likely to me that we are having a moment of "going back to the basics" of looking at the base ingredients - the compute and the electricity. 

I rice my potatoes while they're still burning hot, which is annoying, but I'm impatient and it means the result is still warm. If you're (reasonably) waiting for the potatoes to cool down, you might be able to re-heat them in the microwave or on the stove without too much of a change to texture, although you'd have to be careful about how you stir it.

Doesn't the stand mixer method overmix and produce glue-y mashed potatoes? I actually don't mind that texture but I thought that's why people don't usually do it that way.

2AnthonyC
You would think so, I certainly used to think so, but somehow it doesn't seem to work that way in practice. That's usually the step where my wife does the seasoning and adds the liquids, so IDK if there is something specific she does that makes it work. But I'm definitely whipping them with the whisk attachment, which incorporates air, and not beating them with a paddle attachment. I suspect that's the majority of why it works.

I also like Yukon Golds best in mashed potatoes, but I use a ricer (similar to this one).

2AnthonyC
I used to use a ricer, but found that it always made the potatoes too cold by the time I ate them. Do you find this? If not, do you (even if you never thought of it this way) do anything specific to prevent it? If so, do you then reheat them, and how?   With a stand mixer and the whisk attachment I found removing the ricer step hasn't really mattered, but any other whipping method and yeah, it's very useful.

I get the 10 lbs bags at Costco (usually buying 20 lbs at a time). Are the Trader Joe's ones noticably better tasting? I'd love to try more potato varieties but no one seems to sell anything more interesting unless I want tiny colorful potatoes that cost $10/lb.

4AnthonyC
Fair enough, I moved into a small space a few years ago and mostly buy smaller quantities now. I also like that the Little Potato Company's potatoes are already washed and I'm often boondocking/on a limited water supply.  Costco is generally above average in most things, so definitely a good choice. I find the brands I mentioned to be more consistently high quality across locations and over time, but not too much better at their respective bests. So when I need a specific meal to be high quality, like on holidays, I'll make sure to go to Trader Joe's. FWIW the Trader Joe's organic golds are around $4/3lb bag. The Little Potato Company's bags are around $2-3/lb. I have bought both in at least 10 states each at this point and those price have been fairly consistent. I also don't want to spend a huge amount on potatoes.

I actually have this exact box grater so I sacrificed a potato for science and determined:

  • The thickness looks a little smaller than I usually do but should be fine.
  • It's not quite as sharp as my mandoline so it might get tiring.
  • The slicer is very small, so you might have trouble with large potatoes. I usually cut potatoes on the small side (for smaller slices) anyway, so this might not be a problem.
  • You should get something to protect your hand since you'll definitely cut the tip of your finger off if you slice 6 lbs of potatoes like this without a guard.

I agree that more crispy bits is good. The recipe above optimizes for not being annoying to make, but doing the exact same thing and spreading the mixture on two sheet pans might work (and it would probably have a much shorter bake time).

I suspect the crispier version would be harder to store and wouldn't reheat as well though.

That's a good point. I don't really know what I'm doing, so I'm not able to predict exact variations. I found that this worked relatively consistently no matter how I cooked it, but the version in the recipe above was the best.

I definitely endorse changing the recipe based on how it goes:

  • If it's not crispy enough, bake it longer uncovered, or increase the temperature, or move the pan closer to the top of the oven.
  • If the internal texture is crunchy/uncooked, bake it (covered) longer.
  • If the internal texture is too mushy, bake it (covered) shorter. You could
... (read more)

Wouldn't the FDA not really be a blocker here, since doctors are allowed to prescribe medications off-label? It sounds more like a medical culture (or liability) thing, although I guess they kind-of interact since using FDA-approved medications in the FDA-approved way is (probably?) a good way to avoid liability issues.

1DenizT
Yes. I argued that the causes for the malaise are over-determined, and medical liability and close-mindedness are both reasons. There are integrative and functional doctors who are more willing to prescribe off-label medications. But they are rare and I haven't researched them in depth yet.  But if there were 10x more clinical trial results, there'd be a greater universe from which one could prescribe off-label medications, and the quality of off-label recommendations would be higher. And the FDA certainly is a blocker there.

I'm planning to donate $1000[1] but not until next year (for tax purposes). If there was a way that pledge that I would.

  1. ^

    I'm committing to donating $1000 but it might end up being more when I actually think through all of the donations I plan to do next year.

8habryka
Thank you!  After many other people have said similar things I am now pretty sure we will either keep the fundraiser open until like the second or third week of January, or add some way of pledging funds to be donated in the next year.

I showed up and some other people were in the room :(

I'm finishing up packing but won't make it there until 2:15 or so.

Haha, well that dosage probably would probably cause weight loss.

All of the sources I can find give the density as exactly 4 oz = 1/2 cup, although maybe this is just an approximation that's infecting other data sources?

https://www.wolframalpha.com/input?i=density+of+butter+*+(1%2F2+cup)+in+ounces

But 1/2 cup of butter weighs 4 ounces according to every source I can find: https://www.wolframalpha.com/input?i=density+of+butter+*+(1%2F2+cup)+in+ounces

Which means a 4 ounce stick of butter is 1/2 cup by volume.

4Said Achmiz
The density of butter is reasonably close to 1 avoirdupois ounce per 1 fluid ounce, but is definitely not exactly equal: https://kg-m3.com/material/butter gives the density as 0.95033293516 oz./fl. oz., or 0.911 kg/m^3. (The link you provide doesn’t give a source; the data at the above link is sourced from the International Network of Food Data Systems (INFOODS).) ---------------------------------------- Further commentary: The density of water (at refrigerator temperatures) is ~1 g/cm^3. 1 oz. = ~28.35 g; 1 fl. oz. = ~ 29.57 cm^3; thus the density of water is (1/28.35) / (1/29.57) = ~1.043 = oz./fl. oz. (This is, of course, equal to 0.95033293516 / 0.911, allowing for rounding and floating point errors.) Note that the composition of butter varies. In particular, it varies by the ratio of butterfat to water (there are also butter solids, i.e. protein, but those are a very small part of the total mass). American supermarket butter has approx. 80% butterfat; Amish butter, European butters (e.g. Kerrygold), or premium American butters (e.g. Vital Farms brand) have more butterfat (up to 85%). Butterfat is less dense than water (thus the more butterfat is present, the lower the average density of the stick of butter as a whole—although this doesn’t make a very big difference, given the range of variation). Given the numbers in the paper at the last link, we can calculate the average density (specific gravity) of butter (assuming butterfat content of a cheap American supermarket brand) as 0.8 * 0.9 + 0.2 * 1.0 = 0.92. This approximately matches our 0.911 kg/m^3 number above.

It sounds like 1/2 cup of butter (8 tbps) weighs 4 oz, so shouldn't this actually work out so each of those sections actually is 1 tbsp in volume, and it's just a coincidence (or not) that the density of butter is 1 oz / 2 fl oz?

2Said Achmiz
No, you’re misunderstanding. There is no 1/2 cup of butter anywhere in the above scenario. One stick of butter is 4 oz. of butter (weight), but not 1/2 cup of butter (volume).
4AnthonyC
This is almost true. Fat is less dense than water, so a tablespoon of butter weighs something like 10% less than a half ounce. Not enough to matter in practice for most cooking. Your toast and your average chocolate chip cookie don't care. But, many approximations like this exist, and are collectively important enough that professionals use weight not volume in most recipes. And enough that the difference in fat content between butters (as low as 80% in the US but more often 85+% in European or otherwise "premium" butters) can matter in more sensitive recipes, like pie crust and drop biscuits. I used to add 1-2 Tbsp of shortening to my pie crust. I stopped when I switched to Kerrygold butter - no longer needed.

The problem is that lack of money isn't the reason there's not enough housing in places that people want to live. Zoning laws intentionally exclude poor people because rich people don't want to live near them. Allocating more money to the problem doesn't really help (see: the ridiculous amount of money California spends on affordable housing), and if you fixed the part where it's illegal, the government spending isn't necessary because real estate developers would build apartments without subsidies if they were allowed to.

Also, the most recent election shows that ordinary people really, really don't like inflation, so I don't think printing trillions of dollars for this purpose is actually more palatable.

-1[anonymous]
The idea is to balance spending with subsidies, to prevent inflation. In this new system, there’s nothing preventing people from migrating from antagonistic municipalities to places where subsidies are approved because of good planning and political climate.

You're right, I was taking the section saying "In this new system, the only incentive to do more and go further is to transcend the status quo in some way, and earn recognition for a unique contribution." too seriously. On a second re-read, it seems like your proposal is actually just to print money to give people food stamps and housing vouchers. I think the answer to why we don't do that is that we do that.

Food is essentially a solved problem in the United States, and the biggest problem with housing vouchers is that there physically isn't enough housing... (read more)

-1[anonymous]
You’re still not reading the post closely enough. This isn’t just food stamps and housing vouchers, it’s real dollars created for purpose, with matching subsidies on the supply side. That means if there’s 4T new dollars of housing spending, the system allocates 4T new dollars of housing subsidies to build new homes. There’s two nuances that your gloss misses, first, producers aren’t just compelled to honor welfare tokens. Second, the dollars are created, not gathered through taxes. Both points make the system more palatable to entrenched interests and ordinary people.

I think you've re-invented Communism. The reason we don't implement it is that in practice it's much worse for everyone, including poor people.

1[anonymous]
I find this comment flippant and unworthy of a community like LessWrong. First of all, you’re denying the politics of millions of earnest people, many as educated and gifted as you, and second of all, you’re equating a 21st century democratically steered market economy with the totalitarian central planning of 20th century Stalinism. You’re right that no one wants that.

I'll try to make it but I might be moving that day so I'm not sure :\

2Brendan Long
I'm finishing up packing but won't make it there until 2:15 or so.

Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.

But is this because SQLite is unusually buggy, or because its code is unusually open, short and readable and thus understandable by an AI? I would guess that MySQL (for example) has significantly worse vulnerabilities but they're harder to find.

cwillu110

There are severe issues with the measure I'm about to employ (not least is everything listed in https://www.sqlite.org/cves.html) , but the order of magnitude is still meaningful:

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sqlite 170 records

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=postgresql 292 records (+74 postgres and maybe another 100 or so under pg; the specific spelling “postgresql” isn't used as consistently as “sqlite” and “mysql” is)

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=mysql 2026 records

6npostavs
Finding two bugs in a large codebase doesn't seem especially suspicious to me.

SQLite is ludicrously well tested; similar bugs in other databases just don't get found and fixed.

I don't know anything about you in particular, but if you know alignment researchers who would recommend you, could you get them to refer you either internally or through their contacts?

4Nathan Helm-Burger
I have gotten referrals from various people at various times. Thanks for the suggestion!

This is actually why a short position (a complicated loan) would theoretically work. If we all die, then you, as someone else's counterparty, never need to pay your loan back.

(I think this is a bad idea, but not because of counterparty risk)

I think the idea is that short position pays off up-front, and then you don't need to worry about the loan if everyone's dead.

If by paying off you mean this bet actually working I think you're right though. It seems more likely that the stock market would go up in the short term, forcing you to cover at a higher price and losing a bunch of money. And if the market stays flat, you'll still lose money on interest payments unless doom is coming this year.

4J Bostock
Also, in this case you want to actually spend the money before the world ends. So actually losing money on interests payments isn't the real problem, the real problem is that if you actually enjoy the money you risk losing everything and being bankrupt/in debtors prison for the last two years before the world ends. There's almost no situation in which you can be so sure of not needing to pay the money back that you can actually spend it risk-free. I think the riskiest short-ish thing that is even remotely reasonable is taking out a 30-year mortgage and paying just the minimum amount each year, such that the balance never decreases. Worst case you just end up with no house after 30 years, but not in crippling debt, and move back into the nearest rat group house.

I'll be out of town (getting married on the 25th) but I'd be happy to do something the weekend after.

I don't think this is actually the rule by common practice (and not all bad things should be illegal). For example, if one of your friends/associates says something that you think is stupid, going around telling everyone that they said something stupid would generally be seen as rude. It would also be seen as crazy if you overheard someone saying something negative about their job and then going out of your way to tell their boss.

In both cases there would be exceptions, like if if the person's boss is your friend or safety reasons like you mentioned, but I think by default sharing negative information about people is seen as bad, even if it's sometimes considered low-levels of bad (like with gossip).

1gb
There's definitely a fair expectation against gossiping and bad-mouthing. I don't think that's quite what the OP is talking about, though. I believe the relevant distinction is that (generally speaking) those behaviors don't do any good to anyone, including the person spreading the gossip. But consider how murkier the situation becomes if you're competing for a promotion with the person here:

I also agree with this to some extent. Journalists should be most concerned about their readers, not their sources. They should care about accurately quoting their sources because misquoting does a disservice to their readers, and they should care about privacy most of the time because having access to sources is important to providing the service to their readers.

I guess this post is from the perspective of being a source, so "journalists are out to get you" is probably the right attitude to take, but it's good actually for journalists to prioritize their readers over sources.

1gb
My understanding is that the OP is suggesting the journalists' attitude is unreasonable (maybe even unethical). You're saying that their attitude is justifiable because it benefits their readers. I don't quite agree that that reason is necessary, nor that it would be by itself sufficient. My view is that journalists are justified in quoting a source because anyone is generally justified in quoting what anyone else has actually said, including for reasons that may benefit no one but the quoter. There are certainly exceptions to this (if divulging the information puts someone in danger, for instance), but those really are exceptions, not the rule. The rule, as recognized both by common practice and by law, is that you simply have no general right to (or even expectation of) privacy about things you say to strangers, unless of course the parties involved agree otherwise.

The convenient thing about journalism is that the problems we're worried about here are public, so you don't need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it's accused of.

The trickier case would be protecting against the accusers lying (i.e. tell journalist A something bad and then claim that they made it up). If you have decent verification of accusers' identifies you might still get a good enough signal to noise ratio, especially if you include positive 'reviews'.

1StartAtTheEnd
You can still lie by omission, allowing evidence that shows person A's wrongdoings, while refuting evidence that shows either person A's examples of trustworthiness, or person B's wrongdoings. If I do 10 things, 8 of which are virtuous and 2 of which are bad, and you only communicate the two to the world, then you will have deceived your listeners. Meanwhile, if another person does 8 things which are bad and 2 which are virtuous, you could share those two things. One-sidedness can be harmful and biased without ever lying (negative people tend to be in this group I think, especially if they're intelligent) A lot of online review sites are biased, despite essentially being designed to represent regular people rather than some authority which might lie to you. They silently delete reviews, selectively accuse reviews of breaking rules (holding a subset of them to a much higher standard, or claiming that reviews are targeted harassment by some socially unappealing group), adding fake votes themselves, etc.

I largely agree with this article but I feel like it won't really change anyone's behavior. Journalists act the way they do because that's what they're rewarded for. And if your heuristic is that all journalists are untrustworthy, it makes it hard for trustworthy journalists to get any benefit from that.

A more effective way to change behavior might be to make a public list of journalists who are or aren't trustworthy, with specific information about why ("In [insert URL here], Journalist A asked me for a quote and I said X, but they implied inaccurately th... (read more)

4Nathan Young
Well I do talk to journalists I trust and not those I don't. And I don't give quotes to those who won't take responsibility for titles. But yes, more suggestions appreciated.
2StartAtTheEnd
This doesn't work, as you don't know if the list (or its creators) are trustworthy. This is a smaller version of something which is an unsolvable problem (because you need an absolute reference point but only have relative reference points) An authority can keep an eye on everything under its control, but it cannot keep an eye on itself. "Who watches the watchers?". This is why a ministry of truth is a bad idea and why misinformation is impossible to combat. It's tempting to say that openness of information is a solution (that if everyone can voice their opinions, observers can come to a sound conclusion themselves), and while this does end better, you don't know if, for instance, a review site is deleting user reviews or not. (I just realized this is why people value transparency. But you don't know if a seemingly transparent entity is actually transparent or just pretending to be. You can use technology which is fair or secure by design, but authorities (like the government) always make sure that this technology can't exist.

It would be very surprising to me if such ambitious people wanted to leave right before they had a chance to make history though.

They can't do that since it would make it obvious to the target that they should counter-attack.

As an update: Too much psyllium makes me feel uncomfortably full, so I imagine that's part of the weight loss effect of 5 grams of it per meal. I did some experimentation but ended up sticking with 1 gram per meal or snack, in 500 gram capsules and taken with water.

I carry 8 of these pills (enough for 4 meals/snacks) in my pocket in small flat pill organizers.

It's still too early to assess the impact on cholesterol but this helps with my digestive issues, and it seems to help me not overeat delicious foods to the same extent (i.e. on a day where I previous... (read more)

1Søren Elverlin
You scared me when you wrote 500 gram instead of 500 mg. :D

Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.

I personally think it's good for us to protect friendly countries like this, but isn't China invading Taiwan good for AI risk, since destroying the main source of advanced chips would slow down timelines?

You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).

2LintzA
Some reasons why the anti-democratic tendencies might matter: * This might be the guy in charge of deploying AGI and negotiating with other nations about it. I think we should be very concerned about the values of the person with the most power over this process. While purely caring about democracy could matter a lot for this, it's also a signal of a general lack of values and lack of thinking about values, that seems concerning if he can make decisions about governing AGI with massive downstream effects. * I think his anti-democratic tendencies also display his intense power-hunger. It seems dangerous to have someone with this characteristic wielding power over the development of an incredibly powerful technology that could be used for all kinds of nefarious purposes.  * There is also some small chance that Trump either attempts to seize power or manipulates the coming election for a supporter of his. I think this probably increases the chances of a far less competent and value-aligned person taking the helm in 2028.  * In general, the public seems pretty bought-in on AI risk being a real issue and is interested in regulation. Having democratic instincts would perhaps push in the direction of good regulation (though the relationship here seems a little less clear). As far as Taiwan I worry that Trump's strategic ambiguity has a few too many dashes of ambiguity on this front which could lead to increased chance of crises which could escalate into something much worse. I don't really have strong opinions about how Trump vs Kamala would fare on Taiwan though to be honest. 

I think it's important that AIs will be created within an existing system of law and property rights. Unlike animals, they'll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.

I think you disagree with Eliezer on a different crux (whether the alignment problem is easy). If we could create AI's that follows the existing system of law and property rights (includ... (read more)

5Matthew Barnett
I disagree that creating an agent that follows the existing system of law and property rights, and acts within it rather than trying to undermine it, would count as a solution to the alignment problem. Imagine a man who only cared about himself and had no altruistic impulses whatsoever. However, this man reasoned that, "If I disrespect the rule of law, ruthlessly exploit loopholes in the legal system, and maliciously comply with the letter of the law while disregarding its intent, then other people will view me negatively and trust me less as a consequence. If I do that, then people will be less likely to want to become my trading partner, they'll be less likely to sign onto long-term contracts with me, I might accidentally go to prison because of an adversarial prosecutor and an unsympathetic jury, and it will be harder to recruit social allies. These are all things that would be very selfishly costly. Therefore, for my own selfish benefit, I should generally abide by most widely established norms and moral rules in the modern world, including the norm of following intent of the law, rather than merely the letter of the law." From an outside perspective, this person would essentially be indistinguishable from a normal law-abiding citizen who cared about other people. Perhaps the main difference between this person and a "normal" person is that this man wouldn't partake in much private altruism like donating to charity anonymously; but that type of behavior is rare anyway among the general public. Nonetheless, despite appearing outwardly-aligned, this person would be literally misaligned with the rest of humanity in a basic sense: they do not care about other people. If it were not instrumentally rational for this person to respect the rights of other citizens, they would have no issue throwing away someone else's life for a dollar. My basic point here is this: it is simply not true that misaligned agents have no incentive to obey the law. Misaligned agents typic
2Thomas Kwa
Taboo 'alignment problem'.

I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.

Do you mean in the sense that people who aren't Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn't save people in other countries because that's bad somehow?

2Dylan Price
The latter. Superman's powers are magical, but our powers are intimately connected to the state of life for the less fortunate. We know that our economic prosperity is based on a mix of innovation and domination, and the more we reduce our involvement in the domination side of it, the more we address the real root of the problem.
Brendan Long*7034

The argument using Bernard Arnault doesn't really work. He (probably) won't give you $77 because if he gave everyone $77, he'd spend a very large portion of his wealth. But we don't need an AI to give us billions of Earths. Just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.

(This is not a general-purpose argument against worrying about AI or other similar arguments in the same vein, I just don't think this particular argument in the specific way it was written in this post works)

gwern*7737

No, it works, because the problem with your counter-argument is that you are massively privileging the hypothesis of a very very specific charitable target and intervention. Nothing makes humans all that special, in the same way that you are not special to Bernard Arnault nor would he give you straightup cash if you were special (and, in fact, Arnault's charity is the usual elite signaling like donating to rebuild Notre Dame or to French food kitchens, see Zac's link). The same argument goes through for every other species, including future ones, and your ... (read more)

Reply3211
-14j_timeberlake

In this analogy, you:every other human::humanity:every other stuff AI can care about. Arnault can give money to dying people in Africa (I have no idea who he is as person, I'm just guessing), but he has no particular reasons to give them to you specifically and not to the most profitable investment/most efficient charity.

I'm only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don't think the average EA would recommend charities that hurt other people as side effects, work actively-harmful jobs to make money[1], or generally Utilitarian-maxxing.

The EA trolley problem is that there are thousands (or millions) of trolleys that have varying difficult of stopping, barreling toward varying groups of people. The problem isn't that stopping them hurts other people (it doesn't)... (read more)

1Dylan Price
❤️ thanks!  That is debatable. I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.
Load More