This seems to explain a lot about why Altman is trying so hard both to make OpenAI for-profit (to more easily raise money with that burn rate) and why he wants so much bigger data centers (to keep going on "just make it bigger").
Due to an apparently ravenous hunger among our donor base for having benches with plaques dedicated to them, and us not actually having that many benches, the threshold for this is increased to $2,000.
Given the clear mandate from the community, when do you plan to expand Lighthaven with a new Hall of Benches, and how many benches do you think you can fit in it?
I think it's more that learning to prioritize effectiveness over aesthetics will make you a more effective software engineer. Sometimes terrible languages are the right tool for the job, and I find it gives me satisfaction to pick the right tool even if I wish we lived in a world where the right tool was also the objectively best language (OCaml, obviously).
This economist thinks the reason is that inputs were up in January and the calculation is treating that as less domestic production rather than increased inventories:
...OK, so what can we say about the current forecast of -2.8% for Q1 of 2025? First, almost all of the data in the model right now are for January 2025 only. We still have 2 full months in the quarter to go (in terms of data collection). Second, the biggest contributor to the negative reading is a massive increase in imports in January 2025.
[...]
The Atlanta Fed GDPNow model is doing exactly that,
I updated this after some more experimentation. I now bake them uncovered for 50 minutes rather than doing anything more complicated, and I added some explicit notes about additional seasonings. I also usually do a step where I salt and drain the potatoes, so I mentioned that in the variations.
During our evaluations we noticed that Claude 3.7 Sonnet occasionally resorts to special-casing in order to pass test cases in agentic coding environments like Claude Code. Most often this takes the form of directly returning expected test values rather than implementing general solutions, but also includes modifying the problematic tests themselves to match the code’s output.
Claude officially passes the junior engineer Turing Test?
But if we are merely mathematical objects, from whence arises the feelings of pleasure and pain that are so fundamental?
My understanding is that these feelings are physical things that exist in your brain (chemical, electrical, structural features, whatever). I think of this like how bits (in a computer sense) are an abstract thing, but if you ask "How does the computer know this bit is a 1?", the answer is that it's a structural feature of a hard drive or an electrical signal in a memory chip.
Allowing for charitable donations as an alternative to simple taxation does shift the needle a bit but not enough to substantially alter the argument IMO.
Not to mention that allowing for charitable donations as an alternative would likely lead to everyone setting up charities for their parents to donate to.
The resistance to such a policy is largely about ideology rather than about feasibility. It is about the quiet but pervasive belief that those born into privilege should remain there.
I don't think this is true at all. There is an ideological argument for inheritance, but it's not the one you're giving.
The ideological argument is that in a system with private property, people should be able to spend the money they earn in the ways they want, and one of the things people most want is to spend money on their children. The important person served by inheritance law is the person who made the money, not their inheritors (who you rightly point out didn't do anything).
Sam Altman is almost certainly aware of the arguments and just doesn't agree with them. The OpenAI emails are helpful for background on this, but at least back when OpenAI was founded, Elon Musk seemed to take AI safety relatively seriously.
...Elon Musk to Sam Teller - Apr 27, 2016 12:24 PM
History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.
The recent example of Microsoft's AI chatbot shows how quickly it can turn inc
I don't understand why perfect substitution matters. If I'm considering two products, I only care which one provides what I want cheapest, not the exact factor between them.
For example, if I want to buy a power source for my car and have two options:
Engine: 100x horsepower, 100x torque Horse: 1x horsepower, 10x torque
If I care most about horsepower, I'll buy the engine, and if I care most about torque, I'll also buy the engine. The engine isn't a "perfect substitute" for the horse, but I still won't buy any horses.
Maybe this has something to do with prices, but it seems like that just makes things worse since engines are cheaper than horses (and AI's are likely to be cheaper than humans).
Location: Remote. Timaeus will likely be located in either Berkeley or London in the next 6 months, and we intend to sponsor visas for these roles in the future.
Will all employees be required to move to Berkeley or London, or will they have the option to continue working remotely?
I think the biggest tech companies collude to fix wages so that they are sufficiently higher than every other company's salaries to stifle competition
The NYT article you cite says the exact opposite, that Big Tech companies were sued for colluding to fix wages downward, not upward. Why would engineers sue if they were being overpaid?
It seems like the big players already have plans to cut Nvidia out of the loop though.
And while they seem to have the best general purpose hardware, they're limited by competition with AMD, Apple, and Qualcomm.
I rice my potatoes while they're still burning hot, which is annoying, but I'm impatient and it means the result is still warm. If you're (reasonably) waiting for the potatoes to cool down, you might be able to re-heat them in the microwave or on the stove without too much of a change to texture, although you'd have to be careful about how you stir it.
Doesn't the stand mixer method overmix and produce glue-y mashed potatoes? I actually don't mind that texture but I thought that's why people don't usually do it that way.
I get the 10 lbs bags at Costco (usually buying 20 lbs at a time). Are the Trader Joe's ones noticably better tasting? I'd love to try more potato varieties but no one seems to sell anything more interesting unless I want tiny colorful potatoes that cost $10/lb.
I actually have this exact box grater so I sacrificed a potato for science and determined:
I agree that more crispy bits is good. The recipe above optimizes for not being annoying to make, but doing the exact same thing and spreading the mixture on two sheet pans might work (and it would probably have a much shorter bake time).
I suspect the crispier version would be harder to store and wouldn't reheat as well though.
That's a good point. I don't really know what I'm doing, so I'm not able to predict exact variations. I found that this worked relatively consistently no matter how I cooked it, but the version in the recipe above was the best.
I definitely endorse changing the recipe based on how it goes:
Wouldn't the FDA not really be a blocker here, since doctors are allowed to prescribe medications off-label? It sounds more like a medical culture (or liability) thing, although I guess they kind-of interact since using FDA-approved medications in the FDA-approved way is (probably?) a good way to avoid liability issues.
I showed up and some other people were in the room :(
I'm finishing up packing but won't make it there until 2:15 or so.
Haha, well that dosage probably would probably cause weight loss.
All of the sources I can find give the density as exactly 4 oz = 1/2 cup, although maybe this is just an approximation that's infecting other data sources?
https://www.wolframalpha.com/input?i=density+of+butter+*+(1%2F2+cup)+in+ounces
But 1/2 cup of butter weighs 4 ounces according to every source I can find: https://www.wolframalpha.com/input?i=density+of+butter+*+(1%2F2+cup)+in+ounces
Which means a 4 ounce stick of butter is 1/2 cup by volume.
It sounds like 1/2 cup of butter (8 tbps) weighs 4 oz, so shouldn't this actually work out so each of those sections actually is 1 tbsp in volume, and it's just a coincidence (or not) that the density of butter is 1 oz / 2 fl oz?
The problem is that lack of money isn't the reason there's not enough housing in places that people want to live. Zoning laws intentionally exclude poor people because rich people don't want to live near them. Allocating more money to the problem doesn't really help (see: the ridiculous amount of money California spends on affordable housing), and if you fixed the part where it's illegal, the government spending isn't necessary because real estate developers would build apartments without subsidies if they were allowed to.
Also, the most recent election shows that ordinary people really, really don't like inflation, so I don't think printing trillions of dollars for this purpose is actually more palatable.
You're right, I was taking the section saying "In this new system, the only incentive to do more and go further is to transcend the status quo in some way, and earn recognition for a unique contribution." too seriously. On a second re-read, it seems like your proposal is actually just to print money to give people food stamps and housing vouchers. I think the answer to why we don't do that is that we do that.
Food is essentially a solved problem in the United States, and the biggest problem with housing vouchers is that there physically isn't enough housing...
I think you've re-invented Communism. The reason we don't implement it is that in practice it's much worse for everyone, including poor people.
I'll try to make it but I might be moving that day so I'm not sure :\
Finally, note to self, probably still don’t use SQLite if you have a good alternative? Twice is suspicious, although they did fix the bug same day and it wasn’t ever released.
But is this because SQLite is unusually buggy, or because its code is unusually open, short and readable and thus understandable by an AI? I would guess that MySQL (for example) has significantly worse vulnerabilities but they're harder to find.
There are severe issues with the measure I'm about to employ (not least is everything listed in https://www.sqlite.org/cves.html) , but the order of magnitude is still meaningful:
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sqlite 170 records
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=postgresql 292 records (+74 postgres and maybe another 100 or so under pg; the specific spelling “postgresql” isn't used as consistently as “sqlite” and “mysql” is)
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=mysql 2026 records
SQLite is ludicrously well tested; similar bugs in other databases just don't get found and fixed.
I don't know anything about you in particular, but if you know alignment researchers who would recommend you, could you get them to refer you either internally or through their contacts?
This is actually why a short position (a complicated loan) would theoretically work. If we all die, then you, as someone else's counterparty, never need to pay your loan back.
(I think this is a bad idea, but not because of counterparty risk)
I think the idea is that short position pays off up-front, and then you don't need to worry about the loan if everyone's dead.
If by paying off you mean this bet actually working I think you're right though. It seems more likely that the stock market would go up in the short term, forcing you to cover at a higher price and losing a bunch of money. And if the market stays flat, you'll still lose money on interest payments unless doom is coming this year.
I'll be out of town (getting married on the 25th) but I'd be happy to do something the weekend after.
I don't think this is actually the rule by common practice (and not all bad things should be illegal). For example, if one of your friends/associates says something that you think is stupid, going around telling everyone that they said something stupid would generally be seen as rude. It would also be seen as crazy if you overheard someone saying something negative about their job and then going out of your way to tell their boss.
In both cases there would be exceptions, like if if the person's boss is your friend or safety reasons like you mentioned, but I think by default sharing negative information about people is seen as bad, even if it's sometimes considered low-levels of bad (like with gossip).
I also agree with this to some extent. Journalists should be most concerned about their readers, not their sources. They should care about accurately quoting their sources because misquoting does a disservice to their readers, and they should care about privacy most of the time because having access to sources is important to providing the service to their readers.
I guess this post is from the perspective of being a source, so "journalists are out to get you" is probably the right attitude to take, but it's good actually for journalists to prioritize their readers over sources.
The convenient thing about journalism is that the problems we're worried about here are public, so you don't need to trust the list creators as much as you would in other situations. This is why I suggest giving links to the articles, so anyone reading the list can verify for themselves that the article commits whichever sin it's accused of.
The trickier case would be protecting against the accusers lying (i.e. tell journalist A something bad and then claim that they made it up). If you have decent verification of accusers' identifies you might still get a good enough signal to noise ratio, especially if you include positive 'reviews'.
I largely agree with this article but I feel like it won't really change anyone's behavior. Journalists act the way they do because that's what they're rewarded for. And if your heuristic is that all journalists are untrustworthy, it makes it hard for trustworthy journalists to get any benefit from that.
A more effective way to change behavior might be to make a public list of journalists who are or aren't trustworthy, with specific information about why ("In [insert URL here], Journalist A asked me for a quote and I said X, but they implied inaccurately th...
It would be very surprising to me if such ambitious people wanted to leave right before they had a chance to make history though.
They can't do that since it would make it obvious to the target that they should counter-attack.
As an update: Too much psyllium makes me feel uncomfortably full, so I imagine that's part of the weight loss effect of 5 grams of it per meal. I did some experimentation but ended up sticking with 1 gram per meal or snack, in 500 gram capsules and taken with water.
I carry 8 of these pills (enough for 4 meals/snacks) in my pocket in small flat pill organizers.
It's still too early to assess the impact on cholesterol but this helps with my digestive issues, and it seems to help me not overeat delicious foods to the same extent (i.e. on a day where I previous...
Biden and Harris have credibly committed to help Taiwan. Trump appears much more isolationist and less likely to intervene, which might make China more likely to invade.
I personally think it's good for us to protect friendly countries like this, but isn't China invading Taiwan good for AI risk, since destroying the main source of advanced chips would slow down timelines?
You also mention Trump's anti-democratic tendencies, which seem bad for standard reasons, but not really relevant to AI existential risk (except to the extent that he might stay in power and continue making bad decisions 4+ years out).
I think it's important that AIs will be created within an existing system of law and property rights. Unlike animals, they'll be able to communicate with us and make contracts. It therefore seems perfectly plausible for AIs to simply get rich within the system we have already established, and make productive compromises, rather than violently overthrowing the system itself.
I think you disagree with Eliezer on a different crux (whether the alignment problem is easy). If we could create AI's that follows the existing system of law and property rights (includ...
I think trying to be Superman is the problem, but I'm ok if that line of thinking doesn't work for you.
Do you mean in the sense that people who aren't Superman should stop beating themselves up about it (a real problem in EA), or that even if you are (financial) Superman, born in the red-white-and-blue light of a distant star, you shouldn't save people in other countries because that's bad somehow?
The argument using Bernard Arnault doesn't really work. He (probably) won't give you $77 because if he gave everyone $77, he'd spend a very large portion of his wealth. But we don't need an AI to give us billions of Earths. Just one would be sufficient. Bernard Arnault would probably be willing to spend $77 to prevent the extinction of a (non-threatening) alien species.
(This is not a general-purpose argument against worrying about AI or other similar arguments in the same vein, I just don't think this particular argument in the specific way it was written in this post works)
No, it works, because the problem with your counter-argument is that you are massively privileging the hypothesis of a very very specific charitable target and intervention. Nothing makes humans all that special, in the same way that you are not special to Bernard Arnault nor would he give you straightup cash if you were special (and, in fact, Arnault's charity is the usual elite signaling like donating to rebuild Notre Dame or to French food kitchens, see Zac's link). The same argument goes through for every other species, including future ones, and your ...
In this analogy, you:every other human::humanity:every other stuff AI can care about. Arnault can give money to dying people in Africa (I have no idea who he is as person, I'm just guessing), but he has no particular reasons to give them to you specifically and not to the most profitable investment/most efficient charity.
I'm only vaguely connected to EA in the sense of donating more-than-usual amounts of money in effective ways (❤️ GiveDirectly), but this feels like a strawman. I don't think the average EA would recommend charities that hurt other people as side effects, work actively-harmful jobs to make money[1], or generally Utilitarian-maxxing.
The EA trolley problem is that there are thousands (or millions) of trolleys that have varying difficult of stopping, barreling toward varying groups of people. The problem isn't that stopping them hurts other people (it doesn't)...
I only had time to look at your first post, and then only skimmed it because it's really long. Asking people you don't know to read something of this length is more than you can really expect. People are busy and you're not the only one with demands on thei... (read more)