All of samuelshadrach's Comments + Replies

I'm not claiming it's the only factor.

Russia and China obviously have significant crude oil reserves which they use domestically. They get to keep them, instead of exporting them to someone else, because they have nuclear weapons.

All of industry is ultimately based on a few resources such as crude oil, coal and water. These are used to make steel and electricity, which are in turn used to make industrial inputs such as chemicals and so on.

So a shortage of drugs or of roads or of hospitals does indirectly tie into the energy use of the country.

@Arjun Panickssery I'm not sure what counts as definitive proof to you.

US crude oil imports: https://worldpopulationreview.com/country-rankings/us-oil-imports-by-country

You can read the history of US relations with Saudi Arabia or Iraq or South Korea or any of the other countries at the top of this list.

I'm mainly trying to explain this graph of energy use per capita.

I agree the US exports a variety of goods including weapons, food, industrial products, aircraft and so on, and this gives them more money to purchase crude oil. And being on the leading edge of science and engineering for these industries enables them to make these exports in the first place.

US military protection, including nuclear protection, is obviously another reason why the US gets favourable deals from its allies though.

1samuelshadrach
@Arjun Panickssery I'm not sure what counts as definitive proof to you. US crude oil imports: https://worldpopulationreview.com/country-rankings/us-oil-imports-by-country You can read the history of US relations with Saudi Arabia or Iraq or South Korea or any of the other countries at the top of this list.

Rule of law

Energy use per capita

Global utility includes the above two things (the first two tiers of Maslow's hierarchy), not just the number of deaths (where I agree health-related deaths are the biggest bracket).

I consider US govt partially responsible for unequal distribution.

0Nick_Tarleton
Neither the mortality-rate nor the energy-use map lines up that closely with the US geopolitical sphere of influence. (E.g. Russia and China on the one hand, Latin America on the other.) I'm not saying the US government isn't partially responsible for unequal distribution, but your previous comment sounds like treating it as the only or primary significant factor. (I'm also not sure what point you're trying to make at all with the energy-use map, given how similar it looks to the mortality-rate map.)

(My response to you is also unoriginal but worth stating imo.)

I would prefer if you used the phrase "US geopolitical sphere of influence" instead of "developed world". It makes it clear your take is political.

Leaders within the US govt have obviously contributed to multiple wars and genocides; you just happen to have been born into a family that is not on the receiving end of any of them. Part of the reason (but not the full reason) for the economic prosperity is crude oil deals made by the US govt under threat of nuclear war.

Statements such as yours give leaders within the US govt implicit consent to continue this sort of rule over the world.

1mlsbt
"Crude oil deals made by the US govt" are responsible for a negligible proportion of global economic prosperity, which comes out of the global scientific ecosystem that has been centered in the US for nearly 100 years.
7Hastings
This period of global safety is not fairly distributed, but it is also real: https://data.unicef.org/resources/levels-and-trends-in-child-mortality-2024/

Good to know it helped

2025-05-12

Samuel x Saksham AI timelines (discussion on 2025-05-09)

  • top-level views
    • samuel top-level: 25% AI!2030 >= ASI, >50% ASI >> AI!2030 >> AI!2025, <25% AI!2030 ~= AI!2025
    • saksham top-level: medium probability AI!2030 >= ASI
    • samuel bullish on model scaling, more uncertain on RL scaling
    • saksham bullish on RL/inference scaling, saksham bullish on grokking
      • samuel: does bullish on grokking mean bullish on model scaling. saksham: unsure
  • agreements
    • samuel and saksham agree: only 2024-2025 counts as empirical data to extrapol
... (read more)

Actually persuading someone to donate to you is harder for most people than figuring out how to use cryptocurrency.

Using crypto is not that hard in the average case. The main habits you need to get into are: a) verify everything, since 90% of all the services and platforms are scams; b) make no mistakes when dealing with large sums, and practice with small sums first, because one mistake can lose you all your money.

This can also be done over the internet. Talk to their irl social circle.

Most security experts a bank would reasonably hire are not bank robbers, you know? I assume that's true anyway,

If you're good at it, you can purchase the knowledge without giving them a position of power. Intelligence agencies purchase zero days from hackers on the black market. Foreign spies can be turned into double agents using money.

What would help more is a language translation browser extension that doesn't suck, so people could get used to the habit of reading news and opinions from outside their country.

Anyone who found this post helpful and is a software developer, please consider building this. I might do it myself, if I had more time or money.

1whestler
Thanks for posting. I've had some of the same thoughts especially about honesty and the therapist's ability to support you in doing something that they either don't understand the significance of or may actively morally oppose. It's a very difficult thing to require a person to try to do.

but for example, trying to ascend the academic status hierarchy is a bad use of time and resources

For some fields such as biotech, it's difficult to get access to labs outside of academia. And you can't learn without lab access because the cutting edge experiments don't get posted to YouTube (yet).

AI-related social fragmentation

I made a video on feeling lonely due to AI stuff.

I'm going to give a weird answer and say maybe it's because water is a scarce resource for life. (Especially water not polluted by another organism.)

All life is made up mainly of lipids/carbohydrates and proteins. Humans therefore need to eat proteins and lipids/carbohydrates in large quantities.

Carbohydrates can be dry. Proteins have secondary structure which needs some water content to maintain. Other organisms (such as microorganisms) can compete for that water so it has to be protected. Hence you put the stuff with water content inside a protective cas... (read more)

Anyone wanna be friends? Like, we could talk once a month on video call.

Not having friends who buy into AI xrisk assumptions is bad for my motivation, so I'm self-interestedly trying to fix that.

I know some people who buy into AI xrisk assumptions also dislike my plan but I don't have a solution for that. I'm not going to give up an important plan just because it makes people in my social circle unhappy.

Ban on ASI > Open source ASI > Closed source ASI

This is my ordering.

Yudkowsky's worldview in favour of closed source ASI rests on multiple shaky assumptions. One of these is that getting a 3-month to 3-year lead is a necessary and sufficient condition for alignment to be solved. Yudkowsky!2025 himself doesn't believe alignment can be solved in 3 years.

Why does anybody on lesswrong want closed source ASI?

I think for a lot of societal change to happen, information needs to be public first. (Then it becomes common knowledge, then an alternate plan gets buy-in, then that becomes common knowledge and so on.)

A foreign adversary getting the info doesn't mean it's public, although it has increased the number of actors N who now have that piece of info in the world. Large N is not stable so eventually the info may end up public anyway.

If I got $1M in funding, I'd use it towards some or all of the following projects.

The objective is to get secret information out of US ASI orgs (including classified information) and host it in countries outside the US. Hopefully someone else can use this info to influence US and world politics.

Black DAQ

  • whistleblower/spy guide
  • hacker guide

Grey DAQ

  • internet doxxing tool
  • drones/CCTV outside offices/datacentres

High attention

  • persuade Indian, Russian, Chinese journalists to run a SecureDrop-like system
  • digital journalism guide
  • OR run a journalist outl
... (read more)
4faul_sname
Why do you want to do this as a lone person rather than e.g. directly working with the intelligence service of some foreign adversary?

Instead of just considering median member of the population, you can consider extremes.

Some fraction of any population (be it Chinese or English) will want the most information dense writing possible. Some fraction will want more artistic language. Some fraction will want to be as illegible as possible.

The difference I think is that Old English grammar did not allow very information dense writing, and English in 2025 allows more density. Even today, though, it seems obvious more density is possible; a lot of articles, pronouns etc. seem optional to me. Changi... (read more)

Is there a single person on Earth in the intersection of these?

  • received $1M funding
  • non-profit
  • public advocacy
  • AI xrisk

My general sense is that most EA / rationalist funders avoid public advocacy. Am I missing anything?

Thanks for reply.

If you are making a video, I agree it's not a good idea to put weaker arguments there if you know stronger arguments.

I strongly disagree with the idea that therefore you should defer to EA / LW leadership (or generally, anyone with more capital/attention/time), and either not publish your own argument or publish their argument instead of yours. If you think an argument is good and other people think it's bad, I'd say post it.

2Kaj_Sotala
I also strongly disagree with that idea.

Update: I figured it out and hosted it. Clear difference in capabilities visible.

I need at least $100/mo to host 24x7 though.

TGI makes it trivial.

Can host openai-community/gpt2 (125M, 2019), EleutherAI/gpt-neox-20b (20B, 2022), gpt-3.5-turbo (175B?, 2020) and o3 (2T?, 2025).
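
For anyone who wants to replicate the comparison, here's a minimal sketch of querying two locally hosted TGI servers with the same prompt. The ports, the model pairing and the prompt are assumptions for illustration, not a description of my actual setup.

```python
import requests

# Assumed setup: two TGI servers already running locally, e.g. gpt2 on port
# 8080 and gpt-neox-20b on port 8081. Ports and model names are placeholders.
ENDPOINTS = {
    "gpt2": "http://localhost:8080/generate",
    "gpt-neox-20b": "http://localhost:8081/generate",
}


def compare(prompt: str, max_new_tokens: int = 64) -> dict:
    """Send the same prompt to each TGI endpoint and collect the completions."""
    results = {}
    for name, url in ENDPOINTS.items():
        resp = requests.post(
            url,
            json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
            timeout=60,
        )
        resp.raise_for_status()
        results[name] = resp.json()["generated_text"]
    return results


if __name__ == "__main__":
    for model, text in compare("Scaling laws for language models say that").items():
        print(f"--- {model} ---\n{text}\n")
```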

Anyone has a GPT2 fine-tuned API?

I might wanna ship an app comparing GPT2, GPT3.5 and o3, to explain scaling laws to non-technical folks.

1samuelshadrach
Update: I figured it out and hosted it. Clear difference in capabilities visible. I need at least $100/mo to host 24x7 though. TGI makes it trivial. Can host openai-community/gpt2 (125M, 2019), EleutherAI/gpt-neox-20b (20B, 2022), gpt-3.5-turbo (175B?, 2020) and o3 (2T?, 2025).

Oh, cool

Do you have a clear example of a blunder someone should not make when making such a video?

Obviously you can't forecast all the effects of making a video, there could be some probability mass of negative outcome while the mean and median are clearly positive.

2Kaj_Sotala
Suppose Echo Example's video says, "If ASI is developed, it's going to be like in The Terminator - it wakes up to its existence, realizes it's more intelligent than humans, and then does what more intelligent species do to weaker ones. Destroys and subjugates them, just like humans do to other species!" Now Vee Viewer watches this and thinks "okay, the argument is that the ASIs would be a more intelligent 'species' than humans, and more intelligent species always want to destroy and subjugate weaker ones".  Having gotten curious about the topic, Vee mentions this to their friends, and someone points them to Yann LeCun claiming that people imagine killer robots because people fail to imagine that we could just build an AI without the harmful human drives. Vee also runs into Steven Pinker arguing that history "does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems". So then Vee concludes that oh, that thing about ASI's risks was just coming from a position of anthropomorphism and people not really understanding that AIs are different from humans. They put the thought out of their head. Then some later time Vee runs into Denny Diligent's carefully argued blog post about the dangers of ASI. The beginning reads: "In this post, I argue that we need a global ban on developing ASI. I draw on the notion of convergent instrumental goals, which holds that all sufficiently intelligent agents have goals such as self-preservation and acquiring resources..." At this point, Vee goes "oh, this is again just another version of the Terminator argument, LeCun and Pinker have already disproven that", closes the tab, and goes do something else. Later Vee happens to have a conversation with their friend, Ash Acquaintance. Ash: "Hey Vee, I ran into some people worried about artifici

In theory, yes.

In practice, I think bad publicity is still publicity. Most people on earth still haven't heard about xrisk. I trust that sharing the truth has hard-to-predict positive effects over long time horizons even if not over short ones. I think the average LW user is too risk-averse relative to the problem they wish to solve.

I'd love to hear your reasoning for why making a video is bad. But I do vaguely suspect this disagreement comes down to some deeper priors of how the world works and hence may not get resolved quickly.

2Kaj_Sotala
I didn't say that making a video would always be bad! I agree that if the median person reading your comment would make a video, it would probably be good. I only disputed the claim that making a video would always be good.

If you support an international ban on building ASI, please consider making a short video stating this.

A low quality video recording made in 15 minutes is better than no video at all. Consider doing it right now if you are convinced.

Optional:

  • make a long video instead of a short one, explaining your reasoning
  • make videos on other topics to increase viewership

Here's mine: https://youtube.com/shorts/T40AeAbGIcg?si=OFCuD37Twyivy-oa

Why?

  • Video has orders of magnitude more reach than text. Most people on earth don't have the attention span for lengthy text p
... (read more)
5Kaj_Sotala
It could also be worse than no video all, if it gives people negative associations around the whole concept.

I'd rather go along with the inevitable than fight a losing battle. Less privacy for everyone.

Anyone on lesswrong writing about solar prices?

Electricity from coal and crude oil has stagnated at $0.10/kWh for over 50 years, meaning the primary way of increasing your country's per capita energy use is to trade/war/bully other countries into giving you their crude oil.

Solar electricity is already at $0.05/kWh and is forecasted to go as low as $0.02/kWh by 2030.

Does anything from this document seem interesting to you?

Having a simple cli tool to convert file formats, generate embeddings, and share them in a standard format seems relevant to increasing the transparency of the planet.

You might particularly want to increase transparency of what’s going on at ASI companies or govts, or what’s going on at lesswrong.
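
A minimal sketch of what the embedding half of such a cli tool could look like, assuming sentence-transformers as the embedding backend; the model name and the JSON output layout here are placeholder choices, not a proposed standard:

```python
import argparse
import json
import pathlib

from sentence_transformers import SentenceTransformer  # assumed dependency


def main() -> None:
    parser = argparse.ArgumentParser(description="Embed text files into a JSON file.")
    parser.add_argument("files", nargs="+", help="plain-text files to embed")
    parser.add_argument("--model", default="all-MiniLM-L6-v2", help="embedding model")
    parser.add_argument("--out", default="embeddings.json", help="output path")
    args = parser.parse_args()

    model = SentenceTransformer(args.model)
    records = []
    for path in args.files:
        text = pathlib.Path(path).read_text(encoding="utf-8")
        # One record per file: the file path plus its embedding vector.
        records.append({"path": path, "embedding": model.encode(text).tolist()})

    pathlib.Path(args.out).write_text(json.dumps(records), encoding="utf-8")


if __name__ == "__main__":
    main()
```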

I've made a reply formalising this.

I've made a reply formalising this.

I made a reply. You're referring to situation b.

Update based on the replies:

I basically see this as a Markov process.

P(x(t+1) | x(t), x(t-1), x(t-2), ...) = P(x(t+1) | x(t)) = F(x(t))

where x(t) is a value sampled from the distribution X(t), for all t.

In plain English, given the last value you get a probability distribution for the next value.

In the AI example: Given x(2025), estimate probability distribution X(2030) where x is the AI capability level.

Possibilities

a) x(t+1) value is determined by x(t) value. There is no randomness. No new information is learned from x(t).

b) X(t+1) distribution is conditional on the ... (read more)

There is a similar hypothesis that is testable. Find someone who is illiterate and superstitious today and fund their education up to university level.

Edit: Bonus points if they are selected from an isolated tribal community existing today

Sorry about hijacking an only tangentially related thread but I'd love to get your thoughts on ways to accelerate common knowledge formation. This could be technologies or social technologies or something else.

I have a bunch of thoughts around this. Where would be the best place to talk?

Here's a simplified example for people who have never traded in the stock market. We have a biased coin with an 80% probability of heads. What's the probability of tossing it 3 times and getting 3 heads? 51.2%. Assuming the first toss was heads, what's the probability that the other two are also heads? 64%.

Each coin toss is analogous to whether the next model follows or does not follow scaling laws.
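
Spelling out the arithmetic (a minimal sketch; 0.8 is just the toy probability from the example above):

```python
p = 0.8  # toy probability that any given toss (model release) comes up heads

p_three_heads_upfront = p ** 3  # 0.512: all three tosses heads, before seeing any
p_two_heads_remaining = p ** 2  # 0.640: after the first toss lands heads,
                                # only two uncertain tosses are left

print(p_three_heads_upfront, p_two_heads_remaining)
```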

1samuelshadrach
Update based on the replies: I basically see this as a Markov process. P(x(t+1) | x(t), x(t-1), x(t-2), ...) = P(x(t+1) | x(t)) = F(x(t)), where x(t) is a value sampled from the distribution X(t), for all t. In plain English, given the last value you get a probability distribution for the next value. In the AI example: given x(2025), estimate the probability distribution X(2030), where x is the AI capability level. Possibilities: a) The value x(t+1) is determined by the value x(t). There is no randomness. No new information is learned from x(t). b) The distribution X(t+1) is conditional on the value of x(t). Learning which value x(t) was sampled from the distribution X(t) gives you new information. However, you sampled one of those values such that P(x(t+1) | x(t-1), x(t-2), ...) = P(x(t+1) | x(t), x(t-1), x(t-2), ...). You got lucky, and the value sampled ensures the distribution remains the same. c) You learned new information and the probability distribution also changed. a is possible but seems to imply overconfidence to me. b is possible but seems to imply extraordinary luck to me, especially if it's happening multiple times. c seems like the most likely situation to me.
2Viliam
With coin, the options are "head" and "tails", so "head" moves you in one direction. With LLMs, the options are "worse than expected", "just as expected", "better than expected", so "just as expected" does not have to move you in a specific direction.
1shawnghu
Another way of operationalizing the objections to your argument is: what is the analogue to the event "flips heads"? If the predicate used is "conditional on AI models achieving power level X, what is the probability of Y event?" and the new model is below level X, by construction we have gained 0 bits of information about this. Obviously this example is a little contrived, but not that contrived, and trying to figure out what fair predicates are to register will result in more objections to your original statement.
2Phiwip
I don't think this analogy works on multiple levels. As far as I know, there isn't some sort of known probability that scaling laws will continue to be followed as new models are released. While it is true that a new model continuing to follow scaling laws is increased evidence in favor of future models continuing to follow scaling laws, thus shortening timelines, it's not really clear how much evidence it would be. This is important because, unlike a coin flip, there are a lot of other details regarding a new model release that could plausibly affect someone's timelines. A model's capabilities are complex, human reactions to them likely more so, and that isn't covered in a yes/no description of if it's better than the previous one or follows scaling laws. Also, following your analogy would differ from the original comment since it moves to whether the new AI model follows scaling laws instead of just whether the new AI model is better than the previous one (It seems to me that there could be a model that is better than the previous one yet still markedly underperforms compared to what would be expected from scaling laws). If there's any obvious mistakes I'm making here I'd love to know, I'm still pretty new to the space.

As Gwern noted, we can't understand chess endgames.

On this example specifically: a) it's possible the AI is too stupid to have a good enough theory of mind of humans to write good chess textbooks on these endgames. Maybe there is an elegant way of looking at them that isn't brute force. b) Chess endgames are amenable to brute force in a way that "invent a microscope" is not. Scientific discovery is searching through an exponential space, so you need a good heuristic or model for every major step you take; you can't brute force it.

If a new AI model comes out that's better than the previous one and it doesn't shorten your timelines, that likely means either your current or your previous timelines were inaccurate.

1samuelshadrach
Here's a simplified example for people who have never traded in the stock market. We have a biased coin with an 80% probability of heads. What's the probability of tossing it 3 times and getting 3 heads? 51.2%. Assuming the first toss was heads, what's the probability that the other two are also heads? 64%. Each coin toss is analogous to whether the next model follows or does not follow scaling laws.

An intuition pump you can try is to make them sit side by side with an AI and answer questions on a text in 1 minute, and check whose answers are better.

I agree tech beyond human comprehension is possible. I'm just giving an intuition as to why a lot of radically powerful tech likely still lies within human comprehension. 500 [1] years of progress is likely to still be within comprehension, as is 50 years or 5 years.

The most complex tech that exists in the universe is arguably human brains themselves and we could probably understand a good fraction of their working too, if someone explained it.

Important point here being the AI has to want to explain it in simple terms to us.

If you get a 16th century human ... (read more)

2Davidmanheim
I think you are fooling yourself about how similar people in 1600 are to people today. The average person at the time was illiterate, superstitious, and could maybe do single digit addition and subtraction. You're going to explain nuclear physics?

Would you include preference cascades and the formation of common knowledge in the same cluster?

4romeostevensit
Definitely for preference cascades. For common knowledge I'd say it's about undermining of common knowledge formation (eg meme to not share salary, strong pressure not to name that emperor is naked, etc.)

Why does this matter? To quote a Yudkowsky-ish example, maybe you can take a 16th century human (before Newtonian physics was invented, after guns were invented) and explain to him how a nuclear bomb works. This doesn't matter for predicting the outcome of a hypothetical war between 16th century Britain and 21st century USA.

ASI inventions can be big surprises and yet be things that you could understand if someone taught you.

We could probably understand how a von Neumann probe or an anti-aging cure worked too, if someone taught us.

2Davidmanheim
If AI systems can make 500 years of progress before we notice it's uncontrolled, that's already assuming an insanely strong superintelligence. Probably, if it's of a type we can imagine and is comprehensible in those terms - but that's assuming the conclusion! As Gwern noted, we can't understand chess endgames. Similarly, in the case of a strong ASI, the ASI-created probe or cure could look more like a random set of actions that aren't explainable in our terms which cause the outcome than it does like an engineered / purpose-driven system that is explainable at all.

Suppose you are trying to figure out a function U(x, y, z | a, b, c) where x, y, z are all scalar values and a, b, c are all constants.

If you knew the function's values at a few points, you could figure out good approximations of it. Let's say you knew

U(x, y, a=0) = x
U(x, y, a=1) = x
U(x, y, a=2) = y
U(x, y, a=3) = y

You could now guess U(x, y, a) ≈ x if a < 1.5, y if a > 1.5.

You will not be able to get a good approximation if you do not know enough such values.
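
As a minimal sketch (my own illustration; the 1.5 threshold is just the midpoint between a=1 and a=2, since the known values don't pin down where the switch happens):

```python
def u_approx(x: float, y: float, a: float) -> float:
    """Piecewise guess for U built from the four known values above."""
    return x if a < 1.5 else y


# The guess reproduces the four known values exactly.
known_points = [
    ((2.0, 7.0, 0), 2.0),  # U(x, y, a=0) = x
    ((2.0, 7.0, 1), 2.0),  # U(x, y, a=1) = x
    ((2.0, 7.0, 2), 7.0),  # U(x, y, a=2) = y
    ((2.0, 7.0, 3), 7.0),  # U(x, y, a=3) = y
]
assert all(u_approx(*args) == expected for args, expected in known_points)
```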

This is a comment about morality. x, y, z are the agent's multiple possibly-conflicting values and a, b, c ar... (read more)

Update: I read your examples and I honestly don’t see how any of these 3 people would be better off by their own idea of what better off means, if they were less open or less truthful.

P.S. Discussing this anonymously is easier if you're not confident you can handle the social repercussions of discussing it under your real name. I agree that morality is social dark matter and it's difficult to argue in favour of positions that are pro-violence, pro-deception etc. under your real name.

If you can’t provide a few unambiguous examples of the dilemma in the post that actually happened in the real world, I’m less likely to take your post seriously.

Might be worth thinking more and then coming up with examples.

Do you have examples?

1eva_
I do have examples that motivated me to write this, but they're all examples where people are still strongly disagreeing about the object level of what happened, or possibly lying about how they disagree on the object level and pretending they're committed to honesty. I thought about putting them in the essay but decided it wouldn't be fair and I didn't want to distract my actual thesis into a case analysis of how maybe all my examples have a problem other than over-adherence to bad honesty norms. Should I put them in a comment? I'm genuinely unsure. I could probably DM you them if you really want? EDIT: okay fine you win. The public examples with nice writeups that I am most willing to cite are: Eneasz Brodski, Zack M Davis, Scott Alexander. There are other posts related to some of those but I don't want to exhaustively link everything anyone's said about it in this comment. I claim there are other people making in my opinion similar mistakes but I'm either unable or unwilling to provide evidence so you shouldn't believe me. I would prefer to leave as an exercise for the reader what any of those things have to do with my position because this whole line of inquiry seems incredibly cursed.

Update: I'll be more specific. There's a "power buys you distance from the crime" phenomenon going on if you're okay with using Google Maps data acquired about their restaurant takeout orders, but not okay asking the restaurant employee yourself or getting yourself hired at the restaurant.

Pizza index and stalking employees are the same thing; it's hard to do one without the other. If you choose to declare war against AI labs, you also likely accept that their foot soldiers are collateral damage.

I agree that (non-violent) stalking of employees is still a more hostile technique than writing angry posts on an internet forum.

Makes sense, thanks for replying.
