All of Dana's Comments + Replies

Dana
40

I think LW consensus has been that the main existential risk is AI development in general. The only viable long-term option is to shut it all down. Or at least slow it down as much as possible until we can come up with better solutions. DeepSeek, from my perspective, should incentivize slowing down development (if you agree with the fast-follower dynamic, and also by reducing profit margins generally), and I believe it has.

Anyway, I don't see how this relates to these predictions. The predictions are about China's interest in racing to AGI. Do you believe China would now rather have an AGI race with the USA than agree to a pause?

3Mateusz Bagiński
Any evidence of DeepSeek marginally slowing down AI development?
1thedudeabides
And the response to 'shut it down' has always been "what about China, or India, or the UAE, or Europe?", to which the response was "they want to pause because XYZ". Well, you now have proof, not speculation, that they are not pausing. They don't find your arguments persuasive. What to do?!? Which is why the original post was about updating. Something you don't seem very interested in doing. Which is irrational. So is this forum about rationality or about AI risk? I would think the latter flows from the former, but I don't see much evidence of the former.
Dana
62

I'm not convinced that these were bad predictions for the most part.

The main prediction: 1) China lacks compute. 2) CCP values stability and control -> China will not be the first to build unsafe AI/AGI.

Both of these premises are unambiguously true as far as I'm aware. So, these predictions being bad suggests that we now believe China is likely to build AGI without realizing it threatens stability/control, and with minimal compute, before the USA? All while refusing to agree to any sort of deal to slow down? Why? Seems unlikely.

American companies, on the ot... (read more)

3thedudeabides
The argument has historically been that existential risk from AI came from some combination of a) SOTA models, and b) open source. China is now publishing SOTA open source models.  Oh and they found a way to optimize around their lack of GPUs. Are you sure you aren't under the influence of cognitive dissonance/selective memory? 
Dana
10

I keep some folders (and often some other transient files) on my desktop and pin my main apps to the taskbar. With apps pinned to your taskbar, you can open a new instance with Windows+Shift+number (or just Windows+number if the app isn't open yet).

I do the same as you and search for any other apps that I don't want to pin.

Dana
10

Well, vision and mapping seem like they could be pretty generic (and I expect much better vision in future base models anyway). For the third limitation, I think it's quite possible that Claude could provide an appropriate segmentation strategy for whatever environment it is told it is being placed into.

Whether this would be a display of its intelligence, or just its capabilities, is beside the point from my perspective.

3Cole Wyeth
This won’t work, happy to bet on it if you want to make a manifold market. 
Dana
91

But these issues seem far from insurmountable, even with current tech. It is just that they are not actually trying, because they want to limit scaffolding.

From what I've seen, the main issues:
1) Poor vision -> Can be improved through tool use, will surely improve greatly regardless with new models
2) Poor mapping -> Can be improved greatly + straightforwardly through tool use
3) Poor executive function -> I feel like this would benefit greatly from something like a separation of concerns. Currently my impression is Claude is getting overwhelmed wit... (read more)
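
To make the separation-of-concerns idea above concrete, here is a minimal sketch under stated assumptions; the function names and the `query_model` stand-in are hypothetical illustrations, not any real scaffold's API:

```python
# Minimal sketch of a separation-of-concerns scaffold. Everything here is
# hypothetical: `query_model` stands in for a real LLM API call, and the
# decomposition (vision / mapping / planning) mirrors the three issues
# listed above rather than any actual implementation.

def query_model(system_prompt: str, user_input: str) -> str:
    """Placeholder for a real LLM call (e.g., an Anthropic or OpenAI client)."""
    raise NotImplementedError

def describe_screen(raw_observation: str) -> str:
    # Vision concern: compress the raw observation into a terse description.
    return query_model("Describe this game screen tersely.", raw_observation)

def update_map(known_map: str, description: str) -> str:
    # Mapping concern: keep an explicit, persistent map outside the model.
    return query_model("Update this ASCII map with the new observation.",
                       f"Map:\n{known_map}\n\nObservation:\n{description}")

def choose_action(goal: str, description: str, known_map: str) -> str:
    # Executive concern: the planner only ever sees distilled state,
    # so no single context gets overwhelmed.
    return query_model("Choose the next action toward the goal.",
                       f"Goal: {goal}\nState: {description}\nMap:\n{known_map}")
```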

6Cole Wyeth
Yes, but because this scaffolding would have to be invented separately for each task, it’s no longer really zero shot and says little about the intelligence of Claude. 
Dana
-1-2

I interpret the main argument as:
1) You cannot predict the direction of policy that would result from certain discussions/beliefs.
2) The discussions improve the accuracy of our collective world model, which is very valuable.
3) Therefore, we should have the discussions first and worry about policy later.

I agree that in many cases there will be unforeseen positive consequences as a result of the improved world model, but in my view, it is obviously false that we cannot make good directionally-correct predictions of this sort for many X. And the negative will clea... (read more)

Dana
32

I agree with you that people like him do a service to prediction markets: contributing a huge amount of either liquidity or information. I don't agree with you that it is clear which one he is providing, especially considering the outcome. He did also win his popular vote bet, which was hovering around, I'm not sure, ~20% most of the time? 

I think he (Theo) probably did have a true probability around 80% as well. That's what it looks like at least. I'm not sure why you would assume he should be more conservative than Kelly. I'm sure Musk is not, as one example of a competent risk-taker.
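
For concreteness, a back-of-the-envelope Kelly calculation, taking Theo's reported ~80% probability and the ~$0.56 average entry price quoted in the reply below at face value:

```python
# Kelly sizing for a binary contract that pays $1, bought at price c,
# given a believed win probability p. Maximizing E[log wealth] over the
# staked fraction f gives f* = (p - c) / (1 - c).

def kelly_fraction(p: float, c: float) -> float:
    """Optimal fraction of bankroll to stake (Kelly criterion)."""
    return (p - c) / (1 - c)

p = 0.80  # Theo's reported "true" probability (lower end of his 80-90%)
c = 0.56  # average entry price reported in the reply below
print(f"{kelly_fraction(p, c):.2f}")  # ~0.55: full Kelly stakes over half the bankroll
```

At those numbers, full Kelly already puts more than half the bankroll on the line, so a very large bet is in the neighborhood of Kelly rather than obviously far beyond it.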

2Alexander Gietelink Oldenziel
The true probability would be more like >90%, considering other factors like opportunity costs, transaction costs, counterparty risk, unforeseen black swans of various kinds, etc. Bear in mind this is all-things-considered probability, not just in-model probability, i.e., this would have to integrate that most other observers (especially those with strong calibrated predictions) very strongly disagree.* Certainly, in some cases this is possible, but one would need quite overwhelming evidence that you had a huge edge.

I agree one can reject Kelly betting - that's pretty crazy risky, but plausibly the case for people like Elon or Theo. The question is whether the rest of us (with presumably more reasonably cautious attitudes) should take his win as much epistemic evidence. I think not. From our perspective his manic risk-loving wouldn't be as much evidence for rational expectations.

*Didn't the Kelly formula already integrate the fact that other people think differently? No, this is an additional piece of information one has to integrate. Kelly betting gives you an implicit risk-averseness even conditioning on your beliefs being true (on average).

EDIT: Indeed it seems Theo the French Whale might have done a Kelly bet estimate too; he reports his true probability at 80-90%. Perhaps he did have private information. "For example, a hypothetical sale of Théo's 47 million shares for Trump to win the election would execute at an estimated average price of just $0.02, according to Polymarket, which would represent a 96% loss for the trader. Théo paid an average price of about $0.56 for the 47 million shares. Meanwhile, a hypothetical sale of Théo's nearly 20 million shares for Trump to win the popular vote would execute at an average price of less than a 10th of a penny, according to Polymarket, representing a near-total loss. With so much money on the line, Théo said he is feeling nervous, though he believes Trump has an 80%-90% chance to win the election."
Dana
30

A few glaring issues here:
1) Does the question imply causation or not? It shouldn't.
2) Are these stats intended to be realistic such that I need to consider potential flaws and take a holistic view or just a toy scenario to test my numerical skills? If I believe it's the former and I'm confident X and Y are positively correlated, a 2x2 grid showing X and Y negatively correlated should of course make me question the quality of your data proportionally.
3) Is this an adversarial question such that my response may be taken out of context or otherwise misused?

T... (read more)
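
To illustrate what a 2x2 grid says about the sign of a correlation (point 2 above), a quick hypothetical check using the phi coefficient; the counts are invented:

```python
# Hypothetical example: the phi coefficient gives the correlation implied
# by a 2x2 table of counts [[a, b], [c, d]] (rows = X present/absent,
# columns = Y present/absent). Its sign shows whether the grid depicts a
# positive or negative association. The counts below are made up.
import math

def phi(a: int, b: int, c: int, d: int) -> float:
    return (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))

print(phi(10, 40, 40, 10))  # -0.6: this grid shows X and Y negatively correlated
```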

Dana
10

I do not really understand your framing of these three "dimensions". The way I see it, they form a dependency chain. If either of the first two is concentrated, whoever controls it can easily cut off access during takeoff (and I would expect this). If both of the first two are diffuse, the third will necessarily also be diffuse.

How could one control AI without access to the hardware/software? What would stop one with access to the hardware/software from controlling AI?

4Matthew Barnett
One would gain control by renting access to the model, i.e., the same way you can control what an instance of ChatGPT currently does. Here, I am referring to practical control over the actual behavior of the AI, when determining what the AI does, such as what tasks it performs, how it is fine-tuned, or what inputs are fed into the model.

This is not too dissimilar from the high level of practical control one can exercise over, for example, an AWS server that they rent. While Amazon may host these servers, and thereby have the final say over what happens to the computer in the case of a conflict, the company is nonetheless inherently dependent on customer revenue, implying that they cannot feasibly use all their servers privately for their own internal purposes. As a consequence of this practical constraint, Amazon rents these servers out to the public, and they do not substantially limit user control over AWS servers, providing for substantial discretion to end-users over what software is ultimately implemented.

In the future, these controls could also be determined by contracts and law, analogously to how one has control over their own bank account, despite the bank providing the service and hosting one's account. Then, even in the case of a conflict, the entity that merely hosts an AI may not have practical control over what happens, as they may have legal obligations to their customers that they cannot breach without incurring enormous costs to themselves. The AIs themselves may resist such a breach as well.

In practice, I agree these distinctions may be hard to recognize. There may be a case in which we thought that control over AI was decentralized, but in fact, power over the AIs was more concentrated or unified than we believed, as a consequence of centralization over the development or the provision of AI services. Indeed, perhaps real control was always in the hands of the government all along, as they could always choose to pass a law to nationalize AI,
Dana
*32

I've updated my comment. You are correct as long as you pre-commit to a single answer beforehand, not if you are making the decision after waking up. The only reason pre-committing to heads works, though, is because it completely removes the Tuesday interview from the experiment. She will no longer be awoken on Tuesday, even if the result is tails. So, this doesn't really seem to be in the spirit of the experiment in my opinion. I suppose the same pre-commit logic holds if you say the correct response gets (1/coin-side-wake-up-count) * value per response though.

Dana
32

Halfer makes sense if you pre-commit to a single answer before the coin-flip, but not if you are making the decisions independently after each wake-up event. If you say heads, you have a 50% chance of surviving when asked on Monday, and a 0% chance of surviving when asked on Tuesday. If you say tails, you have a 50% chance of surviving Monday and a 100% chance of surviving Tuesday.

4Gurkenglas
If you say heads every time, half of all futures contain you; likewise with tails.
Answer by Dana
71

I would frame the question as "What is the probability that you are in heads-space?", not "What is the probability of heads?". The probability of heads is 1/2, but the probability that I am in heads-space, given I've just experienced a wake-up event, is 1/3.

The wake-up event is only equally likely under heads and tails on Monday. On Tuesday, the wake-up probability is 0% under heads and 100% under tails. We don't know whether it is Tuesday or not, but we know there is some chance of it being Tuesday, because 1/3 of wake-up events happen on Tuesday, and we've just experienced a wake-up event:

P(Monday|wake-up) = ... (read more)
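
A quick Monte Carlo sketch of the wake-up-event framing (an illustrative addition, assuming the standard one-wake-up-for-heads, two-for-tails setup):

```python
# Monte Carlo sketch of the wake-up-event framing: heads produces one
# wake-up event (Monday), tails produces two (Monday and Tuesday).
# Asking "what fraction of wake-up events happen in heads-space?"
# recovers the 1/3 answer.
import random

heads_wakeups = 0
total_wakeups = 0
for _ in range(100_000):
    if random.random() < 0.5:   # heads: wake up Monday only
        heads_wakeups += 1
        total_wakeups += 1
    else:                       # tails: wake up Monday and Tuesday
        total_wakeups += 2

print(heads_wakeups / total_wakeups)  # ~1/3 = P(heads-space | wake-up)
```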

Dana
1015

What would be upsetting about being called "she"? I don't share your intuition. Whenever I imagine being misgendered (or am misgendered, e.g., on a voice call with a stranger), I don't feel any strong emotional reaction. To the point that I generally will not correct them.

I could imagine it being very upsetting if I am misgendered by someone who should know me well enough not to misgender me, or if someone purposefully misgenders me. But the misgendering specifically is not the main offense in these two cases.

Perhaps ymeskhout and I are less tied to our gender identities than most?

3brambleboy
If random strangers start calling you "she", that implies you look feminine enough to be mistaken for a woman. I think most men would prefer to look masculine for many reasons: not being mistaken for a woman, being conventionally attractive, being assumed to have a 'manly' rather than 'effeminate' personality, looking your age, etc. If you look obviously masculine, then being misgendered constantly would just be bewildering. Surely something is signaling that you use feminine pronouns. If it's just people online misgendering you based on your writing, then that's less weird. But I think it still would bother some people for some of the reasons above.
4eniteris
It is important to note that people have a wide range of attachment to their gender identity, ranging from those willing to undergo extreme body modification in order to match their gender identity, to those who don't care in the slightest. The issue is that cisgender is the default, and if you don't have a strong attachment to your gender identity, you have no reason to change the label. Hence, cisgendered people have a wide range of attachment to their gender identity, from strongly identifying with it to no attachment at all. (There is also the agender group, which includes those who have deeply examined their gender identity and decided that they don't really care (and probably also want to signal their examination and non-caring of gender identity).)

Someone who is transgender obviously has an attachment to their gender identity, and this is obviously where the Pronoun Discourse stems from. They have a strong preference for a gender, and a preference to be referred to with the appropriate pronouns, and thus being misgendered is upsetting, as their preferences are violated. (Of course, most of this rests on the ability to communicate the preference, and accidental violations when the preference was not communicated are less egregious than deliberate violations.)

Otherwise, misgendering can be upsetting if it is tied to stereotypes of masculinity and femininity and attempting an insult based on those stereotypes.
4lsusr
I feel complimented when people inadvertently misgender me on this website. It implies I have successfully modeled the Other.
Dana
83

These are the remarks Zvi was referring to in the post. Also worth noting Graham's consistent choice of the word 'agreed' rather than 'chose', and Altman's failed attempt to transition to chairman/advisor to YC. It sure doesn't sound like Altman was the one making the decisions here.

gwern
398

Altman's failed attempt to transition to chairman/advisor to YC

Of some relevance in this context is that Altman has apparently for years been claiming to be YC Chairman (including in filings to the SEC): https://www.bizjournals.com/sanfrancisco/inno/stories/news/2024/04/15/sam-altman-y-combinator-board-chair.html

Dana
110

You're not taking your own advice. Since your message, Ilya has publicly backed down, and Polymarket has Sam coming back as CEO at coinflip odds: Polymarket | Sam back as CEO of OpenAI?

Dana
10

How is that addressing Hotz's claim? Eliezer's post doesn't address any worlds with a God that is outside of the scope of our Game of Life, and it doesn't address how well the initial conditions and rules were chosen. The only counter I see in that post is that terrible things have happened in the past, which provide a lower bound for how bad things can get in the future. But Hotz didn't claim that things won't go bad, just that it won't be boring.

8SimonM
This doesn't seem to account for property taxes, which I expect would change the story quite a bit for the US.
1Gunnar_Zarncke
Thank you.
Dana
31

How about slavery? Should that be legal? Stealing food, medication? Age limits?

There are all sorts of things that are illegal which, in rare cases, would be better off being legal. But the legal system is a somewhat crude tool. Proponents of these laws would argue that in most cases, these options do more harm than good. Whether that's true or not is an open question from what I can tell. Obviously if the scenarios you provide are representative then the answer is clear. But I'm not sure why we should assume that to be the case. Addiction and mental illnes... (read more)

1Dumbledore's Army
Just so you know, there are a lot of people disagreeing with me on this page, and you are the only one I have downvoted.  I'm surprised that someone who has been on LessWrong as long as you would engage in such blatant strawmanning. Slavery? Really?
3Sweetgum
Slavery and theft harm others, so they are not relevant here. Age limits would be the most relevant. We have age limits on certain things because we believe that regardless of whether they want to, underage people deciding to do those things is usually not in their best interest. Similarly, bans on sex for rent and kidney sale could be justified by the belief that regardless of whether they want to, people doing these things is usually not in their best interest. However, this is somewhat hard to back up: it's pretty unclear whether prostitution or homelessness is worse, and it's easy to think of situations where selling a kidney definitely would be worth it (like the one given in the post).

I don't want to live in that world either, but banning sex for rent doesn't resolve the issue. It just means we've gone from a world where women have to prostitute themselves to afford rent to a world where women just can't afford rent, period. What I said here is wrong, see this comment

You don't think having to sell your kidneys and have sex for rent to get by is bad enough to get people to protest/riot?

Also, it seems like you've implicitly changed your position here. Previously, you said that when someone sells a kidney/trades sex for rent it would usually not be in their best interest, and that those options would usually only be taken under the influence of addiction or mental illness. Now, when you say that people would do those things "to get by", it sounds like you're implying that these are rational choices that would be in peoples' best interest given the bad situation, and would be taken by ordinary people. Which of these do you agree with?
7tailcalled
The kind of enslavement most people are familiar with is the enslavement of African-Americans. As far as I understand, they were originally enslaved as part of inter-tribal warfare and raids in Africa. This is a sort of force/expropriation, which seems distinct from the sorts of "bad options" talked about in the post, in that they aren't really options; they are forced. Also, this kind of slavery has been exceptionally brutal compared to other kinds of slavery. I'm not super familiar with other kinds of slavery historically. As I understand, it has often been debt slavery.
Dana
30

It is explained in the first section of the referenced post: AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years - EA Forum (effectivealtruism.org)

Unaligned: If you're going to die soon, you probably want to spend your money soon.

Aligned: If you're going to be rich soon, you probably don't want to save your money.

Both scenarios depend on the time-discounted value of money being lower after AGI. I guess the underlying assumptions are that the value derived from aligned AGI will be distributed without respect to cap... (read more)
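
A toy way to see why both scenarios lower the time-discounted value of money (illustrative numbers only, not taken from the linked post):

```python
# Toy illustration (my numbers, not the linked post's): if a dollar after
# AGI is worth only a fraction v_post of a dollar today -- near zero
# whether you're dead (unaligned) or rich (aligned) -- then saving must
# offer a much higher return to beat spending now.

def required_rate(p_agi: float, v_post: float, rho: float = 0.02) -> float:
    """Interest rate r at which saving $1 matches spending it now:
    (1 + r) * (p_agi * v_post + (1 - p_agi)) = 1 + rho."""
    return (1 + rho) / (p_agi * v_post + (1 - p_agi)) - 1

# Assume a 20% chance of AGI this year and post-AGI dollars worth 10% of
# today's: real rates would need to be ~24%, far above what markets show.
print(required_rate(0.20, 0.10))
```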

2Dagon
Ah, so the mechanism is a reduction in present value of money, not an increase in future value (though it implies an increase from the reduced current value).  That does fit nicely with my general finance-world outlook, which is "we're not as rich as we think", but I'm not sure I'm sold on the rebound part of the story.
Dana
154

You are not talking about per person, you are talking about per worker. Total working hours per person have increased ~20% from 1950-2000 for ages 25-55.

8jasoncrawford
Oh, I misunderstood. Yes, my stats are per worker. It's interesting to see that per-person has increased a bit. Not sure what to make of that. The early-1900s stats didn't count a lot of housework that was done mostly by housewives.
Dana
54

The problem with this explanation is that there is a very clear delineation here between not-fraud and fraud. It is the difference between not touching customer deposits and touching them. Your explanation doesn't dispute that they were knowingly and intentionally touching customer deposits. In that case, it is indisputably intentional, outright fraud. The only thing left to discuss is whether they knew the extent of the fraud or how risky it was.

I don't think it was ill-intentioned based on SBF's moral compass. He just had the belief, "I will pass a small... (read more)

Dana
90

I find https://youglish.com/ to be very helpful with this.

1Solenoid_Entity
Some websites are great, but I've found they're wrong often enough I usually want to corroborate them with something else.