Wei Dai

If anyone wants to have a voice chat with me about a topic that I'm interested in (see my recent post/comment history to get a sense), please contact me via PM.

My main "claims to fame":

  • Created the first general purpose open source cryptography programming library (Crypto++, 1995).
  • Published one of the first descriptions of a cryptocurrency based on a distributed public ledger (b-money, 1998), predating Bitcoin.
  • Proposed UDT, combining the ideas of updatelessness, policy selection, and evaluating consequences using logical conditionals.
  • First to argue for pausing AI development based on the technical difficulty of ensuring AI x-safety (SL4 2004, LW 2011).
  • Identified current and future philosophical difficulties as core AI x-safety bottlenecks, potentially insurmountable by human researchers, and advocated for research into metaphilosophy and AI philosophical competence as possible solutions.

My Home Page

Comments (sorted by newest)

Wei Dai's Shortform
Wei Dai · 16h

> If my value system is only about receiving stuff from the universe, then the logical endpoint is a kind of blob that just receives stuff and doesn't even need a brain.

Unless one of the things you want to receive from the universe is to be like Leonardo da Vinci, or be able to do everything effortlessly and with extreme competence. Why "do chores" now if you can get to that endpoint either way, or maybe are even more likely to get there if you don't "do chores", since skipping them saves on opportunity costs and lets you better deploy your comparative advantage? (I can understand if you enjoy the time spent doing these activities, but by calling them "chores" you seem to be implying that you don't?)

Wei Dai's Shortform
Wei Dai · 18h

Hmm, I find it hard to understand or appreciate this attitude. I can't think of any chores that I intrinsically don't want to outsource, only concerns that I may not be able to trust the results. What are some other examples of chores you do and don't want to outsource? Do you have any pattern or explanation of where you draw the line? Do you think people who don't mind outsourcing all their chores are wrong in some way?

Wei Dai's Shortform
Wei Dai · 20h

A clear mistake of early AI safety people was not emphasizing enough (or ignoring) the possibility that solving AI alignment (as a set of technical/philosophical problems) may not be feasible in the relevant time-frame without a long AI pause. Some have subsequently changed their minds about pausing AI, but by not reflecting on and publicly acknowledging their initial mistakes, I think they are or will be partly responsible for others repeating similar mistakes.

Case in point is Will MacAskill's recent Effective altruism in the age of AGI. Here's my reply, copied from EA Forum:

I think it's likely that without a long (e.g. multi-decade) AI pause, one or more of these "non-takeover AI risks" can't be solved or reduced to an acceptable level. To be more specific:

  1. Solving AI welfare may depend on having a good understanding of consciousness, which is a notoriously hard philosophical problem.
  2. Concentration of power may be structurally favored by the nature of AGI or post-AGI economics, and defy any good solutions.
  3. Defending against AI-powered persuasion/manipulation may require solving metaphilosophy, which, judging from comparable fields like meta-ethics and philosophy of math, may take at least multiple decades.

I'm worried that creating (or redirecting) a movement to solve these problems, without noting at an early stage that these problems may not be solvable in the relevant time-frame (without a long AI pause), will feed into a human tendency to be overconfident about one's own ideas and solutions, and create a group of people whose identities, livelihoods, and social status are tied up with having (what they think are) good solutions or approaches to these problems, ultimately making it harder in the future to build consensus about the desirability of pausing AI development.

Wei Dai's Shortform
Wei Dai · 22h

> it'll be even harder if I know the other person is responding to an AI-rewritten version of my comment, referring to an AI-summarized version of my profile, running AI hypotheticals on how I would react

I think all of these are better than the likely alternatives though, which are that:

  • I fail to understand someone's comment or the reasoning/motivations behind their words, and most likely just move on (instead of asking them to clarify)
  • I have little idea what their background knowledge/beliefs are when replying to them
  • I fail to consider some people's perspectives on some issue

It also seems like I change my mind (or at least become somewhat more sympathetic) more easily when arguing with an AI-representation of someone's perspective, maybe due to less perceived incentive to prove that I was right all along.

Wei Dai's Shortform
Wei Dai · 7d

If people started trying earnestly to convert wealth/income into more kids, we'd come under Malthusian constraints again, and before that, most people would see a lot of backsliding in living standards and downward social mobility, which would trigger a lot of cultural upheaval and potential backlash (e.g., calls for more welfare/redistribution and attempts to turn culture back against "eugenics"/"social Darwinism", which will probably succeed just like they succeeded before). It seems ethically pretty fraught to try to push the world in that direction, to say the least, and it has a lot of other downsides, so I think at this point a much better plan for increasing human intelligence is to make genetic enhancements available for parents to voluntarily choose for their kids, government-subsidized if necessary to make them affordable for everyone, which avoids most of these problems.

Thomas Kwa's Shortform
Wei Dai · 7d

Quantum theory and simulation arguments both suggest that there are many copies of myself in the multiverse. From a first-person subjective anticipation perspective, experiencing death as nothingness seems impossible, so it seems like I should either anticipate my subjective experience continuing as one of the surviving copies, or conclude that the whole concept of subjective anticipation is confused. From a third-person / God's-eye view, death can be thought of as some of the copies being destroyed, or as a reduction in my "measure", but I don't seem to fear this, just as I didn't jump for joy to learn about having a huge number of copies in the first place. The situation seems too abstract or remote or foreign to trigger my fear (or joy) response.

Cole Wyeth's Shortform
Wei Dai · 8d

If it became common to demand and check proofs of (human) work, there would be a strong incentive to use AI to generate such proofs, which doesn't seem very hard to do.

Wei Dai's Shortform
Wei Dai · 8d

> What motive does a centralized dominant power have to allow any progress?

A culture/ideology that says the ruler is supposed to be benevolent and try to improve their subjects' lives, which of course was not literally followed, but would make it hard to fully suppress things that could clearly make people's lives better, like many kinds of technological progress. And historically, AFAIK, few if any of the Chinese emperors tried to directly suppress technological innovation; they just didn't encourage it the way the West did, through things like patent laws and scientific institutions.

> The entire world would likely look more like North Korea.

Yes, directionally it would look more like North Korea, but I think the controls would not have to be as total or harsh, because there would be less of a threat that outside ideas could rush in and overturn the existing culture/ideology the moment you let your guard down.

Thomas Kwa's Shortform
Wei Dai · 8d

> We can do adversarial training against other AIs, but ancestral humans didn't have to contend with animals whose goal was to trick them into not reproducing by any means necessary

We did have to contend with memes that tried to hijack our minds in order to spread themselves horizontally (as opposed to vertically, by having more kids), but unfortunately (or fortunately) such "adversarial training" wasn't powerful enough to instill a robust desire to maximize reproductive fitness. Our adversarial training for AI could also be very limited compared to the adversaries or natural distributional shifts the AI will face in the future.

> Our fear of death is therefore much more robust than our desire to maximize reproductive fitness

My fear of death has been much reduced after learning about ideas like quantum immortality and simulation arguments, so it doesn't seem that much more robust. Its apparent robustness in others looks like an accidental effect of most people not paying attention to such ideas or not being able to fully understand them, which does not seem to have a relevant analogy for AI safety.

Cole Wyeth's Shortform
Wei Dai · 8d

I think extensive use of LLMs should be flagged at the beginning of a post, but "uses an LLM in any part of its production process whatsoever" would probably result in the majority of posts being flagged and make the flag useless for filtering. For example, I routinely use LLMs to check my posts for errors (the kinds an LLM can detect), and I imagine most other people do so as well (or should, if they don't already).

Unfortunately, this kind of self-flagging/reporting is ultimately not going to work as a way of protecting individuals or society against AI-powered manipulation, and I doubt there will be a technical solution (e.g., an AI content detector or some other kind of defense) either, short of solving metaphilosophy. I'm not sure it will do more good than harm even in the short run, because it can give a false sense of security and punish the honest / reward the dishonest, but I still lean towards trying to establish "extensive use of LLMs should be flagged at the beginning of a post" as a norm.

Posts (sorted by new)

  • Wei Dai's Shortform (2y)
  • Managing risks while trying to do good (2y)
  • AI doing philosophy = AI generating hands? (2y)
  • UDT shows that decision theory is more puzzling than ever (2y)
  • Meta Questions about Metaphilosophy (2y)
  • Why doesn't China (or didn't anyone) encourage/mandate elastomeric respirators to control COVID? (3y)
  • How to bet against civilizational adequacy? (3y)
  • AI ethics vs AI alignment (3y)
  • A broad basin of attraction around human values? (4y)
  • Morality is Scary (4y)
Wikitag Contributions

  • Carl Shulman (2 years ago)
  • Human-AI Safety (2 years ago)
  • Roko's Basilisk (7 years ago)
  • Carl Shulman (8 years ago)
  • Updateless Decision Theory (12 years ago)
  • The Hanson-Yudkowsky AI-Foom Debate (13 years ago)
  • Updateless Decision Theory (13 years ago)
  • Signaling (13 years ago)
  • Updateless Decision Theory (14 years ago)