Milan W

Milan Weibel   https://weibac.github.io/

Milan W

I suspect that most people whose priors have not been shaped by a libertarian outlook are not very surprised by the outcome of this experiment.

Milan W

Making up something analogous to Crocker's rules but specifically for pronouns would probably be a good thing: a voluntary commitment to surrender any pronoun preferences (gender-related or otherwise) in service of communication efficiency.

Now that I think about it, a literal and expansive reading of Crocker's rules themselves includes such a surrender of the right to enforce pronoun preferences.

Milan W

For anyone wondering about the "Claude boys" thing being fake: it was edited from a real post about "coin boys" (i.e. kids constantly flipping a coin to make decisions). Still pretty funny imo.

Answer by Milan W

As you repeatedly point out, there are multiple solutions to each issue. Assuming good enough technology, all of them are viable. Which (if any) solutions end up being illegal, incentivized, made fun of, or made mandatory becomes a matter of which values end up being normative. Thus, these people may be culture-warring because they think they're influencing "post-singularity" values. This would betray the fact that they aren't really thinking in classical singularitarian terms.

Alternatively, they just spent too much time on twitter and got caught up in dumb tribal instincts. Happens to the best of us.

Milan W

Historically, LessWrong has been better at truth-finding than at problem-solving.

I hope that this thread is useful as a high signal-to-noise ratio source.

This site is very much a public forum, so I would advise any actor wishing to adopt a problem-solving stance to coordinate through secure channels.

Milan W

You're assuming that:
- There is a single AGI instance running.
- There will be a single person telling that AGI what to do.
- The AGI's obedience to this person will be total.

I can see these assumptions holding approximately true if we get really really good at corrigibility and if at the same time running inference on some discontinuously-more-capable future model is absurdly expensive. I don't find that scenario very likely, though.

Milan W

What is the theory of change here exactly, besides possible good things done by the agents themselves? I've got a couple of ideas, but I don't want to bias your response.

Milan W

I think I agree more with your counterargument than with your main argument. Having broad knowledge is good for generating ideas, and LLMs are good for implementing them quickly and thus having them bump against reality.

Milan W

Note in particular that the Commission is recommending that Congress "Provide broad multiyear contracting authority to the executive branch and associated funding for leading artificial intelligence, cloud, and data center companies and others to advance the stated policy at a pace and scale consistent with the goal of U.S. AGI leadership".

i.e. if these recommendations get implemented, pretty soon a big portion of the big 3 labs' revenue will come from large government contracts. Looks like a soft nationalization scenario to me.

Milan W

Against 1.c ("Humans need at least some resources that would clearly put us in life-or-death conflict with powerful misaligned AI agents in the long run."): The doc says that "Any sufficiently advanced set of agents will monopolize all energy sources, including solar energy, fossil fuels, and geothermal energy, leaving none for others." There are two issues with that statement:

First, the qualifier "sufficiently advanced" is doing a lot of work. Future AI systems, even if superintelligent, will be subject to physical constraints and to economic considerations such as opportunity costs. The most efficient route for an unaligned ASI (or set of ASIs) to expand its energy capture may well sidestep current human energy sources, at least for a while. We don't fight ants to capture their resources.

Second, it assumes advanced agents will want to monopolize all energy sources. While instrumental convergence is true, partial misalignment with some degree of concern for humanity's survival and autonomy is plausible. Most people in developed countries have a preference for preserving the existence of an autonomous population of chimpanzees, and our "business-as-usual-except-ignoring-AI" world seems on track to achieve that.

Taken together, both arguments paint a picture of a future ASI largely leaving alone the resources we currently use on Earth, because it is easier to take over other resources (for instance, getting minerals from asteroids and energy from orbital solar capture). It then takes over the lightcone except Earth, because it cares a little about preserving an independent humanity on Earth. In this scenario, the subset of humans who care about the lightcone lose spectacularly to an ASI in a conflict over the lightcone, but humanity as a whole is not in a life-or-death conflict with an ASI.
