All of 142857's Comments + Replies

14285710

Given an aligned AGI, to what extent are people ok with letting the AGI modify us? Examples of such modifications include (feel free to add to the list):

  • Curing aging/illnesses
  • Significantly altering our biological form
  • Converting us to digital life forms
  • Reducing/Removing the capacity to suffer
  • Giving everyone instant jhanas/stream entry/etc.
  • Altering our desires to make them easier to satisfy
  • Increasing our intelligence (although this might be an alignment risk?)
  • Decreasing our intelligence
  • Refactoring our brains entirely

What exact parts of being "human" do we want to preserve?

1Olivier Coutu
These are interesting questions that modern philosophers have been pondering. Stampy has an answer on forcing people to change faster than they would like and we are working on adding more answers that attempt to guess what an (aligned) superintelligence might do.
142857194

> A "moonshot idea" I saw brought up is getting Yudkowsky's Harry Potter fanfiction translated into Chinese (please never ever do this).

This has already been done; the translation has fairly good reviews and has generated some discussion.

> I've looked through the EA/Rationalist/AI Safety forums in China

If these are public, could you post the links to them?

> there is only one group doing technical alignment work in China

Do you know the name of the group, and what kinds of approaches they are taking toward technical alignment?

6Lao Mein
Tian-xia forums are invite-only and mostly expats; I should probably dig deeper to find native Chinese discussions. The group is CSAGI, founded by Zhu Xiaohu. Unfortunately, their website (csagi.org) has been dead for a while. He mentioned bisimulation and reinforcement learning as approaches.
14285710

Are there any alignment approaches that try to replicate how children end up loving their parents (or vice versa), except with AI and humans? Alternatively, approaches that look like getting an AI to do Buddhist lovingkindness?

1142857
I'll use this comment to collect things I find: LOVE (Learning Other's Values or Empowerment).
Answer by 14285740

For automatically rendering and de-rendering LaTeX fragments in Emacs, see https://github.com/io12/org-fragtog.

For drawing images inline, you could try https://github.com/misohena/el-easydraw.
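For reference, a minimal Emacs config sketch wiring up both packages, assuming you already have `use-package` and have installed them (el-easydraw may need to be installed manually, since I'm not sure it's on MELPA):

```elisp
;; Sketch only; package/hook names taken from each project's README.

;; org-fragtog: toggles LaTeX fragment previews automatically
;; as point enters and leaves them in org-mode buffers.
(use-package org-fragtog
  :hook (org-mode . org-fragtog-mode))

;; el-easydraw (edraw): embeds editable freehand/SVG drawings
;; directly in org files via [[edraw:...]] links.
(with-eval-after-load 'org
  (require 'edraw-org)
  (edraw-org-setup-default))
```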

1Johannes C. Mayer
I was thinking more free hand drawing.
14285720

I like this idea and think it is worth exploring. It is not even just about training new models; an AGI has to worry about misalignment with every self-modification and every interaction with the environment that changes itself.

Perhaps there are even ways to deter an AGI from self-improvement, by making misalignment more likely.

Some caveats are:

  • AGI may not take alignment seriously. We already have plenty of examples of general intelligences who don't.
  • AGI can still increase its capabilities without training new models, e.g. by getting more compute
  • If an AGI dec
... (read more)
14285730

Escape. Invest in space travel and escape the solar system before they arrive.
If your AI timelines are long, this may be a viable strategy for preserving the human species in the event of unaligned AGI.
If your AI timelines are short, a budget solution is to just send human brains into space and hope they will be found and revived by other powerful species (hopefully at least one of them is "benevolent").