I primarily use a weird ergonomic keyboard (the Kinesis Advantage 2) with custom key bindings. But my laptop keyboard has normal key bindings, so my "normal keyboard" muscle memory still works.

On Linux Mint with Cinnamon, you can remap Caps Lock in the system settings by going to Keyboard -> Layouts -> Options -> Caps Lock behavior. (You can also do it with a one-line command in a shell script set to run at startup; see the sketch below.)
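
For the command-line route, a minimal sketch, assuming X11 (Cinnamon's default session) and that the goal is making Caps Lock act as Backspace, as in my layout below:

```
# Make Caps Lock act as Backspace, via xkeyboard-config's caps:backspace option.
# Run once per login, e.g. from a startup script.
setxkbmap -option caps:backspace
```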

I use a Kinesis Advantage keyboard with the keys rebound to look like this (apologies for my poor graphic design skills):

https://i.imgur.com/Mv9FI7a.png

  • Caps Lock is rebound to Backspace and Backspace is rebound to Shift.
  • Right Shift is rebound to Ctrl + Alt + Super, which I use as a command prefix for window manager commands.
  • "custom macro" uses the keyboard's built-in macro feature to send a sequence of four keypresses (Alt-G Ctrl-`), which I use as a prefix for some Emacs commands.
  • By default, the keyboard has two backslash (\) keys. I use the OS keyboard software to rebind the second one to "–" (unshifted) and "—" (shifted), which for me are the most useful characters that aren't on a standard US keyboard (see the sketch after this list).
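
On Linux under X11, a minimal sketch of that dash rebinding using xmodmap. The keycode here (51, the usual backslash keycode on a US PC layout) is an assumption, so check what your second backslash key actually sends with xev first:

```
# Rebind one physical key to en dash (unshifted) and em dash (shifted).
# Keycode 51 is an assumption; replace it with the code xev reports for your key.
xmodmap -e "keycode 51 = endash emdash"
```

Note that if both backslash keys send the same keycode, this remaps both of them; the Advantage 2's onboard remapping can be used to give them distinct codes first.
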
  1. There were two different clauses, one about malaria and the other about chickens. "Helping people is really important" clearly applies to the malaria clause, and there's a modified version of the statement ("helping animals is really important") that applies to the chickens clause. I think writing it that way was an acceptable compromise to simplify the language, and it's pretty obvious to me what it was supposed to mean.
  2. "We should help more rather than less, with no bounds/limitations" is not a necessary claim. It's only necessary to claim "we should help more rather than less if we are currently helping at an extremely low level".
Answer by MichaelDickens

MIRI's communications strategy update, published in May, explained what they were planning to work on. I emailed them a month or so ago and they said they are continuing to work on the things in that blog post. Those are the sorts of things that can take longer than a year, so I'm not surprised that they haven't released anything substantial in the way of comms this year.

That's only true if a single GPU (or small number of GPUs) is sufficient to build a superintelligence, right? I expect it to take many years to go from "it's possible to build superintelligence with a huge multi-billion-dollar project" to "it's possible to build superintelligence on a few consumer GPUs". (Unless, of course, someone does build a superintelligence which then figures out how to make GPUs many orders of magnitude cheaper, but at that point the question is moot.)

I don't think controlling compute would be qualitatively harder than controlling, say, pseudoephedrine.

(I think it would be harder, but not qualitatively harder—the same sorts of strategies would work.)

Also, I don't feel that this article adequately addressed a downside of SA: that it accelerates an arms race. SA is only favored when alignment is easy with high probability; when you're confident that you will win the arms race; when you're confident that it's better for you to win than for the other guy[1]; and when you're talking about a specific kind of alignment, where an "aligned" AI doesn't necessarily behave ethically, it just does what its creator intends.

[1] How likely is a US-controlled (or, more accurately, Sam Altman/Dario Amodei/Mark Zuckerberg-controlled) AGI to usher in a global utopia? How likely is a China-controlled AGI to do the same? I think people are too quick to take it for granted that the former probability is larger than the latter.

Cooperative Development (CD) is favored when alignment is easy and timelines are longer. [...]

Strategic Advantage (SA) is more favored when alignment is easy but timelines are short (under 5 years)

I somewhat disagree with this. CD is favored when alignment is easy with extremely high probability. A moratorium is better given even a modest probability that alignment is hard, because the downside to misalignment is so much larger than the downside to a moratorium.[1] (A toy calculation below illustrates the asymmetry.) The same goes for SA: it's only favored when you are extremely confident about both alignment and timelines.

[1] Unless you believe a moratorium has a reasonable probability of permanently preventing friendly AI from being developed.
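
To put toy numbers on that asymmetry (the numbers are mine, purely illustrative): if a moratorium costs C in delayed benefits and misalignment costs 1000 × C, then pushing ahead has the higher expected cost whenever the probability of misalignment exceeds C / (1000 × C) = 0.1%. Even a small chance that alignment is hard is enough to favor the moratorium.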

  1. I was asking a descriptive question here, not a normative one. Guilt by association, even if weak, is a very commonly used form of argument, and so I would expect it to be used in this case.

I intended my answer to be descriptive. EAs generally avoid making weak arguments (or at least I like to think we do).
