MichaelDickens

This is actually a crazy big effect size? Preventing ~10–50% of a cold by taking a few pills a day seems like a great deal to me.

"Don't push the frontier of capabilities. Obviously this is basically saying that Anthropic should stop making money and therefore stop existing. The more nuanced version is that for Anthropic to justify its existence, each time it pushes the frontier of capabilities should be earned by substantial progress on the other three points."

I think I have a stronger position on this than you do. I don't think Anthropic should push the frontier of capabilities, even given the tradeoff it faces.

If their argument is "we know arms races are bad, but we have to accelerate arms races or else we can't do alignment research," they should be really, really sure that they do, in fact, have to do the bad thing to get the good thing. But I don't think you can be that sure, and I think the claim is actually less than 50% likely to be true.

  1. I don't take it for granted that Anthropic wouldn't exist if it didn't push the frontier. It could operate by intentionally lagging a bit behind other AI companies while still staying roughly competitive, and/or it could compete by investing harder in good UX. I suspect a model that is (say) 25% worse would not be much less profitable.
  2. (This is a weaker argument, but) if it does turn out that Anthropic really can't exist without pushing the frontier and it has to close down, that's probably a good thing. At the current level of investment in AI alignment research, I believe reducing arms race dynamics + reducing alignment research probably net decreases x-risk, and it would be better for this version of Anthropic not to exist. People at Anthropic probably disagree, but they should be very concerned that they have a strong personal incentive to disagree, and should be wary of their own bias. And they should be especially wary given that they hold the fate of humanity in their hands.

If lysine is your problem but you don't want to eat beans, you can also buy lysine supplements.

I primarily use a weird ergonomic keyboard (the Kinesis Advantage 2) with custom key bindings. But my laptop keyboard has normal key bindings, so my "normal keyboard" muscle memory still works.

On Linux Mint with Cinnamon, you can do this in system settings by going to Keyboard -> Layouts -> Options -> Caps Lock behavior. (You can also put that line in a shell script and set the script to run at startup.)
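The line in question isn't reproduced here, but as a minimal sketch (assuming an X11 session, and that the goal is, for example, remapping Caps Lock to Backspace), such a startup script could look like:

    #!/bin/sh
    # caps:backspace is a standard xkeyboard-config option that makes
    # Caps Lock act as an additional Backspace.
    setxkbmap -option caps:backspace

In Cinnamon you can register a script like this under System Settings -> Startup Applications.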

I use a Kinesis Advantage keyboard with the keys rebound to look like this (apologies for my poor graphic design skills):

https://i.imgur.com/Mv9FI7a.png

  • Caps Lock is rebound to Backspace and Backspace is rebound to Shift.
  • Right Shift is rebound to Ctrl + Alt + Super, which I use as a command prefix for window manager commands.
  • "custom macro" uses the keyboard's built-in macro feature to send a sequence of four keypresses (Alt-G Ctrl-`), which I use as a prefix for some Emacs commands.
  • By default, the keyboard has two backslash (\) keys. I use the OS keyboard software to rebind the second one to "–" (unshifted) and "—" (shifted), which for me are the most useful characters that aren't on a standard US keyboard.
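(Not necessarily how the rebinding above is done, but as a sketch: on X11 you can get this kind of dash mapping with xmodmap. The keycode 51 below is a placeholder for the second backslash key; the real keycode can be found by pressing the key while running xev.)

    # Placeholder keycode; find the actual one with xev.
    # endash and emdash are standard X keysym names, giving "–"
    # unshifted and "—" shifted.
    xmodmap -e "keycode 51 = endash emdash"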
  1. There were two different clauses, one about malaria and the other about chickens. "Helping people is really important" clearly applies to the malaria clause, and there's a modified version of the statement ("helping animals is really important") that applies to the chickens clause. I think writing it that way was an acceptable compromise to simplify the language, and it's pretty obvious to me what it was supposed to mean.
  2. "We should help more rather than less, with no bounds/limitations" is not a necessary claim. It's only necessary to claim "we should help more rather than less if we are currently helping at an extremely low level".
Answer by MichaelDickens

MIRI's communications strategy update, published in May, explained what they were planning to work on. I emailed them a month or so ago and they said they are continuing to work on the things in that blog post. Those are the sorts of things that can take longer than a year, so I'm not surprised that they haven't released anything substantial in the way of comms this year.

That's only true if a single GPU (or a small number of GPUs) is sufficient to build a superintelligence, right? I expect it to take many years to go from "it's possible to build superintelligence with a huge multi-billion-dollar project" to "it's possible to build superintelligence on a few consumer GPUs". (Unless of course someone does build a superintelligence which then figures out how to make GPUs many orders of magnitude cheaper, but at that point the question is moot.)

I don't think controlling compute would be qualitatively harder than controlling, say, pseudoephedrine.

(I think it would be harder, but not qualitatively harder—the same sorts of strategies would work.)
