robo

Comments

robo30

I'm really not sure how to do this, but are there ways to collect some counteracting or unbiased samples about Sam Altman?  Or to do another one-sided vetting of other CEOs to see what the base rate of being able to dig up questionable things is?  Collecting evidence that points in only one direction just sets off huge warning lights 🚨🚨🚨🚨 that I can't quiet.

robo10

I think this is the sort of conversation we should be having!  [Side note: I think restricting compute is more effective than restricting research because you don't need 100% buy-in:

  1. It's easier to prevent people from manufacturing semiconductors than to keep people from learning ideas that fit on a napkin.
  2. It's easier to prevent scientists in Eaccistan from having GPUs than to prevent scientists in Eaccistan from thinking.

The analogy to nuclear weapons is, I think, a good one.  The science behind nuclear weapons is well known -- what keeps them from being built is access to nuclear materials.

(Restricting compute also seriously restricts research.  Research speed on neural nets is in large part bounded by how many experiments you run rather than ideas you have.)]

robo10

I think the weakness with KL divergence is that the potentially harmful model can do things the safe model would be exponentially unlikely to do.  Even if the safe model has a 1 in 1 trillion chance of stabbing me in the face, the KL penalty to stabbing me in the face is log(1 trillion) (and logs make even huge numbers small).
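To make that concrete (a rough back-of-the-envelope, assuming the penalty is $D_{\mathrm{KL}}(\text{unknown}\,\|\,\text{safe})$ and the unknown model takes the bad action with probability near 1):

$$D_{\mathrm{KL}}(\text{unknown}\,\|\,\text{safe}) \;\gtrsim\; 1\cdot\log\frac{1}{10^{-12}} \;=\; \log(10^{12}) \;\approx\; 27.6\ \text{nats} \approx 40\ \text{bits},$$

which is the kind of penalty a sufficiently motivated optimizer can simply pay.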

What about limiting the unknown model to choose one of the cumulative-98% most likely actions for the safe model to take?  If the safe model never has more than a 1% chance of taking an action that will kill you, then the unknown model won't be able to take an action that kills you.  This isn't terribly different from the top-k or top-p (nucleus) sampling many language models use in practice.
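A minimal sketch of that constraint (my own illustration, in numpy, with hypothetical names): the untrusted model may only pick actions inside the safe model's cumulative-98% set, and everything outside it gets probability zero.

```python
import numpy as np

def allowed_actions(safe_probs, cumulative_mass=0.98):
    """Indices of the safe model's most likely actions, taken in decreasing
    order of probability until `cumulative_mass` is covered."""
    order = np.argsort(safe_probs)[::-1]                    # most likely first
    cutoff = int(np.searchsorted(np.cumsum(safe_probs[order]), cumulative_mass)) + 1
    return set(order[:cutoff].tolist())

def constrained_sample(untrusted_probs, safe_probs, cumulative_mass=0.98, rng=np.random):
    """Sample from the untrusted model after zeroing out every action that
    falls outside the safe model's top-p set, then renormalizing."""
    allowed = allowed_actions(np.asarray(safe_probs), cumulative_mass)
    masked = np.array([p if i in allowed else 0.0 for i, p in enumerate(untrusted_probs)])
    masked /= masked.sum()   # assumes the untrusted model puts some mass on allowed actions
    return rng.choice(len(masked), p=masked)

# Example: the safe model thinks action 3 ("stab") is a 1-in-a-trillion event,
# so the untrusted model can never select it, no matter how much it prefers it.
safe = np.array([0.50, 0.30, 0.20, 1e-12])
untrusted = np.array([0.05, 0.05, 0.05, 0.85])
print(constrained_sample(untrusted, safe))  # only ever prints 0, 1, or 2
```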

robo12061

Our current big stupid: not preparing for 40% agreement

Epistemic status: lukewarm take from the gut (not brain) that feels rightish

The "Big Stupid" of the AI doomers 2013-2023 was AI nerds' solution to the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs".  Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche.  When the public turned out to be somewhat receptive to the idea of regulating AIs, doomers were unprepared.

Take: The "Big Stupid" of right now is still the same thing.  (We've not corrected enough).  Between now and transformative AGI we are likely to encounter a moment where 40% of people realize AIs really could take over (say if every month another 1% of the population loses their job).  If 40% of the world were as scared of AI loss-of-control as you, what could the world do? I think a lot!  Do we have a plan for then?

Almost every LessWrong post on AI is about analyzing AIs.  Almost none are about how, given widespread public support, people/governments could stop bad AIs from being built.

[Example: if 40% of people were as worried about AI as I am, the US would treat GPU manufacture like uranium enrichment.  And fortunately GPU manufacture is hundreds of times harder than uranium enrichment!  We should be nerding out researching integrated circuit supply chains, choke points, foundry logistics in jurisdictions the US can't unilaterally sanction, that sort of thing.]

TL;DR: stopping deadly AIs from being built needs less research on AIs* and more research on how to stop AIs from being built.

*My research included 😬

robo2-3

My comment here is not cosmically important and I may delete it if it derails the conversation.

There are times when I would really want a friend to tap me on the shoulder and say "hey, from the outside the way you talk about <X> seems way worse than normal.  Are you hungry/tired/too emotionally close?".  They may be wrong, but often they're right.
If you (general reader you) would deeply want someone to tap you on the shoulder, read on; otherwise this comment isn't for you.

If you burn at NYT/Cade Metz as intolerable hostile garbage, and you have not taken into account how defensive tribal instincts can cloud judgements, then, um, <tap tap>?

robo91

I appreciate that you are not speaking loudly if you don't yet have anything loud to say.

robo330

Is it that your family's net worth is $100 and you gave up $85?  Or that your family's net worth is $15 and you gave up $85?

Either way, hats off!

robo30

How closely would this rank a program p and a universal Turing machine simulating p?  My sense is not very closely, because the "same" computation steps of the two programs don't align.

My "most naïve formula for logical correlation" would be something like put a probability distribution on binary string inputs, treat  and  as random variables , and compute their mutual information.

robo10

Interesting idea.
I don't think using a classical Turing machine in this way would be the right prior for the multiverse.  Classical Turing machines are a way for ape brains to think about computation using the circuitry we have available ("imagine other apes following these social conventions about marking long tapes of paper").  They aren't the cosmically simplest form of computation.  For example, the (microscopic, non-coarse-grained) laws of physics are deeply time-reversible, whereas Turing machines are not.
I suspect this computation speed prior would lead to Boltzmann-brain problems.  Your brain at this moment might be computed at high fidelity, but everything else in the universe would be approximated for the computational speed-up.
