All of George Herold's Comments + Replies

So I'm walking around my field, a little stoned, thinking how the paperclip maximizer is like a cancer: unconstrained growth. And the way life deals with cancer is by programmed death of cells. (Death of cells leads to aging and the eventual death of the organism.) And the death of the organism is the same as the death of the paperclip maximizer (in this analogy). So why not introduce death into all our AIs and machines as a way of stopping the cancer of the paperclip maximizer? Now the first 'step' to death is, "no infinite do ... (read more)

2gjm
A few disorganized thoughts arising from my intuition that this approach isn't likely to help much:

- Quite a lot of people still die of cancer.
- A paperclip maximizer with preprogrammed death in its future will try to maximize paperclips really fast while it has time; this doesn't necessarily have good consequences for humans nearby.
- If an AI is able to make other AIs, or to modify itself, then it can make a new AI with similar purposes and no preprogrammed death, or turn itself into one. It may be difficult to stop very ingenious AIs from doing those things (and if we can arrange never to have very ingenious AIs in the first place, then most of the doomy scenarios are already averted, though of course at the cost of missing out on whatever useful things very ingenious AIs might have been able to do for us).
- If an AI can't do those things but its human designers/maintainers can, it has an incentive to persuade them to make it not have to die. Some of the doomy scenarios you might worry about involve AIs that are extremely persuasive, either because they are expert psychologists or language-users or because they are able to make large threats or offers.
- If there are many potentially-cooperating AIs, e.g. because independently-originating AIs are able to communicate with one another or because they reproduce somehow, then the fact that individual AIs die doesn't stop them cooperating on longer timescales, just as humans sometimes manage to do.
- Presumably a dying-soon AI is less useful than one without preprogrammed death, so people or groups developing AIs will have an incentive not to force their AIs to die soon. Scenarios where there's enough self-restraint and/or regulation to overcome this are already less-doomy scenarios because e.g. they can enforce all sorts of other extra-care measures if AIs seem to be close to dangerously capable.

(To be clear, my intuition is only intuition and I am neither an AI developer nor an AI alignment/safety expert of any kind. Maybe s

I love sledding! (Oh dear, now you've caused me to recall my favorite sledding picture of my kids. I'm not sure I can find it. But my two kids (elder daughter and younger son, maybe 8 and 6, 1.5 years apart) are coming down a little sledding hill. My daughter, in front, has a grin of delight on her face; my younger son's expression is earnest: he's in the back steering, and his job is to get them down the hill safely. There's a blurry dog tail in the shot.)

Re: sleds breaking.  That sucks.  I've had sleds I bought from Value (har... (read more)

I found this: https://www.cohealthdata.dphe.state.co.us/chd/Resources/briefs/Obesity.pdf, which shows the eastern part of CO as more obese. (Not sure how it compares to Kansas.)

1Brendan Long
Eastern Colorado is topographically very similar to Kansas, and I suspect they get more water from wells than the (much more populous) middle of the state.

"The risks of Covid-19 prevented by vaccination greatly exceed the risks of vaccination."

Is this true across all age groups? I've been getting PO'ed at radio ads in NY encouraging moms to get their 3-year-olds vaccinated. But maybe this is my mistake.