All of blackstampede's Comments + Replies

I appreciate your reply- it was thoughtful and lucid. Thanks for taking the time to comment. Well, I thought I wasn't going to have time for a long(-ish) reply, but once I started writing I couldn't stop, so here you go-

First, I'd say that "free choice" obviously isn't an absolute state. But I think that there are more or less free choices. Working for a factory in the US is a more free choice while working for a factory in a country that has no other good options is a less free choice. I could have been more clear about that.

Second, I'm not going to argue...

Unless I'm misunderstanding, it seems like pumping the water up from an aquifer to the surface would be enough height to act as a battery- you wouldn't drain it to ground level, you would drain it back down into the aquifer.

8DirectedEvolution
Average water table depth in West Texas is 4.2 meters, and water weighs about 9810 N per cubic meter. If you dig a pit 1 meter deep to store the water at ground level, that reduces your height above the aquifer to about 3 meters. You can then store about 30,000 J per square meter of pit area. For perspective, that is about enough energy to keep a lightbulb on for 10 minutes. It is about 0.008 kWh. Average household energy consumption is about 30 kWh/day. You’d therefore need a pit that takes up about an acre to store enough energy to power your house for a day. This may interfere with your farming plans somewhat.
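The arithmetic above can be sanity-checked with a short script. The 4.2 m water table, 1 m pit depth, 30 kWh/day household figure, and 9810 N/m³ are all taken from the comment; the exact head (3.2 m rather than the rounded 3 m) shifts the numbers slightly but not the conclusion:

```python
# Sanity check of the aquifer pumped-storage estimate above.
RHO_G = 9810.0      # weight density of water, N per cubic meter (rho * g)
head_m = 4.2 - 1.0  # water table depth minus the 1 m pit depth -> 3.2 m of head
pit_depth_m = 1.0   # cubic meters of water stored per square meter of pit area

# Energy stored per square meter of pit area: E = (rho*g) * h * volume
energy_j_per_m2 = RHO_G * head_m * pit_depth_m
energy_kwh_per_m2 = energy_j_per_m2 / 3.6e6  # 1 kWh = 3.6e6 J

household_kwh_per_day = 30.0
area_m2 = household_kwh_per_day / energy_kwh_per_m2
area_acres = area_m2 / 4046.9  # square meters per acre

print(round(energy_j_per_m2), round(energy_kwh_per_m2, 4), round(area_acres, 2))
```

With the unrounded 3.2 m head this gives roughly 31,000 J/m² (about 0.009 kWh) and a bit under an acre of pit area, consistent with the comment's rounded figures.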

I considered that. I wonder if you could plant some hardy, dry-climate crop the first few years and till it under to improve the soil composition.

2tailcalled
Ah ok. It should be noted that it's perfectly consistent for both of the following to be true:

* At the current margins, the state doesn't really mind speeding and mainly sees it as a source of revenue.
* If speed limits were not enforced at all and were only suggestions posted on road signs, the culture around driving would shift toward speeding much more often and much more severely.
1JBlack
Yes this does seem to be happening. It also appears to be unavoidable. Our state of knowledge is nowhere near being able to guarantee that any AGI we develop will not kill us all. We are already developing AI that is superhuman in increasingly many aspects. Those who are actively working right now to bring the rest of the capabilities up to and above human levels obviously can't be sufficiently concerned, or they would not be doing it.
3RobertM
I think there's probably value in being on an alignment team at a "capabilities" org, or even embedded in a capabilities team if the role itself doesn't involve work that contributes to capabilities (either via first-order or second-order effects).   I think that the "in the room" argument might start to make sense when there's actually a plan for alignment that's in a sufficiently ready state to be operationalized.  AFAICT nobody has such a plan yet.  For that reason, I think maintaining & improving lines of communication is very important, but if I had to guess, I'd say you could get most of the anticipated benefit there without directly doing capabilities work.
6Eli Tyre
Why do you think this? It seems to me that reading books about deep learning is a just fine thing to do, but that publishing papers that push forward the frontier of deep learning is plausibly quite bad. These seem like such different activities that I'm not at all inclined to lump them together for the purposes of this question.
3RobertM
I wouldn't call it an infohazard; generally that refers to information that's harmful simply to know, rather than because it might e.g. advance timelines.   There are arguments to be made about how much overlap there is between capabilities research and alignment research, but I think by default most things that would be classified as capabilities research do not meaningfully advance AI alignment.  For that to be true, you'd need >50% of all capabilities work to advance alignment "by default" (and without requiring any active effort to "translate" that capabilities work into something helpful for alignment), since the relative levels of effort invested are so skewed toward capabilities.  See also https://www.lesswrong.com/tag/differential-intellectual-progress.
9DirectedEvolution
He has a $100 bet with Bryan Caplan, inflation adjusted. EY took Bryan’s money at the time of the bet, and pays it back if he loses.
1Vanilla_cabs
Yes, but I don't know if he really did it. I see multiple problems with that implementation. First, the interest rate should be adjusted for inflation; otherwise the bet is about a much larger class of events than "end of the world". Next, there's a high risk that the "doom" bettor will have spent all their money by the time the bet expires. The "survivor" bettor will never actually see that money anyway. Finally, I don't think winning is interesting if the world ends. What's more interesting is rallying doubters before it's too late, in order to marginally raise our chances of survival.
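The inflation-adjustment mechanics being debated can be sketched as follows. The CPI figures below are placeholders for illustration, not the actual terms or dates of the Yudkowsky–Caplan bet:

```python
def inflation_adjusted_payout(stake: float, cpi_at_bet: float, cpi_at_expiry: float) -> float:
    """Scale the original stake by the change in a price index, so the
    repayment has the same purchasing power as the money taken up front."""
    return stake * (cpi_at_expiry / cpi_at_bet)

# Hypothetical numbers: $100 taken when the CPI was 255, repaid when it is 310.
payout = inflation_adjusted_payout(100.0, 255.0, 310.0)
print(round(payout, 2))  # -> 121.57
```

This only protects the nominal value of the stake; it does nothing about the other two objections (the loser being broke, or there being no one left to pay).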
2MSRayne
Ah! Sorry for being nitpicky then. I understand what you mean now. And I agree!
1ponkaloupe
i happen to mostly agree with you on those broad ideals. a large space full of constant experimentation allows for regularly finding better ways of doing things: American dynamism in a nutshell.

yes, and no. abortion is relevant to a government because most governments promise a specific set of rights to their citizens which must be defended, and one of these rights is protection from violence. it’s reasonable for a government to approach abortion strictly from the angle of “at what moment(s) in human development do we grant humans their citizenship.”

as with the question of justice, the decision-making here could be guided by processes which are either closely tied to morality (“life is sacred; citizenship is granted at conception”) or less directly related to morals (“for the good of the country, citizenship should be granted once the expected gains from providing it outweigh the cost”). in a competitive landscape, one might expect selective pressures to optimize for the latter interpretation.

in fact, if one understands morality to be a thing which emerged in the context of social cooperation, one might expect the individual’s moral view to yield similar results to the amoral view of decision making — and that significant disagreements at that level are due to radical changes in the human experience since roughly the agricultural revolution, where the optimal methods of cooperation began to shift at a rate that outpaced the ability of morals to keep up. but this is me shooting loosely-formed ideas from the hip here: i’ve never looked into the history of morality and it could easily exist for reasons other than facilitating social cooperation.
1ersatz
I’m just pasting the link in the rich text editor, but I don’t know the Markdown syntax, sorry.
2Daniel Kokotajlo
I think that's exactly the sort of thing I'm looking for, yes. It's important that users be able to trust that e.g. the website won't get hacked and its secrets revealed. How can that be achieved?
4ChristianKl
arXiv already allows for free publishing today. Nothing you wrote about seems to provide a meaningful improvement on arXiv. 
1Maxwell Peterson
I haven’t!