Thanks for this post Ben. I think a lot of what you're saying here could alternatively be filed under "Taking ideas seriously": the dedication to follow through with the consequences of ideas, even if their conclusions are unorthodox or uncomfortable.
I would reckon: no single AI safety method "will work" because no single method is enough by itself. The idea expressed in the post would not "solve" AI alignment, but I think it's a thought-provoking angle on part of the problem.
Weber again: "And, so in light of this historical view, we need to remember that bureaucracy, taken as it is, is just an instrument of precision that can be put to service by purely political, economic, or any other dominating or controlling interest. Therefore the simultaneous development of democratization and bureaucratization should not be exaggerated, no matter how typical the phenomena may be." Yikes, okay, it seems like Weber understood the notion of the orthogonality thesis.
Isn't this interesting: Weber's point is similar to the orthogonality thes...
Thanks Justin! This is an interesting perspective. I'd enjoy seeing a compilation of different perspectives on ensuring AI alignment. (Another recurrent example would be the cybersecurity perspective on AI safety.)
Bureaucratization is the ultimate specific means to turn a mutually agreed upon community action rooted in subjective feeling into action rooted in a rational agreement by mutual consent.
This sounds a lot like the general situation of creating moral or judicial systems for a society. (When it works well.)
...The principle of fixed competencies
I quite like the concept of alignment through coherence between the "coherence factors"!
"Wisdom" has many meanings. I would use the word differently to how the article is using it.
I think the healthy and compassionate response to this article would be to focus on addressing the harms victims have experienced. So I find myself disappointed by much of the voting and comment responses here.
I agree that the Bloomberg article doesn't acknowledge that most of the harms it lists have been perpetrated by people who have already mostly been kicked out of the community, and that it uses some unfair framings. But I think the bigger issue is that of harms experienced by women that may not have been addressed: that of unreported cases, and of insu...
I think I agree with your technological argument, but I'd take your 6 months and 2.5 years and multiply them by a factor of 2-4.
Part of it is likely that we are conceiving the scenarios a bit differently. I might be including some additional practical considerations.
Yes, that's most of the 2-5%.
Thank you for this post, Max.
My background here:
Summary: I wouldn't give 70% for WW3/KABOOM from conventional NATO retaliation. I would give that 2-5% at the moment (I've spent little time thinking about the precise number).
Motivation: I think conventional responses from NATO will cause Russia to generally back down. I think Putin wants to use the threat of nukes, not actually use them.
Even when cornered yet further, I expect Putin to assess ...
The amount of effort going into AI as a whole ($10s of billions per year) is currently ~2 orders of magnitude larger than the amount of effort going into the kind of empirical alignment I’m proposing here, and at least in the short-term (given excitement about scaling), I expect it to grow faster than investment into the alignment work.
There's a reasonable argument (shoutout to Justin Shovelain) that the risk is that work such as this, done by AI alignment people, will be closer to AGI than the work done by standard commercial or academic research, and th...
Unfortunately, there is no good 'where to start' guide for anti-aging. This is insane, given this is the field looking for solutions to the biggest killer on Earth today.
Low-hanging-fruit intervention: create a public guide to that effect on a website.
That being said, I would bet that one would be able to find other formalisms that are equivalent after kicking down the door...
At least, we've now hit one limit in the shape of universal computation: No new formalism will be able to do something that couldn't be done with computers. (Unless we're gravely missing something about what's going on in the universe...)
When it comes to the downside risk, it's often the case that there are more unknown unknowns that produce harm than positive unknown unknowns. People are usually biased to overestimate the positive effects and underestimate the negative effects of the known unknowns.
This seems plausible to me. Would you like to expand on why you think this is the case?
The asymmetry between creation and destruction? (I.e., it's harder to build than it is to destroy.)
Very good point! The effect of not taking an action depends on what the counterfactual is: what would happen otherwise/anyway. Maybe the article should note this.
Excellent comment, thank you! Don't let the perfect be the enemy of the good if you're running from an exponential growth curve.
Looks promising to me. Technological development isn't by default good.
Though I agree with the other commenters that this could fail in various ways. For one thing, if a policy like this is introduced without guidance on how to analyze the societal implications, people will think of wildly different things. ML researchers aren't by default going to have the training to analyze societal consequences. (Well, who does? We should develop better tools here.)
Or, at least, include a paragraph or a few to summarize it!
Some quick musings on alternatives for the "self-affecting" info hazard type:
I wrote this comment on an earlier version of Justin's article:
It seems to me that most of the 'philosophical' problems are going to get solved as a matter of solving practical problems in building useful AI. You could call the ML systems, the AI, being developed now 'empirical'. From the perspective of the people building current systems, they likely don't consider what they're doing to be solving philosophical problems. Symbol grounding problem? Well, an image classifier built on a convolutional neural network learns to ...
I expect the event to have no particular downside risks, and to give interesting input and spark ideas in experts and novices alike. Mileage will vary, of course. Unconferences foster dynamic discussion and a living agenda. If it's risky to host this event, then I expect AI strategy and forecasting meetups and discussions at EAG to be risky and they should also not be hosted.
I and other attendees of AIXSU pay careful attention to potential downside risks. I also think it's important we don't strangle open intellectual advancement. We need to...
We can subdivide the security story based on the ease of fixing a flaw if we're able to detect it in advance. For example, vulnerability #1 on the OWASP Top 10 is injection, which is typically easy to patch once it's discovered. Insecure systems are often right next to secure systems in program space.
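To make the "right next to" point concrete, here's a minimal sketch (my own illustration, not from the linked material; the table and function names are made up) of an injection flaw and its patch, using only Python's standard-library sqlite3. The secure version differs from the insecure one by roughly one line, which is what makes this class of flaw easy to fix once it's discovered.

```python
import sqlite3

# Toy database for the illustration (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_insecure(name: str):
    # Vulnerable: user input is spliced directly into the SQL string,
    # so a name like "' OR '1'='1" returns every row.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(name: str):
    # Patched: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "' OR '1'='1"
print(find_user_insecure(malicious))  # leaks all rows
print(find_user_secure(malicious))    # returns no rows
```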
Insecure systems are right next to secure systems, and many flaws are found. Yet the larger systems (the company running the software, the economy, etc.) manage to correct somehow. It's because there are mechanisms in the larger systems poised t...
This seems like a valuable research question to me. I have a project proposal in a drawer of mine that is strongly related: "Entanglement of AI capability with AI safety".
My guess is that the ideal is to have semi-independent teams doing research. Independence in order to better explore the space of questions, and some degree of plugging in to each other in order to learn from each other and to coordinate.
Are there serious info hazards, and if so can we avoid them while still having a public discussion about the non-hazardous parts of strategy?
There are info hazards. But I think if we can discuss Superintelligence publicly, then yes; we can have a public discussion about the non-hazardous parts of strategy.
Are there enough...
Nice work, Wei Dai! I hope to read more of your posts soon.
However I haven't gotten much engagement from people who work on strategy professionally. I'm not sure if they just aren't following LW/AF, or don't feel comfortable discussing strategically relevant issues in public.
A bit of both, presumably. I would guess a lot of it comes down to incentives, perceived gain, and habits. There's no particular pressure to discuss on LessWrong or the EA Forum. LessWrong isn't perceived as your main peer group. And if you're at FHI or OpenAI, you'll have plenty of contact with people who can provide quick feedback already.
I'm very confused why you think that such research should be done publicly, and why you seem to think it's not being done privately.
I don't think the article implies this:
Research should be done publicly
The article states: "We especially encourage researchers to share their strategic insights and considerations in write ups and blog posts, unless they pose information hazards."
Which means: share more, but don't share if you think there are possible negative consequences of it.
Though I guess you could mean that it's very h...
Yes -- the plan is to have these on an ongoing basis. I'm writing this just after the deadline passed for the one planned for April.
Here's the website: https://aisafetycamp.com/
The Facebook group is also a good place to keep tabs on it: https://www.facebook.com/groups/348759885529601/
Your relationship with other people is a macrocosm of your relationship with yourself.
I think there's something to that, but it's not that general. For example, some people can be very kind to others but harsh with themselves. Some people can be cruel to others but lenient to themselves.
If you can't get something nice, you can at least get something predictable
The desire for the predictable is what Autism Spectrum Disorder is all about, I hear.
I think there's something to that, but it's not that general. For example, some people can be very kind to others but harsh with themselves. Some people can be cruel to others but lenient to themselves.
Even if the behavior itself seems vastly different, that doesn't necessarily mean they aren't just different instances of the same "social program". For example, if you're "kind" to others but harsh with yourself, it might be because you don't know how to hold people accountable without being harsh, and corre...
It's bleen, without a moment's doubt.
Counterpoint: Sometimes, not moving means moving, because everyone else is moving away from you. Movement -- change -- is relative. And on the Internet, change is rapid.
Interesting. I might show up.
Thanks for the tip. Two other books on the subject that seem to be appreciated are Introduction to Set Theory by Karel Hrbacek and Classic Set Theory: For Guided Independent Study by Derek Goldrei.
Edit: math.se weighs in: http://math.stackexchange.com/a/264277/255573
The author of the Teach Yourself Logic study guide agrees with you about reading multiple sources:
I very strongly recommend tackling an area of logic (or indeed any new area of mathematics) by reading a series of books which overlap in level (with the next one covering some of the same ground and then pushing on from the previous one), rather than trying to proceed by big leaps.
In fact, I probably can’t stress this advice too much, which is why I am highlighting it here. For this approach will really help to reinforce and deepen understanding as you re-encounter the same material from different angles, with different emphases.
My two main sources of confusion in that sentence are:
I find Halmos somewhat contradictory here.
But I'm convinced you're right. I've edited the post. Thanks.
You guys must be right. And Wikipedia corroborates. I'll edit the post. Thanks.
Hello.
I'm currently attempting to read through the MIRI research guide in order to contribute to one of the open problems, starting from Basics. I'm emulating many of Nate's techniques. I'll post reviews of the material in the research guide on LessWrong as I work through it.
I'm mostly posting here now just to note this. I can be terse at times.
See you there.
First, appreciation: I love that calculated modification of self. These, and similar techniques, can be very useful if put to use in the right way. I recognize myself here and there. You did well to abstract it all out this clearly.
Second, a note: You've described your techniques from the perspective of how they deviate from epistemic rationality - "Changing your Terminal Goals", "Intentional Compartmentalization", "Willful inconsistency". I would've been more inclined to describe them from the perspective of their central eff...
And boxing, by the way, means giving the AI zero power.
No, hairyfigment's answer was entirely appropriate. Zero power would mean zero effect. Any kind of interaction with the universe means some level of power. Perhaps in the future you should say nearly zero power instead, so as to avoid misunderstanding on the part of others, as taking you literally on the "zero" is apparently "legalistic".
As to the issues with nearly zero power:
So you disagree with the premise of the orthogonality thesis. Then you know a central concept to probe to understand the arguments put forth here. For example, check out Stuart Armstrong's paper: General purpose intelligence: arguing the Orthogonality thesis
There's no guarantee that boxing will ensure the safety of a soft takeoff. When your boxed AI starts to become drastically smarter than a human -- 10 times -- 1000 times -- 1000000 times -- the sheer enormity of the mind may slip beyond human capacity to understand. All the while, a seemingly small dissonance between the AI's goals and human values -- or a small misunderstanding on our part of what goals we've imbued it with -- could magnify to catastrophe as the power differential between humanity and the AI explodes post-transition.
If an AI goes through the ...
Mark: So you think human-level intelligence in principle does not combine with goal stability. Aren't you simply disagreeing with the orthogonality thesis, "that an artificial intelligence can have any combination of intelligence level and goal"?
http://intelligenceexplosion.com/en/2012/ai-the-problem-with-solutions/ links to http://lukeprog.com/SaveTheWorld.html - which redirects to http://lukemuehlhauser.comsavetheworld.html/ - which isn't there anymore.
I agree with the general shape of your argument, including that Cotra and Carlsmith are likely to overestimate the compute of the human brain, and that frontier algorithms are not as efficient as algorithms could be.
But I disagree that it will happen this quickly. :)