
Comment author: devi 20 December 2016 11:16:43PM *  1 point [-]

However, this likely understates the magnitudes of differences in underlying traits across cities, owing to people anchoring on the people who they know when answering the questions rather than anchoring on the national population

I think this is a major problem. This is mainly based on taking a brief look at this study a while back and being very suspicious of how explicitly it contradicted so many of my models (e.g. South America having lower Extraversion than North America, and East Asia being the least Conscientious region).

Comment author: Qiaochu_Yuan 20 December 2016 07:42:01AM 17 points [-]

The bucket diagrams don't feel to me like the right diagrams to draw. I would be drawing causal diagrams (of aliefs); in the first example, something like "spelled oshun wrong -> I can't write -> I can't be a writer." Once I notice that I feel like these arrows are there I can then ask myself whether they're really there and how I could falsify that hypothesis, etc.
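
As a rough sketch of the shape being described here (the edge list and the prompt below are illustrative, not from the comment): the felt arrows can be written down as a small directed graph and questioned one at a time.

```python
# A minimal sketch (illustrative, not from the original comment) of the alief
# chain as a directed graph, where each arrow is a separate hypothesis.
alief_arrows = [
    ("spelled oshun wrong", "I can't write"),
    ("I can't write", "I can't be a writer"),
]

for cause, effect in alief_arrows:
    # For each felt arrow, ask whether it is really there and how to falsify it.
    print(f"Does '{cause}' really imply '{effect}'? What would falsify that?")
```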

Comment author: devi 20 December 2016 10:30:27PM 9 points [-]

The causal chain feels like a post-justification rather than what actually goes on in the child's brain. I expect this to be computed using a vaguer sense of similarity that often ends up agreeing with causal chains (at least closely enough in domains with good feedback loops). I agree that causal chains are better models of how you should think about things explicitly, but it seems to me that the purpose of these diagrams is to give a memorable symbol for the bug described here (use case: recognizing and remembering the applicability of the technique).

In response to comment by devi on Lesswrong 2016 Survey
Comment author: robirahman 05 May 2016 05:28:30PM 2 points [-]

Someone said elsewhere in this thread that if you stop in the middle of the survey, it does record the answers you put in before quitting.

Comment author: devi 09 May 2016 12:39:31AM 0 points [-]

Great! Thanks!

Comment author: devi 01 May 2016 07:21:05PM 1 point [-]

I just remembered that I still haven't finished this. I saved my survey response partway through, but I don't think I ever submitted it. Will it still be counted, and if not, could you give people with saved survey responses the opportunity to submit them?

I realize this is my fault, and understand if you don't want to do anything extra to fix it.

Comment author: ChristianKl 17 February 2016 11:17:37AM 2 points [-]

To the extent that people want to live where other people live, it's useful to have high density. Flat buildings aren't optimal for cities even when they are cheap to build.

Comment author: devi 18 February 2016 02:09:37AM 0 points [-]

I wasn't only referring to wanting to live where there are a lot of people. I was also referring to wanting to live near very similar/nice people and far from very dissimilar/annoying people. I think the latter, together with the expected ability to scale things down, would make people want to live in smaller, more selected communities, even if they were in the middle of nowhere.

Comment author: ChristianKl 15 February 2016 03:24:47PM 6 points [-]

I think most places where people want to live don't fulfill the criterion of there being "a reasonable amount of free space".

Comment author: devi 17 February 2016 01:48:10AM 0 points [-]

Where people want to live depends on where other people live. It's possible to move away from bad Nash equilibria by cooperation.

Comment author: AlexMennen 12 December 2015 07:05:32PM 5 points [-]

(2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on.

Not necessarily. In a multi-polar scenario consisting entirely of Unfriendly AIs, getting them to cooperate with each other doesn't help us.

Comment author: devi 13 December 2015 12:18:48AM 0 points [-]

Yes, robust cooperation is not worth much to us if it's cooperation between the paperclip maximizer and the pencilhead minimizer. But if there are a hundred shards that make up human values, and tens of thousands of people running AIs trying to maximize the values they see fit, it's actually not unreasonable to assume that the outcome, while not exactly what we hoped for, is comparable to incomplete solutions that err on the side of (1) instead.
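
As a rough sketch of what "robust cooperation" can mean here (this toy CliqueBot-style rule is just one illustration, not the provability-based results being referred to): an agent that cooperates exactly when its counterpart runs the same program gets stable mutual cooperation, without that cooperation serving anyone else's values.

```python
# Toy illustration (not the actual "robust cooperation" constructions): an agent
# that cooperates exactly when its opponent runs the same source code. Two such
# unfriendly agents cooperate stably with each other while defecting against
# everything else, humans included.
import inspect


def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent is running this exact program."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"


print(clique_bot(inspect.getsource(clique_bot)))     # "C": mutual cooperation
print(clique_bot("while True: make_paperclips()"))   # "D": anything else
```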

After having written this I notice that I'm confused and conflating: (a) incomplete solutions in the sense of there not being enough time to do what should be done, and (b) incomplete solutions in the sense of it being actually (provably?) impossible to implement what we right now consider essential parts of the solution. Has anyone got thoughts on (a) vs (b)?

Comment author: devi 12 December 2015 05:53:58PM 3 points [-]

It's important to remember the scale we're talking about here. A $1B project (even when considered over its lifetime) in such an explosive field, with such prominent backers, would be interpreted as nothing other than a power grab unless it included a lot of talk about openness (it will still be interpreted as one, just a less threatening one). Read the interview with Musk and Altman and note how they're talking about sharing data and collaborations. This will include some noticeable short-term benefits for the contributors, and pushing for safety, either by including someone from our circles or via a more safety-focused mission statement, would impede your efforts at gathering such a strong coalition.

It's easy to moan about civilizational inadequacy and moodily conclude that the above shows how, as a species, we are so obsessed with appropriateness and politics that we will avoid our one opportunity to save ourselves. Sure, do some of that, and then think about the actual effects for a few minutes:

If the Value Alignment research program is solvable in the way we all hope it is (complete with a human-universal CEV and stable reasoning under self-modification and about other instances of our algorithm), then having lots of implementations running around will be basically the same as distributing the code over lots of computers. If the only problem is that human values won't quite converge, this gives us a physical implementation of the merging algorithm: everyone just doing their own thing and (acausally?) trading with each other.
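
As a toy illustration of the simplest possible "merging algorithm" (the weights and value shards below are hypothetical, not part of the original argument): the aggregate effect of many agents each pushing on their own values looks roughly like optimizing a power-weighted mixture of their utilities.

```python
# Hypothetical toy merge: each agent optimizes its own utility, and the overall
# outcome looks roughly like optimizing a power-weighted sum of those utilities.
def merged_utility(outcome, agents):
    """agents: list of (weight, utility_function); weights ~ relative resources."""
    return sum(weight * utility(outcome) for weight, utility in agents)


agents = [
    (2.0, lambda o: o.get("exploration", 0.0)),  # made-up value shard
    (1.0, lambda o: o.get("comfort", 0.0)),      # made-up value shard
]
print(merged_utility({"exploration": 1.0, "comfort": 0.5}, agents))  # 2.5
```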

If we can't quite solve everything we're hoping for, this does change the strategic picture somewhat. Mainly it seems to push us away from a lot of quick fixes that will likely seem tempting as we approach the explosion: we can't have a sovereign just run the world like some kind of OS that keeps everyone separate, and we'll also be much less likely to make the mistake of creating CelestAI from Friendship is Optimal, something that optimizes most of our goals but has some undesired lock-ins. There are a bunch of variations here, but we seem locked out of strategies that try to secure some minimum level of the cosmic endowment while possibly failing to capture a substantial constant fraction of our potential, because they achieve that minimum at the cost of important values or freedoms.

Whether this is a bad thing or not really depends on how one evaluates two types of risk: (1) the risk of undesired lock-ins from an almost-perfect superintelligence getting too much relative power, and (2) the risk of bad multi-polar traps. Much of (2) seems solvable by robust cooperation, which we seem to be making good progress on. What keeps spooking me are risks due to consciousness: either mistakenly endowing algorithms with it and creating suffering, or evolving to the point that we lose it. These aren't as easily solved by robust cooperation, especially if we don't notice them until it's too late. The real strategic problem right now is that there isn't really anyone we can trust to be unbiased in analyzing the relative dangers of (1) and (2), especially because they pattern-match so well onto the ideological split between left and right.

Comment author: AlexMennen 12 December 2015 12:26:44AM *  14 points [-]

From their website, it looks like they'll be doing a lot of deep learning research and making the results freely available, which doesn't sound like it would accelerate Friendly AI relative to AI as a whole. I hope they've thought this through.

Edit: It continues to look like their strategy might be counterproductive. [Edited again in response to this.]

Comment author: devi 12 December 2015 05:06:11PM 1 point [-]

They seem deeply invested in avoiding an AI arms race. This is a good thing, perhaps even if it speeds up research somewhat right now (avoiding increased speedups later might be the most important thing: e^x vs 2+x, etc.).
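
As a rough numeric gloss on the "e^x vs 2+x" shorthand (one reading, not necessarily the intended model): a constant additive head start is quickly dwarfed by anything that compounds.

```python
# Rough gloss on "e^x vs 2+x": a one-off additive speedup (2 + x) is soon
# dwarfed by compounding growth (e^x), so the later multiplier matters most.
import math

for x in range(6):
    print(x, 2 + x, round(math.exp(x), 1))
# By x = 5 the additive curve is at 7 while the exponential is ~148.4.
```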

Note that if the Deep Learning/ML field is talent-limited rather than funding-limited (which seems likely given how much funding it has), the only acceleration effects we should expect are from connectedness and openness (i.e. better institutions). Since some of this connectedness might come through collaboration with MIRI, this could very well advance AI safety research relative to AI research (via tighter integration of the research programs and of choices of architecture and research direction; this seems especially important for how things play out in the endgame).

In summary, this could actually be really good; it's just too early to tell.

Comment author: devi 15 January 2015 03:00:09AM 4 points [-]

Does Java (the good parts) refer to the O'Reilly book of the same name? Or is it some proper subset of the language, like what Crockford describes for JavaScript?
