Thanks for writing this. I have been fretting for some time and realized that what I needed was a rational take on the war. I appreciate the time you've taken to write this out, and I'll check out your other posts on this.
This seems correct to me. Thank you.
You don't know anything about how cars work?
It's possible to predict the behavior of black boxes without knowing anything about their internal structure.
Elaborate?
That says a lot more about your personal values than about the general human condition.
I suppose you are right.
The models of worms might be a bit better at predicting worm behavior but they are not perfect.
They are significantly closer to being perfect than our models of humans. I think you are right in pointing out that where you draw the line is somewhat arbitrary. But the point is the variation on the continuum.
Do you think it is something external to the birds that makes them migrate?
Norbert Wiener is where it all starts. This book has a lot of essays. It's interesting--he's talking about learning machines before "machine learning" was a household word, but envisioning it as electrical circuits.
http://www.amazon.com/Cybernetics-Second-Edition-Control-Communication/dp/026273009X
I think that it's important to look inside the boxes. We know a lot about the mathematical limits of boxes which could help us understand whether and how they might go foom.
Thank you for introducing me to that Concrete Mathematics book. That looks cool....
Do you think that rationalism is becoming a religion, or should become one?
Thanks. That criticism makes sense to me. You put the point very concretely.
What do you think of the use of optimization power in arguments about takeoff speed and x-risk?
Or do you have a different research agenda altogether?
That makes sense. I'm surprised that I haven't found any explicit reference to that in the literature I've been looking at. Is that because it is considered to be implicitly understood?
One way to talk about optimization power, maybe, would be to consider a spectrum between unbounded, Laplacean rationality and the dumbest things around. There seems to be a move away from this though, because it's too tied to notions of intelligence and doesn't look enough at outcomes?
It's this move that I find confusing.
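For concreteness, the outcome-based formalization I have in mind (I believe this is roughly Yudkowsky's measure of optimization power, though I may be misremembering details) scores a process by how small a slice of the outcome space it reliably steers the world into:

$$ \mathrm{OP} \;=\; -\log_2 \frac{\bigl|\{\, x \in X : U(x) \ge U(x_{\text{achieved}}) \,\}\bigr|}{|X|} $$

where X is the space of possible outcomes and U is the preference ordering. It looks only at the achieved outcome, not at the internals of the optimizer.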
There are people in my department who do work in this area. I can reach out and ask them.
I think Mechanical Turk gets used a lot for survey experiments because it has a built-in compensation mechanism and because you can design screening questions that filter people into precisely the population you want.
I wouldn't dismiss Facebook ads so quickly. I bet there is a way to target mobile app developers on that.
My hunch is that like survey questions, sampling methods are going to need to be tuned case-by-case and patterns extracted inductively from that. Good social scientific experiment design is very hard. Standardizing it is a noble but difficult task.
Thanks. That's very helpful.
I've been thinking about Stuart Russell lately, which reminds me...bounded rationality. Isn't there a bunch of literature on that?
http://en.wikipedia.org/wiki/Bounded_rationality
Have you ever looked into any connections there? Any luck with that?
1) This is an interesting approach. It looks very similar to the approach taken by the mid-20th century cybernetics movement--namely, modeling social and cognitive feedback processes with the metaphors of electrical engineering. Based on this response, you in particular might be interested in the history of that intellectual movement.
My problem with this approach is that it considers the optimization process as a black box. That seems particularly unhelpful when we are talking about the optimization process acting on itself as a cognitive process. It's eas...
Could you please link to examples of the kind of marketing studies that you are talking about? I'd especially like to see examples of those that you consider good vs. those you consider bad.
I am confused. Shouldn't the questions depend on the content of the study being performed? Which would depend (very specifically) on the users/clients? Or am I missing something?
I would worry about sampling bias due to selection based on, say, enjoying points.
The privacy issue here is interesting.
It makes sense to guarantee anonymity. Participants recruited personally by company founders may otherwise be unwilling to report honestly (for example). For health-related studies, privacy is an issue for insurance reasons, etc.
However, for follow-up studies, it seems important to keep earlier records including personally identifiable information so as to prevent repeatedly sampling from the same population.
That would imply that your organization/system needs to have a data management system for securely storing the p...
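One possible way to square anonymity with the no-resampling requirement, sketched very roughly below (the helper names and keyed-hash scheme are invented for illustration, not a vetted design): store only a keyed hash of each participant's contact identifier, so the system can check whether someone was sampled in an earlier study without keeping raw personally identifiable information around.

```python
import hashlib
import hmac

# Hypothetical sketch: keep a set of keyed identifier hashes instead of raw PII.
# SECRET_KEY would live in the organization's secure key store, not in source code.
SECRET_KEY = b"replace-with-a-real-secret"

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym from an email/phone so repeat participants can be detected."""
    return hmac.new(SECRET_KEY, identifier.strip().lower().encode("utf-8"), hashlib.sha256).hexdigest()

previously_sampled: set[str] = set()

def already_sampled(identifier: str) -> bool:
    return pseudonymize(identifier) in previously_sampled

def record_participant(identifier: str) -> None:
    previously_sampled.add(pseudonymize(identifier))
```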
He then takes whatever steps we decide on to locate participants.
Even if the group assignments are random, the prior step of participant sampling could lead to distorted effects. For example, the participants could be just the friends of the person who created the study who are willing to shill for it.
The studies would be more robust if your organization took on the responsibility of sampling itself. There is non-trivial scientific literature on the benefits and problems of using, for example, Mechanical Turk and Facebook ads for this kind of work. There is extra value added for the user/client here, which is that the participant sampling becomes a form of advertising.
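As a toy illustration of the sampling-bias point above (all numbers invented): even with perfectly random assignment within the sample, recruiting only from an enthusiastic friends-of-the-founder pool can badly inflate the estimated effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the product has no real effect in the general population,
# but "friends of the founder" rate whatever they are shown more favorably.
def estimated_effect(sample_friends_only: bool, n: int = 10_000) -> float:
    is_friend = np.ones(n, dtype=bool) if sample_friends_only else rng.random(n) < 0.05
    treated = rng.random(n) < 0.5                 # random assignment within the sample
    shill_bonus = 1.5 * (is_friend & treated)     # friends inflate treatment ratings
    outcome = rng.normal(size=n) + shill_bonus    # true treatment effect is zero
    return outcome[treated].mean() - outcome[~treated].mean()

print("friends-only sample:", estimated_effect(True))    # roughly +1.5, badly inflated
print("broad sample:       ", estimated_effect(False))   # much closer to the true effect of 0
```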
Thanks for your thoughtful response. I'm glad that I've been more comprehensible this time. Let me see if I can address the problems you raise:
1) Point taken that human freedom is important. In the background of my argument is a theory that human freedom has to do with the endogeneity of our own computational process. So, my intuitions about the role of efficiency and freedom are different from yours. One way of describing what I'm doing is trying to come up with a function that a supercontroller would use if it were to try to maximize human freedom. The i...
I see, that's interesting. So you are saying that while the problem as scoped in §2 may take a function of arbitrary complexity, there is a constraint in the superintelligence problem I have missed, which is that the complexity of the objective function has certain computational limits.
I think this is only as extreme a problem as you say in a hard takeoff situation. In a slower takeoff situation, inaccuracies due to missing information could be corrected on-line as computational capacity grows. This is roughly business-as-usual for humanity---powerful enti...
Could you flesh this out? I'm not familiar with key-stretching.
A pretty critical point is whether or not the hashed value is algorithmically random. The depth measure has the advantage of picking over all permissible starting conditions without having to run through each one. So it's not exactly analogous to a brute force attack. So for the moment I'm not convinced on this argument.
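For other readers following along: key-stretching, as I understand it now, just means deliberately iterating a hash many times so that checking any single candidate input is expensive, which is what makes the brute-force analogy tempting. A minimal illustrative sketch (iteration count and names arbitrary; hashlib.pbkdf2_hmac is the standard-library equivalent):

```python
import hashlib

def stretch(secret: bytes, salt: bytes, iterations: int = 100_000) -> bytes:
    """Naive key-stretching: iterate the hash so each guess costs ~iterations hash evaluations."""
    digest = secret + salt
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

# An attacker brute-forcing candidate secrets pays the full iteration cost per guess.
key = stretch(b"hunter2", b"per-user-salt")
```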
Thanks for your encouraging comments. They are much appreciated! I was concerned that following the last post with an improvement on it would be seen as redundant, so I'm glad that this process has your approval.
Regarding your first point:
Entropy is not depth. If you do something that increases entropy, then you actually reduce depth, because it is easier to get to what you have from an incompressible starting representation. In particular, the incompressible representation that matches the high-entropy representation you have created. So if you hold hum
Maybe. Can you provide an argument for that?
As stated, that wouldn't maximize g, since applying the hash function once and tiling would cap the universe at finite depth. Tiling doesn't make any sense.
Your point about physical entropy is noted, and it's a good one.
One reason to think that something like D(u/h) would pick out higher level features of reality is that h encodes those higher-level features. It may be possible to run a simulation of humanity on more efficient physical architecture. But unless that simulation is very close to what we've already got, it won't be selected by g.
You make an interesting point about the inefficiency of physics. I'm not sure what you mean by that exactly, and am not in a position of expertise to say otherwise. However, I ...
So, the key issue is whether or not the representations produced by the paperclip optimizer could have been produced by other processes. If there is another process that produces the paperclip-optimized representations more efficiently than going through the process of humanity, then that process dominates the calculation of D(r).
In other words, for this objection to make sense, it's not enough for humanity to have been sufficient for the R scenario. It must be necessary for producing R, or at least necessary to result in it in the most efficient possible way.
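To put the same point in rough symbols (this is just Bennett's depth-at-significance-level definition, if I have it right):

$$ D_s(r) \;=\; \min\{\, T(p) \;:\; U(p) = r,\ |p| \le |p^*_r| + s \,\} $$

where p*_r is a minimal program for r and T(p) is the running time of p. The depth of r is set by whichever admissible program reaches r fastest, so if some program that never routes through humanity produces the paperclip-optimized representations faster than the humanity route does, it is that program's running time that fixes D(r), and humanity's mere sufficiency for R doesn't enter into it.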
What are your criteria for a more concrete model than what has been provided?
Do you know if there are literally entries for these outcomes on tvtropes.org? Should there be?
I think what the idea in the post does is get at the curvature of the space, so to speak.
Thanks, I've been misspelling that for a while now. I stand corrected.
That is of course one of the questions on the table: who has the power to implement and promote different platforms.
I guess I disagree with this assessment of which problem is easier.
Humanity continues to exist while people stub their toes all the time. I.e., the humanity-existing problem is currently close to solved, and the toe-stubbing problem by and large has not been.
This is the sort of thing that gets assigned in seminars. Maybe 80% correct but ultimately weak sauce IMO.
So there are some big problems of picking the right audience here. I've tried to make some headway with the community complaining about newsfeed algorithm curation (which interests me a lot, but may be more "political" than would interest you) here:
which is currently under review. It's a lot softer than would be ideal, but since I'm trying to convince these people to go from "algorithms, how complicated! Must be evil" to "oh, they ...
ethically motivated algorithmic curation.
Is that a polite expression for "propaganda via software"? Whose ethics are we talking about?
I think this is a super post!
Re: Generality.
Yes, I agree a toy setup and a proof are needed here. In case it wasn't clear, my intention with this post was to suss out whether there was other related work already out there (it looks like there isn't) and then do some intuition pumping in preparation for a deeper formal effort, in which you are instrumental and for which I am grateful. If you would be interested in working with me on this in a more formal way, I'm very open to collaboration.
Regarding your specific case, I think we may both be confused about the math. I think you are right...
Non-catastrophic with respect to existence, not with respect to "human values." I'm leaving values out of the equation for now, focusing only on the problem of existence. If species suicide is on the table as something that might be what our morality ultimately points to, then this whole formulation of the problem has way deeper issues.
My point is that by starting anew without taking the computational gains into account, you are increasing D(u) efficiently and D(u/h) inefficiently, which is not favored by the objective function.
If there's something ...
Maybe this will be more helpful:
If the universe computes things that are not computational continuations of the human condition (which might include resolution to our moral quandaries, if that is in the cards), then it is, with respect to optimizing function g, wasting the perfectly good computational depth achieved by humanity so far. So, driving computation that is not somehow reflective of where humanity was already going is undesirable. The computational work that is favored is work that makes the most of what humanity was up to anyway.
To the extent t...
Re: your first point:
As I see it, there are two separate problems. One is preventing catastrophic destruction of humanity (Problem 1). The other is creating utopia (Problem 2). Objective functions that are satisficing with respect to Problem 1 may not be solutions to Problem 2. As I read it, the Yudkowsky post you linked to argues for prioritizing Problem 2; my sense of the thrust of Bostrom's argument, on the contrary, is that it's critical to solve Problem 1. Maybe you can tell me if I've misunderstood.
Without implicating human values, I'm claiming th...
First, I'm grateful for this thoughtful engagement and pushback.
Let's call your second dystopia the Universal Chinese Turing Factory, since it's sort of a mash-up of the factory variant of Searle's Chinese Room argument and a universal Turing Machine.
I claim that the Universal Chinese Turing Factory, if put to some generic task like solving satisfiability puzzles, will not be favored by a supercontroller with the function I've specified.
Why? Because if we look at the representations computed by the Universal Chinese Turing Factory, they may be very logical...
I can try. This is new thinking for me, so tell me if this isn't convincing.
If a future is deep with respect to human progress so far, but not as deep with respect to all possible incompressible origins, then we are selecting for futures that in a sense make use of the computational gains of humanity.
These computational gains include such unique things as:
human DNA, which encodes our biological interests relative to the global ecosystem.
details, at unspecified depth, about the psychologies of human beings
political structures, sociological structures,
1) Thanks, that's encouraging feedback! I love logical depth as a complexity measure. I've been obsessed with it for years and it's nice to have company.
2) Yes, my claim is that Manfred's doomsday cases would have very high D(u) and would be penalized. That is the purpose of having that term in the formula.
I agree with your suspicion that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u). I suppose I'd defend a weaker claim: that a D(u/h) / D(u) supercontroller would not be an existential threat. One reason f...
A good prediction :)
Logical depth is not entropy.
The function I've proposed is to maximize depth-of-universe-relative-to-humanity-divided-by-depth-of-universe.
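In symbols, using the shorthand from the post (D for Bennett's logical depth, D(u/h) for the depth of the universe-state u computed relative to a description of humanity h):

$$ g(u) \;=\; \frac{D(u/h)}{D(u)} $$

and the supercontroller is taken to maximize g.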
Consider the decision to kill off people and overwrite them with a very fast SAT solver. That would surely increase depth-of-universe, which is in the denominator. I.e. increasing that value decreases the favorability of this outcome.
What increases the favorability of the outcome, in light of that function, are the computation of representations that take humanity as an input. You could imagine the s...
Corporations exist, if they have any purpose at all, to maximize profit. So this presents a sort of dilemma: their diminishing returns and fragile existence suggest that either they do intend to maximize profit but just aren't that great at it; or they don't have even that purpose, which is evolutionarily fit and which they are intended to have by law, by culture, and by their owners, in which case how can we consider them powerful at all or remotely similar to potential AIs, etc.?
Ok, let's recognize some diversity between corporations. There are lots of differen...
Yes, but Git has a bottleneck: there are humans in the loop, and there are no plans to remove or significantly modify those humans. By "in the loop", I mean humans are modifying Git, while Git is not modifying humans or itself.
I think I see what you mean, but I disagree.
First, I think timtyler makes a great point.
Second, the level of abstraction I'm talking about is that of the total organization. So, does the organization modify its human components, as it modifies its software component?
I'd say: yes. Suppose Git adds a new feature. Then t...
Ok, thanks for explaining that.
I think we agree that organizations recursively self-improve.
The remaining question is whether organizational cognitive enhancement is bounded significantly below that of an AI.
So far, most of the arguments I've encountered for why the bound on machine intelligence is much higher than that on human intelligence have to do with the physical differences between hardware and wetware.
I don't disagree with those arguments. What I've been trying to argue is that the cognitive processes of an organization are based on both hardware and ...
They can't use one improvement to fuel another; they would have to come up with the next one independently
I disagree.
Suppose an organization has developers who work in-house on their issue tracking system (there are several that do--mostly software companies).
An issue tracking system is essentially a way for an organization to manage information flow about bugs, features, and patches to its own software. The issue tracker (as a running application) coordinates between developers and the source code itself (sometimes, its own source code).
Taken as a w...
Should the word "corporation" in the first sentence be "[organization]"?
Yes, at least to be consistent with my attempt at de-politicizing the post :) I've corrected it. Thanks.
I wasn't sure what sort of posts were considered acceptable. I'm glad that particular examples have come up in the comments.
Do you think I should use particular examples in future posts? I could.
I find this difficult to follow. Is there a concrete mathematical definition of 'recursion' in this sense available anywhere?
I've realized I didn't address your direct query:
(Aside: Is the theory of "communicative rationality" specified well enough that we can measure degrees of it, as we can with Bayesian rationality?)
Not yet. It's a qualitatively described theory. I think it's probably possible to render it into quantitative terms, but as far as I know it has not yet been done.
There are many reasons why the intelligence of AI+ greatly dwarfs that of human organizations; see Section 3.1 of the linked paper.
Since an organization's optimization power includes optimization power gained from information technology, I think that the "AI Advantages" in section 3.1 mostly apply just as well to organizations. Do you see an exception?
This sounds similar to a position of Robin Hanson addressed in Footnote 25 of the linked paper.
Ah, thanks for that. I think I see your point: rogue AI could kill everybody, whereas a domina...
This point about Ukrainian neo-Nazis is widely misunderstood in the West.
During the Maidan revolution in Ukraine in 2014, neo-Nazi groups occupied government buildings and brought about a transition of government.
Why are there neo-Nazis in Ukraine? Because during WWII, the Nazis and the USSR were fighting over Ukraine. Ukraine is today quite ethnically diverse, and some of the 'western' Ukrainians who were resentful of USSR rule and, later, Russian influence, have reclaimed Nazi ideas as part of a far-right Ukrainian nationalism. Some of these Nazi groups th...