Wei_Dai comments on Max Tegmark on our place in history: "We're Not Insignificant After All" - Less Wrong

Comment author: Wei_Dai 04 January 2010 04:50:22PM *  17 points [-]

What strikes me about our current situation is not only that we are at an extremely influential point in the history of the universe, but how few people realize this. It ought to give the few people in the know enormous power (relative to just about anyone else who has existed or will exist) to affect the future. But even among those who do realize that we're at a bottleneck, few try to shape the future in any substantial way, to nudge it one way or another. Instead, they just go about their "normal" lives, and continue to spend their money on the standard status symbols and consumer goods.

What to make of this? If we follow straight revealed preference, we have to conclude that people have huge discount rates on distance or time, or to put it more straightforwardly, they are simply indifferent about what happens in nearly all of the universe. This is not a very palatable conclusion for those who lean towards preference utilitarianism. Robin's response (in "Dream Time") is to dismiss those preferences as "consequential delusions" and Eliezer's response (in CEV) is to hope that if people were more intelligent and rational they would have more interesting preferences.

Personally, I don't know what I want the future to be, but I still find it worthwhile to try to push it in certain directions, directions that I think are likely to be net improvements. And I also puzzle over why I appear to be in such an atypical position.

Comment author: whpearson 04 January 2010 05:35:54PM 4 points [-]

My current position is that I don't know what action to take to nudge the world the way I want. The world seems to be working, more or less, at this point, and any nudge might send it down a path towards something that doesn't work (even sub-human AI might change the order of the world so much that it stops working).

So my strategy is to try to prepare a nudge that could be used in case of emergency. Since I am also trying to live a semi-normal life and cope with akrasia, etc., it is not going quickly.

Comment author: Wei_Dai 04 January 2010 06:04:22PM 3 points [-]

There are some actions that seem to be clear wins, like fighting against unFriendly AI. I find it difficult to see what kind of nudge you could prepare that would be effective in an emergency. Can you say more about the kind of thing you had in mind?

Comment author: whpearson 04 January 2010 08:12:53PM 1 point [-]

I think very fast UFAI is unlikely, so I tend to worry about the rest of the bottleneck. Slow AI* has its own dangers and is not a genie I would like to let out of the bottle unless I really need it. Even if the first Slow AI is Friendly, that doesn't guarantee the next 1000 will be, so it depends on the interaction between the AI and the society that makes it.

Not that I expect to code it all myself. I really should be thinking about setting up an institution to develop and hide the information in such a way that it is distributed but doesn't leak. The time to release the information/code would be when there had been a non-trivial depopulation of Earth and it was having trouble reforming an industrial society (or some other time when industrial Earth was in danger). The reason not to release it straight away would be the hope of gaining a better understanding of the future trajectory of the Slow AIs.

There might be an argument for releasing the information if we could show we would never get a better understanding of the future of the Slow AIs.

*By Slow AI I mean AI that has about as much likelihood of Fooming as unenhanced humans do, due to sharing a similar organization and similar limitations of intelligence.

Comment author: MatthewB 04 January 2010 08:54:46PM 0 points [-]

Could you define sub-human AI, please?

It seems to me that we already have all manner of sub-human AI: the AIs that handle telephone traffic, data mining, air-traffic control, government and intelligence services, the military, universities with AI programs, zoos with breeding programs (which sequence the genomes of endangered animals to find the best mate for each animal), etc.

Are these types of AI far too primitive to even be considered sub-human, in your opinion?

Comment author: whpearson 04 January 2010 11:11:30PM *  0 points [-]

Are these types of AI far too primitive to even be considered sub-human, in your opinion?

Not exactly too primitive, but of the wrong structure. Are you familiar with functional programming type notation? An offline learning system can be considered a curried function of type

classify :: Corpus -> (a -> b)

where a and b are the input and output types, and Corpus is the training data. Consider (for simplicity) a chess-playing program that learns from previous chess games:

Corpus -> (ChessGameState -> ChessMove)

or a data mining tool set up for finding terrorists:

Corpus -> ((Passport, FlightItinerary) -> Float)

where the Float is the probability that the person travelling is a terrorist, based on the passport presented and the itinerary.
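To make the shape concrete, here is a minimal Haskell sketch (purely illustrative; the parameterised Corpus, the fallback argument, and the lookup-table learner are my simplifications, not any real system):

    import qualified Data.Map as Map

    type Corpus a b = [(a, b)]          -- labelled training examples

    -- classify :: Corpus -> (a -> b), specialised here to a toy lookup-table
    -- learner that returns a fallback answer for inputs it has never seen.
    classify :: Ord a => b -> Corpus a b -> (a -> b)
    classify fallback corpus =
      let table = Map.fromList corpus
      in \x -> Map.findWithDefault fallback x table

    main :: IO ()
    main = do
      -- toy "risk scorer": a passport code paired with a risk score
      let score = classify 0.0 [("AA", 0.9), ("BB", 0.1)]
      print (score "AA")   -- 0.9, learned from the corpus
      print (score "ZZ")   -- 0.0, the fallback; the input and output types never change

Once the corpus has been applied, what remains is an ordinary function with a fixed type; that is the sense in which you know what the system can and cannot do.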

They can be very good at their jobs, but they are predictable. You know their types. What I was worried about is learning systems that don't have a well-defined input and output type over their lifetimes.

Consider the humble PC: it doesn't know how many monitors it is connected to or what will be connected to its USB sockets. If you wanted to create a system that could learn to control it, that system would need to map from any type to any type, depending on what was connected.* I think humans and animals are designed to be this kind of system, as our brains have been selected to cope with many different types of body with minimal evolutionary change. It is what allows us to add prosthetics and cope with bodily changes over a lifetime (growth and limb/sense loss). These systems are a lot more flexible, as they can learn things quickly by restricting their search spaces but still have a wide range of possible actions.
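A rough sketch of the contrast, using Data.Dynamic to stand in for the "any type to any type" idea (the Learner wrapper and echoInts are made-up illustrations, not a real design):

    import Data.Dynamic

    -- Observations and actions whose concrete types are only known at run time.
    type Observation = Dynamic
    type Action      = Dynamic

    -- A learner is just a step function from an observation to an action plus
    -- an updated learner; what it can map at any moment depends on what it has
    -- learned so far, not on a signature fixed in advance.
    newtype Learner = Learner { step :: Observation -> (Action, Learner) }

    -- A trivial learner that only "understands" Ints and ignores everything else.
    echoInts :: Learner
    echoInts = Learner $ \obs ->
      case fromDynamic obs :: Maybe Int of
        Just n  -> (toDyn (n + 1), echoInts)
        Nothing -> (toDyn (), echoInts)

    main :: IO ()
    main = do
      let (a1, l2) = step echoInts (toDyn (41 :: Int))
          (a2, _)  = step l2       (toDyn "a monitor was plugged in")
      print (fromDynamic a1 :: Maybe Int)  -- Just 42
      print (fromDynamic a2 :: Maybe Int)  -- Nothing: it hasn't learned to handle this input

A human-level system would need something far richer than a tag check like this, but the shape, where the usable interface itself can change over the system's lifetime, is the difference I am pointing at.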

For an intelligence there are also further considerations about the kind of function that maps the corpus/memory to the current input/output mapping, but that is another long reply.

*In a finite system you can represent a mapping from any type to any other type as a large integer. But with the type notation I am trying to indicate what the system is capable of learning at any one point; we don't search the whole space, for computational resource reasons.

Comment author: MatthewB 05 January 2010 01:43:22AM 2 points [-]

Thanks for the reply. It is very helpful.

I am aware of functional programming, but only due to having explored it myself (I am still at City College of San Francisco, and will not be transferring to UC - hopefully Berkeley or UCSD - until this fall). Unfortunately, most Community and Junior Colleges don't teach functional programming, because they are mostly concerned with cranking out code monkeys rather than real Computer Scientists or Cognitive Scientists (My degree is Cog Sci/Computationalism and Computational Engineering - or, the shorter name: Artificial Intelligence. At least that is what most of the people in the degree program are studying. Especially at Berkeley and UCSD, the two places I wish to go).

So, is the kind of learning system you are referring to not sub-human-equivalent because it has no random or stochastic processes?

Or, to be a little more clear, they are not sub-human equivalent because they are highly deterministic and (as you put it) predictable.

I get what you mean about human body-type adaptation. We still have the DNA in our bodies for having tails of all types (from reptile to prehensile), and we still have DNA for other deprecated body plans. Thus, a human-equivalent AI would need to be flexible enough to be able to adapt to a change in its body plan and tools (at least this is what I am getting).

In another post (which I cannot find, as I need to learn how to search my old posts better), I propose that computers are another form of intelligence, one that is evolving with humans as the agent of selection and mutation. Thus, they have a vastly different evolutionary pathway from the one biological intelligence has had. I came up with this after hearing Eliezer Yudkowsky speak at one of the Singularity Summits (and maybe at Convergence 08; I cannot recall if he was there or not). He talks about Mind Space, and how humans are only a point in Mind Space, and how the potential Mind Space is huge (maybe even unbounded; I hope that he will correct me if I have misunderstood this).

Comment author: MatthewB 05 January 2010 02:00:35AM 5 points [-]

What strikes me about our current situation is not only that we are at an extremely influential point in the history of the universe, but how few people realize this.

I am amazed at how few people are aware of the consequences of the next few decades. Yet when I think about other influential points in history, I find that people were either just as ignorant or actively resisting such change (saboteurs and Luddites in the textile industries of Holland and England, for instance, or those who opposed industrialization in the USA at the end of the 19th century).

The last really major change, comparable to the one we are now in, was the end of the Industrial Revolution in the early 1900s. It was so very obvious that rural life was becoming a thing of the past, yet people fled into nostalgia and disbelief, talking about the benefits of the pastoral lifestyle (while ignoring the fact that it was backbreaking work requiring dawn-to-dusk toil for little gain).

Those very few who were aware that it was indeed a time that would end the world as people knew it were able to improve their lot immensely. A new generation of wealth was created.

This same thing is happening now. And, you are correct:

It ought to give the few people in the know enormous power

Hopefully, this period will also enable the vast hordes of people living at or below the poverty line to rise above it. Ultimately, we could move into a post-scarcity economy, where all basic needs are fully (and comfortably) met, freeing people to pursue more fulfilling work and leisure.

Of course, the jury is still out on that one.

And I also puzzle over why I appear to be in such an atypical position.

That is something I wonder about as well. I've spent an inordinate amount of time at my school trying to educate people about the possibilities of the next 2 to 5 decades, yet it has mostly fallen on deaf ears (I would like to say even among the Computer Science/Engineering professors and students, but it is more like especially among them). There have been a few who knew about the historical changes that are happening now, but I didn't need to educate them; they already knew. They are also the people who, I noticed, were, like me, aiming at Berkeley, Stanford, CMU, MIT, etc.

So maybe those of us in the know should consider ourselves fortunate and make plans to help elevate others who missed the boat (so to speak). If I succeed at my goals, I plan to help out others whom I know have had hard times due to failure to plan well (or who, like me, made mistakes earlier in their lives and need a second chance).

Comment author: byrnema 05 January 2010 02:50:41AM *  -1 points [-]

Wow: MIT and Berkeley. You guys must have been the group that was right.

Comment author: MatthewB 05 January 2010 03:01:11AM 3 points [-]

I should point out that I am the stupid one among them, which is why I have to limit myself to UC (Berkeley or UCSD - UCSD has a HUGE Cog Sci and AI program that rivals Berkeley's). If I were not disabled (and old enough to be most of the group's father) I would probably be heading to MIT or CMU as well... although Berkeley is not shabby. My GPA suffered horribly when I first went back to school, due to not taking my disability fully into account and not yet knowing my rights as a disabled person. I have finally managed a couple of semesters at a 3+ GPA, but my overall GPA is still slightly below 3. I've been told that I will stand a good chance of getting into Berkeley if I maintain the 3.2 to 3.7 semesters I've been getting since the end of 2008 (I only do 3/4 time, as I discovered in that first semester that I can't manage full time very well).

Thank you for the compliment though. I hope that I continue to be worthy of it.

Comment author: byrnema 04 January 2010 05:11:38PM 4 points [-]

And I also puzzle over why I appear to be in such an atypical position.

And I was wondering why I was in such an atypical position of not caring.

You write of pushing the universe towards net improvements. By 'improvement', you mean relative to your particular or general human values. At a large and far scale, why should we have any loyalty to those values, especially if they are arbitrary (that is, not sufficiently general to mind space)? If the universe is meant to be dominated by the different values of other minds, why would I do anything but shrug my shoulders about that?

Comment author: Wei_Dai 04 January 2010 05:49:29PM 2 points [-]

I think just by asking the question "why should I care?", you probably already care more than most, who just go on doing what they always did without a second thought.

If I ask myself "why do I care?", the answer is that I don't seem to care much about the standard status symbols and consumer goods (bigger houses, faster cars, etc.), so what is left? Well, for one thing, I care about knowledge, i.e., finding answers to questions that puzzle me, and I think I can do that much better in some futures than in others.

Comment author: AdeleneDawner 04 January 2010 06:48:35PM 1 point [-]

Er... if you answered why you care, I'm failing to find where you did so. Listing what you care about doesn't answer the question.

I don't think it's controversial that 'why do you care about that' is either unanswerable, or answerable only in terms of something like evolution or neurochemistry, in the case of terminal values.

Comment author: byrnema 04 January 2010 07:43:40PM *  0 points [-]

Listing what you care about doesn't answer the question.

There is a subtext to this question, which is that I believe we typically assume -- until it is demonstrated otherwise -- that our values are similar or overlap significantly, so it is natural to ask 'why do you value this' when maybe we really mean 'what terminal value do you think you're optimizing with this?' Disagreements about policy or 'what we should care about' are then often based on different beliefs about what achieves what, rather than on different values. It is true that if our difference in caring turns out to be based on having different values, or weighting values differently, then there's nothing much to discuss. Since I do value knowledge too, I wanted to further qualify how Wei Dai values knowledge, because I don't see how nudging the far future one way or another is going to increase Wei Dai's total knowledge.

Comment author: Wei_Dai 04 January 2010 07:30:40PM 0 points [-]

Byrnema had a specific objection to human values that are "arbitrary", and I think my response addressed that. To be more explicit, all values are vulnerable to the charge of being arbitrary, but seeking knowledge seems less vulnerable than others, and that seems to explain why I care more about the future than the average person. I was also trying to point out to Byrnema that perhaps she already cares more about the future than most, but didn't realize it.

Comment author: byrnema 04 January 2010 06:03:55PM *  0 points [-]

To what extent does your caring about the future depend upon you being there to experience it?

Then my next question would be, how important is your identity to this value? For example, do you have a strong preference whether it is "you" that gains more and more knowledge of the universe, or any other mind?

Comment author: Wei_Dai 04 January 2010 07:46:12PM *  3 points [-]

I might change my mind in the future, but right now my answers are "to a large extent" and "pretty important".

Why do you care what my values are, though, or why they are what they are? I find it fascinating that "value-seeking" is a common behavior among rationalist-wannabes (and I'm as guilty of it as anyone). It's almost as if the most precious resource in this universe isn't negentropy, but values.

ETA: I see you just answered this in your reply to Adelene Dawner:

I wanted to further qualify how Wei Dai values knowledge, because I don't see how nudging the far future one way or another is going to increase Wei Dai's total knowledge.

I expect that I can survive indefinitely in some futures. Does that answer your question?

Comment author: byrnema 04 January 2010 07:58:56PM *  2 points [-]

It's almost as if the most precious resource in this universe isn't negentropy, but values.

That's an amusing observation, with some amount of truth to it.

The reason I was asking is that I was trying to understand why you care and I don't.

Given your reply, I think our difference in caring can be explained by the fact that when I imagine the far future, I don't imagine myself there. I'm also less attached to my identity; I wouldn't mind experiencing the optimization of the universe from the point of view of an alien mind with different values. (This last bit is relevant if you want the future to be good just for the sake of it being good, even if you're not there.)

Comment author: cabalamat 05 January 2010 07:30:23AM 2 points [-]

If we follow straight revealed preference, we have to conclude that people have huge discount rates on distance or time, or to put it more straightforwardly, they are simply indifferent about what happens in nearly all of the universe.

Maybe they just think that they can't affect what happens very much.

Comment author: MichaelBishop 06 January 2010 09:06:16PM 4 points [-]

People talk about global poverty and other issues they have little influence over. If people would at least talk about the long-term future of our species, that would be a start.