Comment author: whpearson 13 December 2014 10:44:52PM 1 point [-]

I have misgivings about using high level concepts to constrain an AI (be it friendliness or approval). I suspect we may well not share many concepts at all unless there is some form of lower level constraint system that makes our ontologies similar. If we must program the ontology in and it is not capable of drift, I have doubts it will be able to come up with vastly novel ways of seeing the world, limiting its potential power.

My favourite question is: why build systems that are separate from us anyway? Or, to put it another way, how can we build a computational system that interacts with our brains as if it were part of us? Assuming that we are multi-'sort of agent' systems that (mostly) pull in the same direction, how can we get computers to be part of that system?

I think some of the ideas of approval-directed agents might be relevant; I suspect that parts of our brain monitoring other parts and approving their actions is part of the reason for consciousness (and also for the dopamine system).

Comment author: Benito 20 August 2014 02:45:23PM 1 point [-]

For a great-if-imprecise response to #4, you can just read aloud the single-page story at the beginning of Bostrom's book 'Superintelligence'. For a more precise response, you can make the analogy explicit.

Comment author: whpearson 23 August 2014 01:27:43PM 1 point [-]

And if they come back with a snake egg instead? Surely we need to have some idea of the nature of AI, and thus how exactly it is dangerous.

Comment author: paulfchristiano 30 August 2013 05:01:16PM 2 points [-]

I don't think that a fast intelligence explosion implies that you have to solve the kind of hard philosophical problems you are alluding to. You seem to grant that there are no particular hard philosophical problems we'll have to solve, but you think that nevertheless every approach to the problem will require solving such problems. Is it easy to state why you expect this? Is it because the approaches we can imagine in detail today involve solving hard problems?

Regarding the hardness of defining "remain in control," it is not the case that you need to be able to define X formally in order to accomplish X. Again, perhaps such approaches require solving hard philosophical problems, but I don't see why you would be confident (either about this particular approach or more broadly). Regarding my claim that we need to figure this out anyway, I mean that we need to implicitly accept some process of reflection and self-modification as we go on reflecting and self-modifying.

I don't see why a singleton is necessary to avert value drift in any case; they seem mostly orthogonal. Is there a simple argument here? See e.g. Carl's post on this and mine. I agree there is a problem to be solved, but it seems to involve faithfully transmitting hard-to-codify values (again, perhaps implicitly).

Comment author: whpearson 30 August 2013 08:11:15PM 0 points [-]

A singleton (even if it is a world government) is argued by Bostrom, here and here, to be a good thing for humanity.

Comment author: CarlShulman 14 May 2013 09:45:36PM *  1 point [-]

The population boom to the Malthusian limit (and a lower Malthusian limit for AI than humans) is an overwhelmingly important impact (on growth, economic activity, etc) that you don't mention, but that is regularly emphasized.

Running people faster or slower and keeping backups came immediately to mind, and Wikipedia adds space travel, but those three by themselves don't seem like they change that much.

Do you think mathematics and CS, or improvement of brain emulation software and other AI, wouldn't go much further with 1000 people working for a million years, than 100 million people working for 10 years?

Comment author: whpearson 15 May 2013 11:02:50PM 0 points [-]

It is a toss-up as far as I am concerned; it depends on what the search space of maths/CS looks like. People seem to get stuck in their ways and dismiss other potential pathways. I'm envisioning the difference (for humans at least) as being like running a hill-climbing algorithm from 1,000 different points for a million years versus 100 million different points for 10 years. So if the 1,000 people get stuck in local optima, they may do worse than someone who gets lucky and happens to search a very fertile bit of maths/CS for a small amount of time.
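The trade-off described above can be sketched as a toy random-restart hill climber. This is a minimal illustration only, not anything from the original discussion: `hill_climb` and `best_over_restarts` are hypothetical names, and the one-dimensional landscape stands in for the "search space of maths/CS".

```python
import random

def hill_climb(f, start, steps, step_size=0.1):
    # Greedy local search: accept a random neighbour only if it improves f.
    x, best = start, f(start)
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        if f(cand) > best:
            x, best = cand, f(cand)
    return best

def best_over_restarts(f, n_starts, steps_each):
    # Same total search budget, split differently: many short runs
    # (lots of people, little time) vs few long runs (few people, lots of time).
    return max(hill_climb(f, random.uniform(-10, 10), steps_each)
               for _ in range(n_starts))

random.seed(0)
f = lambda x: -abs(x)  # a single-peaked landscape with its optimum at 0
many_short = best_over_restarts(f, 1000, 10)   # 100-million-people analogue
few_long = best_over_restarts(f, 10, 1000)     # 1,000-sped-up-people analogue
```

On a rugged, multi-peaked `f`, the many-restarts strategy samples more basins and is less hostage to one unlucky starting point, which is the "stuck in local optima" worry above.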

Also you couldn't guarantee that people would maintain interest that long.

Lastly, the sped-up people would also have to wait 100,000 times longer for any practical runs, which are still needed in lots of CS/AI, even in areas like algorithm design.

So unless you heavily modded humans first, I'm not sure it is a slam dunk for the sped-up people.

Comment author: kilobug 15 May 2013 04:18:15PM 6 points [-]

Interesting, but the point on "Democracy" seems a bit of an applause light to me. We all like democracy, so a community needs democracy, right?

Well, if you look at communities, you'll see that "leader worship" is actually at least as effective at building a strong community as democracy. I'm not saying it's the best option all things considered, but for the purpose of crafting a community, having a strong, quasi-dictatorial leader whom everyone respects tends to be a very efficient approach. The "penguin" is a clear example of that: Linus, the "benevolent dictator for life", is a strong factor in the community's cohesion. Democratic models can also work (to stay in the same domain, that's how Debian works, and it works very well), but they aren't the most likely path to success.

There are probably evolutionary psychology reasons behind the "strong leader" pattern, rooted in families (where the patriarch or the matriarch is the natural "strong leader") and tribes (which usually aren't very democratic), the two most primitive kinds of community, but I won't go into the details because evolutionary psychology isn't my primary field.

Comment author: whpearson 15 May 2013 10:24:48PM 1 point [-]

What happens when the dictator for life departs (for whatever reason)?

Comment author: Davidmanheim 02 May 2013 10:52:58PM 1 point [-]

The problem with electing (human) agents is that you suddenly have principal-agent problems. Their priorities change if they gain status from being selected, whether because they want to be re-selected, because their time in power is limited, or because it is unlimited. If they don't gain anything by being selected, there is likely no incentive for them to invest in making optimal decisions.

Even if this is untrue, you need to project their decisions, typically by assuming you know something about their utility; if this projection is mis-specified even a bit, the difference can be catastrophic, and their utility may not be stationary either. So there are some issues there, but they are interesting ones.

Comment author: whpearson 04 May 2013 07:31:09PM 0 points [-]

Thanks! I suspected there was terminology for the principal-agent problem, but didn't know what to google.

Agreed that it is a big issue. I suppose I am interested in whether we can ameliorate these problems, so that there are fewer of them, rather than eliminate them entirely.

I'll keep an eye out for any follow ups you do to this meetup.

Comment author: NancyLebovitz 02 May 2013 01:15:37AM *  2 points [-]

You might be interested in Trust: The Social Virtues and The Creation of Prosperity. More generally, I was a little surprised at the pure experimental approach that didn't have a look at the degree of corruption in different real-world societies.

Corruption is widespread through our society. From major events like the Enron scandel to low level inefficiency in government it has a massive impact on our day to day lives. People aren't inherently evil, so it is the type of organisations that we create that are at fault.

I recommend "From major events like the Enron scandal to low level inefficiency in government, corruption has a massive effect on our day to day lives."

As for the next sentence, I'm not sure whether I don't understand you or don't agree with you. Admittedly, there will be more crime when there are weak barriers to crime, but I also believe that people who want to get away with something will, if they have the power, try to shape organizations which will let them get away with what they want.

Something to contemplate: Man creates huge Ponzi scheme in EVE Online just to prove he can do it. When it's over, he considers returning the money, which he has no use for, but he just can't make himself do it.

Comment author: whpearson 02 May 2013 07:40:13PM *  0 points [-]

You might be interested in Trust: The Social Virtues and The Creation of Prosperity.

Thanks. I'll have a look at the book.

More generally, I was a little surprised at the pure experimental approach that didn't have a look at the degree of corruption in different real-world societies.

I did mention looking at various subjects in the What>Explore section, one of which will be looking at current real-world societies.

I focus on experimentation for a few different reasons:

1) Experimentation is hard. You can't do it on your own; you need other people, so most of the focus goes on it. Otherwise people might just read books and make observations, which leads to the second point.

2) Experiments are a teaching tool. People have to learn that a different way can be better for them, and the best way is to try it out for themselves.

3) There are lots of different societal norms and structures we haven't tried, so there might be opportunities to escape our current local optima.

I recommend "From major events like the Enron scandal to low level inefficiency in government, corruption has a massive effect on our day to day lives."

Thanks! I'll change that.

As for the next sentence, I'm not sure whether I don't understand you or don't agree with you. Admittedly, there will be more crime when there are weak barriers to crime, but I also believe that people who want to get away with something will, if they have the power, try to shape organizations which will let them get away with what they want.

I should probably put a qualifying "most" in front of "people". I was trying to avoid weasel words when I wrote it.

But there is the question of why those you think "evil" get power. Who gets power is also somewhat a societal question.

Comment author: whpearson 01 May 2013 10:32:48PM *  3 points [-]

I'd like some comments on the landing page of a website I am working on, Experi-org. It is to do with experimenting with organisations.

I mainly want feedback on tone and clarity of purpose. I'll work on cleaning it up more (getting a friend who is a proofreader to give it the once-over) once I have those nailed down.

Open Thread, May 1-14, 2013

3 whpearson 01 May 2013 10:28PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Comment author: Davidmanheim 01 May 2013 03:52:12AM 1 point [-]

That's interesting; I'm approaching it from the perspective of welfare economics, not computer science, but the approaches you are describing sound promising. I'll need to look into them more. The problem is that there is a wide gulf between making decisions and delegating someone to do so.

My view is that if we have no metric for assessing whether a decision is good, it's hard to talk about making good decisions. We need coherent metrics, and partial orderings like Pareto dominance are only useful when we constrain the decision models to fit what our math can handle! (Instead of handling whatever portions of reality we can using our math, and admitting that the world is more complex than we can currently model.)
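The point about Pareto being only a partial ordering can be made concrete with a few lines. This is a standard definition, not anything specific to the comment above, and `pareto_dominates` is a hypothetical name:

```python
def pareto_dominates(a, b):
    # Outcome a Pareto-dominates outcome b if a is at least as good as b
    # in every dimension and strictly better in at least one.
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

# (2, 3) dominates (1, 3): better on the first axis, equal on the second.
# (2, 1) and (1, 2) are incomparable: neither dominates the other,
# which is exactly why Pareto alone cannot rank most pairs of decisions.
```

The incomparable pairs are where the "coherent metrics" problem lives: a partial order leaves the interesting trade-offs unranked.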

Comment author: whpearson 01 May 2013 07:05:51PM *  0 points [-]

Just reading up on welfare economics a bit, so apologies if I say anything incorrect. I have a lot to read up on!

The problem is that there is a wide gulf between making decisions and delegating someone to do so.

True. My approach does rely on delegating decisions to actors. The open question in my mind is whether we can create systems such that actors are encouraged to make good decisions or to adopt good decision-making processes*.

From my point of view, some form of welfare economics (or decision markets) may be one of the processes that actors would have incentives to adopt. But it may well be able to stand on its own two feet.

*And decision processes would be evaluated by the general situation they produce rather than by specific decisions.

At the moment I would ideally sum normalised deltas in the utility of a situation across agents. But that is the weakest part of the system; it is somewhat open to manipulation.
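A minimal sketch of summing normalised utility deltas might look as follows. This is one guess at what is meant, not a definitive reading: the function name, the dict-based representation, and the per-agent `(lo, hi)` ranges used for normalisation are all assumptions introduced here for illustration.

```python
def normalised_delta_sum(before, after, ranges):
    # before/after map each agent to its utility in the old and new situation;
    # ranges maps each agent to a (lo, hi) utility range used to normalise,
    # so that no single agent's raw utility scale dominates the sum.
    total = 0.0
    for agent, u_old in before.items():
        lo, hi = ranges[agent]
        total += (after[agent] - u_old) / (hi - lo)
    return total

# A change that helps one agent exactly as much as it hurts another
# (after normalisation) scores zero overall.
before = {'a': 0.0, 'b': 0.0}
after = {'a': 1.0, 'b': -1.0}
ranges = {'a': (0.0, 2.0), 'b': (0.0, 2.0)}
score = normalised_delta_sum(before, after, ranges)
```

The manipulation worry is visible even in this toy version: an agent that misreports its utilities, or its range, can shift its normalised contribution and so bias the sum.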
