Comment author: Dagon 06 November 2014 09:19:36AM 1 point [-]

Don't focus on internal knowledge vs. black-box prediction; instead, think in terms of model complexity: how big does our constructed model have to be in order to predict correctly?

A human may be its own best model, meaning that perfect prediction requires a model at least as complex as the thing itself. Or the internals may contain a bunch of redundancy and inefficiency, in which case it's possible to create a perfect model of behavior and interaction that's smaller than the human itself.

If we build the predictive model from sufficient observation and black-box techniques, we might be able to build a smaller model that is perfectly representative, or we might not. If we build it solely from internal observation and replication, we're only ever going to get down to the same complexity as the original.

I include hybrid approaches (using internal and external observations to build models that don't operate identically to the original mechanisms) in the first category: that's still black-box thinking - use all available information to model input/output behavior without blindly following internal structure.
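The point above can be sketched in code (a toy illustration of my own, not anything from the thread; the "system" and its internals are made up): a system whose internals are redundant can be perfectly predicted by a surrogate model far smaller than the system itself, fit purely from input/output observations.

```python
import random

def black_box(x):
    """A 'complex' system: internally it does a lot of redundant work,
    but its input/output behavior is simple (behaviorally just 3*x + 1)."""
    total = 0
    for _ in range(1000):          # redundant internal computation
        total += x
    return (total // 1000) * 3 + 1

# Black-box approach: observe I/O pairs, never look inside.
samples = [(x, black_box(x)) for x in random.sample(range(1000), 20)]

# Assume (hypothetically) the behavior is affine, y = a*x + b,
# and fit it from two observations.
(x0, y0), (x1, y1) = samples[0], samples[1]
a = (y1 - y0) / (x1 - x0)
b = y0 - a * x0

surrogate = lambda x: a * x + b

# The surrogate is vastly simpler than the internals, yet (here) perfectly
# predictive -- because the internals contained redundancy.
assert all(surrogate(x) == y for x, y in samples)
```

If the system were instead its own best model (no exploitable redundancy or structure), no such small surrogate would exist, which is Dagon's first case.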

Comment author: sbenthall 08 November 2014 06:14:48AM 0 points [-]

This seems correct to me. Thank you.

Comment author: Wes_W 04 November 2014 04:18:36PM 2 points [-]

I'm not clear on the distinction you're drawing. Can you give a concrete example?

I don't know how cars work, but almost nothing my car does can surprise me. Only unusual one-off problems require help from somebody who knows the internal structure.

But cars are designed to be usable by laypeople, so this is maybe an unfair example.

Comment author: sbenthall 08 November 2014 06:13:24AM 0 points [-]

You don't know anything about how cars work?

Comment author: ChristianKl 04 November 2014 02:58:16PM 3 points [-]

It's possible to predict the behavior of black boxes without knowing anything about their internal structure.

In general, we can say that people do not have the capacity to explicitly represent other people very well. People are unpredictable to each other. This is what makes us free. When somebody is utterly predictable to us, their rigidity is a sign of weakness or stupidity.

That says a lot more about your personal values than about the general human condition. Many people want romantic partners who understand them, and don't associate this desire with either party being weak or stupid.

We are able to model the internal structure of worms with available computing power.

What do you mean by that sentence? It's trivially true, because you can model anything. You can model cows as spherical bodies. We can model human behavior as well. Neither our models of worms nor our models of humans are perfect. The models of worms might be a bit better at predicting worm behavior, but they are not perfect.

Comment author: sbenthall 08 November 2014 06:11:27AM 0 points [-]

It's possible to predict the behavior of black boxes without knowing anything about their internal structure.

Elaborate?

That says a lot more about your personal values than the general human condition.

I suppose you are right.

The models of worms might be a bit better at predicting worm behavior but they are not perfect.

They are significantly closer to perfect than our models of humans. I think you are right to point out that where you draw the line is somewhat arbitrary. But the point is the variation along the continuum.

Comment author: SolveIt 04 November 2014 08:56:05AM 5 points [-]

Really? I suppose it depends on what you mean by an agent, but I can know that birds will migrate at certain times of the year while knowing nothing about their insides.

Comment author: sbenthall 08 November 2014 06:07:34AM 0 points [-]

Do you think it is something external to the birds that make them migrate?

Comment author: Gunnar_Zarncke 20 October 2014 07:22:46AM 1 point [-]

It looks very similar to the approach taken by the mid-20th century cybernetics movement

Interesting. I know a bit about cybernetics but wasn't consciously aware of a clear analog between cognitive and electrical processes. Maybe I'm missing some background. Could you give a reference I could follow up on?

I think that it's this [black-box] kind of metaphor that is responsible for "foom" intuitions, but I think those are misplaced.

That is a plausible interpretation. Fooming is actually the only valid interpretation given an ideal black-box AI modelled this way. We have to look into the box, which is comparable to looking at non-ideal op-amps. Fooming (on human time-scales) may still be possible, but to determine that we have to get a handle on the math going on inside the box(es).

But in computation, we are dealing almost always with discrete math.

One could formulate discrete analogs to the continuous equations relating self-optimization steps. But I don't think this gains much, as we are not interested in the efficiency of any particular optimization step. That wouldn't work anyway, simply because the effect of each optimization step isn't known precisely - not even its timing.
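A discrete analog of that kind might be sketched like this (purely illustrative; the gain functions below are hypothetical stand-ins, since the real per-step gain is exactly what we don't know): capability after each optimization step is the previous capability plus a gain that itself depends on current capability.

```python
def improvement_trajectory(f, c0, steps):
    """Iterate the discrete self-improvement recurrence c_{n+1} = c_n + f(c_n).

    f  : gain per optimization step as a function of current capability
         (hypothetical -- the true gain function is unknown).
    c0 : initial capability; steps : number of optimization steps.
    """
    c = c0
    traj = [c]
    for _ in range(steps):
        c = c + f(c)
        traj.append(c)
    return traj

# Compounding returns (gain proportional to capability) give geometric
# growth -- the "foom" regime; diminishing returns give slow, plateau-like growth.
foom    = improvement_trajectory(lambda c: 0.5 * c, 1.0, 10)  # c_{n+1} = 1.5 * c_n
plateau = improvement_trajectory(lambda c: 1.0 / c, 1.0, 10)  # roughly sqrt-like
```

The qualitative takeoff behavior hinges entirely on the shape of f, which is the point: without looking inside the box, we have no way to pin f down.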

But maybe your proposal to use complexity results from combinatorial optimization theory for specific feedback types (between the optimization stages outlined by EY) could provide better approximations to possible speedups.

Maybe we can approximate the black-box as a set of nested interrelated boxes.

Comment author: sbenthall 21 October 2014 12:16:02AM 1 point [-]

Norbert Wiener is where it all starts. This book has a lot of essays. It's interesting--he's talking about learning machines before "machine learning" was a household word, but envisioning them as electrical circuits.

http://www.amazon.com/Cybernetics-Second-Edition-Control-Communication/dp/026273009X

I think that it's important to look inside the boxes. We know a lot about the mathematical limits of boxes which could help us understand whether and how they might go foom.

Thank you for introducing me to that Concrete Mathematics book. That looks cool.

I would be really interested to see how you model this problem. I'm afraid that op-amps are not something I'm familiar with but it sounds like you are onto something.

Comment author: sbenthall 21 October 2014 12:10:24AM 1 point [-]

Do you think that rationalism is becoming a religion, or should become one?

Comment author: Stuart_Armstrong 20 October 2014 12:19:18PM 2 points [-]
Comment author: sbenthall 21 October 2014 12:08:24AM 1 point [-]

Thanks. That criticism makes sense to me. You put the point very concretely.

What do you think of the use of optimization power in arguments about takeoff speed and x-risk?

Or do you have a different research agenda altogether?

Comment author: lukeprog 20 October 2014 06:22:11PM *  0 points [-]

You might say bounded rationality is our primary framework for thinking about AI agents, just as it is in AI textbooks like Russell & Norvig's. So that question sounds to me like it might sound to a biologist if she were asked whether her sub-area had any connections to that "Neo-Darwinism" thing. :)

Comment author: sbenthall 21 October 2014 12:05:22AM 0 points [-]

That makes sense. I'm surprised that I haven't found any explicit reference to that in the literature I've been looking at. Is that because it is considered to be implicitly understood?

One way to talk about optimization power, maybe, would be to consider a spectrum between unbounded, Laplacean rationality and the dumbest things around. There seems to be a move away from this, though, because it's too tied to notions of intelligence and doesn't look enough at outcomes?

It's this move that I find confusing.

Comment author: DavidLS 20 October 2014 10:56:48AM 2 points [-]

Yeah, this is a brutal point. I wish I knew a good answer here.

Is there a gold standard approach? Last I checked even the state of the art wasn't particularly good.

Facebook / Google / StumbleUpon ads sound promising in that they can be trivially automated, and if sign-ups are restricted to ad respondents, then the friend issue is moot. Facebook is the most interesting of those because of the demographic control it gives.

How bad is the bias? I performed a couple of Google Scholar searches but didn't find anything satisfying.

To make things more complicated, some companies will want to test highly targeted populations. For example, Apptimize is only suitable for mobile app developers -- and I don't see a Facebook campaign working out very well for locating such people.

A tentative solution might be having the company wishing to perform the test supply a list of websites they feel cater to good participants. This is even worse than Facebook ads from a biasing perspective, though. At minimum, it sounds like prominently disclosing how participants were located will be important.

Comment author: sbenthall 20 October 2014 11:56:56PM 3 points [-]

There are people in my department who do work in this area. I can reach out and ask them.

I think Mechanical Turk gets used a lot for survey experiments because it has a built-in compensation mechanism and there are ways to ask questions that filter people into precisely the population you want.

I wouldn't dismiss Facebook ads so quickly. I bet there is a way to target mobile app developers on that.

My hunch is that like survey questions, sampling methods are going to need to be tuned case-by-case and patterns extracted inductively from that. Good social scientific experiment design is very hard. Standardizing it is a noble but difficult task.

Comment author: lukeprog 19 October 2014 03:27:07AM 3 points [-]

It's not much, but: see our brief footnote #3 in IE:EI and the comments and sources I give in What is intelligence?

Comment author: sbenthall 20 October 2014 05:23:50AM 0 points [-]

Thanks. That's very helpful.

I've been thinking about Stuart Russell lately, which reminds me...bounded rationality. Isn't there a bunch of literature on that?

http://en.wikipedia.org/wiki/Bounded_rationality

Have you ever looked into any connections there? Any luck with that?
