
[Stub] Ontological crisis = out-of-environment behaviour?

8 Stuart_Armstrong 13 January 2016 03:10PM

One problem with AI is the possibility of ontological crises: an AI discovering that its fundamental model of reality is flawed, and being unable to cope safely with that discovery. Another problem is out-of-environment behaviour: an AI that has been trained to behave very well in a specific training environment messes up when introduced to a more general environment.

It suddenly occurred to me that these might in fact be the same problem in disguise. In both cases, the AI has developed certain ways of behaving in reaction to certain regular features of its environment, and is then suddenly placed in a situation where those regular features are absent: either because it realises that the features are actually very different from what it thought (ontological crisis), or because the environment is different and no longer supports the same regularities (out-of-environment behaviour).

In a sense, both these errors may be seen as imperfect extrapolation from partial training data.
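To make the "imperfect extrapolation" framing concrete, here is a minimal sketch (not from the original post; the setup and all variable names are illustrative assumptions). A linear model is trained in an environment where a feature happens to track the label almost perfectly. It latches onto that regularity, and when deployment breaks the regularity, performance collapses, even though nothing about the model itself changed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Training environment: x2 is a near-copy of the label, a regularity
# that holds only here. x1 carries a weaker but genuine signal.
y_train = rng.integers(0, 2, n)
x1_train = y_train + rng.normal(0, 2.0, n)   # weak, robust signal
x2_train = y_train + rng.normal(0, 0.1, n)   # strong, environment-specific signal
X_train = np.column_stack([x1_train, x2_train])

# Least-squares linear predictor fit only to this environment.
w, *_ = np.linalg.lstsq(X_train, y_train.astype(float), rcond=None)

# New environment: the regularity is gone, x2 is now unrelated noise.
y_test = rng.integers(0, 2, n)
x1_test = y_test + rng.normal(0, 2.0, n)
x2_test = rng.normal(0, 0.1, n)              # no longer tracks the label
X_test = np.column_stack([x1_test, x2_test])

def accuracy(X, y):
    return np.mean((X @ w > 0.5) == y)

print(f"training-environment accuracy: {accuracy(X_train, y_train):.2f}")  # near 1.0
print(f"new-environment accuracy:      {accuracy(X_test, y_test):.2f}")    # near chance
```

The model's weights concentrate on x2 because that was the most reliable regularity in the training data. Whether we describe the failure as "the world turned out not to be what the model assumed" or "the model was moved outside its training distribution", the mechanism is the same: behaviour keyed to a regularity that no longer holds.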

Dealing with a Major Personal Crisis

16 Gunnar_Zarncke 20 January 2014 01:36AM

This is the post about Dealing with a Major Personal Crisis that I promised earlier. Please continue reading there, but comment here.

The reasons for posting it this way are explained at the end of the linked post. I hope this approach achieves what I intend.