
[Link] Cognitive Core Systems explaining intuitions behind belief in souls, free will, and creation myths

5 Kaj_Sotala 06 May 2017 12:13PM
Comment author: Stuart_Armstrong 28 April 2017 05:33:53AM 0 points [-]

I feel like you're straw-manning scenario analysis.

I am.

But if we're going to analyse scenario planning seriously - which I certainly didn't here - we need to look at older scenario planning attempts, and see how useful they were.

Comment author: Kaj_Sotala 28 April 2017 12:14:30PM 2 points [-]

If you admit that this is an unfair strawman, then why are you bothering to post it?

Comment author: Kaj_Sotala 22 April 2017 11:37:58AM *  8 points [-]

Good criticisms and I think I'm in rough agreement with many of them, but I'd suggest cutting/shortening the beginning. ~everyone already knows what Ponzi schemes are, and the whole extended "confidence game" introduction frames your post in a more hostile way than I think you intended, by leading your readers to think that you're about to accuse EA of being intentionally fraudulent.

Comment author: Oscar_Cunningham 21 April 2017 02:56:17PM 0 points [-]

Until I actually looked into this, so was I. In my case I think it's Terry Pratchett's fault: in Feet of Clay he describes golems as prone to continuing with tasks forever unless told to stop.

Comment author: Kaj_Sotala 21 April 2017 05:10:41PM *  0 points [-]

From the MIRI paper "Intelligence Explosion and Machine Ethics":

Let us call this precise, instruction-following genie a Golem Genie. (A golem is a creature from Jewish folklore that would in some stories do exactly as told [Idel 1990], often with unintended consequences, for example polishing a dish until it is as thin as paper [Pratchett 1996].)

(The "Idel" reference goes to Idel, Moshe. 1990. Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. SUNY Series in Judaica. Albany: State University of New York Press.)

Comment author: username2 20 April 2017 10:07:27PM 0 points [-]

This argument notably holds true of FAI / control theory efforts. Proponents of FAI assert that heaven-on-Earth utopian futures are not inevitable outcomes, but rather low-probability possibilities they must work towards. It still seems overtly religious and weird to those of us who are not convinced that utopian outcomes are even possible / logically consistent.

Comment author: Kaj_Sotala 21 April 2017 05:07:27PM 2 points [-]

If you're not convinced that utopian outcomes are even possible, isn't that completely compatible with the claim that utopian futures are not inevitable and low-probability?

Comment author: Kaj_Sotala 21 April 2017 03:09:51PM 3 points [-]

Huh, some of the top articles are totally not what I'd have expected. "Don't Get Offended" is non-promoted and currently only has an upvote total of 32. "Advanced Placement exam cutoffs and superficial knowledge over deep knowledge" is also not promoted and has an upvote total of 4.

Would be interesting for someone to run an analysis to see how closely upvotes and page views correlate. Apparently not as much as I'd have guessed.
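As a rough sketch of the kind of analysis I have in mind (assuming a hypothetical CSV export with one row per post and columns for upvotes and page views; the file name and column names are made up):

```python
# Minimal sketch of the upvote/page-view correlation analysis suggested above.
# Assumes a hypothetical CSV export with one row per post and columns
# "upvotes" and "page_views"; neither the file nor the columns are real site data.
import pandas as pd
from scipy import stats

posts = pd.read_csv("post_stats.csv")  # hypothetical export

pearson_r, pearson_p = stats.pearsonr(posts["upvotes"], posts["page_views"])
# Spearman is probably more informative here, since both counts are heavy-tailed.
spearman_r, spearman_p = stats.spearmanr(posts["upvotes"], posts["page_views"])

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.3g})")
print(f"Spearman rho = {spearman_r:.2f} (p = {spearman_p:.3g})")
```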

Brief update on the consequences of my "Two arguments for not thinking about ethics" (2014) article

14 Kaj_Sotala 05 April 2017 11:25AM

In March 2014, I posted on LessWrong an article called "Two arguments for not thinking about ethics (too much)", which started out with:

I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.

I now think that doing this has been more harmful than it has been useful, for two reasons: there's no strong reason to assume that this will give us very good insight into our preferred ethical theories, and more importantly, thinking in those terms will easily lead to akrasia.

I ended the article with the following paragraph:

My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped entirely asking the question of what I should do, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things like doing my tax returns that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's far too early to say whether this actually leads to increased productivity in the long term, but it feels great for my mental health, at least for the time being.

The long-term update (three years after first posting the article) is that starting to shift my thought patterns in this way was totally the right thing to do, and necessary for beginning a long and slow recovery from depression. It's hard to say for sure how big a role this has played, since the patterns of should-thought were very deeply ingrained and have been slow to get rid of; I still occasionally find myself engaging in them. And there have been many other factors affecting my recovery during this period, so only a part of the recovery can be attributed to the "utilitarianism-excising" with any certainty. Yet, whenever I've found myself engaging in such patterns of thought and managed to eliminate them, I have felt much better as a result. I still remember a time when a large part of my waking hours was driven by utilitarian thinking, and it's impossible for me to properly describe how relieved I am that my mind now feels so much more peaceful.

The other obvious question besides "do I feel better now" is "do I actually get more good things done now", and I think that the answer is yes there as well. So I don't just feel generally better: I think my actions and motivations are actually more aligned with doing good than they were when I was trying to more explicitly optimize for following utilitarianism and doing good in that way. I still don't feel like I actually get a lot of good done, but I attribute much of this to still not having entirely recovered; I also still don't get a lot done that pertains to my own personal well-being. (I just spent several months basically doing nothing, because this was pretty much the first time I had the opportunity, finance-wise, to actually take a long stress-free break from everything. It's been amazing, but even after such an extended break, the burnout symptoms still pop up if I'm not careful.)

Comment author: Kaj_Sotala 04 March 2017 05:06:50PM *  4 points [-]

There was discussion about this post on /r/ControlProblem; I agree with these two comments:

If I understood the article correctly, it seems to me that the author is missing the point a bit.

He argues that the explosion has to slow down, but the point is not about superintelligence becoming limitless in a mathematical sense; it's about how far it can actually get before it starts hitting its limits.

Of course, it makes sense that, as the author writes, a rapid increase in intelligence would eventually have to slow down as it approaches hardware and data acquisition limits that keep making its improvement process harder and harder. But that seems almost irrelevant if the actual limits turn out to be high enough for the system to evolve far enough.

Bostrom's argument is not that the intelligence explosion, once started, would have to continue indefinitely for it to be dangerous.

Who cares if the intelligence explosion of an AI entity has to grind to a halt before quite reaching the predictive power of an absolutely omniscient god?

If it has just enough hardware and data available during its initial phase of the explosion to figure out how to break out of its sandbox and connect to some more hardware and data over the net, then it might just have enough resources to keep the momentum and sustain its increasingly rapid improvement long enough to become dangerous, and the effects of its recalcitrance increasing sometime further down the road would not matter much to us.

and

I had the same impression.

He presents an argument about improving the various expressions in Bayes' theorem, and arrives at the conclusion that the agent would need to improve its hardware or interact with the outside world in order to lead to a potentially dangerous intelligence explosion. My impression was that everyone had already taken that conclusion for granted.
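To make the "limits can bind late" point concrete, here is a toy numerical sketch (entirely made-up numbers and a hypothetical logistic-style growth rule, not anything from Bostrom or from the article): capability growth is damped by recalcitrance that rises as capability approaches a hard ceiling, yet the trajectory can still shoot far past a "human-level" reference point long before the ceiling starts to bite.

```python
# Toy model (illustrative only): capability grows at a rate that is damped as it
# approaches a hard ceiling, i.e. recalcitrance keeps increasing. The numbers are
# arbitrary; the point is that the trajectory can pass a "human-level" reference
# point long before the ceiling matters.

def simulate(ceiling=1000.0, start=0.1, rate=0.5, steps=40):
    capability, trajectory = start, []
    for _ in range(steps):
        # Logistic-style damping: growth slows as capability nears the ceiling.
        capability += rate * capability * (1.0 - capability / ceiling)
        trajectory.append(capability)
    return trajectory

traj = simulate()
human_level = 1.0
crossed = next(i for i, c in enumerate(traj) if c >= human_level)
print(f"passes 'human level' ({human_level}) at step {crossed}")
print(f"capability at step 20: {traj[20]:.1f}, at step 39: {traj[39]:.1f} (ceiling 1000)")
```

The exact functional form is irrelevant; the point is only that "growth must eventually stop" and "growth stops before anything dangerous happens" are different claims.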

Also, I wrote a paper some time back that essentially presented the opposite argument; here's the abstract, you may be interested in checking it out:

Two crucial questions in discussions about the risks of artificial superintelligence are: 1) How much more capable could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, I will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how an AI could improve on humans in two major aspects of thought and expertise, namely mental simulation and pattern recognition. I find that although there are very real limits to prediction, it seems like an AI could still substantially improve on human intelligence, possibly even mastering domains which are currently too hard for humans. In practice, the limits of prediction do not seem to pose much of a meaningful upper bound on an AI’s capabilities, nor do we have any nontrivial lower bounds on how much time it might take to achieve a superhuman level of capability. Takeover scenarios with timescales on the order of mere days or weeks seem to remain within the range of plausibility.

[Link] Moral Philosophers as Ethical Engineers: Limits of Moral Philosophy and a Pragmatist Alternative

2 Kaj_Sotala 23 February 2017 01:02PM
Comment author: Anders_H 18 January 2017 01:23:22AM *  2 points [-]

I skimmed this paper and plan to read it in more detail tomorrow. My first thought is that it is fundamentally confused. I believe the confusion comes from the fact that the word "prediction" is used with two separate meanings: are you interested in predicting Y given an observed value of X (Pr[Y | X=x]), or in predicting Y given an intervention on X (i.e. Pr[Y | do(X=x)])?

The first of these may be useful for certain purposes, but if you intend to use the research for decision making and optimization (i.e. you want to intervene to set the value of X in order to optimize Y), then you really need the second type of predictive ability, in which case you need to extract causal information from the data. This is only possible if you have a randomized trial, or if you have a correct causal model.

You can use the word "prediction" to refer to the second type of research objective, but this is not the kind of prediction that machine learning algorithms are designed to do.

In the conclusions, the authors write:

"By contrast, a minority of statisticians (and most machine learning researchers) belong to the “algorithmic modeling culture,” in which the data are assumed to be the result of some unknown and possibly unknowable process, and the primary goal is to find an algorithm that results in the same outputs as this process given the same inputs. "

The definition of "algorithmic modelling culture" is somewhat circular, as it just moves the ambiguity surrounding "prediction" to the word "input". If by "input" they mean that the algorithm observes the value of an independent variable and makes a prediction for the dependent variable, then you are talking about a true prediction model, which may be useful for certain purposes (diagnosis, prognosis, etc) but which is unusable if you are interested in optimizing the outcome.

If you instead claim that the "input" can also include observations about interventions on a variable, then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial), or unless you have a correct causal model.

Machine learning algorithms are not magic, they do not solve the problem of confounding unless they have a correct causal model. The fact that these algorithms are good at predicting stuff in observational datasets does not tell you anything useful for the purposes of deciding what the optimal value of the independent variable is.
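To make the Pr[Y | X=x] versus Pr[Y | do(X=x)] distinction concrete, here is a minimal simulated sketch (illustrative numbers only, not taken from the paper): a confounder Z drives both X and Y, so the slope you get from observational data differs from the true interventional effect.

```python
# Minimal simulation (illustrative only): a confounder Z affects both X and Y.
# The true causal effect of X on Y is 1.0, but the observational regression of
# Y on X is biased upwards because it also picks up Z's influence.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                       # unobserved confounder
x = z + rng.normal(size=n)                   # X depends on Z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)   # Y depends on X (effect = 1.0) and Z

# "Prediction" in the observational sense: regress Y on X alone.
obs_slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(f"observational slope (Pr[Y|X]):      {obs_slope:.2f}")   # ~2.0, not 1.0

# "Prediction" under intervention: X is set by randomization, breaking the X-Z link.
x_do = rng.normal(size=n)
y_do = 1.0 * x_do + 2.0 * z + rng.normal(size=n)
do_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do, ddof=1)
print(f"interventional slope (Pr[Y|do(X)]): {do_slope:.2f}")    # ~1.0
```

An algorithm trained only on the observational data would happily report the first slope, which is exactly the confounding problem described above.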

In general, this paper is a very good example to illustrate why I keep insisting that machine learning people need to urgently read up on Pearl, Robins or Van der Laan. The field is in danger of falling into the same failure mode as epidemiology, i.e. essentially ignoring the problem of confounding. In the case of machine learning, this may be more insidious because the research is dressed up in fancy math and therefore looks superficially more impressive.

Comment author: Kaj_Sotala 13 February 2017 12:26:44PM 0 points [-]

Not entirely sure I understand you; I read the paper mostly as pointing out that current psych methodology tends to overfit, and that psychologists don't even know what overfitting means. This is true regardless of which type of prediction we're talking about.
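For readers who want a concrete picture of what overfitting means here, a minimal synthetic sketch (made-up data, nothing from the paper): a very flexible model fits the training sample better than a simple one, but predicts a fresh sample from the same process worse.

```python
# Minimal illustration of overfitting (synthetic data, not from the paper):
# a flexible model can fit noise in the training sample and then predict
# new samples from the same process worse than a simpler model does.
import numpy as np

rng = np.random.default_rng(1)

def sample(n=30):
    x = rng.uniform(-1, 1, n)
    y = 2.0 * x + rng.normal(scale=0.5, size=n)  # true relationship is linear
    return x, y

x_train, y_train = sample()
x_test, y_test = sample()

for degree in (1, 15):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.2f}, test MSE {test_mse:.2f}")
```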
