
Comment author: Stuart_Armstrong 28 April 2017 05:33:53AM 0 points [-]

I feel like you're straw-manning scenario analysis.

I am.

But if we're going to analyse scenario planning seriously - which I certainly didn't here - we need to look at older scenario planning attempts, and see how useful they were.

Comment author: Kaj_Sotala 28 April 2017 12:14:30PM 2 points [-]

If you admit that this is an unfair strawman, then why are you bothering to post it?

Comment author: Kaj_Sotala 22 April 2017 11:37:58AM *  7 points [-]

Good criticisms and I think I'm in rough agreement with many of them, but I'd suggest cutting/shortening the beginning. ~everyone already knows what Ponzi schemes are, and the whole extended "confidence game" introduction frames your post in a more hostile way than I think you intended, by leading your readers to think that you're about to accuse EA of being intentionally fraudulent.

Comment author: Oscar_Cunningham 21 April 2017 02:56:17PM 0 points [-]

Until I actually looked into this, so was I. In my case I think it's Terry Pratchett's fault: in Feet of Clay he describes golems as being prone to continuing with tasks forever unless told to stop.

Comment author: Kaj_Sotala 21 April 2017 05:10:41PM *  0 points [-]

From the MIRI paper "Intelligence Explosion and Machine Ethics":

Let us call this precise, instruction-following genie a Golem Genie. (A golem is a creature from Jewish folklore that would in some stories do exactly as told [Idel 1990], often with unintended consequences, for example polishing a dish until it is as thin as paper [Pratchett 1996].)

(The "Idel" reference goes to Idel, Moshe. 1990. Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. SUNY Series in Judaica. Albany: State University of New York Press.)

Comment author: username2 20 April 2017 10:07:27PM 0 points [-]

This argument notably holds true of FAI / control theory efforts. Proponents of FAI assert that heaven-on-Earth utopian futures are not inevitable outcomes, but rather low probability possibilities they must work towards. It still seems overtly religious and weird to those of us who are not convinced that utopian outcomes are even possible / logically consistent.

Comment author: Kaj_Sotala 21 April 2017 05:07:27PM 2 points [-]

If you're not convinced that utopian outcomes are even possible, isn't that completely compatible with the claim that utopian futures are not inevitable and low-probability?

Comment author: Kaj_Sotala 21 April 2017 03:09:51PM 3 points [-]

Huh, some of the top articles are totally not what I'd have expected. "Don't Get Offended" is non-promoted and currently only has an upvote total of 32. "Advanced Placement exam cutoffs and superficial knowledge over deep knowledge" is also not promoted and has an upvote total of 4.

Would be interesting for someone to run an analysis to see how closely upvotes and page views correlate. Apparently not as much as I'd have guessed.
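A minimal sketch of what such an analysis could look like, assuming a hypothetical per-post export post_stats.csv with upvotes and pageviews columns (the file and column names are my inventions; no such export is referenced here):

    import pandas as pd
    from scipy.stats import pearsonr, spearmanr

    # Hypothetical per-post statistics; file name and columns are assumptions.
    posts = pd.read_csv("post_stats.csv")  # columns: upvotes, pageviews

    # Pearson measures linear association; Spearman is rank-based and more
    # robust to a handful of unusually heavily viewed posts.
    print("Pearson r:", pearsonr(posts["upvotes"], posts["pageviews"]))
    print("Spearman rho:", spearmanr(posts["upvotes"], posts["pageviews"]))

Reporting both coefficients seems worthwhile, since a few outlier posts with huge view counts could dominate the linear correlation.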

Comment author: Kaj_Sotala 04 March 2017 05:06:50PM *  4 points [-]

There was discussion about this post on /r/ControlProblem; I agree with these two comments:

If I understood the article correctly, it seems to me that the author is missing the point a bit.

He argues that the explosion has to slow down, but the point is not about superintelligence becoming limitless in a mathematical sense; it's about how far it can actually get before it starts hitting its limits.

Of course, it makes sense that, as the author writes, a rapid increase in intelligence would eventually have to slow down due to approaching hardware and data acquisition limits which would keep making its improvement process harder and harder. But that seems almost irrelevant if the actual limits turn out to be high enough for the system to evolve far enough.

Bostrom's argument is not that the intelligence explosion, once started, would have to continue indefinitely for it to be dangerous.

Who cares if the intelligence explosion of an AI entity will have to grind to a halt before quite reaching the predictive power of an absolute omniscient god.

If it has just enough hardware and data available during its initial phase of the explosion to figure out how to break out of its sandbox and connect to some more hardware and data over the net, then it might just have enough resources to keep the momentum and sustain its increasingly rapid improvement long enough to become dangerous, and the effects of its recalcitrance increasing sometime further down the road would not matter much to us.

and

I had the same impression.

He presents an argument about improving the various expressions in Bayes' theorem, and arrives at the conclusion that the agent would need to improve its hardware or interact with the outside world in order to lead to a potentially dangerous intelligence explosion. My impression was that everyone had already taken that conclusion for granted.
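For reference (my gloss, not the paper's or the commenter's wording), the "expressions in Bayes' theorem" in question are the terms of

    P(H \mid E) = P(E \mid H) \, P(H) / P(E)

that is, the prior P(H), the likelihood P(E | H), and the evidence term P(E), each of which an agent could in principle come to compute more accurately or more efficiently.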

Also, I wrote a paper some time back that essentially presented the opposite argument; here's the abstract, you may be interested in checking it out:

Two crucial questions in discussions about the risks of artificial superintelligence are: 1) How much more capable could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, I will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how an AI could improve on humans in two major aspects of thought and expertise, namely mental simulation and pattern recognition. I find that although there are very real limits to prediction, it seems like an AI could still substantially improve on human intelligence, possibly even mastering domains which are currently too hard for humans. In practice, the limits of prediction do not seem to pose much of a meaningful upper bound on an AI’s capabilities, nor do we have any nontrivial lower bounds on how much time it might take to achieve a superhuman level of capability. Takeover scenarios with timescales on the order of mere days or weeks seem to remain within the range of plausibility.

Comment author: Anders_H 18 January 2017 01:23:22AM *  2 points [-]

I skimmed this paper and plan to read it in more detail tomorrow. My first thought is that it is fundamentally confused. I believe the confusion comes from the fact that the word "prediction" is used with two separate meanings: are you interested in predicting Y given an observed value of X (i.e. Pr[Y | X=x]), or in predicting Y given an intervention on X (i.e. Pr[Y | do(X=x)])?

The first of these may be useful for certain purposes, but if you intend to use the research for decision making and optimization (i.e. you want to intervene to set the value of X in order to optimize Y), then you really need the second type of predictive ability, in which case you need to extract causal information from the data. This is only possible if you have a randomized trial, or if you have a correct causal model.

You can use the word "prediction" to refer to the second type of research objective, but this is not the kind of prediction that machine learning algorithms are designed to do.

In the conclusions, the authors write:

"By contrast, a minority of statisticians (and most machine learning researchers) belong to the “algorithmic modeling culture,” in which the data are assumed to be the result of some unknown and possibly unknowable process, and the primary goal is to find an algorithm that results in the same outputs as this process given the same inputs. "

The definition of "algorithmic modelling culture" is somewhat circular, as it just moves the ambiguity surrounding "prediction" to the word "input". If by "input" they mean that the algorithm observes the value of an independent variable and makes a prediction for the dependent variable, then you are talking about a true prediction model, which may be useful for certain purposes (diagnosis, prognosis, etc) but which is unusable if you are interested in optimizing the outcome.

If you instead claim that the "input" can also include observations about interventions on a variable, then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial), or unless you have a correct causal model.

Machine learning algorithms are not magic; they do not solve the problem of confounding unless they have a correct causal model. The fact that these algorithms are good at predicting stuff in observational datasets does not tell you anything useful for deciding what the optimal value of the independent variable is.
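To make the observational/interventional distinction above concrete, here is a minimal simulation sketch (my own illustration, not taken from the paper or from the comment): a confounder Z drives both X and Y, so a model predicting Y from X does well on observational data even though intervening on X does nothing to Y.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Confounder Z drives both the "exposure" X and the outcome Y;
    # X has no causal effect on Y at all.
    z = rng.normal(size=n)
    x = z + rng.normal(scale=0.5, size=n)
    y = 2 * z + rng.normal(scale=0.5, size=n)

    # Observationally, X is highly informative about Y...
    print("corr(X, Y):", np.corrcoef(x, y)[0, 1])
    print("E[Y | X near 3]:", y[np.abs(x - 3) < 0.25].mean())

    # ...but under the intervention do(X=3), Y's mechanism ignores X,
    # so the outcome distribution is unchanged.
    y_do = 2 * z + rng.normal(scale=0.5, size=n)
    print("E[Y | do(X=3)]:", y_do.mean())

In this toy setup Pr[Y | X=3] is concentrated on large values while Pr[Y | do(X=3)] is centered on zero, which is exactly the gap a purely predictive model cannot see.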

In general, this paper is a very good example to illustrate why I keep insisting that machine learning people need to urgently read up on Pearl, Robins or Van der Laan. The field is in danger of falling into the same failure mode as epidemiology, i.e. essentially ignoring the problem of confounding. In the case of machine learning, this may be more insidious because the research is dressed up in fancy math and therefore looks superficially more impressive.

Comment author: Kaj_Sotala 13 February 2017 12:26:44PM 0 points [-]

Not entirely sure I understand you; I read the paper mostly as pointing out that current psych methodology tends to overfit, and that psychologists don't even know what overfitting means. This is true regardless of which type of prediction we're talking about.
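As a minimal illustration of the overfitting point (my own sketch, not an example from the paper): a model that is too flexible for the amount of data it sees fits the training sample almost perfectly but does much worse on held-out data.

    import numpy as np

    rng = np.random.default_rng(0)

    # A small "study": 15 noisy observations of a simple linear relationship,
    # plus a large held-out sample to stand in for replication.
    x_train = rng.uniform(-1, 1, 15)
    y_train = x_train + rng.normal(scale=0.3, size=15)
    x_test = rng.uniform(-1, 1, 500)
    y_test = x_test + rng.normal(scale=0.3, size=500)

    for degree in (1, 10):
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.3f}, held-out MSE {test_mse:.3f}")

The expected pattern is that the degree-10 fit drives training error toward zero while its held-out error is far worse than the degree-1 fit's, which is the failure mode the paper attributes to current psych methodology.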

Comment author: Kaj_Sotala 22 January 2017 11:44:05AM 4 points [-]

Do you feel the "link post ugh"?


Comment author: cousin_it 28 December 2016 08:54:25PM *  19 points [-]

I've found a nice hack that may help others: practice starting and stopping to do stuff, rather than just doing or not doing stuff.

Example 1: if you want to practice drawing, instead of forcing yourself into a long drawing session, repeat the action "drop whatever else you're doing and start drawing practice" five times within one day. Then it'll be easier the next day.

Example 2: if you want to surf the internet less, instead of forcing yourself to stay away from the computer for a long time, repeat the action "stop surfing and relax for a minute" five times within one day. Then it'll be easier the next day.

I don't know if this stuff works, but it gives me a cool feeling of being in control :-)

Comment author: Kaj_Sotala 30 December 2016 01:50:36PM 3 points [-]

Based on what I know of habit formation and the principles of deliberate practice, this should work.

A friend also commented that it worked for her when she wanted to start exercising more regularly.

Comment author: Kaj_Sotala 29 December 2016 06:19:56PM 14 points [-]

It's my understanding that in a democracy, the criterion for how various groups of people are treated isn't so much "are these people economically useful for the state" as "how much voting power do these people have and use" (the democracy parts of The Rules for Rulers are relevant here). For instance, as the linked video notes, countries where the vote of the farming bloc swings elections tend to have large farming subsidies, even though this pretty much means that the farmers need the state financially and not the other way around.

It seems plausible to me that UBI could even make its recipients more politically influential: I used to have some involvement with Finnish politics, and heard that the various political parties rely a lot on pensioners as their volunteers, since pensioners have a lot of spare time that they can use on politics. This would suggest that interventions such as the UBI, which may give their beneficiaries more free time, increase the chances of those beneficiaries participating in the political system and thus being taken more into account in decision-making.
