
Comment author: Kaj_Sotala 04 March 2017 05:06:50PM *  4 points [-]

There was discussion about this post on /r/ControlProblem; I agree with these two comments:

If I understood the article correctly, it seems to me that the author is missing the point a bit.

He argues that the explosion has to slow down, but the point is not about superintelligence becoming limitless in a mathematical sense; it's about how far it can actually get before it starts hitting its limits.

Of course, it makes sense that, as the author writes, a rapid increase in intelligence would eventually have to slow down as it approached hardware and data-acquisition limits that would keep making its improvement process harder and harder. But that seems almost irrelevant if the actual limits turn out to be high enough for the system to evolve far enough.

Bostrom's argument is not that the intelligence explosion, once started, would have to continue indefinitely for it to be dangerous.

Who cares if the intelligence explosion of an AI entity will have to grind to a halt before quite reaching the predictive power of an absolute omniscient god?

If it has just enough hardware and data available during the initial phase of the explosion to figure out how to break out of its sandbox and connect to more hardware and data over the net, then it might just have enough resources to keep up the momentum and sustain its increasingly rapid improvement long enough to become dangerous. The effects of its recalcitrance increasing sometime further down the road would then not matter much to us.

and

I had the same impression.

He presents an argument about improving the various expressions in Bayes' theorem, and arrives at the conclusion that the agent would need to improve its hardware or interact with the outside world in order to lead to a potentially dangerous intelligence explosion. My impression was that everyone had already taken that conclusion for granted.

Also, I wrote a paper some time back that essentially presented the opposite argument; here's the abstract, you may be interested in checking it out:

Two crucial questions in discussions about the risks of artificial superintelligence are: 1) How much more capable could an AI become relative to humans, and 2) how easily could superhuman capability be acquired? To answer these questions, I will consider the literature on human expertise and intelligence, discuss its relevance for AI, and consider how an AI could improve on humans in two major aspects of thought and expertise, namely mental simulation and pattern recognition. I find that although there are very real limits to prediction, it seems like an AI could still substantially improve on human intelligence, possibly even mastering domains which are currently too hard for humans. In practice, the limits of prediction do not seem to pose much of a meaningful upper bound on an AI’s capabilities, nor do we have any nontrivial lower bounds on how much time it might take to achieve a superhuman level of capability. Takeover scenarios with timescales on the order of mere days or weeks seem to remain within the range of plausibility.

Comment author: Anders_H 18 January 2017 01:23:22AM *  2 points [-]

I skimmed this paper and plan to read it in more detail tomorrow. My first thought is that it is fundamentally confused. I believe the confusion comes from the fact that the word "prediction" is used with two separate meanings: are you interested in predicting Y given an observed value of X (i.e. Pr[Y | X=x]), or are you interested in predicting Y given an intervention on X (i.e. Pr[Y | do(X=x)])?

The first of these may be useful for certain purposes, but if you intend to use the research for decision making and optimization (i.e. you want to intervene to set the value of X in order to optimize Y), then you really need the second type of predictive ability, in which case you need to extract causal information from the data. This is only possible if you have a randomized trial, or if you have a correct causal model.

You can use the word "prediction" to refer to the second type of research objective, but this is not the kind of prediction that machine learning algorithms are designed to do.

In the conclusions, the authors write:

"By contrast, a minority of statisticians (and most machine learning researchers) belong to the “algorithmic modeling culture,” in which the data are assumed to be the result of some unknown and possibly unknowable process, and the primary goal is to find an algorithm that results in the same outputs as this process given the same inputs. "

The definition of "algorithmic modeling culture" is somewhat circular, as it just moves the ambiguity surrounding "prediction" to the word "input". If by "input" they mean that the algorithm observes the value of an independent variable and makes a prediction for the dependent variable, then you are talking about a true prediction model, which may be useful for certain purposes (diagnosis, prognosis, etc.) but which is unusable if you are interested in optimizing the outcome.

If you instead claim that the "input" can also include observations about interventions on a variable, then your predictions will certainly fail unless the algorithm was trained in a dataset where someone actually intervened on X (i.e. someone did a randomized controlled trial), or unless you have a correct causal model.

Machine learning algorithms are not magic: they do not solve the problem of confounding unless they have a correct causal model. The fact that these algorithms are good at predicting stuff in observational datasets does not tell you anything useful for deciding what the optimal value of the independent variable is.
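To make the confounding point concrete, here is a minimal sketch in Python, assuming a toy data-generating process (not anything from the paper): a confounder Z drives both X and Y, and X has no causal effect on Y at all. A model fit to observational data finds a strong X-Y association, but that association vanishes once X is set by intervention.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural model: Z confounds X and Y; X has no effect on Y.
z = rng.normal(size=n)             # unobserved confounder
x = 2.0 * z + rng.normal(size=n)   # "treatment", driven by Z
y = 3.0 * z + rng.normal(size=n)   # outcome, driven only by Z

# Observational prediction Pr[Y | X=x]: regressing Y on X finds a
# strong slope, because X acts as a proxy for Z.
slope_obs = np.cov(x, y)[0, 1] / np.var(x)
print(f"observational slope: {slope_obs:.2f}")   # roughly 1.2

# Interventional prediction Pr[Y | do(X=x)]: setting X by fiat cuts
# the Z -> X arrow, and the apparent effect of X disappears.
x_do = rng.normal(size=n)                # X chosen by the experimenter
y_do = 3.0 * z + rng.normal(size=n)      # Y unchanged, since X has no effect
slope_do = np.cov(x_do, y_do)[0, 1] / np.var(x_do)
print(f"interventional slope: {slope_do:.2f}")   # roughly 0
```

A purely predictive model trained on the observational data would conclude that raising X raises Y; acting on that conclusion would accomplish nothing, which is the sense in which such a model is unusable for optimizing the outcome.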

In general, this paper is a very good example to illustrate why I keep insisting that machine learning people need to urgently read up on Pearl, Robins or Van der Laan. The field is in danger of falling into the same failure mode as epidemiology, i.e. essentially ignoring the problem of confounding. In the case of machine learning, this may be more insidious because the research is dressed up in fancy math and therefore looks superficially more impressive.

Comment author: Kaj_Sotala 13 February 2017 12:26:44PM 0 points [-]

Not entirely sure I understand you; I read the paper mostly as pointing out that current psych methodology tends to overfit, and that psychologists don't even know what overfitting means. This is true regardless of which type of prediction we're talking about.

Comment author: Kaj_Sotala 22 January 2017 11:44:05AM 4 points [-]

Do you feel the "link post ugh"?


Comment author: cousin_it 28 December 2016 08:54:25PM *  19 points [-]

I've found a nice hack that may help others: practice starting and stopping to do stuff, rather than just doing or not doing stuff.

Example 1: if you want to practice drawing, instead of forcing yourself into a long drawing session, repeat the action "drop whatever else you're doing and start drawing practice" five times within one day. Then it'll be easier the next day.

Example 2: if you want to surf the internet less, instead of forcing yourself to stay away from the computer for a long time, repeat the action "stop surfing and relax for a minute" five times within one day. Then it'll be easier the next day.

I don't know if this stuff works, but it gives me a cool feeling of being in control :-)

Comment author: Kaj_Sotala 30 December 2016 01:50:36PM 3 points [-]

Based on what I know of habit formation and the principles of deliberate practice, this should work.

A friend also commented that it worked for her when she wanted to start exercising more regularly.

Comment author: Kaj_Sotala 29 December 2016 06:19:56PM 14 points [-]

It's my understanding that in a democracy, the criterion for how various groups of people are treated isn't so much "are these people economically useful for the state", but rather "how much voting power do these people have and use" (the democracy parts of The Rulers for Rulers are relevant here). For instance, as the linked video notes, countries where the vote of the farming bloc swings elections tend to have large farming subsidies, even though this pretty much means that the farmers need the state financially and not the other way around.

It seems plausible to me that UBI could even make its recipients more politically influential: I used to have some involvement with Finnish politics, and heard that the various political parties rely a lot on pensioners as their volunteers, since pensioners have a lot of spare time that they can use on politics. This suggests that interventions such as UBI, which may give their beneficiaries more free time, increase the chances of those beneficiaries participating in the political system and thus being taken more into account in decision-making.

Comment author: Bobertron 20 December 2016 11:05:47PM 2 points [-]

Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together and the kid is just being stupid. At least on first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself in some situations to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.

Comment author: Kaj_Sotala 22 December 2016 06:32:59PM *  0 points [-]

In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other,

If you think that successful writers are talented, and that talent means fewer misspellings, then misspelling things is evidence that you're not going to be a successful writer. (No, I don't think this is a very plausible model, but it's one that I'd imagine could be plausible to a kid with a fixed mindset and who didn't yet know what really distinguishes good writers from the bad.)

Comment author: Elo 17 December 2016 08:38:56AM 0 points [-]

As a very shitty theory: the results might be explained by the frequency of exercise associated with sauna use. I.e., if I go in the sauna every time I go to the gym, and I go to the gym 7 days a week instead of 1 day a week, I can presume that means I am healthier or more likely to be healthier.

Previous results from the KIHD study have shown that frequent sauna bathing also significantly reduces the risk of sudden cardiac death, the risk of death due to coronary artery disease and other cardiac events, as well as overall mortality. According to Professor Jari Laukkanen, the study leader, sauna bathing may protect both the heart and memory to some extent via similar, still poorly known mechanisms. “However, it is known that cardiovascular health affects the brain as well. The sense of well-being and relaxation experienced during sauna bathing may also play a role.”

Which is what I would expect with general health. I barely know anyone who uses a sauna, let alone anyone who uses one 7 days a week, mainly because they mostly exist in conjunction with health infrastructure like gyms and swimming pools.

Comment author: Kaj_Sotala 17 December 2016 05:36:08PM *  4 points [-]

Note that the study is from Finland, where sauna-going is not particularly associated with exercise: people just go into the sauna for its own sake. There are saunas in conjunction with gyms, yes, but e.g. apartment buildings often have their own dedicated saunas that the tenants can reserve for their own use. (Somebody having a single one-hour sauna shift per week is typical.)

That said, there are probably other confounders in that e.g. people who can use a sauna seven times a week are a lot more likely to have a sauna of their own, and so live in their own house rather than an apartment, among other things.

Comment author: Kaj_Sotala 17 December 2016 05:31:42PM 4 points [-]

Could you elaborate on the developmental tasks, at least the bolded ones? I think I get their rough contents, but their descriptions are short enough that it might just be an illusion of understanding.

Comment author: Kaj_Sotala 14 December 2016 07:29:35PM 2 points [-]

Whoa, this draft has a section on AGI and superintelligence that directly quotes Bostrom, Yudkowsky, Omohundro etc., and also has an "appreciation" section saying "We also wish to express our appreciation for the following organizations regarding their seminal efforts regarding AI/AS Ethics, including (but not limited to) [...] the Machine Intelligence Research Institute".

The executive summary for the AGI/ASI section reads as follows:

Future highly capable AI systems (sometimes referred to as artificial general intelligence or AGI) may have a transformative effect on the world on the scale of the agricultural or industrial revolutions, which could bring about unprecedented levels of global prosperity. The Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) Committee has provided multiple issues and candidate recommendations to help ensure this transformation will be a positive one via the concerted effort by the AI community to shape it that way.

Issues:

• As AI systems become more capable— as measured by the ability to optimize more complex objective functions with greater autonomy across a wider variety of domains—unanticipated or unintended behavior becomes increasingly dangerous.
• Retrofitting safety into future, more generally capable, AI systems may be difficult.
• Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems.
• Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions.

Comment author: ig0r 07 December 2016 10:25:16PM 0 points [-]

Nice. Just curious, how much did you do, and why'd you stop (if you did)?

In response to comment by ig0r on Finding slices of joy
Comment author: Kaj_Sotala 11 December 2016 12:30:18PM *  0 points [-]

Hard to say, both because I haven't been sticking very hard to any specific style of meditation, and also because the amount of meditation I've done has varied a lot, depending on various life circumstances. There was a time when I'd meditate for several hours a day; these days I do less formal practice (I try to go for at least twenty minutes a day), but I tend to also incorporate meditation into my daily activities and routines and maintain a level of mindfulness throughout the day. I tend to easily slip into a meditative state in the morning, after waking up but before getting up from bed, and might spend an hour or two that way.

I haven't actually done very much pure vipassana; instead I've found tranquility meditation, "just-sitting" zazen, and most recently metta more rewarding.
