
[Link] Choosing prediction over explanation in psychology: Lessons from machine learning

1 Kaj_Sotala 17 January 2017 09:23PM

[Link] Disjunctive AI scenarios: Individual or collective takeoff?

3 Kaj_Sotala 11 January 2017 03:43PM
Comment author: cousin_it 28 December 2016 08:54:25PM * 19 points

I've found a nice hack that may help others: practice starting and stopping to do stuff, rather than just doing or not doing stuff.

Example 1: if you want to practice drawing, instead of forcing yourself into a long drawing session, repeat the action "drop whatever else you're doing and start drawing practice" five times within one day. Then it'll be easier the next day.

Example 2: if you want to surf the internet less, instead of forcing yourself to stay away from the computer for a long time, repeat the action "stop surfing and relax for a minute" five times within one day. Then it'll be easier the next day.

I don't know if this stuff works, but it gives me a cool feeling of being in control :-)

Comment author: Kaj_Sotala 30 December 2016 01:50:36PM 3 points

Based on what I know of habit formation and the principles of deliberate practice, this should work.

A friend also commented that it worked for her when she wanted to start exercising more regularly.

Comment author: Kaj_Sotala 29 December 2016 06:19:56PM 14 points

It's my understanding that in a democracy, the criterion for how various groups of people are treated isn't so much "are these people economically useful for the state" but rather "how much voting power do these people have and use" (the democracy parts of The Rules for Rulers are relevant here). For instance, as the linked video notes, countries where the vote of the farming bloc swings elections tend to have large farming subsidies, even though this pretty much means that the farmers need the state financially and not the other way around.

It seems plausible to me that UBI could even make its recipients more politically influential: I used to have some involvement with Finnish politics, and heard that the various political parties rely a lot on pensioners as their volunteers, since pensioners have a lot of spare time that they can use on politics. This suggests that interventions such as UBI, which may give their beneficiaries more free time, increase the chances of those beneficiaries participating in the political system and thus being taken more into account in decision-making.

Comment author: Bobertron 20 December 2016 11:05:47PM 2 points

Interesting article. Here is the problem I have: in the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together, and the kid is just being stupid. At least at first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself, in some situations, to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.

Comment author: Kaj_Sotala 22 December 2016 06:32:59PM * 0 points

In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other,

If you think that successful writers are talented, and that talent means fewer misspellings, then misspelling things is evidence that you're not going to be a successful writer. (No, I don't think this is a very plausible model, but it's one that I'd imagine could be plausible to a kid with a fixed mindset who didn't yet know what really distinguishes good writers from bad ones.)

Comment author: Elo 17 December 2016 08:38:56AM 0 points

As a very shitty theory: the results might be explained by the frequency of exercise associated with sauna use. I.e., if I go in the sauna every time I go to the gym, and I go to the gym 7 days a week instead of 1 day a week, I can presume that means I am healthier or am more likely to be healthier.

Previous results from the KIHD study have shown that frequent sauna bathing also significantly reduces the risk of sudden cardiac death, the risk of death due to coronary artery disease and other cardiac events, as well as overall mortality. According to Professor Jari Laukkanen, the study leader, sauna bathing may protect both the heart and memory to some extent via similar, still poorly known mechanisms. “However, it is known that cardiovascular health affects the brain as well. The sense of well-being and relaxation experienced during sauna bathing may also play a role.”

That would fit with general health, too. I barely know anyone who uses a sauna, let alone anyone who uses one 7 days a week, mainly because they mostly exist in conjunction with health infrastructure like gyms and swimming pools.

Comment author: Kaj_Sotala 17 December 2016 05:36:08PM * 4 points

Note that the study is from Finland, where sauna-going is not particularly associated with exercise: people just go to the sauna for its own sake. There are saunas in conjunction with gyms, yes, but e.g. apartment buildings often have their own dedicated saunas that the tenants can reserve for their own use. (A single one-hour sauna shift per week is typical.)

That said, there are probably other confounders: e.g. people who can use a sauna seven times a week are a lot more likely to have a sauna of their own, and thus to live in their own house rather than an apartment, among other things.

Comment author: Kaj_Sotala 17 December 2016 05:31:42PM 4 points

Could you elaborate on the developmental tasks, at least the bolded ones? I think I get their rough contents, but their descriptions are short enough that it might just be an illusion of understanding.

Comment author: Kaj_Sotala 14 December 2016 07:29:35PM 2 points

Whoa, this draft has a section on AGI and superintelligence that directly quotes Bostrom, Yudkowsky, Omohundro etc., and also has an "appreciation" section saying "We also wish to express our appreciation for the following organizations regarding their seminal efforts regarding AI/AS Ethics, including (but not limited to) [...] the Machine Intelligence Research Institute".

The executive summary for the AGI/ASI section reads as follows:

Future highly capable AI systems (sometimes referred to as artificial general intelligence or AGI) may have a transformative effect on the world on the scale of the agricultural or industrial revolutions, which could bring about unprecedented levels of global prosperity. The Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) Committee has provided multiple issues and candidate recommendations to help ensure this transformation will be a positive one via the concerted effort by the AI community to shape it that way.

Issues:

• As AI systems become more capable—as measured by the ability to optimize more complex objective functions with greater autonomy across a wider variety of domains—unanticipated or unintended behavior becomes increasingly dangerous.
• Retrofitting safety into future, more generally capable, AI systems may be difficult.
• Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly autonomous and capable AI systems.
• Future AI systems may have the capacity to impact the world on the scale of the agricultural or industrial revolutions.

Comment author: ig0r 07 December 2016 10:25:16PM 0 points

Nice. Just curious, how much did you do, and why'd you stop (if you did)?

In response to comment by ig0r on Finding slices of joy
Comment author: Kaj_Sotala 11 December 2016 12:30:18PM * 0 points

Hard to say, both because I haven't been sticking very hard to any specific style of meditation, and also because the amount of meditation I've done has varied a lot, depending on various life circumstances. There was a time when I'd meditate for several hours a day; these days I do less formal practice (I try to go for at least twenty minutes a day), but I tend to also incorporate meditation into my daily activities and routines and maintain a level of mindfulness throughout the day. I tend to easily slip into a meditative state in the morning, after waking up but before getting up from bed, and might spend an hour or two that way.

I haven't actually done very much pure vipassana; instead I've found tranquility meditation, "just-sitting" zazen, and most recently metta more rewarding.

Comment author: owencb 10 December 2016 03:49:55PM * 4 points

Thanks for engaging. Further thoughts:

I agree with you that framing is important; I just deleted the old ETA.

For what it's worth, I think even without saying that your aim is explicitly AI safety, a lot of people reading this post will take that away unless you do more to cancel the implicature. Even the title does this! It's a slightly odd grammatical construction which looks an awful lot like "CFAR’s new focus: AI Safety"; I think without being more up-front about the alternative interpretation it will sometimes be read that way.

I'm curious where our two new docs leave you

Me too! (I assume that these have not been posted yet, but if I'm just failing to find them please let me know.)

I think they make clearer that we will still be doing some rationality qua rationality.

Great. Just to highlight that I think there are two important aspects of doing rationality qua rationality:

  • Have the people pursuing the activity hold this as their goal. (I'm less worried about you failing on this one.)
  • Have external perceptions be that this is what you're doing. I have some concern that rationality-qua-rationality activities pursued by an AI safety org will be perceived as having an underlying agenda related to that, and that this could e.g. make some people less inclined to engage, even relative to the same activities being run by a rationality org which has a significant project on AI safety.

my guess is that there isn't enough money and staff firepower to run a good standalone rationality organization in CFAR's stead

I feel pretty uncertain about this, but my guess goes the other way. Also, I think if there are two separate orgs, the standalone rationality one should probably retain the CFAR brand! (as it seems more valuable there)

I do worry about the transition costs, and about losing the synergies of working together, if a new org is split off. Though these might be cheaper earlier than later, and even if it's borderline right now whether there's enough money and staff to do both, I think it won't be borderline within a small number of years.

Julia will be launching a small spinoff organization called Convergence

This sounds interesting! That's a specialised enough remit that it (mostly) doesn't negate my above concerns, but I'm happy to hear about it anyway.

Comment author: Kaj_Sotala 10 December 2016 06:14:43PM 6 points

Even the title does this! It's a slightly odd grammatical construction which looks an awful lot like "CFAR’s new focus: AI Safety"; I think without being more up-front about the alternative interpretation it will sometimes be read that way.

Datapoint: it wasn't until reading your comment that I realized that the title actually doesn't read "CFAR's new focus: AI safety".
