
Comment author: username2 05 July 2017 09:21:08AM 3 points [-]

How is physical torture (or chronic back pain) the result of attachment to outcomes?

Comment author: toonalfrink 05 July 2017 09:28:43AM 0 points [-]

This is a rather extreme case, but there do exist people who don't suffer from physical damage because they don't identify with their physical body.

Granted, it would take a good 20 years of meditation/brainwashing to get to that state, and it's probably not worth it for now.

Luckily, many forms of suffering are based on shallower beliefs.

Comment author: ignoranceprior 05 July 2017 12:59:44AM *  4 points [-]

Some people in the EA community have already written a bit about this.

I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:

Effective Altruism, and building a better QALY

Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk

The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view according to which there is no way to objectively determine which beings are conscious because consciousness itself is an essentially contested concept. (I don't know if everyone at FRI agrees with that, but at least a few including Brian Tomasik do.) FRI also has done some work on measuring happiness and suffering.

Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don't know if they have done anything since then.

Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog's LW posts on affective neuroscience are relevant as well (as well as a couple by Yvain).

Comment author: toonalfrink 05 July 2017 07:12:31AM 0 points [-]

This is great info, but it's about a different angle from what I'd like to see.

(I now realise it is totally impossible to infer my angle from my post, so here goes)

I want to describe the causes of happiness with the intentional stance. That is, I want to explain them in terms of beliefs, feelings and intentions.

For example, it seems very relevant that (allegedly) suffering is a result of attachment to outcomes, but I haven't heard any rationalists talk about this.

Comment author: Dagon 04 July 2017 09:11:06PM 0 points [-]

Are you exploring your own goals and preferences, or hoping to understand/enforce "common" goals on others (including animals)?

I applaud research (including time spent at a Buddhist monastery, though you'll need to acknowledge that you'll perceive different emotions if you're exploring it for happiness than if it's your only option in life) and reporting on such. I've mostly accepted that there's no such thing as coherent terminal goals for humans - everything is relative to each of our imagined possible futures.

Comment author: toonalfrink 05 July 2017 07:03:23AM 0 points [-]

I have a strong contrarian hunch that human terminal goals converge as long as you go far enough up the goal chain. What you see in the wild is people having vastly different tastes in how to live life. One likes freedom, the next likes community, and the next is just trying to gain as much power as possible. But I call those subterminal goals, and I think what generated them is the same algorithm with different inputs (different perceived possibilities?). The algorithm itself, which I think optimizes for some proxies of genetic survival like sameness and self-preservation, is the terminal goal. And no, I'm not trying to enforce any values. This isn't about things-in-the-world that ought to make us happy. This is about inner game.

We need a better theory of happiness and suffering

1 toonalfrink 04 July 2017 08:14PM

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are handwaved away, as we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all animals except mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can get quite far with introspection plus the assumption that similar brains yield similar qualia. We can even correlate happiness with brain scans.

The former is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up, if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)

Comment author: ChristianKl 16 June 2017 04:52:00PM 1 point [-]

How about having a list of possible AGI safety related topics that could provide material for a bachelor or master thesis?

Comment author: toonalfrink 27 June 2017 05:14:51PM 0 points [-]

What about the research agendas that have already been published?

Comment author: itaibn0 21 June 2017 01:55:55AM 0 points [-]

I specifically appreciate the article on research debt.

Since I was confused by this when I first read it, I want to clarify: as far as I can tell, the article was not written by anybody associated with AASAA. You're saying it was nice of toonalfrink to link to it.

(I'm not sure if this comment is useful, since I don't expect a lot of people to have the same misunderstanding I did.)

Comment author: toonalfrink 27 June 2017 05:13:27PM 0 points [-]

Am not associated. Just found the article in the MIRI newsletter.

Comment author: ChristianKl 16 June 2017 04:05:37PM 1 point [-]

It has only about 50 researchers, and it’s mostly talent-constrained.

What's the evidence that it's mostly talent-constrained?

Comment author: toonalfrink 17 June 2017 07:21:44AM *  1 point [-]

As stated here:

FHI and CSER recently raised large academic grants to fund safety research, and may not be able to fill all their positions with talented researchers. Elon Musk recently donated $10m through the Future of Life Institute, and Open Phil donated a further $1m, which was their assessment of how much was needed to fund the remaining high-quality proposals. I’m aware of other major funders, including billionaires, who would like to fund safety researchers, but don’t think there’s enough talent in the pool. The problem is that it takes many years to gain the relevant skill set and few people are interested in the research, so even raising salaries won’t help significantly. Other funders are concerned that the research isn’t actually tractable, so the main priority is having someone demonstrate that progress can be made. Previous efforts to demonstrate progress have yielded large increases in funding.

But to be fair, that's from November 2015, so let me know if I should update.

Comment author: Daniel_Burfoot 15 June 2017 10:43:28PM 3 points [-]

I really don't think you should try to convince mid-career professionals to switch careers to AI safety research. Instead, you should focus on recruiting talented young people, ideally people who are still in university or at most a few years out.

Comment author: toonalfrink 15 June 2017 11:59:12PM 2 points [-]

I agree.

I must admit that the "convince academics" part of the plan is still a bit vague. It's unclear to me how new fields become fashionable in academia. How does one even figure that out? I'd love to know.

The project focuses on the "create a MOOC" part right now, which provides plenty of value in itself.

Comment author: siIver 15 June 2017 07:30:48PM 1 point [-]

This looks solid.

Can you go into a bit of detail on the level and spectrum of difficulty of the courses you're aiming for, and the background knowledge that'll be expected? I suspect you don't want to discourage people, but realistically speaking, the bar can hardly be low enough to allow everyone who's interested to participate meaningfully.

Comment author: toonalfrink 15 June 2017 07:45:39PM *  1 point [-]

Thank you!

Difficulty and prerequisites are among the uncertainties that will have to be addressed. Some AI safety work only requires algebra skills, while other work needs logic/ML/RL/category theory/other, and then there is work that isn't formalized at all.

But there are other applied mathematics fields with this problem, and I expect that we can steal a solution by having a look there.

Announcing AASAA - Accelerating AI Safety Adoption in Academia (and elsewhere)

12 toonalfrink 15 June 2017 06:55PM

AI safety is a small field. It has only about 50 researchers, and it’s mostly talent-constrained. I believe this number should be drastically higher.

A: the missing step from zero to hero

I have spoken to many intelligent, self-motivated people who bear a sense of urgency about AI. They are willing to switch careers to doing research, but they are unable to get there. This is understandable: the path up to research-level understanding is lonely, arduous, long, and uncertain. It is like a pilgrimage.

One has to study concepts from the papers in which they first appeared. This is not easy. Such papers are undistilled. Unless one is lucky, there is no one to provide guidance and answer questions. Then, should one come out on top, there is no guarantee that the quality of their work will be sufficient for a paycheck or a useful contribution.

Unless one is particularly risk-tolerant or has a perfect safety net, they will not be able to fully take the plunge.

I believe plenty of measures can be taken to make getting into AI safety more like an "It's a Small World" ride:

  • Let there be a tested path with signposts along the way to make progress clear and measurable.

  • Let there be social reinforcement so that we are not hindered but helped by our instinct for conformity.

  • Let there be high-quality explanations of the material to speed up and ease the learning process, so that it is cheap.


B: the giant unrelenting research machine that we don’t use

The majority of researchers nowadays build their careers through academia. The typical story is for an academic to become acquainted with various topics during their study, pick one that is particularly interesting, and work on it for the rest of their career.

I have learned through personal experience that AI safety can be very interesting, and the reason it isn't so popular yet comes down to lack of exposure. If students were acquainted with the field early on, I believe a sizable number of them would end up working in it (though this is an assumption that should be checked).

AI safety is in an innovator phase. Innovators are highly risk-tolerant and have a large amount of agency, which allows them to survive in an environment with little guidance, polish, or supporting infrastructure. Let us not fall for the typical mind fallacy, expecting less risk-tolerant people to move into AI safety all by themselves. Academia can provide the supporting infrastructure that they need.


AASAA addresses both of these issues. It has two phases:

A: Distill the field of AI safety into a high-quality MOOC: “Introduction to AI safety”

B: Use the MOOC as a proof of concept to convince universities to teach the field

 


We are bottlenecked for volunteers and ideas. If you'd like to help out, even if just by sharing your perspective, fill in this form and I will invite you to the Slack and get you involved.
