Comment author: toonalfrink 13 October 2017 02:13:50PM 0 points

I'd like to note that "caring about Us a bit" can also be read as "small probability of caring about Us a lot".
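To spell out the expected-value arithmetic behind that equivalence (a minimal sketch, assuming an agent that maximizes expected utility and weighs our welfare linearly):

    % Expected weight the agent places on Us, if it assigns
    % probability p to caring about Us with weight w:
    E[w_{\mathrm{Us}}] = p \cdot w
    % "Small probability of caring a lot" and "certainly caring
    % a bit" coincide in expectation:
    0.01 \cdot 1 = 1 \cdot 0.01 = 0.01

In expectation the two readings prescribe the same behaviour; they only come apart once the agent can resolve its uncertainty about its own values.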

Comment author: polarix 19 January 2017 02:58:19PM 0 points

Maybe a tangent, but: Are we humans corrigible?

I think about this a lot -- it seems that no matter what I do, I'm not able to prevent a sufficiently motivated attacker from ending my life.

Comment author: toonalfrink 13 October 2017 01:57:40PM 0 points

I'm positive we are. Humans strongly update their utility function based on the morality of the people around them. Do you ever find yourself a bit paralyzed in a new social environment because you don't know the local customs?

On the other hand, humans are also notorious for trying to fix someone's problem before properly listening to them. Hmm.

Meetup : LW Netherlands

0 toonalfrink 04 September 2017 02:53PM

Discussion article for the meetup : LW Netherlands

WHEN: 03 September 2122 04:52:00PM (+0200)

WHERE: Amsterdam

Comment author: username2 05 July 2017 09:21:08AM 3 points

How is physical torture (or chronic back pain) the result of attachment to outcomes?

Comment author: toonalfrink 05 July 2017 09:28:43AM 0 points

This is a rather extreme case, but there are people who don't suffer from physical damage because they don't identify with their physical body.

Granted, it would take a good 20 years of meditation/brainwashing to get to that state and it's probably not worth it for now.

Luckily, many forms of suffering are based on shallower beliefs.

Comment author: ignoranceprior 05 July 2017 12:59:44AM *  4 points

Some people in the EA community have already written a bit about this.

I think this is the kind of thing Mike Johnson (/user/johnsonmx) and Andres Gomez Emilsson (/user/algekalipso) of the Qualia Research Institute are interested in, though they probably take a different approach. See:

Effective Altruism, and building a better QALY

Principia Qualia: blueprint for a new cause area, consciousness research with an eye toward ethics and x-risk

The Foundational Research Institute also takes an interest in the issue, but they tend to advocate an eliminativist, subjectivist view according to which there is no way to objectively determine which beings are conscious, because consciousness itself is an essentially contested concept. (I don't know if everyone at FRI agrees with that, but at least a few, including Brian Tomasik, do.) FRI has also done some work on measuring happiness and suffering.

Animal Charity Evaluators announced in 2016 that they were starting a deep investigation of animal sentience. I don't know if they have done anything since then.

Luke Muehlhauser (/u/lukeprog) wrote an extensive report on consciousness for the Open Philanthropy Project. He has also indicated an interest in further exploring the area of sentience and moral weight. Since phenomenal consciousness is necessary to experience either happiness or suffering, this may fall under the same umbrella as the above research. Lukeprog's LW posts on affective neuroscience are also relevant, as are a couple by Yvain.

Comment author: toonalfrink 05 July 2017 07:12:31AM 0 points

This is great info, but it's about a different angle from what I'd like to see.

(I now realise it is totally impossible to infer my angle from my post, so here goes)

I want to describe the causes of happiness with the intentional stance. That is, I want to explain them in terms of beliefs, feelings and intentions.

For example, it seems very relevant that (allegedly) suffering is a result of attachment to outcomes, but I haven't heard any rationalists talk about this.

Comment author: Dagon 04 July 2017 09:11:06PM 0 points

Are you exploring your own goals and preferences, or hoping to understand/enforce "common" goals on others (including animals)?

I applaud research (including time spent at a Buddhist monastery, though you'll need to acknowledge that you'll perceive different emotions if you're exploring it for happiness than if it's your only option in life) and reporting on it. I've mostly accepted that there's no such thing as a coherent terminal goal for humans - everything is relative to each of our imagined possible futures.

Comment author: toonalfrink 05 July 2017 07:03:23AM 0 points

I have a strong contrarian hunch that human terminal goals converge as long as you go far enough up the goal chain. What you see in the wild is people having vastly different tastes in how to live life. One likes freedom, the next likes community, and the next is just trying to gain as much power as possible. But I call those subterminal goals, and I think what generated them is the same algorithm with different inputs (different perceived possibilities?). The algorithm itself, which I think optimizes for proxies of genetic survival like sameness and self-preservation, is the terminal goal. And no, I'm not trying to enforce any values. This isn't about things-in-the-world that ought to make us happy. This is about inner game.
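A toy sketch of that hunch (all names and proxies below are hypothetical, chosen only to illustrate "same algorithm, different inputs"):

    # Toy illustration only: one fixed goal-generating algorithm which,
    # fed different perceived possibilities, outputs the different
    # "subterminal" goals we observe in the wild.
    def subterminal_goal(perceived_possibilities):
        # Hypothetical proxies for genetic survival; each scores how
        # attainable it looks given this person's perceived options.
        proxies = {
            "freedom": perceived_possibilities.get("mobility", 0),
            "community": perceived_possibilities.get("social_ties", 0),
            "power": perceived_possibilities.get("influence", 0),
        }
        # The same rule for everyone: pursue the best-looking proxy.
        return max(proxies, key=proxies.get)

    print(subterminal_goal({"mobility": 0.9}))     # freedom
    print(subterminal_goal({"social_ties": 0.8}))  # community
    print(subterminal_goal({"influence": 0.95}))   # power

On this picture the observed goals differ, but the function generating them is shared - which is what I mean by convergence higher up the goal chain.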

We need a better theory of happiness and suffering

1 toonalfrink 04 July 2017 08:14PM

We rationalists know a lot about winning, but we don't know what our terminal goals really are. Such things are hand-waved away: we just mumble something like "QALYs" and make a few guesses about what a five-year-old would like.

I'd like to dispel the myth that a five-year-old knows what they like. Have you ever seen a kid with a sack of candy? I don't think they really wanted to get nauseous.

"But hold up", you say. "Maybe that's true for special cases involving competing subagents, but most cases are actually pretty straightforward, like blindness and death and captivity."

Well, you may have a point with death, but what if blind people and inmates are actually as happy as the next guy? What's the point of curing blindness, then?

A special case where we need to check our assumptions is animal welfare. What if the substrate of suffering is something in higher-order cognition, something that all but mammals lack?

One could hold that it is impossible to make inferences about another being's qualia, but we can get quite far with introspection plus the assumption that similar brains yield similar qualia. We can even correlate happiness with brain scans.
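As a minimal sketch of what "correlate happiness with brain scans" could mean in practice (the numbers are made up, and a real study would need far more than a correlation coefficient):

    import numpy as np

    # Made-up per-subject data: self-reported happiness (0-10) and a
    # scalar summary of a brain scan, e.g. mean activity in some
    # region of interest.
    self_reported = np.array([3.2, 7.1, 5.5, 8.0, 4.4])
    roi_activity = np.array([0.31, 0.72, 0.50, 0.85, 0.40])

    # Pearson correlation across subjects.
    r = np.corrcoef(self_reported, roi_activity)[0, 1]
    print(f"Pearson r = {r:.2f}")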

Introspection is why I've moved to a Buddhist monastery. If (whatever really causes) happiness is your goal, it seems to me that the claim that one can permanently attain a state of bliss is worth investigating.

So, to sum up: if we want to fix suffering, let's find out its proximal cause first. Spoiler: it's not pain.

(To be continued)

Comment author: ChristianKl 16 June 2017 04:52:00PM 1 point

How about having a list of possible AGI safety-related topics that could provide material for a bachelor's or master's thesis?

Comment author: toonalfrink 27 June 2017 05:14:51PM 0 points

What about the research agendas that have already been published?

Comment author: itaibn0 21 June 2017 01:55:55AM 0 points

"I specifically appreciate the article on research debt."

Since I was confused when I first read this, I want to clarify: as far as I can tell, the article is not written by anybody associated with AASAA. You're saying it was nice of toonalfrink to link to it.

(I'm not sure if this comment is useful, since I don't expect a lot of people to have the same misunderstanding I did.)

Comment author: toonalfrink 27 June 2017 05:13:27PM 0 points

Am not associated. Just found the article in the MIRI newsletter.

Comment author: ChristianKl 16 June 2017 04:05:37PM 1 point

"It has only about 50 researchers, and it's mostly talent-constrained."

What's the evidence that it's mostly talent-constrained?

Comment author: toonalfrink 17 June 2017 07:21:44AM *  1 point

As stated here:

FHI and CSER recently raised large academic grants to fund safety research, and may not be able to fill all their positions with talented researchers. Elon Musk recently donated $10m through the Future of Life Institute, and Open Phil donated a further $1m, which was their assessment of how much was needed to fund the remaining high-quality proposals. I’m aware of other major funders, including billionaires, who would like to fund safety researchers, but don’t think there’s enough talent in the pool. The problem is that it takes many years to gain the relevant skill set and few people are interested in the research, so even raising salaries won’t help significantly. Other funders are concerned that the research isn’t actually tractable, so the main priority is having someone demonstrate that progress can be made. Previous efforts to demonstrate progress have yielded large increases in funding.

But to be fair, that's from November 2015, so let me know if I should update.
