
Comment author: agilecaveman 02 December 2016 12:31:38AM *  6 points [-]

This is really good; however, I would love some additional discussion of the way the current optimization changes the user.

Keep in mind, when Facebook optimizes "clicks" or "scrolls", it does so by altering user behavior, and thus altering the user's internal S1 model of what is important. This could frequently lead to a distortion of reality, beliefs, and self-esteem. There have been many articles and studies correlating Facebook usage with mental health. However, simply understanding how the "optimization" works is enough evidence that this is happening.

While a lot of these issues are pushed under the same umbrella of "digital addiction," I think Facebook is a much more serious problem than, say, video games. Video games do not, as a rule, act through the very social channels that are helpful in reducing mental illness. Facebook does.

Another problem is Facebook's internal culture, which, as of 4 years ago, was very marked by Kool-Aid that somehow promised unbelievable power (1 billion users, hooray) without necessarily caring about responsibility (all we want to do is make the world open and connected, why is everyone mad at us).

This problem is also compounded by the fact that Facebook gets a lot of shitty critiques (like the critique that they run A/B tests at all) and has thus learned to ignore legitimate questions of value learning.

Full disclosure: I used to work at FB.

Meetup : Seattle Secular Solstice

0 agilecaveman 18 November 2016 10:00PM

Discussion article for the meetup : Seattle Secular Solstice

WHEN: 10 December 2016 04:00:00PM (-0800)

WHERE: 114 Alaskan Way S, Seattle, Washington 98104

A night to gather and celebrate humanity, warmth, knowledge and progress, the solstice is an annual tradition full of songs, stories, laughter, light and beauty. This is an opportunity for Seattlites to come together for an experience that makes us feel more connected to community, our world, and what it means to be alive.

The beginning activities will be focused on fun and celebration, getting to know each other and playing games. The rest of the evening follows the arc of the solstice itself, thematically moving from light to dark to light again. It will begin with bright and festive energy while we focus on the ingenuity that allowed humanity to turn the winter from a season historically known for fear and deprivation into a time of promise, plenty, and warmth.

See Facebook event for more details:



Meetup : Discussion of AI as a positive and negative factor in global risk

1 agilecaveman 10 January 2016 04:31AM

Discussion article for the meetup : Discussion of AI as a positive and negative factor in global risk

WHEN: 17 January 2016 01:00:00PM (-0800)

WHERE: 109 15th ave, Seattle, WA, 98122

I wanted to set up a more introductory discussion where people can learn about the process of thinking that made some people concerned that the existence of a more-powerful-than-human AI will not necessarily lead to good outcomes by default.

We are reading: http://intelligence.org/files/AIPosNegFactor.pdf

Facebook event here: https://www.facebook.com/events/494298400741977/


Meetup : MIRIx: Sleeping Beauty discussion

1 agilecaveman 23 December 2015 04:29AM

Discussion article for the meetup : MIRIx: Sleeping Beauty discussion

WHEN: 27 December 2015 01:00:00PM (-0800)

WHERE: 109 15th ave, Seattle, WA

One important question that frequently comes up in considering probabilities and decision-making processes is how we estimate probabilities that are entangled with our own existence, i.e., anthropic probabilities. An even broader question: is the notion of probability actually the correct philosophical notion in this case, or is the notion of a decision, algorithm, or utility more fundamental?

I want to actually try to tackle a current unsolved problem (Sleeping Beauty), attempt to understand it better, and see if any progress can be made on it. There are several issues that are brought up here:

a) What is the "correct probability"? Can we make situations in which one is better than another?

b) Do we abandon probability? How do we update on new data in that case?

c) Does considering "self" as algorithm vs. "self" as instance make a difference?


Meetup : Donation Decision Day

1 agilecaveman 23 December 2015 04:03AM

Discussion article for the meetup : Donation Decision Day

WHEN: 26 December 2015 01:00:00PM (-0800)

WHERE: 425 Harvard Ave E, Seattle, Washington, 98102

Come and work on your personal donation decisions; with a partner or by yourself. Do research, run the numbers, and if you run into trouble, have a 1-on-1 discussion with someone about it. Facebook event: https://www.facebook.com/events/1529586254029715/


Meetup : Seattle Solstice

1 agilecaveman 09 November 2015 10:17PM

Discussion article for the meetup : Seattle Solstice

WHEN: 19 December 2015 05:00:00PM (-0800)

WHERE: Seattle, WA

The Seattle Solstice Celebration will feature compassionate communication, a ceremony of songs, candle lighting, and speeches. Facebook Event: https://www.facebook.com/events/1694434594123474/


Comment author: agilecaveman 12 May 2015 05:56:53PM 0 points [-]

hmm, looks like the year is wrong and the delete button has failed to work :(

Meetup : MIRI paper reading party

1 agilecaveman 31 March 2015 08:53AM

Discussion article for the meetup : MIRI paper reading party

WHEN: 05 April 2015 12:00:00PM (-0700)

WHERE: 109 15th Ave, Seattle WA, 98122


We will be following https://intelligence.org/2014/12/23/new-technical-research-agenda-overview/ in order.

The papers are independent, though, so stop by even if you missed the first one.

This time we'll get to: 2. https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf

Generally the format is to alternate reading a chunk and discussing it.


Comment author: agilecaveman 11 March 2015 04:59:20AM 8 points [-]

Maybe this has been said before, but here is a simple idea:

Directly specify a utility function U which you are not sure about, but also discount the AI's own power as part of it. So the new utility function is U - power(AI), where power is a fast-growing function of a mix of the AI's source code complexity, intelligence, hardware, and electricity costs. One needs to be careful about how to define "self" in this case, as a careful redefinition by the AI will remove the controls.

One also needs to consider the creation of subagents with properly adjusted utilities as well, since in a naive implementation, subagents will just optimize U, without restrictions.

This is likely not enough, but it has the advantage that the AI does not have an a priori drive to become stronger, which is better than boxing an AI that does.
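A minimal sketch of the proposal in Python (every name and the toy power measure are invented here, purely to make the structure concrete):

```python
def power(agent):
    """Toy 'power' measure: a fast-growing (here, cubic) function of the
    agent's resources. A real measure would need a robust definition of
    'self' so the agent cannot redefine it away."""
    return (agent["source_code_complexity"]
            + agent["hardware_units"]
            + agent["electricity_cost"]) ** 3

def adjusted_utility(base_utility, agent):
    """The proposed objective: U minus a penalty on the agent's own power."""
    return base_utility - power(agent)

# Past some point, extra capability is unattractive even if it raises U:
modest = {"source_code_complexity": 1, "hardware_units": 1, "electricity_cost": 1}
expanded = {"source_code_complexity": 10, "hardware_units": 10, "electricity_cost": 10}

print(adjusted_utility(100.0, modest))     # 100 - 3**3  = 73.0
print(adjusted_utility(1000.0, expanded))  # 1000 - 30**3 = -26000.0
```

Because the penalty grows faster than linearly, a tenfold gain in raw utility is swamped by the cost of a tenfold expansion in resources, which is the comment's intended effect.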

Comment author: Vaniver 08 February 2015 12:45:58AM 1 point [-]

life expectancy(DALY or QALY), since to me, it is easier to measure than happiness.

Whoa, how are you measuring the disability/quality adjustment? That sounds like sneaking in 'happiness' measurements, and there are a bunch of challenges: we already run into issues where people who have a condition rate it as less bad than people who don't have it. (For example, sighted people rate being blind as worse than blind people rate being blind.)

if you could be born in any society on earth today, what one number would be most congruent with your preference? Average life expectancy captures very well which societies are good to be born at.

There's a general principle in management that really ought to be a larger part of the discussion of value learning: Goodhart's Law. Right now, life expectancy is higher in better places, because good things are correlated. But if you directed your attention to optimizing towards life expectancy, you could find many things that make life less good but longer (or your definition of "QALY" needs to include the entirety of what goodness is, in which case we have made the problem no easier).

However, I'd rather have an approximate starting point for direct specification than give up on the approach altogether.

But here's where we come back to Goodhart's Law: regardless of what simple measure you pick, it will be possible to demonstrate a perverse consequence of optimizing for that measure, because simplicity necessarily cuts out complexity that we don't want to lose. (If you didn't cut out the complexity, it's not simple!)
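Goodhart's Law here can be made concrete with a toy example (all policies and numbers are invented): a proxy that tracks goodness across ordinary options stops tracking it once you optimize hard on the proxy.

```python
# Each hypothetical policy: (life_expectancy_years, quality_of_life in [0, 1]).
policies = {
    "status_quo":       (75, 0.80),
    "better_medicine":  (82, 0.85),
    "life_support_max": (95, 0.10),  # long but miserable lives
}

def proxy(p):
    """Simple measure: years only."""
    return p[0]

def true_goodness(p):
    """What we actually care about: quality-weighted years."""
    return p[0] * p[1]

best_by_proxy = max(policies, key=lambda k: proxy(policies[k]))
best_by_truth = max(policies, key=lambda k: true_goodness(policies[k]))

print(best_by_proxy)  # 'life_support_max' -- the proxy optimum is perverse
print(best_by_truth)  # 'better_medicine'
```

Across the first two options the proxy and the true measure agree; it is only the option engineered to max out the proxy that exposes the gap, which is exactly the cut-out complexity the paragraph describes.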

Comment author: agilecaveman 08 February 2015 06:53:25AM 0 points [-]

Well, I get where you are coming from with Goodhart's Law, but that's not the question. Formally speaking, if we take the set of all utility functions with complexity < N, for some fixed complexity bound N, then one of them is going to be the "best", i.e., the one most correlated with the "true utility" function, which we can't compute.

As you point out, if we are selecting utilities that are too simple, such as straight-up life expectancy, then even the "best" function is not "good enough" to just punch into an AGI, because it will likely overfit and produce bad consequences. However, we can still reason about "better" or "worse" measures of societies. People might complain about the unemployment rate, but it's a crappy metric on which to base your decision about which societies are overall better than others; plus, it's easier to game.

The point of at least "trying" to formalize values is that we can have a not-too-large set of metrics we care about in arguments like: "but the AGI reduced GDP; well, it also reduced the suicide rate." Which is more important? Without the simple guidance of something we value, it's going to be a long and unproductive debate.
