
Comment author: Lumifer 13 February 2017 07:53:48PM 0 points [-]

Well, there are these words and expressions sprinkled throughout your comment:

... promoting elitism and entitlement ... and sexism ... value the thoughts of other people who are more knowledgeable about sexism over yours ... being offensive and harmful ...

All of this seems to go deeper than "mannerisms".

Your basic beef with the post seems to be that it is mean and insensitive, and I think that approach misses the post's main point. It seems that you think the main point is to stigmatize stupid people, label them sub-human, and, possibly, subject them to mandatory treatment with drugs and such. I think the main point is to stress that stupidity is not an unchanging natural condition ("the sky is blue, water is wet, some people are stupid") but something that could be changed.

Comment author: whpearson 13 February 2017 08:07:09PM 1 point [-]

We can have the idea that something is changeable about people (e.g. fitness levels) without having to label its lack an illness.

I can see where silver is coming from. The language in this article is probably harmful. Imagine a bunch of bodybuilders calling a nerd's inability to bench press 50 kg an illness, which can be fixed by steroids.

Comment author: whpearson 13 February 2017 07:39:25PM 3 points [-]

I came across the Toyota Kata today at work.

It looks interesting as a way of organising work towards goals that you cannot naturally backwards-chain to.

Comment author: Lumifer 08 February 2017 07:04:25PM *  0 points [-]

Sure, he who pays the piper calls the tune :-) but I don't know if it's a good way to run science. However, if you want to go in that direction, shouldn't your poll be addressed to potential (large) donors?

Comment author: whpearson 08 February 2017 07:24:29PM 0 points [-]

If you can get access to them, sure. Convincing smaller donors to donate to you is a good way of not being too dependent on the big ones, and it also lets you show a broad support base to the larger donors.

Comment author: Lumifer 08 February 2017 05:13:23PM 0 points [-]

If you are interested in existential risk reduction, why wouldn't you be interested in what other people think?

For the same reasons quantum physicists don't ask the public which experiments they should run next.

Surviving is a team sport.

Errrr... That really depends X-)

Comment author: whpearson 08 February 2017 06:55:12PM 0 points [-]

For the same reasons quantum physicists don't ask the public which experiments they should run next.

But a quantum research institute that is funded via donations might ask the public which of the many experiments it wants to run would attract funding. That way it can hire more researchers, answer more questions, build goodwill, etc.

Comment author: Lumifer 07 February 2017 10:02:37PM 0 points [-]

I looked at that document. I still don't see why you think you'll be able to extract useful information out of a bunch of unqualified opinions (and a degree in psychology qualifies one for AI risk discussions? really?). And why is the EA forum relevant to this?

Comment author: whpearson 07 February 2017 10:48:26PM 0 points [-]

I'm bound to get useful information as I am only interested in what people think. If you are interested in existential risk reduction, why wouldn't you be interested in what other people think? Surviving is a team sport.

Someone recommended the EA forum here for existential risk discussion.

Comment author: Lumifer 07 February 2017 09:13:59PM 0 points [-]

By "people" do you mean "LW people"? If you're interested in what the world cares about, running polls on LW will tell you nothing useful.

Comment author: whpearson 07 February 2017 09:20:10PM *  0 points [-]

Oh, you've not read the document I linked to in the post. I planned to try to get it posted on LW, the EA forum, and subreddits associated with AI and AI risk.

Comment author: Lumifer 07 February 2017 08:11:46PM 0 points [-]

Do you have a short write-up somewhere about what you want to do and why other people should help you?

Comment author: whpearson 07 February 2017 08:49:05PM 0 points [-]

I want to gather information about what people care about in AI risk. Other people should help me if they also want to gather information about what people care about in AI risk.

Comment author: whpearson 07 February 2017 07:34:22PM 0 points [-]

I'm currently lacking people to put the more mainstream points across.

I'd like to know why people aren't interested in helping me.


Comment author: hairyfigment 07 February 2017 12:22:34AM 0 points [-]

If it can't be solved, how will MIRI know?

For one, they wouldn't find a single example of a solution. They wouldn't see any fscking human beings maintaining any goal not defined in terms of their own perceptions - eg, making others happy, having an historical artifact, or visiting a place where some event actually happened - despite changing their understanding of our world's fundamental reality.

If I try to interpret the rest of your response charitably, it looks like you're saying the AGI can have goals wholly defined in terms of perception, because it can avoid wireheading via satisficing. That seems incompatible with what you said before, which again invoked "some abstract notion of good and bad" rather than sensory data. So I have to wonder if you understand anything I'm saying, or if you're conflating ontological crises with some less important "paradigm shift" - something, at least, that you have made no case for caring about.

Comment author: whpearson 07 February 2017 08:25:35AM 2 points [-]

For one, they wouldn't find a single example of a solution. They wouldn't see any fscking human beings maintaining any goal not defined in terms of their own perceptions - eg, making others happy, having an historical artifact, or visiting a place where some event actually happened - despite changing their understanding of our world's fundamental reality.

Fscking humans aren't examples of maximisers whose coherent ontologies change in a way that guarantees their goals will still be followed. They're examples of systems in which multiple different languages for describing the world exist simultaneously. Those languages sometimes come into conflict, and people sometimes go insane. People are only maximiser-ish in certain domains, and that maximisation is not constant over their lifetimes.

If I try to interpret the rest of your response charitably, it looks like you're saying the AGI can have goals wholly defined in terms of perception, because it can avoid wireheading via satisficing. That seems incompatible with what you said before, which again invoked "some abstract notion of good and bad" rather than sensory data

You can hard code some sensory data to mean an abstract notion of good or bad, if you know you have a helpful human around to supply that sensory data and keep that meaning.
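
To make that concrete, here is a minimal, hypothetical sketch of the satisficing idea under discussion. The names human_approval and the threshold value are illustrative stand-ins of mine, not anyone's actual proposal: the approval channel plays the role of the hard-coded sensory data whose meaning a cooperative human maintains.

    # Hypothetical sketch only: the approval channel and threshold are
    # illustrative stand-ins, not anyone's actual design.

    def satisficing_policy(candidate_actions, human_approval, threshold=0.9):
        # Take the first action whose predicted approval clears the
        # threshold, rather than hunting for the absolute maximum.
        for action in candidate_actions:
            if human_approval(action) >= threshold:
                return action  # good enough; stop searching
        return None  # nothing acceptable; do nothing

    def maximising_policy(candidate_actions, human_approval):
        # For contrast: always picks the argmax, which creates pressure
        # to find degenerate inputs that spoof the approval signal.
        return max(candidate_actions, key=human_approval)

The point of the contrast is just that the satisficer has no incentive to keep pushing the approval signal higher once it is "good enough", which is one way of blunting the wireheading pressure a pure maximiser faces.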

ontological crises with some less important "paradigm shift"

Paradigm shifts are ontological crises limited to the language used to describe a particular domain. You can go read about them on Wikipedia if you want and make up your own mind about whether they are important.

Comment author: Stuart_Armstrong 06 February 2017 11:54:15AM 4 points [-]

The reason I blog is to have the discipline of formulating the problem for others, and to get some immediate feedback. I would recommend it for anything that doesn't need to be kept secret, as simply writing down the problem for others helps to clarify it in your own mind.

Comment author: whpearson 06 February 2017 08:21:47PM 1 point [-]

I much prefer to write things down in code, if I am writing things down.

I shall think about the discipline of formulating the problem for others.
