LESSWRONG

Stefan_Schubert
Comments
whestler's Shortform
Stefan_Schubert · 1y

Cf. this Bostrom quote:

Far from being the smartest possible biological species, we are probably better thought of as the stupidest possible biological species capable of starting a technological civilization - a niche we filled because we got there first, not because we are in any sense optimally adapted to it.

Re this:

In evolutionary timescales, virtually no time has elapsed since hominids began trading, utilizing complex symbolic thinking, making art, hunting large animals etc, and here we are, a blip later in high technology.

A bit nit-picky, but a recent paper studying West Eurasia found significant evolution over the last 14,000 years.

Alexander Gietelink Oldenziel's Shortform
Stefan_Schubert · 1y

There's a related confusion between uses of "theory" that are neutral about the likelihood of the theory being true, and uses that suggest that the theory isn't proved to be true.

Cf. the expression "the theory of evolution". Scientists who talk about the "theory" of evolution don't thereby imply anything about its probability of being true; indeed, many believe it's overwhelmingly likely to be true. But some critics interpret this expression differently, saying it's "just a theory" (meaning it's not the established consensus).

We might be missing some key feature of AI takeoff; it'll probably seem like "we could've seen this coming"
Stefan_Schubert · 1y

Thanks for this thoughtful article.

It seems to me that the first and the second examples have something in common, namely an underestimate of the degree to which people will react to perceived dangers. I think this is fairly common in speculations about potential future disasters, and I have called it sleepwalk bias. It seems like something that one should be able to correct for.

I think there is an element of sleepwalk bias in the AI risk debate. See this post where I criticise a particular vignette.

The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate
Stefan_Schubert · 2y

Yeah, I think so. But since those people generally find AI less important (there's both less of an upside and less of a downside), they tend to participate less in the debate. Hence there's a bit of a selection effect hiding those people.

There are some people who arguably are in that corner and who do participate in the debate, though; e.g. Robin Hanson. (He thinks some sort of AI will eventually be enormously important, but that the near-term effects, while significant, will not be at the level people on the right side think.)

Looking at the 2x2 I posted, I wonder if you could call the lower left corner something relating to "non-existential risks". That seems to capture their views. It might be hard to come up with a catchy term, though.

The upper left corner could maybe be called "sceptics".

The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate
Stefan_Schubert · 2y

Not exactly what you're asking for, but maybe a 2x2 could be food for thought. 

The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate
Stefan_Schubert · 2y

"Realist" and "pragmatist" don't seem like the best choice of terms, since they pre-judge the issue a bit in the direction of that view.

AI psychology should ground the theories of AI consciousness and inform human-AI ethical interaction design
Stefan_Schubert · 3y

Thanks.

I think psychologists-scientists should have unusually good imaginations about the potential inner workings of other minds, which many ML engineers probably lack.

That's not clear to me, given that AI systems are so unlike human minds. 

AI psychology should ground the theories of AI consciousness and inform human-AI ethical interaction design
Stefan_Schubert · 3y

tell your fellow psychologist (or zoopsychologist) about this, maybe they will be incentivised to make a switch and do some ground-laying work in the field of AI psychology

Do you believe that (conventional) psychologists would be especially good at what you call AI psychology, and if so, why? I guess other skills (e.g. knowledge of AI systems) could be important.

Let’s think about slowing down AI
Stefan_Schubert · 3y

I think that's exactly right.

Let’s think about slowing down AI
Stefan_Schubert · 3y

I think that could be valuable.

It might be worth testing quite carefully for robustness: asking multiple different questions probing the same issue, and seeing whether responses converge. My sense is that people's stated opinions about risks from artificial intelligence, and existential risks more generally, could vary substantially depending on framing. Most haven't thought a lot about these issues, which likely contributes. I think a problem with some studies on these issues is that researchers over-generalise from highly framing-dependent survey responses.
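
To make the convergence test concrete, here is a minimal sketch of one way to check whether differently framed questions elicit consistent answers, using an internal-consistency measure (Cronbach's alpha). The data, scale, and function names below are purely hypothetical illustrations, not from any actual survey:

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for k question framings (columns)
    answered by n respondents (rows)."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1).sum()
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical data: 200 respondents rate AI risk on a 1-7 scale
# under four different question framings.
rng = np.random.default_rng(0)
true_concern = rng.normal(4, 1, size=200)            # latent attitude
framing_noise = rng.normal(0, 1.5, size=(200, 4))    # framing-dependent shift
responses = np.clip(np.round(true_concern[:, None] + framing_noise), 1, 7)

# Low alpha (well below ~0.7) would suggest answers are driven more
# by framing than by a stable underlying opinion.
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

Convergent responses across framings (a high alpha) would give more reason to treat the survey result as a stable opinion rather than an artefact of wording.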

Posts
Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them (7 points, 8y, 3 comments)
Algorithmic tacit collusion (4 points, 8y, 2 comments)
Stuart Ritchie reviews Keith Stanovich's book "The rationality quotient: Toward a test of rational thinking" (11 points, 9y, 1 comment)
Social effects of algorithms that accurately identify human behaviour and traits (6 points, 9y, 6 comments)
Hedge drift and advanced motte-and-bailey (54 points, 9y, 13 comments)
Sleepwalk bias, self-defeating predictions and existential risk (57 points, 9y, 11 comments)
Identifying bias. A Bayesian analysis of suspicious agreement between beliefs and values. (14 points, 10y, 26 comments)
Does the Internet lead to good ideas spreading quicker? (14 points, 10y, 22 comments)
ClearerThinking's Fact-Checking 2.0 (35 points, 10y, 40 comments)
[Link] Tetlock on the power of precise predictions to counter political polarization (10 points, 10y, 7 comments)