Comments

rvnnt

In Fig. 1, is the vertical axis P(world)?

rvnnt

Possibly a nitpick, but:

The development and deployment of AGI, or similarly advanced systems, could constitute a transformation rivaling those of the agricultural and industrial revolutions.

seems like a very strong understatement. Maybe replace "rivaling" with e.g. "(vastly) exceeding"?

rvnnt

Referring to the quote-picture from the Nvidia GTC keynote talk: I searched the talk's transcript, and could not find anything like the quote.

Could someone point out time-stamps of where Huang says (or implies) anything like the quote? Or is the quote entirely made up?

rvnnt

That clarifies a bunch of things. Thanks!

rvnnt

I'm not sure I understand what the post's central claim/conclusion is. I'm curious to understand it better. To focus on the Summary:

So overall, evolution is the source of ethics,

Do you mean: Evolution is the process that produced humans, and strongly influenced humans' ethics? Or are you claiming that (humans') evolution-induced ethics are what any reasonable agent ought to adhere to? Or something else?

and sapient evolved agents inherently have a dramatically different ethical status than any well-designed created agents [...]

...according to some hypothetical evolved agents' ethical framework, under the assumption that those evolved agents managed to construct the created agents in the right ways (to not want moral patienthood etc.)? Or was the quoted sentence making some stronger claim?

evolution and evolved beings having a special role in Ethics is not just entirely justified, but inevitable

Is that sentence saying that

  • evolution and evolved beings are of special importance in any theory of ethics (what ethics are, how they arise, etc.), due to Evolution being one of the primary processes that produce agents with moral/ethical preferences [1]

or is it saying something like

  • evolution and evolved beings ought to have a special role; or we ought to regard the preferences of evolved beings as the True Morality?

I roughly agree with the first version; I strongly disagree with the second: I agree that {what oughts humans have} is (partially) explained by Evolutionary theory. I don't see how that crosses the is-ought gap. If you're saying that that somehow does cross the is-ought gap, could you explain why/how?


  1. I.e., similar to how one might say "amino acids having a special role in Biochemistry is not just entirely justified, but inevitable"? ↩︎

rvnnt

I wonder how much work it'd take to implement a system that incrementally generates a graph of the entire conversation. (Vertices would be sub-topics, represented as e.g. a thumbnail image + a short text summary.) It would require the GPT to be able to, among other things, understand the logical content of the discussion and detect when a topic is revisited. It could be useful for improving the clarity/productivity of conversations.
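
For concreteness, here is a minimal Python sketch of the kind of data structure such a system might maintain. The `TopicNode`/`ConversationGraph` names, and the assumption that some external component (e.g. an LLM call, not shown) classifies each message into a topic and produces the summaries/thumbnails, are illustrative assumptions, not anything specified above.

```python
# A minimal sketch, not a working implementation: it assumes some external
# component (e.g. an LLM call, not shown here) decides which topic each
# message belongs to and produces the summaries/thumbnails.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class TopicNode:
    """One sub-topic of the conversation."""
    topic_id: int
    summary: str                          # short text summary of the sub-topic
    thumbnail_path: Optional[str] = None  # optional image representing the topic
    message_ids: list = field(default_factory=list)  # messages filed under this topic


@dataclass
class ConversationGraph:
    """Incrementally built graph of a conversation's sub-topics."""
    nodes: dict = field(default_factory=dict)  # topic_id -> TopicNode
    edges: set = field(default_factory=set)    # (from_topic_id, to_topic_id) transitions

    def add_message(self, message_id: int, topic_id: int, summary: str) -> None:
        """File a new message under a topic, creating the topic node if needed."""
        node = self.nodes.setdefault(topic_id, TopicNode(topic_id, summary))
        node.message_ids.append(message_id)

    def link_topics(self, from_topic_id: int, to_topic_id: int) -> None:
        """Record that the conversation moved from one topic to another (including revisits)."""
        if from_topic_id != to_topic_id:
            self.edges.add((from_topic_id, to_topic_id))
```

The hard part — having the model judge topic membership and detect when an earlier topic is being revisited — is deliberately left outside the sketch.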

rvnnt

One of the main questions on which I'd like to understand others' views is something like: Conditional on sentient/conscious humans[1] continuing to exist in an x-risk scenario[2], with what probability do you think they will be in an inescapable dystopia[3]?

(My own current guess is that dystopia is very likely.)


  1. or non-human minds, other than the machines/Minds that are in control ↩︎

  2. as defined by Bostrom, i.e. "the permanent and drastic destruction of [humanity's] potential for desirable future development" ↩︎

  3. Versus e.g. just limited to a small disempowered population, but living in pleasant conditions? Or a large population living in unpleasant conditions, but where everyone at least has the option of suicide? ↩︎

rvnnt

That makes sense; but:

so far outside the realm of human reckoning that I'm not sure it's reasonable to call them dystopian.

setting aside the question of what to call such scenarios, with what probability do you think the humans[1] in those scenarios would (strongly) prefer to not exist?


  1. or non-human minds, other than the machines/Minds that are in control ↩︎

rvnnt

non-extinction AI x-risk scenarios are unlikely

Many people disagreed with that. So, apparently many people believe that inescapable dystopias are not-unlikely? (If you're one of the people who disagreed with the quote, I'm curious to hear your thoughts on this.)

rvnnt

(Ah. Seems we were using the terms "(alignment) success/failure" differently. Thanks for noting it.)

An in-retrospect-obvious key question I should've already asked: Conditional on (some representative group of) humans succeeding at aligning ASI, what fraction of the maximum possible value-from-Evolution's-perspective do you expect the future to attain?[1]

My modal guess is that the future would attain ~1% of maximum possible "Evolution-value".[2]

If tech evolution is similar enough to bio evolution then we should roughly expect tech evolution to have a similar level of success

Seems like a reasonable (albeit very preliminary/weak) outside view, sure. So, under that heuristic, I'd guess that the future will attain ~1% of max possible "human-value".


  1. setting completely aside whether to consider the present "success" or "failure" from Evolution's perspective. ↩︎

  2. I'd call that failure on Evolution's part, but IIUC you'd call it partial success? (Since the absolute value would still be high?) ↩︎
