All of thetasafe's Comments + Replies

The ideas should be posted in both places, since it is hard, and also less important, to measure relevance (in terms of the "ideas" people may be capable of providing), but easy and necessary to maintain reach. There are relevant people everywhere. The greater the reach of the ideas, the greater the chance of their review or refutation.

However, I would like to learn the context of "relevant" here, as I'm still unsure of my answer.

(i) Because “live forever” is the inductive consequence of the short-term “live till tomorrow” preference applied to every day.
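Read as an induction, the quoted claim might be sketched as follows. This is a toy illustration with invented names, not anything from the original discussion: an agent that on each day applies only the short-term preference "live till tomorrow" never chooses, on any particular day, to stop; so over any horizon the induced behaviour is "live forever".

```python
# Hypothetical sketch: the short-term preference "live till tomorrow",
# applied afresh on every day, never picks a day on which to die.

def short_term_choice(day):
    """On any given day, prefer being alive tomorrow."""
    return "stay alive"  # the same short-term preference, every day

def simulate(days):
    """Apply the short-term preference day by day; report survival."""
    for day in range(days):
        if short_term_choice(day) != "stay alive":
            return False
    return True

print(simulate(100))    # still alive after 100 days
print(simulate(36500))  # still alive after a century of days
```

However long the horizon, the agent survives it: the long-run "live forever" outcome is just the short-term preference iterated.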

Then, "die after a century" is the inductive consequence of the long-term "?" preference applied to "?".

(ii) No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences. What we (the AI) have to "do", is decide which philosopher the human meets first, and hence what their future preferences will be.

How can the short-term preference be classified as "live forever" and the long-term preference as "die after a century"? By your argument, it could then equally be said that "die after a century" would take precedence over "live forever".

Do the arguments imply that the AI will have an RLong function and an RKurtz function for preference-shaping (given that it will have multiple opportunities)?

I was unable to gather the context in which you put your questions, "What should we do? And what principles should we use to do so?"; in particular, I could not work out what it is that we have to "do".

0Stuart_Armstrong
Because "live forever" is the inductive consequence of the short-term "live till tomorrow" preference applied to every day.

No. It implies that the human can be successfully modelled as having a mix of RLong and RKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences. What we (the AI) have to "do", is decide which philosopher the human meets first, and hence what their future preferences will be.
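The situation described in this reply can be put in a toy model. Everything here (the reward tables, option names, and function names) is invented for illustration; it is a sketch of the structure, not of any actual proposal. The human's effective reward is RLong or RKurtz depending on which philosopher they meet first, and the AI's decision is only the meeting order, which leaves the choice underdetermined: either order perfectly satisfies the preferences the human ends up with.

```python
# Toy model (all names and numbers invented): the human adopts whichever
# reward function matches the philosopher they met first.
R_LONG  = {"live forever": 1.0, "die after a century": 0.0}
R_KURTZ = {"live forever": 0.0, "die after a century": 1.0}

def human_preferences(first_philosopher):
    """The human's future preferences are conditional on the AI's choice."""
    return R_LONG if first_philosopher == "long" else R_KURTZ

def ai_decision():
    """The AI tries to 'best implement human preferences', but each choice
    of meeting order fully satisfies the preferences that choice creates."""
    outcomes = {}
    for first in ("long", "kurtz"):
        prefs = human_preferences(first)
        best = max(prefs, key=prefs.get)   # the option the human would most want
        outcomes[first] = (best, prefs[best])
    return outcomes

print(ai_decision())
# {'long': ('live forever', 1.0), 'kurtz': ('die after a century', 1.0)}
```

Both branches score a perfect 1.0 by the preferences they induce, which is exactly why "implement the human's preferences" does not by itself tell the AI which philosopher the human should meet first.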

Please explain the term "meta-preferences", if here it doesn't mean the same as in Sir James Buchanan's 1985 work titled "The Reason of Rules", where "meta-preferences" are 'a preference for preferences'.

0Stuart_Armstrong
It is ‘a preference for preferences’; e.g. "my long term needs take precedence over my short term desires" is a meta-preference (in fact the use of the terms 'needs' vs 'desires' is itself a meta-preference, since at the lowest formal level both are just preferences).
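This notion of a meta-preference as 'a preference for preferences' can be sketched in a few lines. The option names and scores below are invented for illustration: two base preference orderings ('needs' and 'desires', which are formally just preferences), plus a higher-level rule that selects between them.

```python
# Two ordinary (base) preferences, each scoring the same options.
needs   = {"save for retirement": 1.0, "buy the gadget": 0.0}  # "long term needs"
desires = {"save for retirement": 0.0, "buy the gadget": 1.0}  # "short term desires"

def meta_preference(pref_needs, pref_desires):
    """The meta-preference 'my long term needs take precedence over my
    short term desires': a choice between preferences, not options."""
    return pref_needs  # both arguments are just preferences; this ranks them

def choose(option_scores):
    """Act on whichever base preference the meta-preference selected."""
    return max(option_scores, key=option_scores.get)

print(choose(meta_preference(needs, desires)))  # save for retirement
```

Note that labelling one table 'needs' and the other 'desires' already encodes the meta-preference; at the base level the two tables have exactly the same form.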

I would like to suggest that I do not see the problems of "values" and "poor predictions" as potentially resolvable problems. This is because:

  1. Among humans there are also infants, young children, and growing adults, who (for the sake of brevity of construct) develop to their full natural physical and mental potential by at most 19 years of age. Holding this, it is no longer logically valid to constitute the "values problem" as a problem for developing an AI/Oracle AI, because before 19 years of age the values cannot be known, b

... (read more)

Sir, please tell me whether the 'pdf' you refer to taking out every year and asking how much safety it would buy, concerning Sir Nick Bostrom's "Oracle AI", is the same as "Thinking inside the box: using and controlling an Oracle AI". If so, has your perspective changed over the years, given that your comment is dated to August 2008? And in case you have been referring to a 'pdf' other than the one I came across, please provide me that 'pdf' along with your perspectives. Thank you!

0DragonGod
I think he was talking to pdf23ds.