The ideas should be posted in both places, since it is hard, and also less important, to measure relevance (in terms of the "ideas" people may be capable of providing), but easy and necessary to maintain reach. There are relevant people everywhere. The greater the reach of the ideas, the greater the chance that they will be reviewed or refuted.

However, I would like to understand the context of "relevant" here, as I am still unsure of my answer.

(i) Because “live forever” is the inductive consequence of the short-term “live till tomorrow” preference applied to every day.

Then, "die after a century" is the inductive consequence of the long-term "?" preference applied to "?".

(ii) No. It implies that the human can be successfully modelled as having a mix of RLong and PKurtz preferences, conditional on which philosopher they meet first. And the AI is trying to best implement human preferences, yet humans have these odd mixed preferences. What we (the AI) have to “do” is decide which philosopher the human meets first, and hence what their future preferences will be.

I am still unable to sort out the relation between the "human", "the AI", and the "philosophers". My reading is: there is some human "H" who will meet the philosophers "RLong" and "PKurtz", and whose preferences will be modelled as "RLong" or "PKurtz" preferences, conditional on whether they meet Mr./Ms. "RLong" first or Mr./Ms. "PKurtz" first. Am I right in understanding this much?

Apart from this, what/who is "(the AI)" here, if we are not referring to our respective understandings of "the AI"?

Moreover, taking "we", i.e. "ourselves", as "the AI", i.e. our respective understandings of the AI, the human "H" should meet Mr./Ms. "PKurtz" first, because in my understanding that would be comparatively more beneficial: I would measure the outcome "O" in terms of efficient utilization of time, and whether the human "H" were me or not, meeting PKurtz first would save time.

Achieving anything in the "long term" first requires an understanding of the "short term".

How can the short-term preference be classified as "live forever" and the long-term preference as "die after a century"? By your own argument, it could then also be said that "die after a century" would take precedence over "live forever".

Do the arguments imply that the AI will have both an RLong function and a PKurtz function available for preference-shaping (assuming it will have multiple opportunities)?
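To check my understanding, here is a minimal toy sketch of how I currently picture that setup; the names follow the discussion above, but the structure is entirely my own assumption, not anything stated in the post:

```python
# Toy sketch of how I currently picture the setup; my own assumption,
# not anything stated in the original post.

def r_long(outcome: str) -> float:
    """Preferences the human ends up with after meeting RLong first:
    living as long as possible is valued highest."""
    return {"live_forever": 1.0, "die_after_a_century": 0.0}[outcome]

def p_kurtz(outcome: str) -> float:
    """Preferences the human ends up with after meeting PKurtz first:
    a finite, century-long life is valued highest."""
    return {"live_forever": 0.0, "die_after_a_century": 1.0}[outcome]

def human_preferences(first_philosopher: str):
    """The human is modelled as adopting one preference function or the other,
    conditional on which philosopher they meet first."""
    return r_long if first_philosopher == "RLong" else p_kurtz

def ai_best_outcome(first_philosopher: str) -> str:
    """The AI tries to best implement whatever preferences the human ends up with."""
    prefs = human_preferences(first_philosopher)
    return max(["live_forever", "die_after_a_century"], key=prefs)

# The open question, as I read it: the AI can also decide which philosopher
# the human meets first, and hence which preferences it will later implement.
for first in ("RLong", "PKurtz"):
    print(first, "->", ai_best_outcome(first))
```

On this toy picture, whichever philosopher is chosen first simply determines which of the two functions the AI then optimises.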

I was unable to gather the context in which you put your questions, "What should we do? And what principles should we use to do so?"; in particular, I cannot tell what it is that we have to "do".

Please explain the term "meta-preferences", if here it does not mean the same thing as in James Buchanan's 1985 work "The Reason of Rules", where "meta-preferences" are "a preference for preferences".
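My current guess at Buchanan's sense of the term is a ranking over preference functions themselves, rather than over outcomes; a toy illustration of that guess (my own construction, not from Buchanan or from your post):

```python
# Toy illustration of "a preference for preferences" (my own construction).

# Ordinary preferences rank outcomes.
prefer_long_life = {"live_forever": 1.0, "die_after_a_century": 0.0}
prefer_finite_life = {"live_forever": 0.0, "die_after_a_century": 1.0}

# A meta-preference ranks the preference functions themselves, e.g.
# "I would rather be the kind of person who prefers a finite life."
meta_preference = [prefer_finite_life, prefer_long_life]  # most to least preferred
```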

I would like to suggest that I do not see the problems of "values" and "poor predictions" as potentially resolvable problems. This is because:

  1. Among humans there are also infants, younger children, and growing adults who (for the sake of a brief construct) develop their natural physical and mental potentials until, at most, roughly 19 years of age. Given this, it is not logically valid to treat the "values problem" as a problem for developing an AI/Oracle AI, because before 19 years of age a person's values cannot be known, their development still being at its onset. Apart from being only theoretically ideal, it might also prove dangerous to assign or align values to humans, for the sake of the natural development of human civilization.

  2. Given the current status quo of "Universal Basic Education" and the "values development" argument above (1.), it is not a logical argument that humans would be able to predict AI/Oracle AI behaviour at a time when not even AI researchers can predict with full certainty the potential of an Oracle AI, or of an AI developing itself into an AGI (a remote case, but one that cannot be held to have no potential for now). Thus I hold the "poor predictions" case to be logically irresolvable as a problem.

But halting development because of the two cases mentioned, especially the "poor predictions" one, would not be logical for academic purposes.

Sir, please tell me whether the PDF on "Oracle AI" by Nick Bostrom that you refer to taking out every year to ask how much safety it would buy is the same as "Thinking Inside the Box: Controlling and Using an Oracle AI". If so, has your perspective changed over the years, given your comment dated August 2008? And if you have been referring to a PDF other than the one I came across, please provide that PDF along with your perspective. Thank you!