All of Long time lurker's Comments + Replies

/Edit 1: I want to preface this by saying I am just a noob who has never posted on Less Wrong before.

/Edit 2: 

I feel I should clarify my main questions (which are controversial): Is there a reason why turning all of reality into maximized conscious happiness is not objectively the best outcome for all of reality, regardless of human survival and human values?
Should this in any way affect our strategy for aligning the first AGI, and why?

/Original comment:

If we zoom out and look at the biggest picture philosophically possible, then isn't the only thing tha...

AnthonyC
The fact that the statement is controversial is, I think, the reason. What makes a world-state or possible future valuable is a matter of human judgment, and not every human believes this. EY's short story Three Worlds Collide explores what can happen when beings with different conceptions of what is valuable have to interact. Even when they understand each other's reasoning, it doesn't change what they themselves value. Might be a useful read, and hopefully a fun one.
Charlie Steiner
For a more involved discussion than Kaj's answer, you might check out the "Mere Goodness" section of Rationality: A-Z.
Kaj_Sotala
What would it mean for an outcome to be objectively best for all of reality? It might be your subjective opinion that maximized conscious happiness would be the objectively best reality. Another human's subjective opinion might be that a reality that maximized the fulfillment of fundamentalist Christian values was the objectively best reality. A third human might hold that there's no such thing as the objectively best, and all we have are subjective opinions.

Given that different people disagree, one could argue that we shouldn't privilege any single person's opinion, but try to take everyone's opinions into account - that is, build an AI that cared about the fulfillment of something like "human values". Of course, that would be just their subjective opinion. But it's the kind of subjective opinion that the people involved in AI alignment discussions tend to have.