Philosophy, education, literary studies, computer game design.
This could also be a labeling issue, arising because you cannot identify the function of your habits. For example, limiting yourself in certain areas could be a way of keeping your mind from being stressed too much. Trying to overcome this could be beneficial in general, but it could also be detrimental to your health. Of course, you could argue that my point leads to a kind of self-preserving self-deception, because every questionable behavior could be framed as "helping in another way". But I just want to make sure that not every comfort zone has to be labeled as problematic, only those that have negative impacts on your primary goals. You should ask yourself: Is "growing" my most important goal? Or is "growing" a means to an end?
I thought a solved alignment problem would imply a constant process of updating the AI's values to match the most recent human values. So if something does not lead to the human's expected terminal goals (such as enjoyable emotions), the human can indicate that outcome to the AI, and the AI would adjust its own goals accordingly.
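To make the loop I have in mind concrete, here is a toy sketch in Python. All names, the update rule, and the rate are hypothetical illustrations of "human signals a mismatch, AI adjusts its goals"; this is not a proposed alignment method.

```python
# Toy sketch of the iterated value-updating loop described above.
# Everything here is a hypothetical illustration, not a real technique.

from dataclasses import dataclass, field


@dataclass
class Human:
    terminal_goals: dict  # e.g. {"enjoyable_emotions": 1.0}

    def feedback(self, ai_goals: dict) -> dict:
        """Report the gap between the AI's current goals and the human's goals."""
        return {k: v - ai_goals.get(k, 0.0) for k, v in self.terminal_goals.items()}


@dataclass
class AI:
    goals: dict = field(default_factory=dict)

    def update_goals(self, correction: dict, rate: float = 0.5) -> None:
        """Move the AI's goals toward the human's signal."""
        for k, delta in correction.items():
            self.goals[k] = self.goals.get(k, 0.0) + rate * delta


human = Human(terminal_goals={"enjoyable_emotions": 1.0})
ai = AI()

for _ in range(10):  # repeated rounds of feedback and adjustment
    correction = human.feedback(ai.goals)
    ai.update_goals(correction)

print(ai.goals)  # converges toward the human's (current) terminal goals
```

The point of the loop is that the process never terminates: if the human's values change, the feedback in the next round changes, and the AI's goals follow.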
Thank you for explaining it. I really like this concept of stories because it focuses on their psychological aspect, stories as a form of understanding, which is sometimes missing in literary perspectives. How would you differentiate between a personal understanding of a definition and a story? Would you differentiate at all?
My main approach to stories is to define them more abstractly as a rhetorical device for representing change. This allows me to differentiate between a story (changes), a description (states), and an argument (logical connections between assertions). I suppose that, in your understanding, all of them would be some kind of story? This differentiation could also be helpful in understanding the process of telling a story versus giving a description.
Unfortunately, you did not explain how your answer relates to the claim that "stories have the minimum level of internal complexity to explain the complex phenomena we experience". Your answer does not compare stories to other ways of encoding information in the brain. Are there any others, in your opinion?
I am eager to explore your answer. Why do you think that "stories have the minimum level of internal complexity to explain the complex phenomena we experience"? Is it only because you suppose we internalize phenomena as stories? Do you have any data or studies on that? What is your understanding of a story? Isn't a straightforward description even less complex, since you do not need a full-blown plot to depict something like a chair?
This is the same conclusion and argument I arrived at after reading tivelen's comment. But my objection would be that "momentary fluctuation" generally is not a good moral argument: you could doubt every decision, because the length of time a preference must persist before it no longer counts as a fluctuation is arbitrary.
I thought about that and also agree with you. But I wanted this room to be thought of as an investigation of personal choice rather than of a choice made by others for you, so I opted to include this concept. It would be appropriate not to overemphasize this aspect, but yours is of course an understandable rejection. Thank you for bringing it to the foreground.
This is a thoughtful analysis of possible effects; thank you for it. I do not want such rooms to exist, because I do not want to lose anybody, ever. But humans sometimes tend toward quick decisions, and such an invention would support that tendency. I suppose this thought experiment shows me that blocking access to easy decision-making has potential value.
One possible way to make learning a reality could be to build a product with other people: https://www.lesswrong.com/posts/i2dNdHrGMT6QgqBFy/what-product-are-you-building
If you build something you would use yourself, then your learning is constantly evaluated by you, against your own goals.