Where most of the information that composes a person comes from and what function they "should" optimise seem like rather different topics to me.
A lot of what we acquire from our environment is not information that bears on what our goals are, but rather is used to build a model of the environment, which we then use to help us pursue our goals.
The jacket text for Keith Stanovich's The Robot's Rebellion sums up the book well:
The book is an excellent introduction to the first stage of Yudkowskian philosophy: We are robots in a mechanistic universe running on a Swiss Army knife of cognitive modules. But at least we have finally noticed we're robots, and we can use the skills of rationality to hop off our habit treadmills and pursue our values instead. These values are complex and often arbitrary, but we can use our reflective capacities to extrapolate our values based on "higher-order" desires, a desire for preference consistency, and other considerations. All this is argued for at length in Stanovich's book. The only thing missing is a discussion of what to do about all this when AI arrives.