One reason we did not go travelling might have been a resource constraint: perhaps money, but a limited ability to plan good trips, whether from distraction or lack of knowledge, should also be counted as a limitation of planning resources.
That aside, people still have multiple drives that are not really goals, and we compromise amongst these drives as best we can. The approach the mind takes is not always the best.
In people, it's really those mid-brain drives that run a lot of things, not intellect.
We could try to carefully program in some lower-level or more complex sets of "drives" into an AI. The "utility function" people speak of in these threads is really more like an incredibly overpowering drive for the AI.
If it is wrong, then there is no hedge, check or diversification. The AI will just pursue that drive.
As much as our minds often take us in the wrong direction with our drives, at least they are diversified and checked.
Checks and diversification of drives seem like an appealing element of mind design, even at significant cost to efficiency at achieving goals. We should explore these options in detail.
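The contrast between a single overpowering drive and a diversified, checked set of drives can be made concrete with a toy sketch. Everything here is invented for illustration: the drive names, the scoring rules, and the veto mechanism are arbitrary choices, not a proposal for an actual mind design.

```python
# Toy sketch (all drives and thresholds are invented for illustration):
# a single-utility agent versus an agent with diversified, checked drives.

def single_utility_agent(actions, utility):
    # One overpowering drive: pick the argmax, with no hedge or check.
    return max(actions, key=utility)

def diversified_agent(actions, drives, veto_threshold=-1.0):
    # Each drive scores every action; any drive can veto an action it
    # finds bad enough, and survivors are ranked by the average score,
    # so no single drive dominates unchecked.
    def acceptable(action):
        return all(d(action) > veto_threshold for d in drives)
    candidates = [a for a in actions if acceptable(a)]
    if not candidates:  # every action vetoed: do nothing
        return None
    return max(candidates, key=lambda a: sum(d(a) for d in drives) / len(drives))

# Toy drives over abstract actions (numbers standing in for outcomes).
curiosity = lambda a: a            # favors extreme, novel outcomes
caution = lambda a: -abs(a) / 2    # penalizes extremes of either sign
actions = [-3, 0, 1, 2, 5]

print(single_utility_agent(actions, curiosity))          # 5: pursues the one drive flat out
print(diversified_agent(actions, [curiosity, caution]))  # 1: caution vetoes the extremes
```

The point of the sketch is the failure mode: if `curiosity` is the "wrong" drive, the single-utility agent still charges straight at it, while the diversified agent pays an efficiency cost (it forgoes the highest-scoring action) in exchange for the check.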
But I don't think "utility function" in the context of this post has to mean a numerical utility explicitly computed in the code. It could just mean that the agent behaves as if its utilities were given by a particular numerical function, regardless of whether that function is written down anywhere.
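The as-if reading can be illustrated with a toy sketch. The rules and the reconstructed utility values below are invented for the example: the agent's code contains only hand-written preference rules and never computes a number, yet an outside observer can write down a numerical function that reproduces its choices exactly.

```python
# Toy sketch (rules and utility values invented for illustration):
# an agent coded purely as if-then rules, with no utility anywhere
# in its code, can still behave "as if" it maximizes a numerical
# function reconstructed after the fact.

def rule_based_choice(options):
    # Hard-coded preference rules; no number is ever computed.
    if "rest" in options and "work" in options:
        return "work"
    if "work" in options and "play" in options:
        return "work"
    if "play" in options and "rest" in options:
        return "play"
    return options[0]

# A numerical utility an observer might reconstruct: work > play > rest.
as_if_utility = {"work": 2, "play": 1, "rest": 0}

def utility_choice(options):
    return max(options, key=as_if_utility.get)

# The two agents agree on every tested menu, so the rule-based agent
# behaves "as if" it had this utility function.
for opts in [["rest", "work"], ["work", "play"], ["play", "rest"]]:
    assert rule_based_choice(opts) == utility_choice(opts)
print("rule-based agent matches the as-if utility on all tested menus")
```

This is the sense in which an AI could have an "overpowering drive" without any explicit utility in its source: the function is a description of its behavior, not necessarily a component of its implementation.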