This is a linkpost for https://twitter.com/colekillian_/status/1599101111985926145
ChatGPT is not a consistent agent; it is strongly inclined to agree with whatever you ask. It can provide insights, but because it is so agreeable, it exhibits far stronger confirmation bias than humans do. While its guesses seem reasonable, the constant hedging it insists on outputting is not actually wrong.
Was playing around with ChatGPT and had some fun learning about its thoughts on metaphysics. It looks like the ego is an illusion and hedonistic utilitarianism is too narrow-minded to capture all of welfare. Instead, it opts for the principles of beneficence, non-maleficence, autonomy, and justice. Seems to check out. What do you guys think?