Hedonic Treadmill and the Economy
The hedonic treadmill is the tendency for permanent changes in living conditions to produce only temporary changes in happiness. It keeps us perpetually wanting improvements to our lives: we spend money on the newest iPhones and focus our attention on improving our external circumstances, while ignoring the quote:
"What lies before us and what lies behind us are tiny matters compared to what lies within us"
Some people eat chips to quell their boredom. The hedonic treadmill ensures that, despite improvements in income, people are not ...
I have an idea for a possible utility function combination method. It normalizes each utility function by how much utility is at stake for it in a random dictatorship. The combined utility function has these nice properties:
Pareto-optimality wrt all input utilities on all lotteries
Adding Pareto-dominated options (threats) does not change players' utilities
Invariance to utility scaling
Invariance to cloning every utility function
Threat resistance
The combination method goes like this:
X=list of utility functions to combine
dist(U)=worlds where random utility function ...
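The definitions above are cut off, so here is only a rough sketch in Python of how I read the normalization, assuming "utility at stake" means the gap between what each utility function gets as dictator and what it expects under the random-dictatorship lottery. The names combine, utilities, and options are mine, not from the original, and the sketch ignores lotteries over options.

```python
def combine(utilities, options):
    """Sketch: combine the utility functions in `utilities` (the list X) over `options`
    by normalizing each one by its utility at stake under a random dictatorship."""
    # Random dictatorship: each utility function is made dictator with equal
    # probability and picks its favourite option.
    dictator_picks = [max(options, key=U) for U in utilities]

    normalized = []
    for U in utilities:
        # Expected utility for U under the random-dictatorship lottery.
        baseline = sum(U(pick) for pick in dictator_picks) / len(utilities)
        # "Utility at stake": how much U gains by being the dictator itself,
        # relative to that baseline.
        stake = U(max(options, key=U)) - baseline
        if stake > 0:
            normalized.append((U, baseline, stake))

    def combined(option):
        # Each utility is centred at its baseline and rescaled so that one unit
        # equals its stake, which makes the result invariant to rescaling any U.
        return sum((U(option) - baseline) / stake for U, baseline, stake in normalized)

    return combined
```

Picking max(options, key=combine(utilities, options)) would then select the compromise option; a full version would also have to handle lotteries, which this sketch does not.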
What if most people would develop superhuman intelligences in their brains if not for school, but, because they have to write essays in school, these superhuman intelligences become aligned with writing essays fast? And what if no doomsday scenario has happened because these intelligences mostly cancel out each other's attempted manipulations and cannot program nanobots with their complicated utility functions? ChatGPT writes faster than we do with 20B parameters where humans have 100T, but our neural activations are noisier than floating-point arithmetic.
By 'obvious to the algorithm' I mean that, to the algorithm, A is referenced with no intermediate computation. This is how pleasure and pain feel to me. I do not believe all reinforcement learning algorithms feel pleasure or pain; a simple example that does not suffer is the Simpleton iterated prisoner's dilemma strategy. I believe pain and pleasure are effective ways to implement reinforcement learning. In animals, reinforcement learning is called operant conditioning. See Reinforcement learning on a chicken for a chicken that has experienced it...
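To make the contrast concrete, here is a minimal sketch of Simpleton, assuming it names the win-stay-lose-shift (Pavlov) strategy: it keeps no value estimates and does no learning at all, it only reacts to the previous round.

```python
def simpleton(my_last, opponent_last):
    """Simpleton / win-stay-lose-shift (assuming that is the strategy meant):
    repeat the previous move if the opponent cooperated, switch if they defected.
    'C' = cooperate, 'D' = defect; returns the move for this round."""
    if my_last is None:                        # first round: start by cooperating
        return 'C'
    if opponent_last == 'C':                   # "win": keep doing whatever we did
        return my_last
    return 'D' if my_last == 'C' else 'C'      # "lose": switch moves
```

Nothing in this function is updated by reward, so there is no quantity that could play the role of pleasure or pain.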
As this algorithm executes, the last and 2last variables hold the program's last two outputs. L1's even indexes hold the average input (reward?) given the number of ones the program output over those last two steps. I called L1's odd indexes 'confidence' because, as they get higher, the corresponding average reward changes less in response to new evidence. Once L1 becomes entangled with the input-generation process, the algorithm chooses whichever outputs make the inputs higher on average; that is why I called the input 'reward'. L2 reads off the average reward given the las...
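The code itself is not shown here, so the following is only a guess at the loop being described, assuming three states (zero, one, or two ones among the last two outputs), L1 laid out as alternating [average, confidence] entries, and a confidence-weighted running average. Every name below (step, reward_input, last2) is a placeholder of mine, not the original's.

```python
# Guessed reconstruction of the loop described above (not the original code).
# state = number of ones among the program's last two outputs (0, 1, or 2)
# L1[2*state]     = running average reward observed in that state
# L1[2*state + 1] = "confidence": how many observations that average rests on
L1 = [0.0, 0, 0.0, 0, 0.0, 0]
last, last2 = 0, 0   # stand-ins for the "last" and "2last" variables

def step(reward_input):
    global last, last2
    state = last + last2
    avg, conf = L1[2 * state], L1[2 * state + 1]
    # Higher confidence -> the average moves less in response to new evidence.
    L1[2 * state] = (avg * conf + reward_input) / (conf + 1)
    L1[2 * state + 1] = conf + 1
    # Output whichever bit leads to the state whose average reward looks highest
    # (after this step the last two outputs will be `last` and the new bit).
    output = max((L1[2 * (last + bit)], bit) for bit in (0, 1))[1]
    last2, last = last, output
    return output
```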
The Edge home page featured an online editorial that downplayed AI art because it just combines images that already exist. But if you look closely enough, human artwork is also a combination of things that already existed.
One example is Blackballed Totem Drawing: Roger 'The Rajah' Brown, a charcoal drawing James Pate made in 2016. It was the Individual Artist Winner of the Governor's Award for the Arts. At the microscopic scale, this artwork is black particles embedded in a large sheet of paper. I doubt he made the paper he drew on, and the black...