All of Alex1V's Comments + Replies

Alex1V43

“Suffering is mainly a measure, rather than the target metric.”

What is the metric? What could be more important than reducing suffering and increasing happiness? Things are only bad if they cause suffering or reduce happiness, and only good if they increase happiness or decrease suffering.

“Either what you are doing was fine before, or you did not hereby make it fine.”

The bad things we do to animals (cages, slaughter, etc.) are bad because they cause them suffering. If we find a way to prevent them from suffering, these bad things are no longer bad.

“but expec... (read more)

Alex1V10

I think there are lots of specific internal reasons why people make bad choices: sometimes it’s just pure selfishness or sadism.

But as for why some people are delusional, selfish, or sadistic, and why some people “succumb to evolved default behaviors like anger, instead of using their freedom of thought,” I’m not really seeing an alternate explanation here other than that some people were unlucky enough to have genes and an environment that built a brain that followed the laws of physics until they did something bad. And from an internal perspective, maybe the p... (read more)

Alex1V10

We can simulate the brain of C. elegans, and I see no reason why this couldn’t theoretically be scaled up to a human brain. I guess technically you need computation AND a full map of the human brain, not just computation, for that.

2TAG
How do you know that indeterminism isn't also a limitation to prediction?
Alex1V10

I think the atoms in my brain will follow the laws of physics until a choice is made. And to me that process feels like I’m deciding something, because that’s what computation feels like from the inside. But actually the outcome is predetermined.

Alex1V10

No, but only because I lack the computing power to do so. A very powerful AI could.

1TAG
How do you know that computational power is the only limitation?
0Feel_Love
You do have some computing power, though. You compute choices according to processes that are interconnected with all other processes, including genetic evolution and the broader environment. These choosing-algorithms operate according to causes ("inputs"), which means they are not random. Rather, they can result in the creation of information instead of entropy. The environment is not something that happens to us. We are part of it. We are informed by it and also inform it in turn, as an output of energy expenditure. Omega hasn't run the calculation that you're running right now. Until you decide, the future is literally undecided.
Alex1V10

So why do some people choose to do good while others choose to do evil? I think genes and environment are fully sufficient to explain why people make different choices, but if you have an alternate hypothesis I’d be interested to hear it. But the answer can’t be something like “because some people choose different intentions” because then you’d have to explain why some people have different intentions.

To put it another way, you may choose your intentions deliberately, but did you make the choice to be the kind of person who chooses intentions deliberately?... (read more)

0Feel_Love
Intentions depend on beliefs, i.e. the views a person holds, their model of reality. A bad choice follows from a lack of understanding: confusion, delusion, or ignorance about the causal laws of this world. A "choice to do evil" in the extreme could be understood as a choice stemming from a worldview such as "harm leads to happiness." (In reality, harm leads to suffering.) How could someone become so deluded? They succumbed to evolved default behaviors like anger, instead of using their freedom of thought to cultivate more accurate beliefs about what does and does not lead to suffering. People like Hitler made a long series of such errors, causing massive suffering. They failed to use innumerable opportunities, moment by moment, to allow their model to investigate itself and strive to learn the truth. Not because they were externally compelled, but because they chose wrongly.
4TAG
Can you make precise predictions of behaviour, given that information...?
Alex1V00

You raise two very valid concerns. That Hitler might hurt others if you allow him to interact with them, and that Hitler might find a way to escape the box.

Even if Hitler was willing to reflect on his actions and change, his presence in the network (B) would likely make other people unhappy.

So while I think (A) is ethically mandatory if you can contain him, (B) comes with a lot of complex problems that might not be solvable.

Alex1V*1-1

The bit of your brain that chooses to think nice thoughts (“I”/“me”) is just as much a product of your genes and environment as the bit of your brain that wants to think bad thoughts.

You didn’t choose to have a brain that tries not to think bad thoughts, and Hitler didn’t choose to have a brain that outputs genocide when given some specific environmental conditions. The only way Hitler could have realised that his actions were bad and chosen to be good would be if his genes and environment had built a brain that would do so given some environmental input.

-1Feel_Love
The brain is an ongoing process, not a fixed thing that is given at birth. Hitler was part of the environment that built his brain. Many crucial developmental inputs came from the part of the environment we call Hitler. I did and do choose my intentions deliberately, repeatedly, with focused effort. That's a major reason the brain develops the way it does. It generates inputs for itself, through conscious modeling. It doesn't just process information passively and automatically based solely on genes and sensory input. That's the Chinese Room thought experiment -- information processing devoid of any understanding. The human mind reflects and practices ways of relating to itself and the environment. You never get a pass to say, "Sorry I'm killing you! I'm not happy about it either. It's just that my genes and the environment require this to happen. Some crazy ride we're on together here, huh?" That's more like how a mouse trap processes information. With the human level of awareness, you can actually make an effort and choose to stop killing. We help create the world -- discover the unknown future -- by resolving uncertainty through this lived process. The fact that decision-making and choosing occur within reality (or "the environment") rather than outside of it is logical and necessary. It doesn't mean that there is no choosing. Choosing is merely real, another step in the causal chain of events.
Alex1V80

Hitler’s evil actions were determined by the physical structure of his brain. His brain was built by genes (which he didn’t choose), and modified by his environment (which he didn’t choose), and then certain environmental inputs (which he didn’t choose) caused his brain to output genocide. If you had Hitler’s genes and Hitler’s environment, you would have Hitler’s brain, and so you would do as Hitler did.

Punishing someone, or in this case withholding high-resolution paradise, can only be useful and good insofar as it changes behaviour or acts as a deterrent to ... (read more)

-1Feel_Love
I can't speak for you, but I personally can choose to stop thinking thoughts if they are causing suffering, and instead think a different thought. For example, if I notice that I'm replaying a stressful memory, I might choose to pick up the guitar and focus on those sounds and feelings instead. This trains neural pathways that make me less and less susceptible to compulsively "output genocide." Sure, "I" am as much a part of the environment as anything else, as is "my" decision-making process. So you could say that it's the environment choosing a brain-training input, not me. But "I" am what the environment feels like in the model of reality simulated by a particular brain. And there is a decision-making process happening within the model, led by its intentions. Hitler had a choice. He could make an effort to train certain neural pathways of the brain, or he could train others by default. He chose to write divisive propaganda when he should have painted. The bad outcomes that followed were not compelled by the environment. They are attributable to particular minds. We who have capacity for decision-making are all accountable for our own moral deeds.
3Raemon
So there are two different facets of the hypothetical ancestor simulation response I came up with: A) deliberately not being a paradise, B) not connecting it to some broader network of simulation paradises. I can totally buy coming to believe the first part is pointlessly cruel. The second part feels more like it’s… actually enforcing boundaries for the safety of others. The ‘infinite energy’ clause is a bit weird here. If ‘you’ have total control over not just infinite energy but also the entire posthuman world, then yeah, you can do things like let Hitler wander around making new allies and… somehow intervene if this starts to go awry. But I have an easier time imagining being confident in ‘not letting Hitler out of the box until he’s trustworthy’ than the latter. (I.e. there can be infinite energy ‘around’ but not actually under uniform control.) Also, it’s not obvious to me which is more cruel. (I think it depends on Hitler’s own values.) Also, while I said ‘infinite energy’ in the hypothetical, I do think in most optimistic worlds we still end up with only ‘very large finite energy’, and I don’t even know that I’d get around to doing any kind of ancestor sim at all for him, let alone getting to optimize it fully for him. I think I love Hitler, but I also think I love everyone else, and it just seems reasonable to prioritize both the safety and well-being of people who didn’t go out of their way to create horrific death camps and manipulate their way into power.
Alex1V10

I think more exposition is needed. For example, one episode could have someone who knows how dangerous AI is, warns the other characters about it, and explains toward the end why things are going wrong. In other episodes, the characters could realise their own mistake, far too late, but in time to explain what's going on with a bit of dialogue. Alternatively, the AI explains its own nature before killing the characters.

For example, at the end of Cashbot, as nukes are slowly destroying civilisation, someone could give a short monologue about how AIs don't have human values, ethics, empathy or restraint, and that they will follow their goals to the exclusion of all else.