contravariant

"exist"(whatever that means)

What are you implying here? It's clear that *we*, or at least *you*, exist, in the sense that the computation of our minds is being performed and inputs are being given to it. We can also say (with slightly less certainty) that observable external physical objects such as atoms exist, because the evolution of their states from one Planck instant to the next is being performed even when we're not observing it - if the easiest way to get from observation t1 to observation t2 is by computing all the intermediate states between t1 and t2, it's likely that the external object exists on the entire interval [t1..t2]. This is my conception of an object's existence: that the computation of the object's state is being done. What is yours?
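To make "computing all the intermediate states" concrete, here's a minimal toy sketch (the update rule and names are hypothetical, purely for illustration): a simulator that gets from the observation at t1 to the one at t2 by stepping through every state in between, which is the sense in which I'd say the object exists on the whole interval.

```python
# Toy illustration: the simplest way to produce the observation at t2
# from the observation at t1 is to step through every intermediate state.
def evolve(state, dt):
    # Hypothetical one-tick update rule standing in for real physics.
    return [x + dt for x in state]

def observe_at(state_t1, t1, t2, dt=1):
    state = state_t1
    for _ in range(t1, t2):        # every intermediate state gets computed,
        state = evolve(state, dt)  # so the object "exists" on all of [t1..t2]
    return state

print(observe_at([0.0, 1.0], t1=0, t2=5))  # -> [5.0, 6.0]
```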

Why would they want to stop us from fleeing? It doesn't reduce their expansion rate, and we already established that we don't pose any serious threat to them. We would essentially be handing them a perfectly good planet and star, undamaged by war (we would probably have enough time to launch at least some nuclear missiles, which likely wouldn't harm them much but would wreck the ecosystem and make the planet ill-suited for colonization by biological life). Unless they're just sadistic and value the destruction of life as a final goal, I see no reason for them to care. Any planets and star systems colonized by the escaping humans would be taken just as easily as Earth, with only a minor delay.

Evolution also had one chance, in the sense that the first intelligent species created would take over the world and reform it very quickly, leaving no time for evolution to try any other mind-design. I'm pretty sure there will be no other intelligent species that evolves by pure natural selection after humanity - unless it's part of an experiment run by humans. Evolution had a lot of chances to try to create a functional intelligence, but as for the friendliness problem, it had only one chance. The reason being that a faulty intelligence will die out soon enough and give evolution time to design a better one, whereas a working paperclip maximizer is quite capable of surviving, reproducing, and eliminating any other attempts at intelligence.

Evolution is smarter than you.

Could you qualify that statement? If I were given a full-time job to find the best way to increase some bacterium's fitness, I'm sure I could study the necessary microbiology and find at least some improvement well before evolution could. Yes, evolution created things that we don't yet understand, but then again, she had a planet's worth of processing power and 7 orders of magnitude more time to do it - and yet we can still see many obvious errors. Evolution has much more processing power than me, sure, but I wouldn't say she is smarter than me. There's nothing evolution created over all its history that humans weren't able to overpower in an eyeblink of time. Things like lack of foresight and the inability to reuse knowledge or exchange it among species mean that most of this processing power is squandered.

And if something as stupid as evolution (almost) solved the alignment problem, it would suggest that it should be much easier for humans.

Replies to some points in your comment:

One could say AI is efficient cross-domain optimization, or "something that, given a mental representation of an arbitrary goal in the universe, can accomplish it on the same timescale as humans or faster", but personally I think the "A" is not really necessary here, and we all know what intelligence is. It's the trait that evolved in Homo sapiens that let them take over the planet in an evolutionary eyeblink. We can't precisely define it, and the definitions I offered are only grasping at things that might be important.

If you think of intelligence as a trait of a process, you can imagine how many possible different things with utterly alien goals might get intelligence, and what they might use it for. Even the ones that would be a tiny bit interesting to us are just a small minority.

You may not care about satisfying human values, but I want my preferences to be satisfied, and I have a meta-value that we should make our best effort to satisfy the preferences of any sapient being. If we just take the easiest-to-find thing that displays intelligence, the odds of it satisfying those preferences are next to none. It would eat us alive for a world of something that makes paperclips look beautiful in comparison.

And the prospect of an AI designed by the "Memetic Supercivilization" frankly terrifies me. A few minutes after an AI developer submits the last bugfix on GitHub, a script kiddie thinks "Hey, let's put a minus in front of the utility function right here and have it TORTURE PEOPLE LULZ", and thus the world ends. I think that is something best left to a small group of people. Placing our trust in the emergent structure of society - which has undergone little Darwinian selection and has a spectacular history of failures over a pretty short timescale - to produce something good even for itself, let alone for humans, when handed such a dangerous technology, seems unreasonable.
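To spell out just how little effort that failure mode takes, here's a minimal toy sketch (the utility function and outcomes are hypothetical, purely for illustration): a one-character edit inverts what the agent optimizes for.

```python
# Toy illustration: putting a minus in front of the utility function
# turns an agent that seeks the best outcome into one that seeks the worst.
def utility(outcome):
    # Hypothetical stand-in for whatever the developers meant to maximize.
    return outcome["wellbeing"]

def sabotaged_utility(outcome):
    return -utility(outcome)  # the "minus in front"

outcomes = [{"wellbeing": 3}, {"wellbeing": -7}, {"wellbeing": 10}]
print(max(outcomes, key=utility))            # {'wellbeing': 10} - best outcome
print(max(outcomes, key=sabotaged_utility))  # {'wellbeing': -7} - worst outcome
```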

But how can you use complex language to express your long term goals, then, like you're doing now? Do you get/trick S2 into doing it for you?

I mean, S2 can be used by S1 - the clearest example would be someone addicted to heroin using S2 to invent reasons to take another dose. But it must be hard to do anything more long-term that way; you'd be giving up too much control.

Or is the concept of long term goals itself also part of the alien thing you have to use as a tool? Your S2 must really be a good FAI :D

That's a subjective value judgement from your point of view.

If you intend it to be more than that, you would have to explain why others shouldn't see it as off-putting.

Otherwise, I don't see how it contributes to the discussion beyond "there's at least one person out there who thinks masculinity isn't off-putting", which we already know - there are billions of examples.

It seems to me like it's extremely hard to think about sociology, especially where it touches on policy and social justice, without falling into this trap. When you consider a statistic about a group of people, "is this statistic accurate?" gets put in the same bucket as "does this mean discriminating against this group is justified?" or even "are these people worth less?" almost instinctively - especially if you are a part of that group yourself. Now that you've explained it that way, it seems that understanding that this is what's going on is a good strategy for avoiding being mindkilled by such discussions.

Though, in this case, it can still be a valid concern that others may be affected by this fallacy if you publish or spread the original statistic, so if it could pose a threat to a large number of people, it may still be more ethical to avoid publicizing it. However, that is an ethical issue and not an epistemic one.

People who require help can be divided into those who are capable of helping themselves and those who are not. A position like yours would express the value preference that sacrificing the good of the latter group is better than letting the first group get unearned rewards - in all cases. For me it's not that simple: the choice depends on the proportion of the groups, the cost to me and society, and just how much good is being sacrificed. To take an extreme example, I would save someone's life even if this encourages other people to be less careful about protecting theirs.
