Comments

Is this actually a bad thing? In both cases, Bob and Sally not only succeeded in their initial goals, but also made some extra progress.

Also, fictional evidence. It is not implausible to imagine a scenario where Bob does all the same things, learns French and German, and then fails on, e.g., Spanish. The same goes for Sally.

In general, if you have tried some strategy and succeeded, it does make sense to go ahead and try it on other problems (until it finally stops working). If you have invented, e.g., a new machine learning method to solve a specific practical problem, the obvious next step is to apply it to other problems. If you found a very interesting article on a blog, it makes sense to take a look at other articles there. And so on. A method being successful is evidence that it will be successful in the future / on other sets of problems / etc.
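To make that last point concrete, here is a toy sketch under an assumed Beta-Bernoulli model (the prior and the numbers are my own illustration, nothing more): every observed success with a strategy raises the predicted probability that the next attempt with it succeeds.

```python
# Toy Beta-Bernoulli update: past successes of a strategy shift the posterior,
# and with it the predicted probability that the next attempt succeeds.
# (Illustrative only; assumes a uniform Beta(1, 1) prior over the success rate.)

def predicted_success(successes: int, failures: int,
                      prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior-mean success rate under a Beta-Bernoulli model."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

for n in range(4):
    # After n successes and no failures so far.
    print(n, round(predicted_success(n, 0), 2))
# 0 0.5 / 1 0.67 / 2 0.75 / 3 0.8 -- each success is evidence for future success.
```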

So, I wouldn't choose to change those mistakes into successes, because they weren't mistakes in the first place. An optimal strategy is not guaranteed to succeed every single time; rather, it should have the maximal success probability.

Well, I agree that this would help an FAI build people similar to you. But why do you want an FAI to do that?

And what copying precision is OK for you? Would just making a clone based on your DNA suffice? Maybe you don't even have to bother with all those screenshots and photos.

I'm very skeptical of the third. A human brain contains ~10^10 neurons and ~10^14 synapses -- which would be hard to infer from ~10^5 photos/screenshots, especially considering that they don't convey much information about your brain structure. DNA and comprehensive brain scans are better, but I suspect that getting brain scans with the required precision isn't easy.
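For what it's worth, here is the back-of-envelope comparison behind that skepticism; the per-item information figures are rough assumptions of mine, not measured values.

```python
# Back-of-envelope comparison (all figures are rough assumptions):
# bits needed for a synapse-level brain description vs. bits plausibly
# recoverable about the brain from ~10^5 photos/screenshots.

synapses = 1e14
bits_per_synapse = 10                # assume ~10 bits to specify a connection and its weight
bits_needed = synapses * bits_per_synapse            # ~1e15 bits

photos = 1e5
brain_relevant_bits_per_photo = 1e3  # generous guess at brain-relevant content per item
bits_available = photos * brain_relevant_bits_per_photo   # ~1e8 bits

print(f"needed: ~{bits_needed:.0e} bits, available: ~{bits_available:.0e} bits")
print(f"shortfall: ~{bits_needed / bits_available:.0e}x")
# needed: ~1e+15 bits, available: ~1e+08 bits; shortfall: ~1e+07x
```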

Cryonics, at least, might work.

It is; and actually it is the more plausible scenario. Aliens may well want that, just as humans do, both in fiction and in reality -- see, for example, the Prime Directive in Star Trek and the real-life practice of sterilizing rovers before sending them to other planets.

In the post, however, I investigated that particular flavor of the Zoo hypothesis.

I don't know whether the statement (intelligence => consciousness) is true, so I assign a non-zero probability to it being false.

Suppose I said "Assume NP = P", or the contrary, "Assume NP != P". One of those statements is logically false (the same way 1 = 2 is false). Still, while you can dismiss an argument that starts with "Assume 1 = 2", you probably shouldn't do the same with the NP ones, even though one of them is, strictly speaking, logical nonsense.

Also, a few words about concepts. You can explain a concept using other concepts, and then explain the concepts you used to explain the first one, and so on, but the chain has to end somewhere, right? Here it ends at consciousness.

1) I know that there is a phenomenon (that I call 'consciousness'), because I observe it directly.

2) I don't know a decent theory that explains what it really is and what properties it has.

3) To my knowledge, nobody actually does. That is why the problem of consciousness is labeled 'hard'.

Too many people, I've noticed, just pick the theory of consciousness that they consider the best and then become overconfident in it. Not a good idea, given that there is so little data.

So even if the most plausible theory says that (intelligence => consciousness) is true, you shouldn't immediately dismiss everything that is based on the opposite. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
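In symbols (my notation, with T_i ranging over candidate theories of consciousness):

$$P(\text{conscious} \mid \text{intelligent}) \;=\; \sum_i P(\text{conscious} \mid \text{intelligent}, T_i)\, P(T_i)$$

so even a low-probability theory under which intelligence does not imply consciousness contributes a non-zero term to the sum.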

Modern computers can be programmed to do almost every task a human can perform, including very high-level ones, which is why sort of yes, they are (and maybe sort of conscious, if you are willing to stretch the concept that far).

Some time ago, we could only program computers to execute a specific algorithm that solves a problem; now we have machine learning and don't have to provide an algorithm for every task, but we still use different machine learning algorithms for different areas/meta-tasks (computer vision, classification, time series prediction, etc.). When we build systems that can solve problems in all these areas simultaneously -- and combine the results to reach some goal -- I would call such systems truly intelligent.

Having said that, I don't think I need an insight or explanation here -- because, well, I mostly agree with you or jacob_cannel: it's likely that intelligence and unconsciousness are logically incompatible. Yet as long as the problem of consciousness is not fully resolved, I can't be certain, and therefore I assign a non-zero probability to the conjunction being possible.

Makes sense.

Anyway, any trait which isn't consciousness (and obviously it wouldn't be consciousness) would suffice, provided there is some reason to hide from Earth rather than destroy it.

There are concepts which are hardly explainable (given our current understanding of them). Consciousness is one of them. Qualia. Subjective experience. The thing which separates p-zombies from non-p-zombies.

If you don't already understand what I mean, there is little chance that I would be able to explain.

As for the assumption, I agree that it is implausible, yet possible. Do you consider your computer conscious?

And no doubt the scenarios you mention are more plausible.

Why do you think it is unlikely? I think any simple criterion that separates the aliens from their environment would suffice.

Personally, I think the scenario is implausible for a different reason: the human moral system would easily adapt to such aliens. People sometimes personify things that aren't remotely sentient, let alone aliens who would actually act as sentient/conscious beings.

The other reason is that I consider sentience without consciousness relatively implausible.

Filters don't have to be mutually exclusive, and as for the collectively exhaustive part, take all plausible Great Filter candidates.

I don't quite understand the Great Filter hype, by the way; having a single cause for civilization failure seems very implausible (<1%).
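As a toy illustration of why no single dominant cause is needed (the step count and per-step pass rates below are made-up numbers, purely for the arithmetic):

```python
# Toy arithmetic: several moderate, independent filters multiply into a
# "great" combined filter without any single step being decisive.
# (Nine steps at a 10% pass rate each are made-up numbers for illustration.)

pass_rates = [0.1] * 9
total = 1.0
for p in pass_rates:
    total *= p

print(f"fraction passing all steps: {total:.0e}")   # ~1e-09
```

Of course, the real numbers are unknown; the point is only that many partial filters compound without any one of them being "the" cause.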
