Comment author: estimator 31 March 2016 06:15:24PM 1 point [-]

Is this actually a bad thing? In both cases, Bob and Sally not only succeeded at their initial goals, but also made some extra progress.

Also, fictional evidence. It is not implausible to imagine a scenario in which Bob does all the same things, learns French and German, and then fails on e.g. Spanish. The same goes for Sally.

In general, if you have tried some strategy and succeeded, it does make sense to go ahead and try it on other problems (until it finally stops working). If you have invented e.g. a new machine learning method to solve a specific practical problem, the obvious next step is to try to apply it to other problems. If you found a very interesting article on a blog, it makes sense to take a look at other articles there. And so on. A method's past success is evidence that it will succeed in the future / on other sets of problems / etc.
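One hedged way to formalize "success is evidence of future success" is Laplace's rule of succession: with a uniform prior over an unknown success rate, s successes in n trials give probability (s+1)/(n+2) to success on the next trial. A minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Posterior probability that the next trial succeeds,
    assuming a uniform prior over the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

# Bob's strategy worked on both French and German:
print(rule_of_succession(2, 2))  # 3/4 -- success raises the odds of more success
```

Note that the posterior never reaches 1, which matches the point below: an optimal strategy can still fail sometimes.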

So, I wouldn't choose to change those mistakes into successes, because they weren't mistakes in the first place. An optimal strategy is not guaranteed to succeed every single time; rather, it should maximize the probability of success.

Comment author: D_Malik 24 August 2015 02:17:07PM 1 point [-]

You don't need to reconstruct all the neurons and synapses, though. If something behaves almost exactly as I would behave, I'd say that thing is me. 20 years of screenshots at 8 hours a day is around 14% of a waking lifetime, which seems like enough to pick out from mindspace a mind that behaves very similarly to mine.
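The arithmetic behind the ~14% figure, assuming ~16 waking hours per day and a ~70-year lifespan (both hypothetical round numbers, not from the original comment):

```python
# Fraction of a waking lifetime covered by the recording.
recorded_hours = 20 * 365 * 8    # 20 years, 8 hours of screenshots per day
waking_hours = 70 * 365 * 16     # ~70-year lifespan, ~16 waking hours per day
print(round(recorded_hours / waking_hours, 3))  # 0.143, i.e. ~14%
```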

Comment author: estimator 24 August 2015 02:44:10PM 0 points [-]

Well, I agree that would help an FAI build people similar to you. But why do you want an FAI to do that?

And what copying precision is OK for you? Would just making a clone based on your DNA suffice? Maybe you don't even have to bother with all these screenshots and photos.

Comment author: D_Malik 23 August 2015 02:57:55AM 9 points [-]
  • Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here's /u/Louie's post where I saw this.
  • Lose weight. Try Shangri-La, and if that doesn't work consider the EC stack or a ketogenic diet.
  • Seconding James_Miller's recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...). Just eat entire plates of the stuff, often.
  • Write a script that takes a screenshot and webcam picture every 30 seconds. Save the files to an external hard drive. After a few decades, bury the external drive, along with some of your DNA and possibly brain scans, somewhere it'll stay safe for a couple hundred years or longer. This is a pretty long shot, but there's a chance that a future FAI will find your horcrux and use it to resurrect you. I think this is a better deal than cryonics since it costs so much less.
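A minimal sketch of such a capture script. The `mss` (screen grab) and `opencv-python` (webcam) packages are hypothetical dependency choices, not part of the original suggestion; any capture libraries would do:

```python
import os
import time
from datetime import datetime, timezone

def snapshot_path(root, kind, now=None):
    """Timestamped output path, e.g. root/screen/2015-08-23T02-57-55.png."""
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y-%m-%dT%H-%M-%S")
    return os.path.join(root, kind, stamp + ".png")

def capture_once(root):
    """Grab one screenshot and one webcam frame.

    Assumes the third-party `mss` and `opencv-python` packages
    (illustrative choices only).
    """
    import mss
    import cv2
    screen_path = snapshot_path(root, "screen")
    webcam_path = snapshot_path(root, "webcam")
    for p in (screen_path, webcam_path):
        os.makedirs(os.path.dirname(p), exist_ok=True)
    with mss.mss() as grabber:
        grabber.shot(output=screen_path)  # primary monitor
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    if ok:
        cv2.imwrite(webcam_path, frame)
    cam.release()

def run(root, interval=30):
    """Capture every `interval` seconds, forever."""
    while True:
        capture_once(root)
        time.sleep(interval)

# Usage (not executed here): run("/mnt/external/horcrux")
```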
Comment author: estimator 24 August 2015 06:57:10AM 1 point [-]

I'm very skeptical of the horcrux idea. A human brain contains ~10^10 neurons and ~10^14 synapses -- which would be hard to infer from ~10^7 photos/screenshots, especially considering that they don't convey that much information about your brain structure. DNA and comprehensive brain scans are better, but I'd guess that getting brain scans with the required precision isn't easy.
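A rough count of the captures involved, under the schedule proposed above (one capture every 30 seconds, 8 hours a day, 20 years):

```python
# Back-of-envelope comparison of capture count vs. synapse count.
frames = 20 * 365 * 8 * 3600 // 30  # one capture every 30 s, 8 h/day, 20 yr
synapses = 10**14
print(f"{frames:.2e} frames")       # ~7e6 captures
print(synapses // frames)           # ~1.4e7 synapses per frame to pin down
```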

Cryonics, at least, might work.

Comment author: buybuydandavis 16 July 2015 11:13:39PM 1 point [-]

"humans are monsters and any sane civilization avoids them, that's why Galactic Zoo"

Isn't the Galactic Zoo hypothesis based on wanting to maintain the humans in their primitive habitat, and not interfere with the "natural" development?

It's not that we're horrible monsters that need to be avoided. The Earth is just a nature preserve.

Comment author: estimator 17 July 2015 01:20:00AM 0 points [-]

It is, and it's actually a more plausible scenario. Aliens may well want that, just as humans do in both fiction and reality -- for example, see the Prime Directive in Star Trek and the practice of sterilizing rovers before sending them to other planets in real life.

I, however, investigated that particular flavor of the Zoo hypothesis in the post.

Comment author: [deleted] 16 July 2015 11:11:57PM 0 points [-]

If you don't know what you're talking about when you say "consciousness", your premise becomes incoherent.

In response to comment by [deleted] on On the Galactic Zoo hypothesis
Comment author: estimator 17 July 2015 12:57:17AM *  0 points [-]

I don't know whether the statement (intelligence => consciousness) is true, so I assign a non-zero probability to it being false.

Suppose I said "Assume NP = P", or the contrary "Assume NP != P". One of those statements is logically false (the same way 1 = 2 is false). Still, while you can dismiss an argument which starts "Assume 1 = 2", you probably shouldn't do the same with those NP ones, even if one of them is, strictly speaking, logical nonsense.

Also a few words about concepts. You can explain a concept using other concepts, and then explain the concepts you have used to explain the first one, and so on, but the chain should end somewhere, right? So here it ends on consciousness.

1) I know that there is a phenomenon (that I call 'consciousness'), because I observe it directly.

2) I don't know of a decent theory that explains what it really is and what properties it has.

3) To my knowledge, nobody actually does. That is why the problem of consciousness is labeled 'hard'.

Too many people, I've noticed, just pick the theory of consciousness that they consider best, and then become overconfident in it. Not a good idea, given that there is so little data.

So even if the most plausible theory says (intelligence => consciousness) is true, you shouldn't immediately dismiss everything that is based on the opposite. The Bayesian way is to integrate over all possible theories, weighted by their probabilities.
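That last sentence can be sketched as toy Bayesian model averaging: the probability of a claim is the credence-weighted sum of what each theory says about it. All the numbers below are hypothetical:

```python
# Toy model averaging over theories of consciousness (hypothetical numbers).
# Each entry: (credence in theory, P("intelligence without consciousness
# is possible" | theory)).
theories = [
    (0.7, 0.0),  # intelligence => consciousness, so impossible
    (0.2, 1.0),  # consciousness is a separate ingredient, so possible
    (0.1, 0.5),  # agnostic theory
]
p_possible = sum(credence * p for credence, p in theories)
print(p_possible)  # 0.25 -- non-zero despite the dominant theory saying no
```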

Comment author: Manfred 16 July 2015 11:15:17PM 0 points [-]

Do you consider your computer conscious?

Are (modern) computers intelligent but not conscious, by your lights?

If so, then there's a very important thing you might provide some insight into, which is what sort of observations humans could make of an alien race, that would lead to us thinking that they're intelligent but not conscious.

Comment author: estimator 17 July 2015 12:26:59AM 0 points [-]

Modern computers can be programmed to do almost every task a human can perform, including very high-level ones, so sort of yes, they are (and maybe sort of conscious, if you are willing to stretch that concept this far).

Some time ago, we could only program computers to execute a specific algorithm that solves a specific problem; now we have machine learning and don't have to provide an algorithm for every task; but we still have different machine learning algorithms for different areas/meta-tasks (computer vision, classification, time series prediction, etc.). When we build systems that are capable of solving problems in all these areas simultaneously -- and combining the results to reach some goal -- I would call such systems truly intelligent.

Having said that, I don't think I need an insight or explanation here, because I mostly agree with you and jacob_cannell: it's likely that intelligence and unconsciousness are logically incompatible. Yet as long as the problem of consciousness is not fully resolved, I can't be certain, so I assign a non-zero probability to the conjunction being possible.

Comment author: Vaniver 16 July 2015 09:38:28PM *  2 points [-]

Why do you think it is unlikely?

Basically, the hierarchical control model of intelligence, which sees 'intelligence' as trying to maintain some perception at some reference level by actuating the environment. (Longer explanation here.) If you have multiple control systems, and they have different reference levels, then they will get into 'conflict', much like a tug of war.

That is, simple intelligence looks like it leads to rivalry rather than cooperation by default, and so valuing intelligence rather than alignment seems weird; there's not a clear path that leads from nothing to there.

Comment author: estimator 16 July 2015 09:48:33PM 0 points [-]

Makes sense.

Anyway, any trait which isn't consciousness (and obviously it wouldn't be consciousness) would suffice, provided there is some reason to hide from Earth rather than destroy it.

Comment author: jacob_cannell 16 July 2015 08:46:20PM 6 points [-]

Assume that intelligence doesn't always imply consciousness

Taboo 'consciousness', and attempt to make that assumption still work.

So they decide to use their advanced technology to render their civilizations invisible for human scientists.

The feasibility of this idea is inversely proportional to the resource expenditure required to remain invisible. It is more likely that -- if aliens exist -- they are naturally mostly invisible as a result of computational optimization into compact, cold, dark arcilects. If stealth/invisibility plays a role, they are more likely to be hiding from other powerful civs than from us.

Comment author: estimator 16 July 2015 09:18:25PM *  -3 points [-]

There are concepts which are hardly explainable (given our current understanding of them). Consciousness is one of them. Qualia. Subjective experience. The thing which separates p-zombies from non-p-zombies.

If you don't already understand what I mean, there is only a small chance that I would be able to explain.

As for the assumption, I agree that it is implausible, yet possible. Do you consider your computer conscious?

And no doubt the scenarios you mention are more plausible.

Comment author: Vaniver 16 July 2015 08:30:19PM 1 point [-]

I thought the defining feature of being a p-zombie was acting as if they had consciousness while not "actually" having it, whereas these aliens act as though they did not have consciousness.

(I think a generic and global intelligence-valuation ethos is very unlikely to arise, and so I think there are other reasons to dislike this formulation of the Galactic Zoo.)

Comment author: estimator 16 July 2015 09:07:25PM 0 points [-]

Why do you think it is unlikely? I think any simple criterion which separates aliens from environment would suffice.

Personally, I think the scenario is implausible for another reason: the human moral system would easily adapt to such aliens. People sometimes personify things that aren't remotely sentient, let alone aliens who would actually act like sentient/conscious beings.

The other reason is that I consider sentience without consciousness relatively implausible.

Comment author: chaosmage 17 June 2015 04:01:30PM *  0 points [-]

That seems correct to me, but it is quite different from your original proposal.

Can you think of other filters that are MECE with the Malthusian trap? I don't see obvious ones. Maybe a good way out of the Malthusian trap would be mechanisms that limit procreation, and those make interplanetary colonization - which is procreation of biospheres - seem immoral? I don't think that sounds very convincing.

Comment author: estimator 18 June 2015 07:16:56AM 0 points [-]

Filters don't have to be mutually exclusive, and as for the collectively exhaustive part, take all plausible Great Filter candidates.

I don't quite understand the Great Filter hype, by the way; a single dominant cause of civilization failure seems very implausible (<1%).
