Comment author: estimator 31 March 2016 06:15:24PM 1 point [-]

Is this actually a bad thing? In both cases, Bob and Sally not only succeeded in their initial goals but also made some extra progress.

Also, fictional evidence. It is not implausible to imagine a scenario in which Bob does all the same things, learns French and German, and then fails on, e.g., Spanish. The same goes for Sally.

In general, if you have tried some strategy and succeeded, it does make sense to go ahead and try it on other problems (until it finally stops working). If you have invented, e.g., a new machine learning method to solve a specific practical problem, the obvious next step is to try to apply it to other problems. If you found a very interesting article on a blog, it makes sense to take a look at its other articles. And so on. A method being successful is evidence of its being successful in the future / on other sets of problems / etc.

So, I wouldn't choose to change those mistakes into successes, because they weren't mistakes in the first place. An optimal strategy isn't guaranteed to succeed every single time; rather, it should have the maximal success probability.

Comment author: D_Malik 24 August 2015 02:17:07PM 1 point [-]

You don't need to reconstruct all the neurons and synapses, though. If something behaves almost exactly as I would behave, I'd say that thing is me. Twenty years of screenshots at 8 hours a day is around 14% of a waking lifetime, which seems like enough to pick out from mindspace a mind that behaves very similarly to mine.

Comment author: estimator 24 August 2015 02:44:10PM 0 points [-]

Well, I agree, that would help FAI build people similar to you. But why do you want FAI to do that?

And what copying precision is OK for you? Would just making a clone based on your DNA suffice? Maybe you don't even need to bother with all these screenshots and photos.

Comment author: D_Malik 23 August 2015 02:57:55AM 9 points [-]
  • Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here's /u/Louie's post where I saw this.
  • Lose weight. Try Shangri-La, and if that doesn't work consider the EC stack or a ketogenic diet.
  • Seconding James_Miller's recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...). Just eat entire plates of the stuff often.
  • Write a script that takes a screenshot and a webcam picture every 30 seconds. Save the files to an external hard drive. After a few decades, bury the drive, along with some of your DNA and possibly brain scans, somewhere it'll stay safe for a couple hundred years or longer. This is a pretty long shot, but there's a chance that a future FAI will find your horcrux and use it to resurrect you. I think this is a better deal than cryonics, since it costs so much less.
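A minimal sketch of such a logging script. The `mss` and `opencv-python` packages for screen and webcam capture, and the archive mount point, are my own assumed choices, not anything the comment specifies:

```python
import time
from datetime import datetime
from pathlib import Path

ARCHIVE = Path("/mnt/external/lifelog")  # hypothetical mount point of the external drive


def frame_paths(root: Path, now: datetime) -> tuple[Path, Path]:
    """Build timestamped file paths for one screenshot/webcam pair."""
    stamp = now.strftime("%Y%m%d/%H%M%S")
    return root / f"{stamp}_screen.png", root / f"{stamp}_cam.jpg"


def capture_once(root: Path) -> None:
    """Grab one full-screen shot and one webcam frame, saved side by side."""
    screen_path, cam_path = frame_paths(root, datetime.now())
    screen_path.parent.mkdir(parents=True, exist_ok=True)
    # Lazy imports so the path logic above works without capture hardware.
    import cv2
    import mss
    with mss.mss() as grabber:
        grabber.shot(mon=-1, output=str(screen_path))  # mon=-1: all monitors
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    if ok:
        cv2.imwrite(str(cam_path), frame)
    cam.release()


def run_forever(root: Path = ARCHIVE, interval_s: float = 30.0) -> None:
    """Capture one pair every `interval_s` seconds until interrupted."""
    while True:
        capture_once(root)
        time.sleep(interval_s)
```

At one pair every 30 seconds for 8 hours a day, expect roughly a thousand image pairs per day, so compression or a large drive is part of the deal.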
Comment author: estimator 24 August 2015 06:57:10AM 1 point [-]

I'm very skeptical of the third. A human brain contains ~10^10 neurons and ~10^14 synapses -- which would be hard to infer from a few million photos/screenshots, especially considering that they don't convey that much information about your brain structure. DNA and comprehensive brain scans are better, but I'd guess that getting brain scans with the required precision isn't easy.
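A back-of-the-envelope version of this gap. The bits-per-image and bits-per-synapse figures are loose assumptions of mine; only the orders of magnitude matter:

```python
# Rough information budget: screenshot archive vs. synapse count.
SECONDS_PER_CAPTURE = 30
HOURS_PER_DAY = 8
YEARS = 20

captures = YEARS * 365 * HOURS_PER_DAY * 3600 // SECONDS_PER_CAPTURE

# Assume, generously, ~10^4 bits of brain-relevant information per image pair
# (choices made, text typed, facial expressions) -- a loose guess.
BITS_PER_CAPTURE = 10_000
observed_bits = captures * BITS_PER_CAPTURE

SYNAPSES = 10 ** 14
BITS_PER_SYNAPSE = 10  # connectivity target plus weight, another loose guess
brain_bits = SYNAPSES * BITS_PER_SYNAPSE

print(f"captures:      ~10^{len(str(captures)) - 1}")
print(f"observed bits: ~10^{len(str(observed_bits)) - 1}")
print(f"brain bits:    ~10^{len(str(brain_bits)) - 1}")
```

Even with the generous per-image figure, the archive falls short of the synapse-level description by four to five orders of magnitude, which is the point of the skepticism above.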

Cryonics, at least, might work.

Comment author: buybuydandavis 16 July 2015 11:13:39PM 1 point [-]

"humans are monsters and any sane civilization avoids them, that's why Galactic Zoo"

Isn't the Galactic Zoo hypothesis based on wanting to maintain the humans in their primitive habitat, and not interfere with the "natural" development?

It's not that we're horrible monsters that need to be avoided. The Earth is just a nature preserve.

Comment author: estimator 17 July 2015 01:20:00AM 0 points [-]

It is; and it's actually a more plausible scenario. Aliens may well want that, just as humans do in both fiction and reality -- see, for example, the Prime Directive in Star Trek, and the real-life practice of sterilizing rovers before sending them to other planets.

I, however, investigated that particular flavor of the Zoo hypothesis in the post.

Comment author: Vaniver 16 July 2015 09:38:28PM *  2 points [-]

Why do you think it is unlikely?

Basically, the hierarchical control model of intelligence, which sees 'intelligence' as trying to maintain some perception at some reference level by actuating the environment. (Longer explanation here.) If you have multiple control systems, and they have different reference levels, then they will get into 'conflict', much like a tug of war.

That is, simple intelligence looks like it leads to rivalry rather than cooperation by default, and so valuing intelligence rather than alignment seems weird; there's not a clear path that leads from nothing to there.
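A toy version of that tug of war -- my own minimal sketch, not the model from the linked explanation: two proportional controllers act on one shared environment variable while holding different reference levels.

```python
# Two simple control systems share one environment variable x but hold
# different reference levels -- the "tug of war" in miniature.

def step(x, systems, dt=0.1):
    """One Euler step: each controller pushes x toward its own reference."""
    return x + dt * sum(gain * (ref - x) for ref, gain in systems)

systems = [(0.0, 1.0), (10.0, 1.0)]  # (reference level, gain) per controller
x = 2.0
for _ in range(1000):
    x = step(x, systems)

# The equilibrium is the gain-weighted mean of the references: neither
# controller attains its reference, and both push against each other forever.
equilibrium = sum(g * r for r, g in systems) / sum(g for _, g in systems)
print(round(x, 3), equilibrium)  # x settles at 5.0, between 0 and 10
```

The variable ends up where the pulls cancel, not where either system wants it, so both systems keep exerting opposing effort indefinitely -- conflict as the default outcome.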

Comment author: estimator 16 July 2015 09:48:33PM 0 points [-]

Makes sense.

Anyway, any trait which isn't consciousness (and obviously it wouldn't be consciousness) would suffice, provided there is some reason to hide from Earth rather than destroy it.

Comment author: Vaniver 16 July 2015 08:30:19PM 1 point [-]

I thought the defining feature of being a p-zombie was acting as if they had consciousness while not "actually" having it, whereas these aliens act as though they did not have consciousness.

(I think a generic and global intelligence-valuation ethos is very unlikely to arise, and so I think there are other reasons to dislike this formulation of the Galactic Zoo.)

Comment author: estimator 16 July 2015 09:07:25PM 0 points [-]

Why do you think it is unlikely? I think any simple criterion which separates aliens from environment would suffice.

Personally, I think the scenario is implausible for a different reason: the human moral system would easily adapt to such aliens. People sometimes personify things that aren't remotely sentient, let alone aliens who would actually act like sentient/conscious beings.

The other reason is that I consider sentience without consciousness relatively implausible.

On the Galactic Zoo hypothesis

-8 estimator 16 July 2015 07:12PM

Recently, I was reading some arguments about the Fermi paradox, aliens, and so on; there was also an opinion along the lines of "humans are monsters and any sane civilization avoids them, that's why Galactic Zoo". As implausible as it is, I've found one more-or-less sane scenario in which it might be true.

Assume that intelligence doesn't always imply consciousness, and that evolutionary processes are more likely to yield intelligent but unconscious life forms than intelligent and conscious ones -- for example, if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).

Now imagine that all the alien species evolved without consciousness. Their moral system, being an important coordination tool, takes that into account: it relies on a trait they do have -- intelligence -- rather than consciousness. For example, they consider destroying anything capable of performing complex computations immoral.

Then the human moral system would be completely blind to them. Killing such an alien would be no more immoral than, say, recycling a computer. So, to these aliens, the human race would indeed be monstrous.

The aliens consider the extermination of an entire civilization immoral, since that would mean destroying a few billion devices capable of performing complex enough computations. So they decide to use their advanced technology to render their civilizations invisible to human scientists.

Comment author: chaosmage 17 June 2015 04:01:30PM *  0 points [-]

That seems correct to me, but it is quite different from your original proposal.

Can you think of other filters that are MECE with the Malthusian trap? I don't see obvious ones. Maybe a good way out of the Malthusian trap would be mechanisms that limit procreation, and those make interplanetary colonization - which is procreation of biospheres - seem immoral? I don't think that sounds very convincing.

Comment author: estimator 18 June 2015 07:16:56AM 0 points [-]

Filters don't have to be mutually exclusive; as for the collectively exhaustive part, take all plausible Great Filter candidates.

I don't quite understand the Great Filter hype, by the way; a single cause of civilization failure seems very implausible (<1%).

Comment author: [deleted] 16 June 2015 01:47:23AM *  1 point [-]

People working on friendly AI probably assume that the odds of inventing a friendly AI are higher than those of establishing a world order in which research associated with existential risks is generally banned. Why is that? Is the reasoning that our civilization is likely to end without significant technological progress (due to reasons like nuclear war, climate change, and societal collapse), so we should give it at least a try?

In response to comment by [deleted] on Open Thread, Jun. 15 - Jun. 21, 2015
Comment author: estimator 17 June 2015 06:54:18AM 3 points [-]

It's extremely hard to ban the research worldwide, and then it's extremely hard to enforce such a decision.

First, you'll have to convince all the world's governments (there are more than 200 of them, by the way) to pass such laws.

Then, you'll likely have all the powerful nations doing the research secretly, because it promises powerful weaponry and other ways to acquire power -- or simply out of fear that some other government will do it first.

And even if you somehow managed to pass the law worldwide and stopped governments from doing research secretly, how would you stop individual researchers?

Humanity hasn't prevented the use of nuclear bombs, and has barely prevented a full-blown nuclear war -- even though nuclear bombs require national-level industry to produce and are available to only a few countries. How can we hope to ban something that can be researched and launched in your basement?

Comment author: Viliam 10 June 2015 08:22:53AM *  4 points [-]

But why do you want it in the first place?

Emotionally -- for the feeling that something new and great is happening here, and I can see it growing.

Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.

Okay, what exactly are the "great things" I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences?

When Eliezer was writing the Sequences, merely the fact that "there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom" seemed like a huge improvement to the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe. Did this happen? Well, we have MIRI and CFAR, and meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have impact beyond providing people a nice place to chat? I hope so.

Maybe the lowest-hanging fruit has already been picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although, if Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way. I mean those concepts that you can use in thinking about probability or about how the human brain works.

Internet vs real life -- things happening in the real world are usually more awesome than things happening merely online. For example, a rationalist meetup is usually better than reading an open thread on LW. The problem is visibility. The basic rule of bureaucracy -- if it isn't documented, it didn't happen -- is important here, too. When given a choice between writing another article and doing something in the real world, please choose the latter (unless the article is really exceptionally good). But then, please also write an article about it, so that your fellow rationalists who were not able to participate personally can share the experience. It may inspire them to do something similar.

By the way, if you are unhappy about the "decline" of LW because it will make a worse impression on new people you would like to introduce to LW culture -- point them towards the book instead.

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

Adding: if you would like to see a rationalist community growing, research and write about creating and organizing communities. (That is advice for myself, for when I have more free time.)

Comment author: estimator 11 June 2015 03:04:48PM *  3 points [-]

Why do you prefer offline conversations to online ones?

Off the top of my head, I can name 3 advantages of online communication, which are quite important to LessWrong:

  • You don't have to go anywhere. Since the LW community is distributed all over the world, this really matters: at meetups, you can only talk with people who happen to be in the same place as you, while online you can communicate with everyone.

  • You have more time to think before replying, if you need it. For example, you can support your arguments with relevant research papers or data.

  • As you have noticed, online articles and discussions remain available on the site. You have proposed writing articles after offline events, but (a) not everything will be covered by them, and (b) it requires additional effort.

Well, enjoy offline events if you like them; but the claim that people should always prefer offline activities to online ones is highly questionable, IMO.
