Comment author: [deleted] 16 June 2015 01:47:23AM *  1 point [-]

People working on friendly AI probably assume that the odds of inventing a friendly AI are higher than the odds of establishing a world order in which research associated with existential risks is generally banned. Why is that? Is the reasoning that our civilization is likely to end without significant technological progress (due to reasons like nuclear war, climate change, and societal collapse), so we should at least give it a try?

In response to comment by [deleted] on Open Thread, Jun. 15 - Jun. 21, 2015
Comment author: estimator 17 June 2015 06:54:18AM 3 points [-]

It's extremely hard to ban research worldwide, and then it's extremely hard to enforce such a decision.

Firstly, you'll have to convince all the world's governments (btw, there are nearly 200) to pass such laws.

Then, you'll likely have all powerful nations doing the research secretly, because it provides powerful weaponry or other ways to acquire power; or simply out of fear that some other government will do it first.

And even if you somehow managed to pass the law worldwide, and stopped governments from doing research secretly, how would you stop individual researchers?

Humanity hasn't prevented the use of nuclear bombs, and has only barely prevented a full-blown nuclear war; and nuclear bombs require national-level industry to produce and are available to only a few countries. How can we hope to ban something that can be researched and launched from a basement?

Comment author: Viliam 10 June 2015 08:22:53AM *  4 points [-]

But why do you want it in the first place?

Emotionally -- for the feeling that something new and great is happening here, and I can see it growing.

Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.

Okay, what exactly are the "great things" I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences?

When Eliezer was writing the Sequences, the mere fact that "there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom" seemed like a huge improvement to the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe. Did this happen? Well, we have MIRI and CFAR, and meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have impact beyond providing people a nice place to chat? I hope so.

Maybe the lowest-hanging fruit was already picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although if Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside of existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way; I mean those concepts that you can use when thinking about probability or about how the human brain works.

Internet vs real life -- things happening in the real world are usually more awesome than things happening merely online. For example, a rationalist meetup is usually better than reading an open thread on LW. The problem is visibility. The basic rule of bureaucracy -- if it isn't documented, it didn't happen -- is important here, too. When given a choice between writing another article and doing something in the real world, please choose the latter (unless the article is really exceptionally good). But then, please also write an article about it, so that your fellow rationalists who were not able to participate personally can share the experience. It may inspire them to do something similar.

By the way, if you are unhappy about the "decline" of LW because it will make a worse impression on new people you would like to introduce to LW culture -- point them towards the book instead.

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

Adding: if you would like to see a rationalist community growing, research and write about creating and organizing communities. (That is advice for myself, for when I have more free time.)

Comment author: estimator 11 June 2015 03:04:48PM *  3 points [-]

Why do you prefer offline conversations to online ones?

Off the top of my head, I can name three advantages of online communication which are quite important for LessWrong:

  • You don't have to go anywhere. Since the LW community is distributed all over the world, this is really important: at a meetup, you can only communicate with people who happen to be in the same place as you, while online you can communicate with everyone.

  • You have more time to think before replying, if you need it. For example, you can support your arguments with relevant research papers or data.

  • As you have noticed, online articles and discussions remain available on the site. You have proposed writing articles after offline events, but a) not everything will be covered by them and b) it requires additional effort.

Well, enjoy offline events if you like them; but the claim that people should always prefer offline activities over online ones is highly questionable, IMO.

Comment author: cousin_it 09 June 2015 04:39:33PM 7 points [-]

Judging from the recent decline of LW, it seems that the initial success of LW wasn't due to rationality, but rather due to Eliezer's great writing. If we want LW to become a fun place again, we should probably focus on writing skills instead of rationality skills. Not everyone can be as good as Eliezer or Yvain, but there's probably a lot of low hanging fruit. For example, we pretty much know what kind of fiction would appeal to an LWish audience (HPMOR, Worm, Homestuck...) and writing more of it seems like an easier task than writing fiction with mass-market appeal.

Does anyone else feel that it might be a promising direction for the community? Is there a more structured way to learn writing skills?

Comment author: estimator 10 June 2015 12:40:12AM 12 points [-]

I have noticed that many people here want LW resurrection for the sake of LW resurrection.

But why do you want it in the first place?

Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.

After all, if you think that Eliezer's writing constitutes most of LW's value, and Eliezer doesn't write here anymore, maybe the wise decision is to let it decay.

Beware the lost purposes.

The Joy of Bias

14 estimator 09 June 2015 07:04PM

What do you feel when you discover that your reasoning is flawed? when you find your recurring mistakes? when you find that you have been doing something wrong for quite a long time?

Many people feel bad. For example, here is a quote from a recent article on LessWrong:

By depicting the self as always flawed, and portraying the aspiring rationalist's job as seeking to find the flaws, the virtue of perfectionism is framed negatively, and is bound to result in negative reinforcement. Finding a flaw feels bad, and in many people that creates ugh fields around actually doing that search, as reported by participants at the Meetup.

But actually, when you find a serious flaw of yours, you should usually jump for joy. Here's why.

Comment author: estimator 31 May 2015 03:28:47PM 2 points [-]

What is the point of having separated Open Threads and Stupid Questions threads, instead of allowing "stupid questions" in OTs and making OTs more frequent?

Comment author: Good_Burning_Plastic 31 May 2015 10:59:46AM *  0 points [-]

which is often lower by an order of magnitude.

At equilibrium, the price equals the marginal cost; sure, it is more than the average cost, but I can't see why the latter is relevant.
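One toy way to see the marginal-vs-average distinction (the cost function and all numbers here are purely illustrative):

```python
# Toy illustration of "at equilibrium, price equals marginal cost".
# C(q) = q**2 is a made-up cost function with rising marginal cost,
# so the marginal cost ends up above the average cost.

def total_cost(q):
    return q ** 2

def marginal_cost(q, dq=1e-6):
    # numerical derivative of total cost with respect to quantity
    return (total_cost(q + dq) - total_cost(q)) / dq

q = 10.0
price = marginal_cost(q)            # ~20.0: the competitive price
average_cost = total_cost(q) / q    # 10.0: price exceeds average cost
print(price, average_cost)
```

With rising marginal costs, whatever the buyer pays at the margin is more than what the average unit cost to produce; the gap is why "what it cost to make" understates the price.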

And the effort required to earn the money to buy the ring is also wasted.

Comment author: estimator 31 May 2015 11:44:29AM 4 points [-]

And the effort required to earn the money to buy the ring is also wasted.

No, it's not. You have produced (hopefully) valuable goods or services; why are they wasted, from the viewpoint of society?

Comment author: Artaxerxes 31 May 2015 03:56:28AM 1 point [-]

Maybe not for that reason. But the opportunity cost of having kids, for example in terms of time and money, is pretty high. You could easily make an argument that those resources would be more effectively used for higher impact activities.

The money as dead children analogy might be particularly useful here, since we're comparing kids with kids.

Comment author: estimator 31 May 2015 09:58:37AM 2 points [-]

Such cost calculations are wildly overestimated.

Suppose you buy a luxury item, like a gold ring with diamonds. You pay a lot of money, but your money isn't going to disappear; it is redistributed between traders, jewelers, miners, etc. The only thing that's lost is the total effort required to produce that ring, which is often lower by an order of magnitude. And if the item you buy is actually useful, the wasted effort is even lower.
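A toy accounting sketch of this argument (all numbers hypothetical):

```python
# The price of the ring is mostly a transfer between people; only the
# real resources consumed in production are a cost to society as a whole.

price_paid = 1000.0          # what the buyer spends on the ring
production_effort = 100.0    # labor and materials actually used up
transfers = price_paid - production_effort  # redistributed, not destroyed

societal_cost = production_effort
print(societal_cost / price_paid)  # 0.1: lower by an order of magnitude
```

On this accounting, the opportunity cost to society is the effort consumed, not the sticker price.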

The cost of having kids is so high for you because you will likely raise well-educated, highly intelligent children, who are valuable assets to our society; likely a net positive, after all. Needless to say, actually ensuring that those poor children in Africa end up that well, rather than, say, die of starvation the next year, is going to cost you much more than $800. So you pay for quality here.

Comment author: Eitan_Zohar 31 May 2015 03:04:22AM 1 point [-]

Is it unethical to have children pre-Singularity, for the risk of them dying?

Comment author: estimator 31 May 2015 08:32:22AM *  6 points [-]

Well, everyone will likely die sooner or later, even post-Singularity (provided that it happens, which isn't exactly a solid fact).

Anyway, I think that any morality system that declares every birth that has ever happened unethical is inadequate.

Comment author: Nanashi 29 May 2015 08:44:07PM 0 points [-]

Yes, this this this this this this this. "The capacity of human minds is limited and I'll accept climbing up higher in abstraction levels at the price of forgetting how the lower-level gears turn." If I could upvote this multiple times, I would.

This is the crux of this entire approach. Learn the higher level, applied abstractions. And learn the very basic fundamentals. Forget learning how the lower-level gears turn: just learn the fundamental laws of physics. If you ever need to figure out a lower-level gear, you can just derive it from your knowledge of the fundamentals, combined with your big-picture knowledge of how that gear fits into the overall system.

Comment author: estimator 29 May 2015 09:09:57PM 0 points [-]

That only works if there are few levels of abstraction; I doubt you can derive how programs work at the machine-code level from your knowledge of physics and high-level programming. Sometimes the gears are so small that you can't even see them in your top-level big picture, and sometimes climbing up just one level of abstraction takes enormous effort if you don't know in advance how to do it.

I think you should understand, at least once, how the system works at each level, and refresh/deepen that knowledge when you need it.

Comment author: Nornagest 29 May 2015 07:29:27PM 0 points [-]

On the other hand, if you don't have a solid grasp of linear algebra, your ability to do most types of machine learning is seriously impaired. You can learn techniques like e.g. matrix inversions as needed to implement the algorithms you're learning, but if you don't understand how those techniques work in their original context, they become very hard to debug or optimize. Similarly for e.g. cryptography and basic information theory.

That's probably more the exception than the rule, though; I sense that the point of most prerequisites in a traditional science curriculum is less to provide skills to build on and more to build habits of rigorous thinking.

Comment author: estimator 29 May 2015 08:57:28PM 0 points [-]

Read what a matrix is, how to add, multiply, and invert matrices, what a determinant is, and what an eigenvector is, and that's enough to get you started. In many ML algorithms, vectors and matrices are used mostly as handy notation.

Yes, you will be unable to understand the parts of ML that substantially require linear algebra; yes, understanding ML without linear algebra is harder; yes, you need linear algebra for almost any kind of serious ML research; but none of that means you have to spend a few years studying arcane math before you can open an ML textbook.
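The handful of operations mentioned above can be sketched for the 2x2 case in a few lines of plain Python (an illustrative toy; in practice you would use a library like numpy):

```python
# Minimal 2x2 linear algebra: addition, multiplication, determinant,
# inverse, and checking an eigenvector -- roughly the "enough to get
# you started" toolkit described above.

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inverse(A):
    d = det(A)  # must be nonzero for the inverse to exist
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

def mat_vec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[2.0, 1.0], [1.0, 3.0]]
print(det(A))                # 5.0
print(mat_mul(A, inverse(A)))  # approximately the identity matrix

# v is an eigenvector of B with eigenvalue 2: B @ v == 2 * v
B = [[2.0, 0.0], [0.0, 3.0]]
v = [1.0, 0.0]
print(mat_vec(B, v))         # [2.0, 0.0]
```

That really is most of the notation many introductory ML chapters lean on; the deeper material (decompositions, spectral theory) can wait until a topic actually demands it.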
