Comment author: snarles 03 April 2012 12:53:01PM 1 point [-]

Try to convert your non-rationalist friends.

Comment author: quanticle 03 April 2012 06:38:55PM 10 points [-]

I don't think that's a good idea, to be honest. Conversion of other individuals is one of the more difficult things you can do as an aspiring rationalist. Let's face it, a lot of irrational arguments have very very strong intuitive appeal. Unless you are very familiar with the standard arguments for rationalism, you're more likely to simply alienate those around you and further isolate yourself by attempting to convert your non-rationalist friends.

Comment author: gwern 02 March 2012 06:14:59PM *  23 points [-]

Tipler paper

Wow, that's all kinds of crazy. I'm not sure how much as I'm not a mathematical physicist - MWI and quantum mechanics implied by Newton? Really? - but one big flag for me is pg187-188 where he doggedly insists that the universe is closed, although as far as I know the current cosmological consensus is the opposite, and I trust them a heck of a lot more than a fellow who tries to prove his Christianity with his physics.

(This is actually convenient for me: a few weeks ago I was wondering on IRC what the current status of Tipler's theories was, given that he had clearly stated they were valid only if the universe were closed and if the Higgs boson was within certain values, IIRC, but I was feeling too lazy to look it all up.)

And the extraction of a transcendent system of ethics from a Feynman quote...

A moment’s thought will convince the reader that Feynman has described not only the process of science, but the process of rationality itself. Notice that the bold-faced words are all moral imperatives. Science, in other words, is fundamentally based on ethics. More generally, rational thought itself is based on ethics. It is based on a particular ethical system. A true human level intelligence program will thus of necessity have to incorporate this particular ethical system. Our human brains do, whether we like to acknowledge it or not, and whether we want to make use of this ethical system in all circumstances. When we do not make use of this system of ethics, we generate cargo cult science rather than science.

This is just too wrong for words. This is like saying that looking both ways before crossing the street is obviously a part of rational street-crossing - a moment's thought will convince the reader (Dark Arts) - and so we can collapse Hume's fork and promote looking both ways to a universal meta-ethical principle that future AIs will obey!

An AI program must incorporate this morality, otherwise it would not be an AI at all.

Show me this morality in the AIXI equation or GTFO!
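For reference, Hutter's AIXI decision rule - reproduced here from memory as a sketch, so the notation may differ slightly from his - is pure expected-reward maximization over a Solomonoff-style mixture of environment programs:

```latex
a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl( r_t + \cdots + r_m \bigr)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Every term is about rewards, observations, and program lengths; nothing in it encodes honesty, preservation of other agents, or any other moral imperative.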

After all, what is a computer program but a series of imperative sentences?

A map from domain to range, a proof in propositional logic, or a series of lambda equations and reductions all come to mind...
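The counterexamples are easy to make concrete. Here is a sketch (mine, not from the thread) of a program that is a single declarative expression rather than a series of imperative sentences:

```python
# A program as one pure expression: no state is mutated and no commands
# are issued, just function application and reduction (lambda-calculus style).
from functools import reduce

factorial = lambda n: reduce(lambda acc, k: acc * k, range(1, n + 1), 1)

# Evaluating it is beta-reduction, not the execution of imperatives.
print(factorial(5))  # -> 120
```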

In fact, I claim that an ethical system that encompasses all human actions, and more generally, all actions of any set of rational beings (in particular, artificial intelligences) can be deduced from the Feynman axioms. In particular, note that destroying other rational beings would make impossible the honesty Feynman requires.

One man's modus ponens is another man's modus tollens. That the 'honesty' requires other entities is proof that this cannot be an ethical system which encompasses all rational beings.

Hence, they will be part of the community of intelligent beings deciding whether to resurrect us or not. Do not children try to see to their parents’ health and well-being? Do they not try and see their parent survive (if it doesn’t cost too much, and in the far future, it won’t)? They do, and they will, both in the future, and in the far future.

Any argument that rests on a series of rhetorical questions is untrustworthy. Specifically, sure, I can in 5 seconds come up with a reason they would not preserve us: there are X mind-states we can be in while still maintaining identity or continuity; there are Y (Y < X) that we would like or would value; with infinite computing power, we will exhaust all Y. At that point, by definition, we could choose to not be preserved. Hence, I have proven we will inevitably choose to die even if uploaded to Tipler's Singularity.

(Correct and true? Dunno. But let's say this shows Tipler is massively overreaching...)

What a terrible paper altogether. This was a peer-reviewed journal, right?

Comment author: quanticle 02 March 2012 10:41:23PM *  7 points [-]

The quote that stood out for me was the following:

The nineteenth century physicists also believed in the aether, as did Newton. There were many aether theories available, but only one was consistent with observation: H.A. Lorentz's theory, which simply asserted that the Maxwell equations were the equations for the aether. In 1904, Lorentz showed (Einstein et al., 1923) that this theory of the aether - equivalently the Maxwell equations - implied that absolute time could not exist, and he deduced the transformations between space and time that now bear his name. [...] That is, general relativity is already there in 19th century classical mechanics.

Now, all that's well and good, except for one, tiny, teensy little flaw: there is no such thing as aether. Michelson and Morley proved that quite conclusively in 1887. Tipler, in this case, appears to be basing his argument on a theory that was discredited over a century ago. Yes, some of the conclusions of aetheric theory are superficially similar to the conclusions of relativity. That, however, doesn't make the aetheric theory any less wrong.

Comment author: quanticle 02 March 2012 10:07:34PM 5 points [-]

Our reason for placing the Singularity within the lifetimes of practically everyone now living who is not already retired, is the fact that our supercomputers already have sufficient power to run a Singularity level program (Tipler, 2007). We lack not the hardware, but the software. Moore’s Law insures that today’s fastest supercomputer speed will be standard laptop computer speed in roughly twenty years (Tipler, 1994).

Really? I was unaware that Moore's law was an actual physical law. Our state of the art has already hit the absolute physical limit of transistor design - we have single-atom transistors in the lab. So, if you'll forgive me, I'll be taking the claim that "Moore's law ensures that today's fastest supercomputer speed will be the standard laptop computer speed in 20 years" with a bit of salt.

Now, perhaps we'll have some other technology that allows laptops twenty years hence to be as powerful as supercomputers today. But to just handwave that enormous engineering problem away by saying, "Moore's law will take care of it," is fuzzy thinking of the worst sort.
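For what it's worth, the arithmetic behind the twenty-year claim is easy to check. This is my sketch, with the doubling period made an explicit assumption rather than treated as a law:

```python
# Projected speedup under a naive Moore's-law extrapolation.
# The doubling period is an assumption; the "law" is an observed trend,
# not a physical guarantee, which is exactly the objection above.
def moores_law_factor(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):
    factor = moores_law_factor(20, period)
    print(f"doubling every {period} yr: ~{factor:,.0f}x over 20 years")
```

Even the optimistic case (roughly a 10,000x speedup) only holds if the trend survives the physical limits the comment points to.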

In response to comment by [deleted] on The lessons of a world without Hitler
Comment author: Stuart_Armstrong 16 January 2012 08:26:41PM *  -2 points [-]

Hitler went to war with France and GB with no realistic prospect of winning. That's the major irrationality; close second was his cruelty to the subject nationalities in the USSR that turned them back to Stalin. Churchill did nothing on this scale (perhaps staying in the war alone was irrational; but he did have an empire to back him up, and plausible hope the USA would join in). Stalin... internally did a lot of stupid things, and trusted Hitler, but didn't commit massive external errors, and was often very prudent.

But Hitler just started war after war until one of them went badly for him.

Comment author: quanticle 16 January 2012 09:15:15PM *  3 points [-]

No realistic prospect? I disagree. When Hitler invaded France in 1940, the potency of blitzkrieg had been demonstrated. The Germans knew that they could pull off a Schlieffen Plan end-run much more quickly than they could in 1914.

Of course the French and British thought differently, but I don't think there's any evidence that the German general staff thought that a conflict with France was a sure loss as of 1940. If you'd been talking about the Remilitarization of the Rhineland in 1936, I'd have agreed with you.

Comment author: quanticle 19 December 2011 05:41:10PM 2 points [-]

Since most atheists, agnostics, etc, consider the First Amendment pretty important, we can assume they 'believe in religion'.

That's a pretty large logical leap. The First Amendment protects the right to speech and the right to petition government in addition to freedom of religion. Even if I accept Moldbug's assertion that religious ideology should be treated no different from other ideology, I can still think that the First Amendment is important.

Comment author: quanticle 07 December 2011 05:29:52PM 2 points [-]

It's hard to be the first to join a revolution, I agree. But should we really be making it easier for ourselves to be the lone dissenting voice in the woods? After all, most of those dissenting voices are just crazy; they don't have access to a greater truth, but they think they do. Maybe the difficulty of starting a revolution is a good thing -- it forces you to be really, really convinced in your idea.

Comment author: [deleted] 21 November 2011 07:59:07PM *  5 points [-]

In general the article was interesting. However the failures described seem to be related to lack of perspicacity more than to irrationality. If upper management are fooled by the middle managers’ signalling, they are not perspicacious enough and probably aren’t fit to be doing their jobs – training in rationality is unlikely to change that.

In the vast majority of failed projects I’ve been called in to look at, the managers have not read one book on software engineering. They haven’t taken one class, read one article, or been to one workshop. At best, they’ve managed other failing software projects.

This is not irrationality per se, but stupidity and incompetence.

Yes, businesses are under pressure to gravitate toward bad engineering practices, but shouldn’t they be under equal market pressure to compete against companies that are using actually good software engineering practices? Why sure, in the long run. But as Keynes succinctly put it, “In the long run, we’ll all be dead.” Eventually is a long time.

An alternative perspective is that extensive government intervention substantially reduces the market pressure that large companies experience. Here is Moldbug on the subject of Dilbertization (the government intervention he implies is its propping up of the maturity transforming banking system):

The late Communist world was a world in which it was often your job to do strange, useless things badly, all day, for no good reason at all.

There is another world in which it is often your job to do strange, useless things badly, all day, for no good reason at all. This is the world of Dilbert. Many of us have experienced it.

My theory is that Dilbert and Brezhnev are the same thing. I call it Dilbert-Brezhnev syndrome, or DBS. While we are certainly not the Soviet Union, my theory is that America has contracted a rather serious case of DBS.

The Soviet Union was a world in which business bore no relation to profit. People did strange, useless things badly because, lacking the discipline of profit that enforces efficiency, they succumbed to ulterior motives. Their unprofitable enterprises, purportedly businesses, were in fact patronage structures.

America is a much more interesting case, because (aside from its endlessly burgeoning political system, including its grant-funded "nongovernmental" periphery), the industries in which we see Dilbert syndrome are private, profitable. Nothing in America today is Brezhnev bad, but it is getting there. [...]

The answer, I think, is our friend zombie finance. We do not have Gosplan, but we have Wall Street. America is Dilbertized to the extent that Wall Street is zombified.

Let's take the loans that created the housing bubble. These were zombie loans to a T. So, for example, Steve Sailer asks: where was capitalism? Why wasn't anyone betting against these loans, and driving them down to their true value?

The answer is that the free market bears very little relationship to the process that distributed these loans. [...]

My feeling is that American zombie finance is largely responsible for the appearance of DBS in the New World. Why is the country covered with hideous developments, strip-malls and chain stores? Because it has, or had, a financial system designed to finance these things. Many of us would prefer a Ritual to a Starbucks and a Joe's Diner to a Burger King, but the chains have an unbeatable advantage: it is much easier for them to get a loan. [...]

Starbucks is subject to the discipline of profit - but it is not as subject as it should be, because its sheer size gives it access to zombie money. Thus the generic can defeat the specific, blandness outcompetes character, and we drink charred cat hair rather than coffee.

Comment author: quanticle 21 November 2011 08:33:48PM 2 points [-]

That's true... for the banking sector. However, the author was talking about software projects in general. In my experience (and the author's experience appears to agree with mine), the sort of organizational irrationality peculiar to software isn't especially overrepresented in any particular sector. It's present in all sectors, from banking to video games. There's a deep intuition suggesting that adding more workers to the project will make progress occur more quickly. (Bad) middle managers play to that intuition and add workers even when the addition of more workers actually slows down the progress of the project.
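That failure mode (Brooks's law) can be sketched with a toy model; the numbers here are illustrative assumptions of mine, not measurements:

```python
# Toy Brooks's-law model: each worker adds output, but every pair of
# workers adds coordination overhead, so net progress eventually falls
# as the team grows.
def net_output(workers, per_worker=1.0, overhead_per_pair=0.02):
    pairs = workers * (workers - 1) / 2
    return workers * per_worker - overhead_per_pair * pairs

# Past the peak, adding people makes the project slower, not faster.
print(net_output(10), net_output(50), net_output(90))
```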

I haven't seen any evidence of extensive governmental intervention in, say, XBox Games, but management practices at EA appear to fit this stereotype to a tee.

[LINK] Signalling and irrationality in Software Development

9 quanticle 21 November 2011 04:24PM

Why Software Projects are terrible and how not to fix them (by Drew Crawford):

Unless you are having a meeting with the one person who is going to use the software that you’re writing, you’re not meeting with the real customer.  You’re meeting with a person who has to explain to someone who can explain to someone who can explain what you’re saying to the real customer.  It’s not enough to convince the person you’re sitting in the room with that Agile is a good idea.  He has to convince his boss.  That person has to convince his boss.  That person has to convince the sales team.  The sales team has to convince the customer.  If the customer is b2b, your contact at the customer organization has to convince his boss.  Who convinces his boss.  Who convinces the real customer.  Maybe.  Unless that sale is also b2b.  This is a very long game of telephone.  If the guy you’re talking to is thinking “This sounds like a really good idea but I’m concerned I can’t sell this upstairs,” you are dead in the water.  At any point in the chain, if somebody thinks that, you are dead in the water.  You can’t just say “It’s objectively better,” you have to show how he can turn around and sell the idea to someone else.

Put yourself in the middle manager’s shoes.  If the project goes bad, he has to “look busy”.  He has to put more developers on the project, call a meeting and yell at people, and other arbitrary bad ideas.  Not because he thinks those will solve the problem.  In fact, managers often do this in spite of the fact that they know it’s bad.   Because that’s what will convince upper management that they’re doing their best.

In other words, it's all about signaling, isn't it? Managers will take actions that actively harm the continued progress of the project if those actions make them look "decisive" and "in charge".  I've seen this on many projects I've been on, and it took me a while to realize that my managers weren't stupid or ignorant. It's just that the organization I was working in put a higher priority on process than on results. My managers, therefore, quite rationally did things that maximized their apparent value in the eyes of their bosses, even if it meant that the project (and, as a result, the entire organization) was hurt.

Crawford then goes on to detail why organizations with such maladaptive practices survive:

Yes, businesses are under pressure to gravitate toward bad engineering practices, but shouldn’t they be under equal market pressure to compete against companies that are using actually good software engineering practices?  Shouldn’t, at some point, bad companies simply implode under their own weight? Why sure, in the long run.  But as Keynes succinctly put it, “In the long run, we’ll all be dead.”  Eventually is a long time.  It’s months, years, or decades.  A project can be failing a long time before management is clued in.  And even longer before management’s management is clued in.  And it can be ages before it hits the user.

I think this is something that we as rationalists sometimes forget about. Irrationality has momentum. Humans have been thinking intuitively for thousands (hundreds of thousands, even) of years before we figured out how to think with rigorous rationality. Even if rationality had a massive advantage over intuitive thinking in everyday situations (it doesn't, as far as I can tell), it would take a very long time for rational thought to propagate through society.

So the next time you get frustrated at some bit of wanton irrationality, remind yourself, "Momentum," before you get frustrated.

 

EDIT: Fixed spelling as per RolfAndreassen's post.

How did you come to find LessWrong?

5 quanticle 21 November 2011 03:32PM

I was reflecting the other day about how I learned about LessWrong. As best as I can recall/retrace, I learned about LessWrong from gwern, who I met in the #wikipedia IRC channel via an essentially chance meeting. I'm wondering how typical my experience is. How did you come to LessWrong?

EDIT: Optional follow-up question: Do you think that we (the community) are doing enough to bring in new users to LessWrong? If not, what do you think could be done to increase awareness of LessWrong amongst potential rationalists?

Comment author: Eugine_Nier 03 January 2011 06:35:25AM 1 point [-]

California will implement austerity measures similar to the ones currently being implemented by European countries: 80%.

The bubble underlying the current Chinese boom will collapse: 35%.

Some European country will abandon the Euro: 20%.

Comment author: quanticle 04 January 2011 04:28:42PM *  0 points [-]

I'd personally put the probability of a country abandoning the Euro this year at <5%. I think the major European powers (e.g. Germany and France) are still committed enough to the monetary union to try to make things work out. However, if corrective action fails or is rejected by the voters of southern Europe, then I think we'll see a greater willingness to abandon the Euro by all parties.

EDIT: This raises the related question of, "What is the probability that Greece, Spain, Portugal and Ireland will agree to and implement sufficient austerity measures to prevent a breakup of the Euro?"
