Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to comment by Constant2 on Cached Thoughts
Comment author: Promethean 10 July 2012 07:45:33AM *  1 point [-]

Two problems.

First, each of us has a different mind that produces a different thought cache, and most of us probably won't be able to find much of a trunk build that we can agree on. To avoid conflicts, we'll have to transition from the current monolithic architecture to a Unix-like modular architecture. But that will take years, because we'll have to figure out who's running what modules, and which modules each entry in the thought cache comes from. (You can't count on lsmod to give complete or accurate results. I'd been running several unnamed modules for years before I found out they were a reimplementation of something called Singularitarianism.)

Second, how much data will we have to transfer (allowing for authentication, error correction, and Byzantine fault-tolerance), and are you sure anyone has enough input and output bandwidth?

In response to comment by Promethean on Cached Thoughts
Comment author: ec429 22 August 2012 04:49:19AM 3 points [-]

most of us probably won't be able to find much of a trunk build that we can agree on

I think you're wrong as a question of fact, but I love the way you've expressed yourself.

It's more like a non-monotonic DVCS; we may all have divergent head states, but almost every commit you have is replicated in millions of other people's thought caches.

Also, I don't think the system needs to be Byzantine fault-tolerant; indeed, we may do well to leave out authentication and error correction in exchange for a higher raw data rate, relying on Release Early, Release Often to quash bugs as soon as they arise.
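A toy model of the "non-monotonic DVCS" picture (my own illustration; all names are hypothetical): treat each mind as a set of cached "commits". Heads diverge, but almost every commit is replicated in other caches.

```python
# Each mind = a large shared body of cached thoughts plus a small private head.
common = {f"commit-{i}" for i in range(900)}                 # widely replicated cache entries
minds = [common | {f"private-{m}-{i}" for i in range(100)}   # each mind's divergent head state
         for m in range(5)]

# Any two minds have different head states, yet share most of their history.
a, b = minds[0], minds[1]
overlap = len(a & b) / len(a | b)
print(f"shared fraction: {overlap:.2f}")   # 900 / 1100, about 0.82
```

The point of the sketch is just that divergent heads are compatible with near-total replication of the underlying history.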

(Rationality as software development; it's an interesting model, but perhaps we shouldn't stretch the analogy too far)

Comment author: ec429 21 August 2012 08:09:22PM 1 point [-]

On the other hand, if you’re Dr. Evil and you’re in your moon base preparing to fire your giant laser at Washington, DC when you get a phone call from Austin “Omega” Powers

So, does this mean ata is going to write an Austin Powers: Superrational Man of Mysterious Answers fanfic?

Comment author: ec429 14 August 2012 10:37:26PM *  0 points [-]

How exactly are abstract, non-physical objects -- laws of nature, living in their "transcendent aerie" -- supposed to interact with physical stuff? What is the mechanism by which the constraint is applied? Could the laws of nature have been different, so that they forced electrons to attract one another?

I feel I should link to my post The Apparent Reality of Physics right now. To summarise: both the "descriptions" and "rules" views are wrong as they suppose there is something to be described or ruled. The (to me, obvious) dissolution is to state that a Universe is its rules.

In response to comment by ec429 on The Crackpot Offer
Comment author: pnrjulius 30 June 2012 03:36:32AM *  0 points [-]

I don't think you're just rationalizing. I think this is exactly what the philosophy of mathematics needs in fact.

If we really understand the foundations of mathematics, Gödel's theorems should seem to us, if not irrelevant, then perfectly reasonable---perhaps even trivially obvious (or at least trivially obvious in hindsight, which is of course not the same thing), the way that a lot of very well-understood things do.

In my mind I've gotten fairly close to this point, so maybe this will help: By being inside the system, you're always going to get "paradoxes" of self-reference that aren't really catastrophes.

For example, I cannot coherently and honestly assert this statement: "It is raining in Bangladesh but Patrick Julius does not believe that." The statement could in fact be true; it has been true many times in the past. But I can't assert it, because I am part of it, and part of what it says is that I don't believe it, and hence can't assert it.

Likewise, Gödel's theorems are a way of making number theory talk about itself and say things like "Number theory can't prove this statement"; well, of course it can't, because you made the statement about number theory proving things.
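A crude toy model of that last point (my own illustration, not from the comment): let "provable" mean "on a fixed whitelist of theorems", and let the Gödel-style sentence G assert its own absence from the whitelist. A sound theory, one that only proves true statements, can never include G: putting G on the whitelist would make G false, i.e. the theory would prove a falsehood.

```python
def provable(statement, whitelist):
    """Crude stand-in for formal provability: membership in a fixed set."""
    return statement in whitelist

theorems = {"1 + 1 = 2", "2 + 2 = 4"}   # everything the toy theory proves
G = "G is not provable"                  # G talks about its own provability

# Soundness keeps G off the whitelist, so G is unprovable -- and therefore true.
print(provable(G, theorems))   # False: exactly what G claims
```

The toy skips the real work of the theorems (arithmetizing self-reference), but it shows why the unprovability itself is unsurprising once the statement is about provability.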

Comment author: ec429 14 August 2012 06:39:14PM 0 points [-]

There is a further subtlety here. As I discussed in "Syntacticism", in Gödel's theorems number theory is in fact talking about "number theory", and we apply a metatheory to prove that "number theory is "number theory"", and think we've proved that number theory is "number theory". The answer I came to was to conclude that number theory isn't talking about anything (i.e. ascription of semantics to mathematics does not reflect any underlying reality); it's just a set of symbols and rules for manipulating same, and those symbols and rules together embody a Platonic object. Others may reach different conclusions.

In response to [link] Is Alu Life?
Comment author: [deleted] 08 April 2012 04:12:10PM 0 points [-]

Am I the only one who's bothered by the colour scheme of the article? (BTW, are there people who take the Sapir--Whorf hypothesis so seriously as to believe that speakers of languages with separate words for ‘navy blue’ and ‘sky blue’ would find it easier to read?)

In response to comment by [deleted] on [link] Is Alu Life?
Comment author: ec429 08 April 2012 08:28:58PM 0 points [-]

I don't believe it, but it sounds like it should be testable, and if it hasn't been tested I'd be somewhat interested in doing so. I believe there are standard methods of comparing legibility or readability of two versions of a text (although, IIRC, they tend to show no statistically significant difference between perfect typesetting and text that would make a typographer scream).

You're probably not the only one bothered by the colour scheme, though; historically, every colour scheme I've used on the various iterations of my website has bothered many people. The previous one was bright green on black :S

In response to comment by ec429 on [link] Is Alu Life?
Comment author: pedanterrific 08 April 2012 07:32:12PM 0 points [-]

So, if I were building a planet-destroying superlaser (for, um, mining I guess) I wouldn't see any particular difference between testing it on Kudzu World or the barren rock next door.

Comment author: ec429 08 April 2012 08:23:24PM 0 points [-]

That's interesting, because I would see a difference. Given the choice, I'd test it on the barren rock. However, I can't justify that, nor am I sure how much benefit I'd have to derive to be willing to blow up Eta Kudzunae.

In response to comment by ec429 on [link] Is Alu Life?
Comment author: pedanterrific 08 April 2012 08:18:55AM 1 point [-]

Thought experiment: imagine a planet with a xenobiology that only supports plant life - nothing sentient lives there or could do so - and there is (let us assume) no direct benefit to us to be derived from its existence. Would we think it acceptable to destroy that planet?

I think this scenario is a little difficult to visualize: an entire biosphere we can't derive a benefit from, even for sheer curiosity's sake? So, applying the LCPW: the planet has been invaded by a single species of xenokudzu, which has choked out all other life but is thriving merrily on its own (maybe it's an ecocidal bioweapon or something). Would it be acceptable to destroy that planet? I'd say yes. Agree / disagree / think my changes alter the question?

Comment author: ec429 08 April 2012 01:00:32PM *  0 points [-]

Agree, and I think your changes alter the question I was trying to ask, which is not whether destroying Xenokudzu Planet would be absolutely unacceptable (as a rule, most things aren't), but whether we'd need a sufficiently good reason.

which has choked out all other life

I think the LCPW for you here is to suppose that this planet is only capable of supporting this xenokudzu, and no other kind of life. (Maybe the xenokudzu is plasma helices, and the 'planet' is actually a sun, and suppose for the sake of argument that that environment can't support sentient life)

So, more generally, let the gain (to non-xenokudzu utility) from destroying Xeno Planet tend to zero. Is there a point at which you choose not to destroy, or will any positive gain to sentient life justify wiping out Xeno Planet?

Comment author: fubarobfusco 08 April 2012 01:42:53AM 1 point [-]

Source?

Comment author: ec429 08 April 2012 02:04:04AM 0 points [-]

Well, my source is Dr Bursill-Hall's History of Mathematics lectures at Cambridge; I presume his source is 'the literature'. Sorry I can't give you a better source than that.

In response to comment by ec429 on [link] Is Alu Life?
Comment author: Nisan 08 April 2012 12:50:48AM 9 points [-]

At the very least, you should reconsider the syllogism at the heart of your article:

  1. All life has ethical value.
  2. Transposons are life.
    Therefore, transposons have ethical value.

We can substitute in your tentative definition of life:

  1. All "self-replicating structures with a genotype which determines their phenotype and is susceptible to mutation and selection" have ethical value.
  2. Transposons are self-replicating structures with a genotype which determines their phenotype and is susceptible to mutation and selection.
    Therefore, transposons have ethical value.

Premise 2 is an empirical claim. Premise 1 is a moral claim that is strictly stronger than the conclusion, and you do not justify it at all.

If you have moral intuitions or moral arguments for the first premise, then perhaps you should write about those instead. And your arguments ought to make sense without using the word "life". If your argument is along the lines of "well, humans and chimpanzees have ethical value, and they're both self-replicating structures with genomes etc., so it only makes sense that transposons have ethical value too", that's not good enough. You'd have to say why being a self-replicating structure with a genome etc. is the reason why humans and chimpanzees have ethical value. If humans and chimpanzees have ethical value because of some other feature, then perhaps transposons don't share that feature and they don't have ethical value after all.
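To make the shape of Nisan's objection explicit: the inference itself is not in dispute, only premise 1. A minimal Lean sketch (my own formalization; all names hypothetical) shows that the proof term does nothing but invoke premise 1, so the entire weight of the conclusion rests on that unjustified moral claim:

```lean
-- The syllogism is trivially valid; all the work is done by p1.
example (Thing : Type) (SelfReplicating EthicalValue : Thing → Prop)
    (transposon : Thing)
    (p1 : ∀ x, SelfReplicating x → EthicalValue x)  -- premise 1 (the moral claim)
    (p2 : SelfReplicating transposon)               -- premise 2 (the empirical claim)
    : EthicalValue transposon :=
  p1 transposon p2
```

Validity is cheap here; soundness is exactly the question of whether p1 is true.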

In response to comment by Nisan on [link] Is Alu Life?
Comment author: ec429 08 April 2012 01:02:14AM *  4 points [-]

Hmm. I do understand that, but I still don't think it's relevant. I don't try to argue that Premise 1 is true (except in a throwaway parenthetical which I am considering retracting), rather I'm arguing that Premise 2 is true, and that consequently Premise 1 implies the conclusion ("transposons have ethical value") which in turn implies various things ranging from the disconcerting to the absurd. In fact I believed Premise 1 (albeit without great examination) until I learned about transposons, and now I doubt it (though I haven't rejected it so far; I'm cognitively marking it as "I'm confused about this"). That's why I felt there was something worth writing about: namely, that transposons expose the absurdity of an assumption that had previously been part of my moral theory, and by extension may perhaps be part of others'.

Edit: well, that's one reason I wrote the article. The other reason was to raise the questions in the hope of creating a discussion through which I might come to better understand the problem.

Further edit: actually, I'm not sure the first reason was my reason for writing the article; I think I was indeed (initially) arguing for Premise 1, and I have been trying to make excuses and pretend I'd never argued for it. Yet I still can't let go of Premise 1 completely. Thought experiment: imagine a planet with a xenobiology that only supports plant life - nothing sentient lives there or could do so - and there is (let us assume) no direct benefit to us to be derived from its existence. Would we think it acceptable to destroy that planet? I think not, yet the obvious "feature conferring ethical value on humans and chimps" would be sentience. I remain confused.

In response to [link] Is Alu Life?
Comment author: pedanterrific 07 April 2012 10:36:53PM *  2 points [-]

The potential moral implications of Alu being life have nothing to do with multiplying by the number of transposons, they have to do with realizing that what you value isn't "life".

What definition of "life" is satisfied by both a transposon and an AI?

Edit: Did you learn ethics from Orson Scott Card or something?

If we consider all life to have ethical value (and we must, if we wish to be raman and not varelse), and if we classify transposons and other mobile genetic elements as life, then it is a simple syllogism to conclude that transposons have ethical value.

Comment author: ec429 07 April 2012 10:43:42PM 0 points [-]

My ethics were influenced a nonzero amount by reading Orson Scott Card. More to the point, OSC provided terminology which I felt was both useful and likely to be understood by my audience.

I now think that my use of the word "must" in the above-quoted passage was a mistake.
