All of randallsquared's Comments + Replies

May is missing from Birth Month.

[anonymous]110

I think it's pretty astounding that nobody at Less Wrong was born in May. I'm not sure why Scott doesn't think that's a deviation from randomness.

I would also like to know for next year. I have four older siblings on my father's side, and two on my mother's, and only spent any home time with one (from my mother's side). So, I answered 6 for older, but depending on whether this was a socialization or uterine environment question, the best answer might have been either 1 or 2 for older.

Especially if the builders are concerned about unintended consequences, the final goal might be relatively narrow and easily achieved, yet result in the wiping out of the builder species.

Most goals include "I will not tolerate any challenges to my power" as a subgoal. Tolerating challenges to the power needed to execute one's goals reduces the likelihood of achieving them.

I have seen people querulously quibbling, "ah, but suppose I find everything a user posts bad and I downvote each of them, is that a bannable offense and if not how are you going to tell, eh?" But I have not yet seen anyone saying, Eugine was right to downvote everything that these people posted, regardless of what it was, and everyone else should do the same until they are driven away.

Ah, but it's not clear that those are different activities, or if they are, whether there's any way in the database or logs to tell the difference. So, when pe... (read more)

8Richard_Kennaway
In the present case, there was enough evidence to raise a reasonable suspicion, whereupon Kaj approached Eugine, who confirmed that he "was engaged in a "weeding" of users" (quoted from original post). Rules come from judgement, not judgement from rules. Any bad post is worth downvoting. If someone writes nothing but bad posts, and there have been a few examples, every one of their posts gets downvoted. Such people are rare and they never last long. When an obvious moron or crank pops up here, I have myself on occasion systematically read their entire comment history (it's never very long) and judged every comment. But I am always voting on the individual comment, never the person. I am certainly not going to downvote a meetup announcement because the poster is a Bad Person who must be spat on wherever they show their face, let alone write a bot to do the spitting for me. The transparency of how this case has been handled seems sufficient to me.

In fact, people experience this all the time whenever we dream about being someone else, and wake up confused about who we are for a few seconds or whatever. It's definitely important to me that the thread of consciousness of who I am survives, separately from my memories and preferences, since I've experienced being me without those, like everyone else, in dreams.

Russia is a poor counter-argument, given that the ruler of Russia was called Caesar.

2Salemicus
No, Russia is an excellent counter-argument. Why was the ruler of Russia called Caesar? Because some culturally Roman guy conquered them, as in JQuinton's narrative? No. Rather, because they converted to Christianity, and so they greatly respected the (Eastern) Roman Empire and saw it as part of their world, and so their rulers started calling themselves Caesar to invoke that heritage. In other words, they took to the Roman (Byzantine) cultural heritage because they became Christian, they did not become Christian because they had Roman or Byzantine heritage.

It's more that my definition of identity just is something like an internally-forward-flowing, indistinguishable-from-the-inside sequence of observer slices and the definition that other people are pushing just...isn't.

Hm. Does "internally-forward-flowing" mean that stateA is a (primary? major? efficient? not sure if there's a technical term here) cause of stateB, or does it mean only that internally, stateB remembers "being" stateA?

If the former, then I think you and I actually agree.

Moby Dick is not a single physical manuscript somewhere.

"Moby Dick" can refer either to a specific object, or to a set. Your argument is that people are like a set, and Error's argument is that they are like an object (or a process, possibly; that's my own view). Conflating sets and objects assumes the conclusion.

0PDH
I'm not conflating them, I'm distinguishing between them. It's because they're already conflated that we're having this problem. I'm explicitly saying that the substrate is not what's important here. But this works both ways: what is the non-question-begging argument that observer slices can only be regarded as older versions of previous slices in the case that the latter and the former are both running on meat-based substrates? As far as I can see, you have to just presuppose that view to say that an upload's observer slice doesn't count as a legitimate continuation. I don't want to get drawn into a game of burden-of-proof tennis because I don't think that we disagree on any relevant physical facts. It's more that my definition of identity just is something like an internally-forward-flowing, indistinguishable-from-the-inside sequence of observer slices and the definition that other people are pushing just...isn't. All I can say, really, is that I think that Error and Mark et al are demanding an overly strong moment-to-moment connection between observer slices for their conception of identity. My view is easier to reconcile with things like quantum physics, ageing, revived comatose patients, etc., and that is the sort of thing I appeal to by way of support.

People in the rationality community tend to believe that there's a lot of low-hanging fruit to be had in thinking rationally, and that the average person and the average society is missing out on this. This is difficult to reconcile with arguments for tradition and being cautious about rapid change, which is the heart of (old school) conservatism.

What's your evidence? I have some anecdotal evidence (based on waking from sleep, and on drinking alcohol) that seems to imply that consciousness and intelligence are quite strongly correlated, but perhaps you know of experiments in which they've been shown to vary separately?

Haha, no, sorry. I was referring to Child's Jack Reacher, who starts off with a strong moral code and seems to lose track of it around book 12.

Not every specific question need have contributed to fitness.

0Viliam_Bur
No, not every specific question, but this one did. I mean, guys even today try to impress girls by being "deep" and "philosophical".
3[anonymous]
Just as the ability to read never contributed to fitness until someone figured out how to do it with our already existing hardware.

You may, however, come to strongly dislike the protagonist later in the series.

2drethelin
Miles? He does some douchebaggy things but then he grows up. It's one of my favorite character arcs.

I think "numerically identical" is just a stupid way of saying "they're the same".

In English, at least, there appears to be no good way to differentiate between "this is the same thing" and "this is an exactly similar thing (except that there are at least two of them)". In programming, you can just test whether two objects have the same memory location, but the simplest way to indicate that in English about arbitrary objects is to point out that there's only one item. Hence the need for phrasing like "numerically identical".

Is there a better way?
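
A minimal sketch of that distinction in Python (my own illustration, not from the original comment; the Book class is hypothetical): equality tests whether two objects are exactly similar, while identity tests whether they are one and the same object.

```python
# Equality ("exactly similar") vs. identity ("one and the same object").

class Book:
    def __init__(self, title):
        self.title = title

    def __eq__(self, other):
        # "Exactly similar": same contents.
        return isinstance(other, Book) and self.title == other.title

a = Book("Moby Dick")
b = Book("Moby Dick")
c = a

print(a == b)  # True  - exactly similar (equal)
print(a is b)  # False - two distinct objects in memory
print(a is c)  # True  - the same object ("numerically identical")
```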

3.1 ounces of very lean meat

That's a very specific number. Why not just "about 3 ounces (85g)"?

We can imagine a world in which brains were highly efficient and people looked more like elephants, in which one could revolutionize physics every year or so but it takes a decade to push out a calf.

That's not even required, though. What we're looking for (blade-size-wise) is whether a million additional people produce enough innovation to support more than a million additional people, and even if innovators are one in a thousand, it's not clear which way that swings in general.

0gwern
Sure, it's just an example which does not seem to be impossible but where the blade of innovation is clearly bigger than the blade of population growth. But the basic empirical point remains the same: the world does not look like one where population growth drives innovation in a virtuous spiral or anything remotely close to that.*

*Except, per Miller's final reply, in the very wealthiest countries post-demographic-transition, where reproduction is sub-replacement and growth maybe even net negative (as Japan and South Korea are approaching); in these exceptional countries some more population growth may maximize innovation growth and increase rather than decrease per capita income.

subtle, feminine, discrete and firm

Probably you meant discreet, but if not, consider using "distinct" to avoid confusion.

If you prefer suffering to nonexistence, this ceases to be a problem. One could argue that this justifies raising animals for food (which would otherwise never have existed), but it's not clear to me what the sign of the change is.

...but "argh" is pronounced that way... http://www.youtube.com/watch?v=pOlKRMXvTiA :) Since the late 90s, at least.

...but people (around me, at least, in the DC area) do say "Er..." literally, sometimes. It appears to be pronounced that way when the speaker wants to emphasize the pause, as far as I can tell.

2amacfie
I hear "er", literally (rhotically), quite infrequently and I always assumed that people said it that way because of seeing "er" in written English and not knowing that it was intended to be pronounced "uh"; similarly, I've heard "arg" spoken by people who thought "argh" from written English was pronounced that way.

But it's actually true that solving the Hard Problem of Consciousness is necessary to fully explode the Chinese Room! Without having solved it, it's still possible that the Room isn't understanding anything, even if you don't regard this as a knock against the possibility of GAI. I think the Room does say something useful about Turing tests: that behavior suggests implementation, but doesn't necessarily constrain it. The Giant Lookup Table is another, similarly impractical, argument that makes the same point.

Understanding is either only inferred from be... (read more)
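
A minimal sketch of that behavior-versus-implementation point (an illustration added here under my own assumptions, not from the thread): two functions with identical observable behavior on a shared domain, one computed and one a literal lookup table. Nothing about the input/output behavior alone tells you which implementation produced it.

```python
# Identical observable behavior, different implementations.

def square_computed(n: int) -> int:
    # Computes the answer each time.
    return n * n

# A (small) lookup table: no computation at query time, just retrieval.
SQUARE_TABLE = {n: n * n for n in range(1000)}

def square_looked_up(n: int) -> int:
    return SQUARE_TABLE[n]

# From the outside, the two are indistinguishable on 0..999.
assert all(square_computed(n) == square_looked_up(n) for n in range(1000))
```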

1OrphanWilde
Exploding the Chinese Room leads to understanding that the Hard Problem of Consciousness is in fact a problem; its purpose was to demonstrate that computers can't implement consciousness, which it doesn't actually do. Hence my view that it's a useful idea for somebody considering AI to dissolve, but not necessarily a problem in and of itself.

no immortal horses, imagine that.

No ponies or friendship? Hard to imagine, indeed. :|

Not Michaelos, but in this sense, I would say that, yes, a billion years from now is magical gibberish for almost any decision you'd make today. I have the feeling you meant that the other way 'round, though.

In the context of

But when this is phrased as "the set of minds included in CEV is totally arbitrary, and hence, so will be the output," an essential truth is lost

I think it's clear that with

valuing others' not having abortions loses to their valuing choice

you have decided to exclude some (potential) minds from CEV. You could just as easily have decided to include them and said "valuing choice loses to others valuing their life".

But, to be clear, I don't think that even if you limit it to "existing, thinking human minds at the time of the calculation", you will get some sort of unambiguous result.

A very common desire is to be more prosperous than one's peers. It's not clear to me that there is some "real" goal that this serves (for an individual) -- it could be literally a primary goal. If that's the case, then we already have a problem: two people in a peer group cannot both get all they want if both want to have more than any other. I can't think of any satisfactory solution to this. Now, one might say, "well, if they'd grown up farther together this would be solvable", but I don't see any reason that should be true. Peop... (read more)

The point you quoted is my main objection to CEV as well.

You might object that a person might fundamentally value something that clashes with my values. But I think this is not likely to be found on Earth.

Right now there are large groups who have specific goals that fundamentally clash with some goals of those in other groups. The idea of "knowing more about [...] ethics" either presumes an objective ethics or merely points at you or where you wish you were.

-2see
Objective? Sure, without being universal. Human beings are physically/genetically/mentally similar within certain tolerances; this implies there is one system of ethics (within certain tolerances) that is best suited to all of us, which could be objectively determined by a thorough and competent enough analysis of humans. The edges of the bell curve on various factors might have certain variances. There might be a multi-modal distribution of fit (bimodal on men and women, for example), too. But, basically, one objective ethics for humans. This ethics would clearly be unsuited for cats, sharks, bees, or trees. It seems vanishingly unlikely that sapient minds from other evolutions would also be suited for such an ethics, either. So it's not universal, it's not a code God wrote into everything. It's just the best way to be a human... as humans exposed to it would in fact judge, because it's fitted to us better than any of our current fumbling attempts.
-1Ben Pace
The existence of moral disagreement is not an argument against CEV, unless all disagreeing parties know everything there is to know about their desires, and are perfect bayesians. Otherwise, people can be mistaken about what they really want, or what the facts prescribe (given their values). 'Objective ethics'? 'Merely points... at where you wish you were'? "Merely"!? Take your most innate desires. Not 'I like chocolate' or 'I ought to condemn murder', but the most basic levels (go to a neuroscientist to figure those out). Then take the facts of the world. If you had a sufficiently powerful computer, and you could input the values and plug in the facts, then the output would be what you wanted to do best. That doesn't mean whichever urge is strongest, but it takes into account the desires that make up your conscience, and the bit of you saying 'but that's not what's right'. If you could perform this calculation in your head, you'd get the feeling of 'Yes, that's what is right. What else could it possibly be? What else could possibly matter?' This isn't 'merely' where you wish you were. This is the 'right' place to be. This reply is more about the meta-ethics, but for interpersonal ethics, please see my response to peter_hurford's comment above.

Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of deciding exactly what objects qualify).

I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you say:

what evidence is there that there is any 'ought' above 'maxing out our utility functions'?

I know of no such evidence. We do act in pursuit of goals, and that's enough for a po... (read more)

0Raoul589
I think that you are right that we don't disagree on the 'basis of morality' issue. My claim is only that which you said above: there is no objective bedrock for morality, and there's no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
0Kawoomba
I agree with the rest of your comment, and, depending on how you define "goal", with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call an agent that is only heuristics-driven goal-oriented? (I have in mind simple commands along the lines of "go left when there is a light on the right"; think Braitenberg vehicles minus the evolutionary aspect.)

Just to be clear, I don't think you're disagreeing with me.

0Raoul589
We disagree if you intended to make the claim that 'our goals' are the bedrock on which we should base the notion of 'ought', since we can take the moral skepticism a step further, and ask: what evidence is there that there is any 'ought' above 'maxing out our utility functions'? A further point of clarification: It doesn't follow - by definition, as you say - that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that 'what is valuable is what we value' tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects. Note: I took the statement 'what is valuable is what we value' to be equivalent to 'things are valuable because we value them'. The statement has another possible meaning: 'we value things because they are valuable'. I think both are incorrect for the same reason.

I'm asking about how to efficiently signal actual pacifism.

-2[anonymous]
Yes?

I'm not asking about faking pacifism. I'm asking about how to efficiently signal actual pacifism. How else am I supposed to ask about that?

Replace "serious injury or death" with "causing serious injury or death".

0[anonymous]
No. It's absurd to act like "real" conscientious objectors don't do other things like care about the probability that they would be sent to jail or sent to military service. It's as if, in your model, conscientious objectors are never allowed to speak about self-interest. Which is preposterous.
3Gurkenglas
If God doesn't exist, loads of people are currently fooling themselves into thinking they know what He would want, and CronoDAS claims that's enough.

When you consider this, consider the difference between our current world (with all the consequences for those of IQ 85), and a world where 85 was the average, so that civilization and all its comforts never developed at all...

5prase
Even if it were true that average IQ 85 meant that civilisation never developed at all (an assumption I find dubious), being a chief in a neolithic tribal society still doesn't sound dramatically worse than being a village idiot in a civilised society. Also, saying that I would profit from a marginal decrease in average IQ at level 100 doesn't imply that I would profit from a similar decrease at any level. I am pretty sure I wouldn't want everybody else being dramatically different from me, thus there is some point below which I wouldn't like the average IQ to plunge. This point may lie quite above the level where civilisation of any kind becomes impossible.

When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about.

Taken literally, this suggests that you believe all actors really believe they are the character (at least, if they are acting exactly like the character). Since that seems unlikely, I'm not sure what you mean.

1aaronde
If an actor stays in character his entire life, making friends and holding down a job, in character - and if, whenever he seemed to zone out, you could interrupt him at any time to ask what he was thinking about, and he could give a detailed description of the day dream he was having, in character... Well then I'd say the character is a lot less fictional than the actor. But even if there is an actor - an entirely different person putting on a show - the character is still a real person. This is no different from saying that a person is still a person, even if they're a brain emulation running on a computer. In this case, the actor is the substrate on which the character is running.

people can see after 30 years that the idea [of molecular manufacturing] turned out sterile.

Did I miss the paper where it was shown not to be workable, or are you basing this only on the current lack of assemblers?

Raw processing power. In the computer analogy, intelligence is the combination of enough processing power with software that implements the intelligence. When people compare computers to brains, they usually seem to be ignoring the software side.

0DaFranker
This is true, but possibly not quite exactly the way you intended. "Most people" (AKA everyone I've talked to about this who is not a programmer and has no related IT experience) will automatically associate computing power with "power". Humans have intellectual "power", since their intellect allows them to build incredible tools, like computers. If we give computers more ((computing) power => "power" => ability to affect environment, reason and build useful tools), they will "obviously become more intelligent". It seems to me like a standard symbol problem, unfortunately much too common even among people who should know better.

Can you point out why the analogy is bad?

I've read over one hundred books I think were better. And I mean that literally; if I spent a day doing it, I could actually go through my bookshelves and write down a list of one hundred and one books I liked more.

I've read many, many books I liked more than many books which I would consider "better" in a general sense. From the context of the discussion, I'd think "were better" was the meaning you meant. Alternatively, maybe you don't experience such a discrepancy between what you like and what you believe is "good writing"?

1CronoDAS
A book can be well written and still be bad because of other flaws. Nathaniel Hawthorne's The Scarlet Letter was very well written in a technical sense, but the story itself was boring as hell and Hawthorne's skill couldn't save it.

Me, too, but about two years ago. Unfortunately, I've had a hard time liking wine, so I'm hoping that moderate amounts of scotch and/or rum have a similar effect.

There are (at least) two meanings of "why ought we be moral":

  • "Why should an entity without goals choose to follow goals", or, more generally, "Why should an entity without goals choose [anything]",
  • and, "Why should an entity with a top level goal of X discard this in favor of a top level goal of Y."

I can imagine answers to the second question (it could be that explicitly replacing X with Y results in achieving X better than if you don't; this is one driver of extremism in many areas), but it seems clear that the first question admits of no attack.

1Eugine_Nier
An entity without goals would not be reading Gauthier's book.

Unless J is much, much less intelligent than you, or you've spent a lot of time planning different scenarios, it seems like any one of J's answers might well require too much thought for a quick response. For example,

tld: Well, God was there, and now he's left that world behind. So it's a world without God - what changes, what would be different about the world if God weren't in it?

J: I can't imagine a world without God in it.

Lots of theists might answer this in a much more specific fashion. "Well, I suppose the world would cease to exist, woul... (read more)

Morality consists of courses of action to achieve a goal or goals, and the goal or goals themselves. Game theory, evolutionary biology, and other areas of study can help choose courses of action, and they can explain why we have the goals we have, but they can't explain why we "ought" to have a given goal or goals. If you believe that a god created everything except itself, but including morality, then said god presumably can ground morality simply by virtue of having created it.

0Jayson_Virissimo
Yeah, that is the dominant view, but Gauthier actually attempts to answer the question "why be moral?" (not only the question of "what is moral?") using game-theoretic concepts. In short, his answer is that being moral is rational. I don't remember whether or not he tries to answer the question "why be rational?"; I haven't read Morals by Agreement in years.

Also this year,

Nitpick: actually last year (March 2011, per http://www.ncbi.nlm.nih.gov/pubmed/21280961 ).

This is not (to paraphrase Eliezer) a thunderbolt of insight. [...]

This sentence seems exactly the same to me as saying, "This was obvious, but, [...]".

Sometimes, people assert obviousness as a self-deprecating maneuver or to preempt criticism, rather than because they believe that everyone would consider the statement in question obvious.

SG-1 usually had a very anti-theist message, as long as you group all gods together, but the writers went out of their way at least once to exempt the Christian God when the earthborn characters wondered if God might be a goa'uld: "Teal'c: I know of no Goa'uld capable of showing the necessary compassion or benevolence that I've read of in your bible."

However, the overall thrust of the show was pretty anti-deity, and the big bads of the last few seasons were very, very medieval-priestish.

0wedrifid
The Christian God being a Goa'uld would break the theme anyway. They tended to divide up pantheons by species.

I like Pandora enough that I pay for it. That said, there are some issues with it:

  • a given station seems to be limited to 20-30 songs, with a very occasional other song tossed in, so if you listen to it throughout a workday, you'll have heard the same song repeatedly. This can be ideal, however, for worktime music, where repetitive enjoyability is more important than novelty.
  • Pandora doesn't have some artists, especially (I think) those not completely representable with ASCII, like Alizée.
  • If you upvote everything you like, and downvote things you don'
... (read more)

[...] or thwarting "the single best invention of life," according to Steve Jobs.

Which was even more odd given that it immediately followed a worshipful Jobs documentary featuring Adam Savage and Jamie, which contained that very quote.
