Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild

-7 Will_Newsome 06 July 2014 09:34AM

 

"If you give George Lukács any taste at all, immediately become the Deathstar." — Old Klingon Proverb

 

There was no nice way to put it: Harry James Potter-Yudkowsky was half Potter, half Yudkowsky. Harry just didn’t fit in. It wasn't that he lacked humanity. It was just that no one else knew (P)Many_Worlds, (P)singularity, or (P)their_special_insight_into_the_true_beautiful_Bayesian_fractally_recursive_nature_of_reality. Other people were roles; and how shall an actor, an agent, relate to those who are merely what they are, merely their roles? Merely their roles, without pretext or irony? How shall the PC fuck with the NPCs? Harry James Potter-Yudkowsky oft asked himself this question, but his 11-year-old mind lacked the g to grasp the answer. For if you are to draw any moral from this tale, godforsaken readers, the moral you must draw is this: P!=NP.

 

One night Harry Potter-Yudkowsky was outside, pretending to be Keats, staring at the stars and the incomprehensibly vast distances between them, pondering his own infinite significance in the face of such an overwhelming sea of stupidity, when an owl dropped a letter directly on his head, winking slyly. “You’re a wizard,” said the letter, while the owl watched, increasingly gloatingly, “and we strongly suggest you attend our school, which goes by the name Hogwarts. 'Because we’re sexy and you know it.’”

 

Harry pondered this for five seconds. “Curse the stars!, literally curse them!, Abra Kadabra!, for I must admit what I always knew in my heart to be true,” lamented Harry. “This is fanfic.”

 

“Meh.”

 

And so, as they'd been furiously engaged in for months, the divers models of Harry Potter-Yudkowsky gathered dust. In layman’s terms...

 

Harry didn’t update at all.

 

Harry: 1

Author:  0

 

 

(To be fair, the author was drunk.)

 

Next chapter: "Analyzing the Fuck out of an Owl"

...

Criticism appreciated.

Sugar and motivation

9 NancyLebovitz 13 June 2014 03:46PM

I reread "Are Wireheads Happy?"[1], which is about the difference between liking and wanting.

I've noticed that too many simple carbs (not terribly many: three twenty-ounce Cokes on consecutive days will do it) severely damage my desire to do much of anything, and the effects take approximately two low-simple-carb days to clear out.

This is obviously something physiological, even though it looks like an emotional problem. Failing to remember to avoid simple carbs (a dessert once a week might not be a problem) might be an emotional issue. Too much sugar used to lead to an internal voice saying "I don't care". This time, it didn't, but if I thought of something I might do, there was a clear feeling of "no reward there" and a sense that it was too much effort. I was capable of enjoying things, but not of anticipating that I would like them.

I'm wondering if anything is known about simple carbs, motivation, and/or serotonin/dopamine.

[1] Recommended here.

AI is Software is AI

-42 AndyWood 05 June 2014 06:15PM

Turing's Test is from 1950. We don't judge dogs only by how human they are. Judging software by a human ideal is a kind of species bias.

Software is the new System. It errs. Some errors are jokes (witness funny auto-correct). Driver-less cars don't crash like we do. Maybe a few will.

These processes are our partners now (Siri). Whether or not a singleton ever evolves rapidly, software is evolving continuously, now.

 

Crocker's Rules

The Useful Definition of "I"

4 ete 28 May 2014 11:44AM

aka The Fuzzy Pattern Theory of Identity

Background reading: Timeless Identity, The Anthropic Trilemma

Identity is not based on continuity of physical material.

Identity is not based on causal links to previous/future selves.

Identity is not usefully defined as a single point in thingspace. An "I" which only exists for an instant (i.e. zero continuity of identity) does not even remotely correspond to what we're trying to express by the word "I" in general use, and refers instead to a single snapshot. Consider the choice between putting yourself in stasis for eternity and living normally; a definition of "I" which prefers self-preservation by literally preserving a snapshot of one instant is massively unintuitive and uninformative compared to a definition which leads us to preserve "I" by allowing it to keep living even if that includes change.

Identity is not the current isolated frame.

 

So if none of those are what "I"/Identity is based on, what is?

Some configurations of matter I would consider to be definitely me, and some definitely not me. Between the two extremes there are plenty of border cases wherever you try to draw a line. As an exercise: five minutes in the past ete, 30 years in the future ete, alternate branch ete brought up by different parents, ete's identical twin, ete with different genetics/body but a mindstate near-identical to current ete, sibling raised in same environment with many shared memories, random human, monkey, mouse, bacteria, rock. With sufficiently advanced technology, it would be possible to change me between those configurations one atom at a time. Without appeals to physical or causal continuity, there's no way to cleanly draw a hard binary line without violating what we mean by "I" in some important way or allowing, at some point, a change vastly below perceptible levels to flip a configuration from "me" to "not-me" all at once.

Or, put another way, identity is not binary, it is fuzzy like everything else in human conceptspace.

It's interesting to note that examining common language use shows that in some sense this is widely known. When someone has been changed by an experience, or is acting in a way that doesn't fit your model of them, it's common to say something along the lines of "he's like a different person" or "she's not acting like herself"; these phrases, along with the qualifier!person nomenclature that is becoming a bit more frequent, all hint at different versions of a person having only partially the same identity.

 

Why do we have a sense of identity?

For something as universal as the feeling of having an identity, there's likely to be some evolutionary purpose. Luckily, it's fairly straightforward to see why it would increase fitness. The brain learns via reward and punishment, connecting behaviours to the help or harm they cause; this works well for many things, but struggles with long-term goals, since the reward or punishment for a decision arrives long after the choice and so is only weakly connected to it and weakly reinforced. Creatures which can easily identify future/past continuations using an "I" concept of their own presence have a ready-built way to handle delayed gratification situations. Evolution only needs to connect "doing this means the future 'I' can expect a reward" to some reward now in order to encourage the creature to think longer term, rather than separately connecting each possible long-term benefit to each behaviour. Kaj_Sotala's attempt to dissolve subjective expectation and personal identity contains another approach to understanding why we have a sense of identity, as well as many other interesting thoughts.

 

So what is it?

If you took yourself from right now and changed your entire body into a hippopotamus, or uploaded yourself into a computer, but still retained full memories/consciousness/responses to situations, you would likely consider yourself a more central example of the fuzzy "I" concept than if you made the physically relatively small change of removing your personality and memories. General physical structure is not a core feature of "I", though it does play a relatively minor part.

Your "I"/identity is a concept (in the conceptspace/thingspace sense), centred on current you, with configurations of matter being considered more central to the "I" cluster the more similar they are to current you in the ways which current you values.

To give some concrete examples: Most people consider their memories to be very important to them, so any configuration without a similar set of memories is going to be distant. Many people consider some political/social/family group/belief system to be extremely important to them, so an alternate version of themselves in a different group would be considered moderately distant. An Olympic athlete or model may put an unusually large amount of importance on their body, so changes to it would move a configuration away from their idea of self quicker than for most.

This fits very nicely with intuition about changing core beliefs or things you care about (e.g. athlete becomes disabled, large change in personal circumstances) making you in at least some sense a different person, and as far as I can tell does not fall apart/prove useless in similar ways to alternative definitions.
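
To make "more central to the 'I' cluster the more similar in the ways current-you values" a bit more concrete, here is a minimal sketch in Python. The feature names, weights, and scoring function (including the hypothetical identity_centrality helper) are purely illustrative assumptions of mine; the post doesn't commit to any particular representation, so treat this as one toy formalisation of a weighted-similarity cluster rather than as the theory itself.

    # Toy sketch of "identity as a fuzzy, weighted similarity to current-you".
    # Feature names, weights, and the 1/(1+d) falloff are illustrative assumptions only.

    def identity_centrality(current, candidate, weights):
        """Return a score in (0, 1]: 1.0 means 'exactly current-you',
        values near 0 mean 'far outside the "I" cluster'."""
        distance = sum(
            weights[feature] * abs(current[feature] - candidate[feature])
            for feature in weights
        )
        return 1.0 / (1.0 + distance)  # fuzzy membership, never a hard yes/no cutoff

    # Dimensions current-you might care about, with how much each is valued.
    weights = {"memories": 5.0, "personality": 4.0, "values": 4.0, "body": 0.5}

    current_me  = {"memories": 0.0,  "personality": 0.0,  "values": 0.0, "body": 0.0}
    me_tomorrow = {"memories": 0.05, "personality": 0.01, "values": 0.0, "body": 0.01}
    uploaded_me = {"memories": 0.0,  "personality": 0.0,  "values": 0.0, "body": 1.0}
    amnesiac_me = {"memories": 1.0,  "personality": 0.6,  "values": 0.3, "body": 0.0}

    for name, candidate in [("me tomorrow", me_tomorrow),
                            ("uploaded me", uploaded_me),
                            ("amnesiac me", amnesiac_me)]:
        print(name, round(identity_centrality(current_me, candidate, weights), 3))

Under these made-up weights the upload (a large but lightly weighted bodily change) stays much closer to the centre of the cluster than the amnesiac (a small physical change to heavily weighted features), which matches the hippopotamus/upload intuition above.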

 

What consequences does this theory have for common issues with identity?

  • Moment-to-moment identity is almost entirely, but not perfectly, retained.
  • You will wake up as yourself after a night's sleep in a meaningful sense, but not quite as central an example of current-you's "I" as you would be after a few seconds.
  • The teleporter to Mars does not kill you in the most important sense (unless somehow your location on Earth is a particularly core part of your identity).
  • Any high-fidelity clone can be usefully considered to be you, however it originated, until it diverges significantly.
  • Cryonics or plastination do present a chance for bringing you back (conditional on information being preserved to reasonable fidelity), especially if you consider your mind rather than your body as core to your identity (so would not consider being an upload a huge change).
  • Suggest more in comments!

Why does this matter?

Flawed assumptions and confusion about identity seem to underlie several notable difficulties in decision theory, anthropic issues, and less directly problems understanding what morality is, as I hope to explore in future posts.


Thanks to Skeptityke for reading through this and giving useful suggestions, as well as for writing this, which meant there was a lot less background I needed to explain.

What is the most anti-altruistic way to spend a million dollars?

-4 Punoxysm 24 March 2014 09:50PM

Edit: The purpose of this question is not to make the world worse, but to see whether we actually have concrete ideas of what would make it worse, and my guess is that most of us don't, not in a really concrete way. From the downvotes I'm wondering if everyone else is thinking in way darker directions than I am. If so, please share.

There is a lot of discussion here about effective altruism. Organizations like GiveWell evaluate donations using criteria like quality-adjusted life-years saved per dollar. People distinguish warm-and-fuzzy giving from the most effective use of dollars from various utilitarian perspectives.

But I want to ask a different question: What would effective anti-altruism be?

To make it more concrete:

I am an eccentric multimillionaire, proposing a contest to all of you, who will for the purposes of this exercise play greedy and callous, yet honest and efficient, contest entrants.

Whoever can propose the most negative possible use for my money, in the sense that it causes the greatest amount of global misery (feel free to argue for your own interpretation of the details of what this means), will receive $1 million to carry out his or her proposal and $1 million to keep for him or herself to do with as desired.

A few rules:

1) Everything must be 100% legal in whatever jurisdiction you propose. Edit: People had trouble with the old phrasing, so I'll add that it should not only be legal in the letter of the law, but also in some reasonable interpretation of the spirit of the law.

1a) In fact, I encourage you to think of things that aren't merely legal but that would also be legal under whatever your favorite hypothetical laws are. Maybe that means non-coercive, non-violent, or something else in that vein.

2) This money may be used as seed funding for a non-profit or for-profit anti-altruistic venture, but I will take into account both the risk and the marginal impact of only the first million dollars.

3) Risk and plausibility are factors, just as they would be in any investment for effective altruism.

4) If you're going to propose that you keep and embezzle the first million dollars, you should have an extremely good justification for why such a mundane plan would match my standards for anti-altruism.

 

I hope this pushes you all to think of truly anti-altruistic means of spending this money. I think you may find that effective anti-altruism is a good deal harder than you'd believe.

Skepticism about Probability

-8 Carinthium 27 January 2014 09:49AM

I've raised arguments for philosophical scepticism before, which have mostly been argued against in a Popper-esque manner: that even if we don't know anything with certainty, we can have legitimate knowledge of probabilities.

The problem with this, however, is how you answer a sceptic about the notion of probability having any correlation with reality. Probability depends upon the axioms of probability: how are said axioms to be justified? It can't be by definition, or it has no correlation to reality.
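
For reference, the axioms in question are presumably Kolmogorov's, which in their usual formulation read (a standard statement, added here only to make explicit what the sceptic is being asked to accept):

    % Kolmogorov's axioms for a probability measure P over a sample space \Omega
    \begin{align*}
      &\text{1. Non-negativity:}       && P(A) \ge 0 \quad \text{for every event } A \\
      &\text{2. Unit measure:}         && P(\Omega) = 1 \\
      &\text{3. Countable additivity:} && P\Bigl(\bigcup_{i} A_i\Bigr) = \sum_{i} P(A_i) \quad \text{for pairwise disjoint } A_i
    \end{align*}

The sceptic's question is then why a function satisfying these formal constraints should say anything about the actual world, rather than remaining a piece of pure mathematics.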

The first AI probably won't be very smart

-2 jpaulson 16 January 2014 01:37AM

Claim: The first human-level AIs are not likely to undergo an intelligence explosion.

1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world simulating one second of brain activity in 40 minutes (i.e. this "AI" would think about 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.

2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.
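
To illustrate the contrast being drawn between compound interest and diminishing returns, here is a toy numerical sketch in Python. The growth rules and numbers are invented purely for illustration (they are not the post's, and they are not claims about real AI systems); they just show how a fixed multiplicative gain explodes while a gain that shrinks as the low-hanging fruit is used up flattens out.

    # Toy comparison: compound-interest growth vs. self-improvement with
    # diminishing returns. All numbers are illustrative assumptions only.

    def compound_growth(capability, rate=0.10, steps=50):
        # Each step multiplies capability by a fixed factor, like compound interest.
        for _ in range(steps):
            capability *= 1 + rate
        return capability

    def diminishing_returns(capability, steps=50):
        # Each new improvement is smaller, because the next idea is harder to find
        # once the accumulated capability has already picked the low-hanging fruit.
        for _ in range(steps):
            capability += 1.0 / capability
        return capability

    print(compound_growth(1.0))       # ~117.4 after 50 steps: exponential blow-up
    print(diminishing_returns(1.0))   # ~10.2 after 50 steps: growth keeps slowing

Whether real self-improvement looks more like the first curve or the second is exactly the open question the post is pointing at.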

Consciousness affecting the world

-3 DavidPlumpton 06 December 2013 07:37PM

In "Zombies! Zombies?", Eliezer mentions that one aspect of consciousness is that it can causally affect the real world, e.g. cause you to say "I feel conscious right now", or result in me typing out these words.

Even if a generally accepted mechanism of consciousness has not been found yet, are there any tentative explanations for this "can change the world" property? Googling around, I was unable to find anything (although Zombies are certainly popular).

I had an idea of how this might work, but just wanted to see if it was worth the effort of writing.

Personal examples of semantic stopsigns

44 Alexei 06 December 2013 02:12AM

I think most of us are familiar with the common semantic stopsigns like "God", "just because", and "it's a tradition." However, I've recently been noticing more interesting ones that I haven't really seen discussed on LW. (Or it's also likely that I missed those discussions.)

The first one is "humans are stupid." I notice this one very often, in particular in LW and other rationalist communities. The obvious problem here is that humans are not that stupid. Often what might seem like sheer stupidity was caused by a rather reasonable chain of actions and events. And even if a person or a group of people is being stupid, it's very interesting to chase down the cause. That's how you end up discovering biases from scratch or finding a great opportunity.

The second semantic stopsign is "should." Hat tip to Michael Vassar for bringing this one up. If you and I have a discussion about how I eat too much chocolate, and I say, "You are right, I should eat less chocolate," the conversation will basically end there. But 99 times out of 100, nothing will actually come of it. I try to taboo the word "should" from my vocabulary, so instead I will say something like, "You are right, I will not purchase any chocolate this month." This is a concrete, actionable statement.

What other semantic stopsigns have you noticed in yourself and others?

 

On Walmart, And Who Bears Responsibility For the Poor

13 ChrisHallquist 27 November 2013 05:08AM

Note: Originally posted in Discussion, edited to take comments there into account.


Yes, politics, boo hiss. In my defense, the topic of this post cuts across usual tribal affiliations (I write it as a liberal criticizing other liberals), and has a couple strong tie-ins with main LessWrong topics:

  • It's a tidy example of a failure to apply consequentialist / effective altruist-type reasoning. And while it's probably true that the people I'm critiquing aren't consequentialists by any means, it's a case where failing to look at the consequences leads people to say some particularly silly things.
  • I think there's a good chance this is a political issue that will become a lot more important as more and more jobs are replaced by automation. (If the previous sentence sounds obviously stupid to you, the best I can do without writing an entire post on that is vaguely gesturing at gwern on neo-luddism, though I don't agree with all of it.)

The issue is this: recently, I've seen a meme going around to the effect that companies like Walmart that have a large number of employees on government benefits are the "real welfare queens" or somesuch, with the implied message that all companies have a moral obligation to pay their employees enough that they don't need government benefits. (I mention Walmart because it's the most frequently mentioned villain in this meme, but others, like McDonald's, get mentioned.)

My initial awareness of this meme came from it being all over my Facebook feed, but when I went to Google to track down examples, I found it coming out of the mouths of some fairly prominent congresscritters. For example, Alan Grayson:

In state after state, the largest group of Medicaid recipients is Walmart employees. I'm sure that the same thing is true of food stamp recipients. Each Walmart "associate" costs the taxpayers an average of more than $1,000 in public assistance.

Or Bernie Sanders:

The Walmart family... here's an amazing story. The Walmart family is the wealthiest family in this country, worth about $100 billion, owning more wealth than the bottom 40 percent of the American people, and yet here's the incredible fact.

Because their wages and benefits are so low, they are the major welfare recipients in America, because many, many of their workers depend on Medicaid, depend on food stamps, depend on government subsidies for housing. So, if the minimum wage went up for Walmart, it would be a real cut in their profits, but it would be a real savings, by the way, for taxpayers, who would not have to subsidize Walmart employees because of their low wages.

Now here's why this is weird: consider Grayson's claim that each Walmart employee costs the taxpayers on average $1,000. In what sense is that true? If Walmart fired those employees, it wouldn't save the taxpayers money: if anything, it would increase the strain on public services. Conversely, it's unlikely that cutting benefits would force Walmart to pay higher wages: if anything, it would make people more desperate and willing to work for low wages. (Cf. this excellent critique of the anti-Walmart meme.)

Or consider Sanders' claim that it would be better to raise the minimum wage and spend less on government benefits. He emphasizes that Walmart could take a hit in profits to pay its employees more. It's unclear to what degree that's true (see again the previous link), and unclear if there's a practical way for the government to force Walmart to do that, but ignoring those issues, it's worth pointing out that you could also just raise taxes on rich people generally to increase benefits for low-wage workers. The idea seems to be that, morally, Walmart employees should be primarily Walmart's responsibility, and not so much the responsibility of (the more well-off segment of) the population in general.

But the idea that employing someone gives you a general responsibility for their welfare (beyond, say, not tricking them into working for less pay or under worse conditions than you initially promised) is also very odd. It suggests that if you want to be virtuous, you should avoid hiring people, so as to keep your hands clean and avoid the moral contagion that comes with employing low-wage workers. Yet such a policy doesn't actually help the people who might want jobs from you. This is not to deny that, plausibly, wealthy owners of Walmart stock have a moral responsibility to the poor. What's implausible is that non-Walmart stock owners have significantly less responsibility to the poor.

This meme also worries me because I lean towards thinking that the minimum wage isn't a terrible policy, but that we'd be better off replacing it with a guaranteed basic income (or an otherwise more lavish welfare state). And guaranteed basic income could be a really important policy to have as more and more jobs are replaced by automation (again, see gwern if that seems crazy to you). I worry that this anti-Walmart meme could lead to an odd left-wing resistance to GBI/a more lavish welfare state, since the policy would be branded as a subsidy to Walmart.
