All of A4FB53AC's Comments + Replies

You could in principle very easily ignore the dice and eat the chocolate regardless. You need to take it upon yourself to follow through with the scheme and forfeit the chocolate 3 times out of 4. If you start with the understanding that chocolate would be a possibility 4 times out of 4 under a more permissive scheme, then you are effectively punishing yourself 3/4 of the time, which I expect would work as negative reinforcement for the task, or for the reward scheme in general. It would also require a fair amount of willpower, which some people won't have.

2Viliam
This makes sense and feels correct to me.
0Tem42
I have observed that when gamblers know the odds, they don't gamble less. But my sample size is low.

The only factor under your control may be to realize that the only factor under your control is to obtain and use better methods and processes to think, gather information, act in the real world, generate feedback and adjust yourself.

Illustratively, no matter how innately intelligent a native English speaker might be, if he has never had any experience with Japanese, he won't be able to read and understand kanji. Is that a failure of intelligence, or a failure of knowledge and method? If you've never had any experience in any science, and don't know the specia... (read more)

This is consistent with my experience with European life-extension movements. Generally speaking, we just don't have a clear idea of where we should be going. We don't even always agree on what research or project is relevant. So we have a collection of people sharing a vaguely defined goal of life-extension, all pushing for their pet projects and hypotheses. No one is really willing to abandon what they came up with, because no clear evidence-based project under which they could assemble exists (or is perceptible) (this therefore of course includes... (read more)

Hm. This was eye-opening enough that I felt like commenting for the first time in a year. I've known for a while that some people are too despairing to want to live on, but this puts it in a new perspective.

Most importantly, it helps explain the huge discrepancy between how instrumentally important staying alive and able is for anyone who has any goal at all (barring some fringe cases), and how little most people do to plan and organize themselves in order to avoid aging and dying, even as it is reasonably expected to be unavoidable with our current means... (read more)

0[anonymous]
I have the impression that anti-aging and anti-death sentiment on LW is a general extension of the kind of culture in America where 50- and 60-year-olds still exercise, don't smoke, and drink little, because they still expect enough happiness rolling in to make it worth it. At least this is the culture I generally glean from e.g. The New York Times, which always seems to feature some fad diet and exercise and seems to at least pay lip service to health. These are the kind of people who would NOT find a joke like "A real man's six-course dinner is one pizza and five beers" funny, which also suggests that this culture has drifted quite far from blue-collar values; I think it is the white-collarization of American society that ultimately created it.

I, by contrast, was strongly influenced by the culture of the less developed parts of Europe, where that joke would be funny and mores are still at blue-collar levels: it is OK to drink and smoke yourself to death at 60, because by then your kids can make a living, so you no longer owe much duty to them, and you lived for discharging your duties anyway, not for fun. Or the fun was precisely in things like getting drunk with friends.

What's missing from this view is of course the third option, precisely the option that seems most prevalent on LW: living neither for discharging duties nor for "partying", but for pursuing personally selected goals. I think the idea of personally selected goals requires a culture or attitude that is individualistic, and even egalitarian, where individuals are expected to be autonomous enough to find goals and empowered enough to have a chance at them. In other words, a political culture where people are more "association members" and less "subjects". I think it also requires an economic environment with enough discretionary income that worrying about bills is no big deal, and people can expect to make a living from interesting jobs, not simply take anything that pays the bills. I mentioned collars, because all this cultural background diff

Interesting opinion. I rarely browse open threads, mainly because I find them a mess, and it takes longer to find whether there's anything that would interest me in there. Discussion posts have their own page with neatly ordered titles; you get an idea at a glance and can, on a first pass, filter through around 20 topics in a couple of seconds.

Please do note the delicious irony here:

I don't see much good in associating rationality with extreme caution.

I don't think that teaching people to expect worse case scenarios increases rational thinking.

Which in essence looks suspiciously like cautiously assuming a bad-case scenario in which this story won't help the rationality cause, or even a worst-case scenario in which it will do more harm than good.

If you want to go forth and create a story about rationality, then do it. Humans are complex creatures, not everyone will react the same way to y... (read more)

2Ritalin
That's me all right. Heck, now that the examples of Hellcity, Worm and Pact have been brought up, I feel like such a work would be redundant.

I think this misses the point of the OP, which wasn't that IQ or intelligence can be accurately guessed in a casual conversation, but rather that intelligence can be guessed more accurately than other important traits such as "conscientiousness, benevolence, and loyalty", for which we don't have tools nearly as good as those we have for measuring IQ. The consequence being that, since we can't assess these traits as methodically, people can fake them more easily, and this has negative social consequences.

4Vaniver
On a second read, I agree with you; I don't think I paid much attention to the third sentence, because the first two both rubbed me the wrong way. I have known people who turned out to be all hat and no cattle, intelligence-wise, and see that as a general phenomenon, and I think verbal ability can be very distinct from mathematical/technical ability; there's significant anecdotal and statistical evidence for that. We have good measures of conscientiousness, but are either benevolence or loyalty single factors? We have moderately good tests for benevolence or loyalty toward a single entity, and it's not clear to me that it's possible to do better without mindreading.

Especially to mess with one of those people intolerant of our beliefs in the supernatural, who always go on about how this or that can easily be dismissed if only you were rational. How ironic would it be, then, to get one of them to believe in a haunted house because it was the rational thing to do given the "evidence"?

It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.

Still, I wonder: what could I do to improve my probability of being resurrected, if worst comes to worst and I can't manage to stay alive to protect and ensure the posterity of my own current self, given that I am not one of those better minds (according to which values, though)?

2jacob_cannell
I've pondered this some, and it seems that the best strategy in distant historical eras was just to be famous, and more specifically to write an autobiography. Having successful ancestors also seems to grow in importance as we get into the modern era. For us today we have cryonics of course, and being successful/famous/wealthy is obviously viable, but blogging is probably to be recommended as well.
4Kaj_Sotala
I realize that this probably won't be very useful advice for you, but I'd recommend working on letting go of the sense of having a lasting self in the first place. Not that I'd fully alieve in that yet either, but the closer that I've gotten to always alieving it, the less I've felt like I have reason to worry about (not) living forever. Me possibly dying in forty years is no big deal if I don't even think I'm the same person tomorrow, or five minutes from now.

I know I prefer to exist now. I'd also like to survive for a very long time, indefinitely. I'm also not even sure the person I'll be 10 or 20 years from now will still be significantly "me". I'm not sure the closest projection of my self on a system incapable of suffering at all would still be me. Sure I'd prefer not to suffer, but over that, there's a certain amount of suffering I'm ready to endure if I have to in order to stay alive.

Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. Bu... (read more)

5Kaj_Sotala
From the point of view of those who'll actually create the minds, it's not a choice between somebody who exists already and a new mind. It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design. One might also invoke Big Universe considerations to say that even the "new" kind of mind has already existed in some corner of the universe (maybe as a Boltzmann brain), so they'll regardless be choosing between two kinds of minds that have existed once. Which just goes to show that the whole "this mind has existed once, so it should be given priority over one that hasn't" argument doesn't make a lot of sense.

Yes. See also David Pearce's notion of beings who've replaced pain and pleasure with gradients of pleasure: instead of having suffering as a feedback mechanism, their feedback mechanism is a lack of pleasure.

I think you're making too many separate points (how to resurrect past people using all the information you can, simulation argument, some religious undertone) and the text is pretty long, many will not read it to the end. Also even if someone agrees with some part of it, it's likely they'll disagree with another (which often results in downvoting the whole post in my experience). I think you'd be better off rewriting this as several different posts.

0jacob_cannell
Good points. Agreed. I'm going to tighten it into one or more smaller, tighter, and hopefully more interesting discussion-worthy bits.

First off, I'd like to say that I have met Christians who were similarly very open to rationality and to applying it to the premises of their religion, especially the ethics. In fact, one of these was the only person who directly recognized me as an immortalist a few sentences into our first discussion, when no one else around me even knew what that was. I find that admirable, and fascinating.

I also think it likely that human beings as they are now need some sort of comfort, reassurance, that their universe is not that universe of cold mathematics.

So I'm not ... (read more)

0scav
I actually like the idea of the universe of cold mathematics. I would find the idea of a non-mathematical universe sort of disappointing and hopeless.

I think a few people are assuming odd things about what I currently believe, and that's probably to be expected after a post like that. For me now, my "faith" isn't "epistemic belief in the existence of a particular God", but "provisional trust in the hypothesis of an admittedly poorly expressed ideal". This is no different from provisional trust in any other hypothesis, except inasmuch as I don't have a nice clean experiment to falsify it. I'm just living my life and seeing how it goes. It's not impossible that I will find that it goes badly enough to make me abandon some of the heuristics I currently adopt.

comes with nifty bonuses like 'increases the IQ of females more than males'.

Why is that a bonus?

Athrelon160

Because in the eyes of the majority of western elites, poor women have higher status than poor men.

gwern230

Because that makes iodine an intervention that's easier to market to feminists and anyone with feminist leanings, and increases in female intelligence may have positive effects in particularly benighted and distasteful countries like Afghanistan or Pakistan.

A4FB53AC130

Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome? I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this.

My immediate reaction to this was &qu... (read more)

4Viliam_Bur
Seems to me that Holden's opinion is something like: "If you can't make the AI reliably friendly, just make it passive, so it will listen to humans instead of transforming the universe according to its own utility function. Making a passive AI is safe, but making an almost-friendly active AI is dangerous. SI is good at explaining why almost-friendly active AI is dangerous, so why don't they take the next logical step?"

But from SI's point of view, this is not a solution. First, it is difficult, maybe even impossible, to make something passive and also generally intelligent and capable of recursive self-improvement. It might destroy the universe as a side effect of trying to do what it perceives as our command. Second, the more technology progresses, the relatively easier it will be to build an active AI. Even if we build a few passive AIs, that does not prevent some other individual or group from building an active AI and using it to destroy the world. Having a blueprint for a passive AI will probably make building an active AI easier.

(Note: I am not sure I am representing Holden's or SI's views correctly, but this is how it makes most sense to me.)

The mind I've probably gained the most by exploring is Eliezer's, both because so much of his thinking is available online, and because out of many useful habits and qualities I didn't have, he seemed to have those qualities to the greatest extent. I'm not referring to the explicit points he's made in his writing (though I've gained by those as well), but the overall way he thinks and feels about the world.

Well, as Eliezer said

I have striven for a long time now to convey, pass on, share a piece of the strange thing I touched, which seems to me so prec

... (read more)
1FrankAdamek
Interesting, I hadn't remembered him saying that.

Actually, not against it. I was thinking that current moderation techniques on LessWrong are inadequate or insufficient. I don't think the reddit karma system has been optimized much; we just imported it. I'm sure we can adapt it and do better.

At least part of my point should have been that moderation should provide richer information, for instance by allowing graded scores on a scale from -10 to 10 and showing the average score rather than the sum of all votes, or by giving some clue as to how controversial a post is. That'd not be a silver bullet, but it'd ... (read more)
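A minimal sketch of what such a graded scheme could compute, assuming hypothetical scores on the -10 to 10 scale mentioned above (the function name and output format are illustrative, not part of any actual site):

```python
from statistics import mean, pstdev

def summarize_votes(scores):
    """Summarize graded votes given on a -10..10 scale.

    Returns the average score (instead of a raw sum) and a simple
    controversy measure: the population standard deviation, which is
    high when voters disagree strongly and 0 when they all agree.
    Also reports the number of voters, which a raw sum hides.
    """
    if not scores:
        return {"average": 0.0, "controversy": 0.0, "voters": 0}
    return {
        "average": mean(scores),
        "controversy": pstdev(scores),
        "voters": len(scores),
    }

# A polarizing comment and a universally shrugged-at comment both
# average 0, but the controversy measure tells them apart.
print(summarize_votes([10, -10, 10, -10]))  # controversy 10.0
print(summarize_votes([0, 0, 0, 0]))        # controversy 0.0
```

The point of the sketch is just that average, spread, and voter count are three independent signals, whereas a summed karma score collapses all of them into one number.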

4NancyLebovitz
Karma graphs would give a lot of information: whether a person's average karma is trending up or down, and whether their average karma is the result of a lot of similar votes or of +/- swings.
A4FB53AC-10

Not more so than "vote up".

In this case I don't think the two are significantly different. Neither conveys a lot of information, both are very noisy, and a lot of people seem to already mean "more like this" when they "vote up" anyway.

6khafra
I don't think it was clear from the context that you were arguing against the practice of community moderation in general. I also don't think you supported your case anywhere near well enough to justify your verbal vehemence. Was this a test/demonstration of Wei Dai's point about intolerance of overconfident newcomers with different ideas?

True, except you don't know how many people didn't vote (i.e. we don't keep track of that: a comment at 0 could have been read and voted to "0" by 0, 1, 10 or a hundred people, and 0 is the default state anyway). (We similarly can't know whether a comment is controversial, that is, how many upvotes and downvotes went into the aggregated score.)

2Bugmaster
The system does keep track of how everyone voted, though; it needs to do that in order to render the thumbs-up/down buttons as green or gray. wedrifid is right, though: using suitable compression, you might be able to get away with less than two bits (in aggregate).
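The "less than two bits in aggregate" point can be checked with Shannon entropy (a sketch of the information-theoretic claim only, not of how any real site stores votes; the state counts are made up):

```python
from math import log2

def bits_per_vote(counts):
    """Shannon entropy, in bits, of a vote-state distribution.

    counts maps each state ("up", "down", "none") to how often it
    occurs. An optimal entropy coder approaches this many bits per
    vote on average, even though three states naively need two bits.
    """
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total)
                for c in counts.values() if c)

# Three equally likely states need log2(3) ~ 1.585 bits: under two.
print(bits_per_vote({"up": 1, "down": 1, "none": 1}))

# In practice most readers don't vote at all, so the skewed
# distribution drives the average well under one bit per vote.
print(bits_per_vote({"up": 5, "down": 2, "none": 93}))
```

So two bits per vote is the naive fixed-width cost; the aggregate cost depends on how skewed the distribution of vote states actually is.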
A4FB53AC-20

You should call it black and white, because that's what it is: black-and-white thinking.

Just think about it: it uses nothing more than one bit of non-normalized information, compressing the opinions of people who apply wildly variable judgement criteria, drawn from variable populations (different people care about and vote on different topics).

Then you're going to tell me it "works nonetheless", that it self-corrects because several (how many do you really need to obtain such a self-correction effect?) people are aggregating their opinions and that people u... (read more)

3David_Gerard
More so than "vote up"? You've made a statement here that looks like it should be supported by evidence. On what sites have you seen this happen when going from "vote up" to "more of this"?
2Bugmaster
Don't you technically need at least two bits? There are three states: "downvoted", "upvoted", and "not voted at all".
7NancyLebovitz
I've noticed that humor gets a lot of upvotes compared to good but non-funny comments. However, humor hasn't taken over, probably because being funny can take some thought. I don't think karma conveys a lot of information at this point, though heavily upvoted articles tend to be good, and I've given up on reading downvoted articles, with a possible exception for those that get a significant number of comments.

Is the number of bits necessary to discriminate one functional human brain among all permutations of matter of the same volume greater or smaller than the number of bits necessary to discriminate a version of yourself among all permutations of functional human brains? My intuition is that once you've specified the former, there isn't much left, comparatively, to specify the latter.

Corollary: cryonics doesn't need to preserve a lot of information, if any; you can patch it up with, among other things, info about what a generic human brain is, or better wh... (read more)

suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum dice to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible permutations of the grid (presumably a lot)

How is that a Friendly AI?

'alive' relative to a specific environment

It's always relative to a certain environment. Human beings and most animals can't survive outside of the current biosphere. In that respect we're no more independent of particular conditions than viruses are: we both depend on other living organisms in order to survive.

Maybe redefine life along a continuum of how unlikely and complex the necessary environmental conditions that sustain it are?

Some autotrophic cells might sit at one currently known extreme, while higher animals would be at the other end.

0Douglas_Reay
Yes, sorry, that's another thing I missed off the write-up. Rather than "life" being a binary "you're either a living organism or you are not", it might be better looked at as a scale. One possible measure is the range of environments in which you are a viable searcher; another is the percentage you control of the parts of the environment that are relevant to your replication.

The alternative viewpoint suggested was, rather than asking how alive a single organism is, to ask what organism+(other organisms or parts of the environment) combination should be considered to have the necessary attributes to count (collectively) as alive.
A4FB53AC370

A faith which cannot survive collision with the truth is not worth many regrets.

Arthur C. Clarke

5NancyLebovitz
That's very nice to say, but people are apt to find giving up some faiths very emotionally wrenching and socially costly (even if the faith isn't high status, a believer is likely to have a lot of relationships with people who are also believers). Now what?

The trouble is, the most problematic kinds of faith can survive it just fine.

Yeah, being considered a part of an AI. I might hate to be, say, its "hair". Just thinking about its next metaphorical "fashion-induced haircut and coloring" gives me the chills.

Just because something is a part of something else doesn't mean it'll be treated in ways that it finds acceptable, let alone pleasant.

The idea may be interesting for human-like minds and ems derived from humans - and even then still dangerous. I don't see how that could apply in any marginally useful way to minds in general.

For what it's worth I had already observed this effect. I am less likely to carry on with some plan if I talk about it to other people. Now I tend to just do what I have to, and only talk about it once it's done.

Part of the problem is that I hate feeling pressured into doing something. Social commitment will, if anything, simply make me want to run away from what I just implicitly promised I'd do. Perhaps because I can never be sure whether I can achieve something: if I fail silently and nobody knows, it's OK; less so if I told people about it. It feels better... (read more)

I feel like I can relate to that. It's not that I never rationalize, but I always know when I'm doing it. Sometimes it may be pretty faint, but I'll still be aware of it. Whether I allow myself to proceed with justifying a false belief depends on the context: sometimes it just feels uncomfortable enough that I admit to being wrong, sometimes it is efficient to mislead people, and so on.