The only factor under your control may be to realize that the only factor under your control is to obtain and use better methods and processes to think, gather information, act in the real world, generate feedback and adjust yourself.
Illustratively, no matter how innately intelligent a native English speaker might be, if he has never had any exposure to Japanese, he won't be able to read and understand kanji. Is that a failure of intelligence, or a failure of knowledge and method? If you've never had any experience in any science, and don't know the specia...
This is consistent with my experience with European life-extension movements. Generally speaking, we just don't have a clear idea of where we should be going. We don't even always agree on what research or project is relevant. So we have a collection of people sharing a vaguely defined goal of life extension, all pushing for their pet projects and hypotheses. No one is really willing to abandon what they came up with, because no clear evidence-based project under which they could assemble exists (or is perceptible) (this therefore of course includes...
Hm. This was eye-opening enough that I felt like commenting for the first time in a year. I've known for a while about people being too despairing to desire living on, but this puts it in a new perspective.
Most importantly, it helps explain the huge discrepancy between how instrumentally important staying alive and able is for anyone who has any goal at all (barring some fringe cases), and how little most people do to plan and organize themselves in order to avoid aging and dying, even though both are reasonably expected to be unavoidable with our current means...
Interesting opinion. I rarely browse open threads, mainly because I find them a mess, and it takes longer to find whether there's anything in there that would interest me. Discussion posts have their own page with neatly ordered titles: you get an idea at a glance, and on a first pass can sort through around 20 topics in a couple of seconds.
Please do note the delicious irony here:
I don't see much good in associating rationality with extreme caution.
I don't think that teaching people to expect worst-case scenarios increases rational thinking.
Which in essence looks suspiciously like cautiously assuming a bad-case scenario in which this story won't help the rationality cause, or even a worst-case scenario in which it will do more harm than good.
If you want to go forth and create a story about rationality, then do it. Humans are complex creatures, not everyone will react the same way to y...
I think this misses the point of the OP, which wasn't that IQ or intelligence can be accurately guessed in a casual conversation, but rather that intelligence can be guessed more accurately than other important parameters such as "conscientiousness, benevolence, and loyalty", for which we don't have tools nearly as good as those we have for measuring IQ. The consequence being that, since we can't assess these as methodically, people can fake them more easily, and this has negative social consequences.
Especially to mess with one of those people intolerant of our beliefs in the supernatural, who always have to go on about how this or that can easily be dismissed if only you were rational. How ironic would it be, then, to get one of them to believe in a haunted house because it was the rational thing to do given the "evidence"?
It's the choice between two kinds of new minds, one modeled after a mind that has existed once, and one modeled after a better design.
Still, I wonder: what could I do to raise my probability of being resurrected if worst comes to worst and I can't manage to stay alive to protect and ensure the posterity of my own current self, if I am not one of those better minds (according to which values, though)?
I know I prefer to exist now. I'd also like to survive for a very long time, indefinitely. I'm also not even sure the person I'll be 10 or 20 years from now will still be significantly "me". I'm not sure the closest projection of myself onto a system incapable of suffering at all would still be me. Sure, I'd prefer not to suffer, but beyond that, there's a certain amount of suffering I'm ready to endure if I have to in order to stay alive.
Then on the other side of this question you could consider creating new sentiences who couldn't suffer at all. Bu...
I think you're making too many separate points (how to resurrect past people using all the information you can, simulation argument, some religious undertone) and the text is pretty long, many will not read it to the end. Also even if someone agrees with some part of it, it's likely they'll disagree with another (which often results in downvoting the whole post in my experience). I think you'd be better off rewriting this as several different posts.
First off, I'd like to say that I have met Christians who were similarly very open to rationality and to applying it to the premises of their religion, especially the ethics. In fact, one of them was the only person who ever directly recognized me as an immortalist a few sentences into our first discussion, when no one else around me even knew what that was. I find that admirable, and fascinating.
I also think it likely that human beings as they are now need some sort of comfort, reassurance, that their universe is not that universe of cold mathematics.
So I'm not ...
comes with nifty bonuses like 'increases the IQ of females more than males'.
Why is that a bonus?
Because in the eyes of the majority of western elites, poor women have higher status than poor men.
Because it makes iodine supplementation an intervention that is easier to market to feminists and anyone with feminist leanings, and increases in female intelligence may have positive effects on particularly benighted and distasteful countries like Afghanistan or Pakistan.
Suppose that SI now activates its AGI, unleashing it to reshape the world as it sees fit. What will be the outcome? I believe that the probability of an unfavorable outcome - by which I mean an outcome essentially equivalent to what a UFAI would bring about - exceeds 90% in such a scenario. I believe the goal of designing a "Friendly" utility function is likely to be beyond the abilities even of the best team of humans willing to design such a function. I do not have a tight argument for why I believe this.
My immediate reaction to this was &qu...
The mind I've probably gained the most by exploring is Eliezer's, both because so much of his thinking is available online, and because out of many useful habits and qualities I didn't have, he seemed to have those qualities to the greatest extent. I'm not referring to the explicit points he's made in his writing (though I've gained by those as well), but the overall way he thinks and feels about the world.
Well, as Eliezer said
...
Actually, not against. I was thinking that current moderation techniques on lesswrong are inadequate/insufficient. I don't think the reddit karma system's been optimized much. We just imported it. I'm sure we can adapt it and do better.
At least part of my point should have been that moderation should provide richer information: for instance, by allowing graded scores on a scale from -10 to 10, showing the average score rather than the sum of all votes, and giving some clue as to how controversial a post is. That wouldn't be a silver bullet, but it'd ...
Not more so than "vote up".
In this case I don't think the two are significantly different. Neither conveys much information, both are very noisy, and a lot of people already seem to mean "more like this" when they vote up anyway.
True, except you don't know how many people didn't vote (i.e. we don't keep track of that: a comment at 0 could have been read and left at "0" by 0, 1, 10, or a hundred people, and 0 is the default state anyway). (We similarly can't know whether a comment is controversial, that is, how many upvotes and downvotes went into the aggregated score.)
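A minimal sketch of the richer scheme suggested above, storing individual graded votes so that average, turnout, and controversy are all recoverable. This is purely illustrative (class and method names are hypothetical, and this is not how the site actually stores votes):

```python
from statistics import mean, pstdev


class Comment:
    """Stores individual graded votes (-10..10) instead of a single sum."""

    def __init__(self):
        self.votes = []

    def add_vote(self, score):
        if not -10 <= score <= 10:
            raise ValueError("score must be between -10 and 10")
        self.votes.append(score)

    def average(self):
        # Average opinion, not the raw sum.
        return mean(self.votes) if self.votes else 0.0

    def controversy(self):
        # Spread of opinions: high when voters strongly disagree.
        return pstdev(self.votes) if len(self.votes) > 1 else 0.0

    def turnout(self):
        # How many people actually voted (invisible in a sum-only system).
        return len(self.votes)
```

With votes of 10, -10, 8, and -9, a sum-only system would display -1, indistinguishable from one person's mild downvote; here `average()` gives -0.25, `turnout()` gives 4, and `controversy()` comes out near 9.3, exposing the disagreement.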
You should call it black and white, because that's what it is: black-and-white thinking.
Just think about it: nothing more than one bit of non-normalized information, compressing the opinions of people who use wildly variable judgement criteria, drawn from variable populations (different people care about and vote on different topics).
Then you're going to tell me it "works nonetheless", that it self-corrects because several people (how many do you really need to obtain such a self-correction effect?) are aggregating their opinions and that people u...
Is the number of bits necessary to discriminate one functional human brain among all permutations of matter of the same volume greater or smaller than the number of bits necessary to discriminate a version of yourself among all permutations of functional human brains? My intuition is that once you've defined the former, there isn't much left needed, comparatively, to define the latter.
Corollary: cryonics doesn't need to preserve a lot of information, if any; you can patch it up with, among other things, information about what a generic human brain is, or better, wh...
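The intuition can be put as a rough decomposition, treating "bits to discriminate" as log-counts over finite configuration sets (a sketch, not a rigorous argument; let $N_{\text{total}}$ be the number of matter permutations of the given volume and $N_{\text{brains}}$ the number of those that are functional human brains):

```latex
\log_2 N_{\text{total}}
  \;=\; \underbrace{\log_2 \frac{N_{\text{total}}}{N_{\text{brains}}}}_{\text{``a functional brain, among matter''}}
  \;+\; \underbrace{\log_2 N_{\text{brains}}}_{\text{``your brain, among brains''}}
```

The total bit cost of pinning down your exact brain splits into these two terms, and the intuition above amounts to the claim that the second term is much smaller than the first.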
suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum dice to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible permutations of the grid (presumably a lot)
How is that a Friendly AI?
'alive' relative to a specific environment
It's always relative to a certain environment. Human beings and most animals can't survive outside of the current biosphere. In that respect we're no less dependent on certain peculiar conditions than viruses are. We both depend on other living organisms in order to survive.
Maybe redefine life along a continuum of how unlikely and complex the environmental conditions necessary to sustain it are?
Some autotrophic cells might sit at one end of the currently known range, while higher animals would be at the other.
A faith which cannot survive collision with the truth is not worth many regrets.
Arthur C. Clarke
The trouble is, the most problematic kinds of faith can survive it just fine.
Yeah, being considered a part of an AI. I might hate to be, say, its "hair". Just thinking about its next metaphorical fashion-induced haircut and coloring gives me the chills.
Just because something is a part of something else doesn't mean it'll be treated in ways that it finds acceptable, let alone pleasant.
The idea may be interesting for human-like minds and ems derived from humans - and even then still dangerous. I don't see how that could apply in any marginally useful way to minds in general.
For what it's worth I had already observed this effect. I am less likely to carry on with some plan if I talk about it to other people. Now I tend to just do what I have to, and only talk about it once it's done.
Part of the problem is that I hate feeling pressured into doing something. Social commitment will, if anything, simply make me want to run away from what I just implicitly promised I'd do. Perhaps because I can never be sure whether I can achieve something: if I fail silently and nobody knows, it's OK. Less so if I told people about it. It feels better...
I feel like I can relate to that. It's not like I never rationalize, but I always know when I do it. Sometimes it may be pretty faint, but I'll still be aware of it. Whether I allow myself to proceed with justifying a false belief depends on the context. Sometimes admitting to being wrong just feels too uncomfortable, sometimes it is efficient to mislead people, and so on.
You could in principle very easily ignore the dice and eat the chocolate regardless. You need to take it upon yourself to follow through with the scheme and forfeit the chocolate 3 times out of 4. If you start from the understanding that chocolate would be available 4 times out of 4 under a more permissive scheme, then you are effectively punishing yourself 3/4 of the time, which I expect would work as negative reinforcement for the task, or for the reward scheme in general. It would also require enough willpower, which some people won't have.
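The scheme above can be sketched as a probabilistic reward (a minimal illustration; the function names and the 1-in-4 parameter are taken from the discussion, everything else is hypothetical):

```python
import random


def reward_allowed(p=0.25, rng=random):
    """Roll the metaphorical die: the reward is granted with probability p."""
    return rng.random() < p


def simulate(trials=100_000, p=0.25, seed=0):
    """Estimate how often the reward is actually granted under the scheme."""
    rng = random.Random(seed)
    granted = sum(reward_allowed(p, rng) for _ in range(trials))
    return granted / trials
```

Running `simulate()` gives a fraction close to 0.25: on three rolls out of four, following the scheme means forfeiting a chocolate you know you could have eaten, which is the self-punishment effect described above.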