All of Raiden's Comments + Replies

Raiden140

Robin, or anyone who agrees with Robin:

What evidence can you imagine would convince you that AGI would go FOOM?

jprwg190

While I find Robin's model more convincing than Eliezer's, I'm still pretty uncertain.

That said, two pieces of evidence that would push me somewhat strongly towards the Yudkowskian view:

  • A fairly confident scientific consensus that the human brain is actually simple and homogeneous after all. This could perhaps be the full blank-slate version of Predictive Processing as Scott Alexander discussed recently, or something along similar lines.

  • Long-run data showing AI systems gradually increasing in capability without any increase in complexity. Th…

What evidence would convince you that AGI won't go FOOM?

9whpearson
I'm currently unsure of the speed of takeoff. Things that would convince me it was fast:

  1. Research that showed that the ability to paradigm-shift is a general skill, and not just mainly being in the right place at the right time (this is probably hard to get).

  2. Research that showed that the variation in human ability at economically important tasks is mainly due to differences in learning from trial-and-error situations, and less to do with tapping into the general human culture built up over time.

  3. Research that showed that computers are significantly more information-efficient than humans at finding patterns in research. I am unsure of the amount needed here, though.

  4. Research that showed that the speed of human thought is a significant bottleneck in important research. That is, it takes 90% of the time.

I'm trying to think of more here.
Raiden00

I don't think it being unfalsifiable is a problem. I think this is more of a definition than a derivation. Morality is a fuzzy concept that we have intuitions about, and we like to formalize these sorts of things into definitions. This can't be disproven any more than the definition of a triangle can be disproven.

What needs to be done instead is show the definition to be incoherent or that it doesn't match our intuition.

Raiden20

Can you explain why that's a misconception? Or at least point me to a source that explains it?

I've started working with neural networks lately and I don't know too much yet, but the idea that they recreate the generative process behind a system, at least implicitly, seems almost obvious. If I train a neural network on a simple linear function, the weights on the network will probably change to reflect the coefficients of that function. Does this not generalize?
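For the linear case, this is easy to check directly. A minimal sketch in plain NumPy (the target function y = 3x + 2, the learning rate, and all variable names here are illustrative assumptions, not from the comment):

```python
# Fit a one-neuron "network" y = w*x + b to data generated from y = 3x + 2
# and watch the learned weights approach the true coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 2.0                      # the generative process: y = 3x + 2

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y                     # residuals of mean-squared-error loss
    w -= lr * 2 * np.mean(err * x)     # gradient step for the weight
    b -= lr * 2 * np.mean(err)         # gradient step for the bias

print(w, b)                            # ~3.0 and ~2.0, matching the coefficients
```

Manfred's point below is that this only works when the model family and the generative process coincide; a cat/dog classifier has no such luck.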

2Manfred
Well, consider a neural net for distinguishing dogs from cats. This neural network might develop features that look like "dog-like eyes" and "cat-like eyes," which are pattern-matched across the image. Images with more activation on the first feature are claimed to be dogs and images with more activation on the second feature are claimed to be cats, along with input from many other features. This is fairly typical-sounding. Now imagine how bonkers a neural net would have to be in order to reproduce the generative process behind the images! Leaving aside simulations of the early universe, our neural network should still have a solid understanding of the biology of dogs and cats, the different grooming and adornment practices, macroscopic physics and physiology that leads to poses, and the preferences of people taking and storing photographs.
Raiden90

It fits with the idea of the universe having an orderly underlying structure. The simulation hypothesis is just one way that can be true. Physics being true is another, simpler explanation.

Raiden70

Neural networks may very well turn out to be the easiest way to create a general intelligence, but whether they're the easiest way to create a friendly general intelligence is another question altogether.

turchin150

They may be used to create the complex but boring parts of a real AI, like image recognition. DeepMind's system is nowhere near a pure NN; it combines several architectures. So NNs are like Tool AIs inside a larger AI system: they do a lot of work, but at a low level.

Raiden20

Many civilizations may fear AI, but maybe there's a super-complicated but persuasive proof of friendliness that convinces most AI researchers, but has a well-hidden flaw. That's probably a similar thing to what you're saying about unpredictable physics though, and the universe might look the same to us in either case.

Raiden30

Not necessarily all instances. Just enough instances to allow our observations to not be incredibly unlikely. I wouldn't be too surprised if out of a sample of 100 000 AIs none of them managed to produce successful vNP before crashing. In addition to the previous points the vNP would have to leave the solar system fast enough to avoid the AI's "crash radius" of destruction.

Regarding your second point, if it turns out that most organic races can't produce a stable AI, then I doubt an insane AI would be able to make a sane intelligence. Even if it…

1turchin
In my second point I meant the original people who created the AI. Not all of them will be killed during its creation and during the AI's halt. Many will survive, and will be rather strong posthumans from our point of view. Just one instance of them is enough to start an intelligence wave. Another option is that the AI may create nanobots capable of self-replicating in space, but not of star travel. They would nevertheless jump randomly from one comet to another and in about 1 billion years would colonise the whole Galaxy. We could search for such relics in space. They may be rather benign from a risk point of view, just like mechanical plants.
Raiden30

That's a good point. Possible solutions:

  1. AI just don't create them in the first place. Most utility functions don't need non-evolving von Neumann probes, and instead the AI itself leads the expansion.

  2. AI crash before creating von Neumann probes. There are lots of destructive technologies an AI could get to before being able to build such probes. An unstable AI that isn't in the attractor zone of self-correcting fooms would probably become more and more unstable with each modification, meaning that the more powerful it becomes the more likely it is to des…

3turchin
Any real solution of the Fermi paradox must work in ALL instances. If we have 100 000 AIs in the past light cone, it seems implausible that all of them would fall into the same trap before creating vNP. Most of them will have a stable form of intelligence, like local "humans", which will be able to navigate starships even after the AI fails. So it will be like old-school star navigation without AI. We would return to a world where strong AI is impossible and space is colonised by humanoid colonists. Nice plot, but where are they? Another solution to the FP is that most new AIs fall to a super-AI predator which sends virus-like messages via some kind of space radio. The message is complex enough that only an AI could find and read it.
Raiden10

My statement itself isn't something I believe with certainty, but adding that qualifier to everything I say would be a pointless hassle, especially for things that I believe with a near-enough certainty that my mind feels it is certain. The part with the "ALL" is itself a part of the statement I believe with near certainty, not a qualifier of the statement I believe. Sorry I didn't make that clearer.

0James_Miller
OK, and appropriate when writing on LW. But I wonder if part of the reason most people don't think of "beliefs being probabilities on a continuum" is that even statistically literate people don't usually bother qualifying statements that if taken literally would mean they held some belief with probability 1.
Raiden230

The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.

2James_Miller
Doesn't the word "ALL" make your statement self-contradictory?
1MrMind
This, a million times! How many biases are based on this alone? It's discomforting...
Raiden00

Suppose I'm destructively uploaded. Let's assume also that my consciousness is destroyed, a new consciousness is created for the upload, and there is no continuity. The upload of me will continue to think what I would've thought, feel what I would've felt, choose what I would've chosen, and generally optimize the world in the way I would've. The only thing it would lack is my "original consciousness", which doesn't seem to have any observable effect in the world. Saying that there's no conscious continuity doesn't seem meaningful. The only actual…

Raiden10

I expect that most people are biased when it comes to judging how attractive they are. Asking people probably doesn't help too much, since people are likely to be nice, and close friends probably also have a biased view of one's attractiveness. So is there a good way to calibrate your perception of how good you look?

0Risto_Saarelma
If there were a large dataset of faces shot in a similar way and rated for attractiveness somewhere, you could take a photo of yourself, look for people in the set who look like you (possibly with some sort of face-recognition program) and see how they are rated.
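This procedure is essentially k-nearest-neighbours in a face-embedding space. A hedged sketch of what it might look like; `embed` and the dataset are hypothetical placeholders, not a real API:

```python
# Estimate your own rating as the average rating of your nearest neighbours
# in embedding space. `embed` is a stand-in for whatever face-recognition
# model produces a fixed-length vector per photo.
import numpy as np

def estimate_rating(my_photo, dataset_photos, dataset_ratings, embed, k=10):
    me = embed(my_photo)                                    # shape (d,)
    others = np.stack([embed(p) for p in dataset_photos])   # shape (n, d)
    dists = np.linalg.norm(others - me, axis=1)             # similarity by distance
    nearest = np.argsort(dists)[:k]                         # k most similar faces
    return float(np.mean(np.asarray(dataset_ratings)[nearest]))
```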
2Dagon
Most people don't have a strong operational definition of what "how attractive" means - it's not so much that people are biased, but that the question is incoherent. Even the visual components of attraction between two people have a lot of dimensions, which different viewers will combine differently. Depending on why you want to know, I can suggest a few different paths:

  1. Seek professional opinion - ask people at modeling agencies whether you have looks that will sell product.

  2. Seek crowd opinion - there are sites where you can post a photo and see how many responses you get.

  3. Find ways to measure the common components of beauty (symmetry, ratios between features, etc.).

  4. Find ways to identify (and enhance) attractiveness to specific people rather than in general.

None of these are objective. Give that up - beauty isn't actually objective (though there are components that correlate strongly with majority subjective reporting). Also, you don't say "physical attractiveness", nor "sexual attractiveness", so perhaps you mean the total package of likeability for all purposes - if so, ignore 1-3; #4 is the one which acknowledges the idiosyncratic nature of human attraction.
2ChristianKl
What exactly do you want to know about your looks? In what way would an answer to the question help you?
3Dahlen
You gawk a lot at people and develop an eye for what attractiveness means. Don't ask people, that's almost always useless, unless you happen to run into an expert on this. See what your eye responds positively to. Then evaluating yourself is as easy as keeping a reference feature in your mind up for comparison when you look at yourself. Keep in mind that attractive people are not all identical; there are attractive and unattractive versions and combinations of any trait. There are also some things you could do to get an eye-opening perspective of yourself – ever looked at yourself through a second mirror forming an acute angle to the first mirror, so you can see yourself from the side view? I guarantee that the first time you do it you'll feel very surprised. Same thing when you're filmed talking and then watch the footage. Images that are flipped horizontally relative to your mirror image also help you notice asymmetries. The point is that the eye notices a lot more when the image is even slightly unfamiliar.
8Vaniver
Can't you just post a photo on a relevant website? okCupid has a rating system, I think HotOrNot is still around, etc.
2raydora
Perhaps a rating system based on proportions, symmetry, and skin health. However, I'm not convinced this is that important (it is a large factor in decisions, yes, but it's not one you can change much beyond style and hygiene, unless you're willing to undergo plastic surgery), except in the realm of Tinder-esque situations. If you happen to live somewhere where random people will compliment you or flirt with you, I suppose (number of incidents)/(number of people you were exposed to) over a large span of time could be a metric.
Raiden00

One thing that helped me a lot was doing some soul-searching. It's not so much about finding something to protect as about realizing what I already care about, even if there are some layers of distance between my current feelings and that thing. I think that a lot of that listless feeling of not having something to protect is just sort of being distracted from what we actually care about. I would recommend just looking for anything you care about at all, even slightly, and just focusing on that feeling.

At least that makes sense and works for me.

0ChristianKl
What exactly did you do?
Raiden00

There are a lot of ways to be irrational, and if enough people are being irrational in different ways, at least some of them are bound to pay off. Using your example, some of the people with blind idealism may get stuck on an idea that they can actually accomplish, but most of them will fail. The point of trying to be rational isn't to do everything perfectly, but to systematically increase your chances of succeeding, even though in some cases you might get unlucky.

Raiden20

I think the biggest reason we have to assume that the universe is empty is that the earth hasn't already been colonized.

Raiden10

Ah I see. I was thinking of motte and bailey as something like a fallacy or a singular argument tactic, not a description of a general behavior. The name makes much more sense now. Thank you. Also, you said it's called that "everywhere except the Scottosphere". Could you elaborate on that?

0tut
Scott introduced the concept of a motte and bailey doctrine on Slate Star Codex, in an article called Social Justice and Words Words Words or something like that. I don't think he said anything that was wrong in that post (about that concept), but it appears that a lot of readers who hadn't heard about M&BDs before misunderstood it to be about a debate tactic/fallacy. So on SSC and to some extent on LW 'motte and bailey' is often used with the meaning 'bait and switch'.
Raiden00

What does the term "doctrine" mean in this context anyway? It's not exactly a belief or anything, just a type of argument. I've seen that it's called that but I don't understand why.

0tut
A doctrine is something like a rule or principle or concept. The point is that when you claim that something is a motte and bailey doctrine you don't just attack one argument, but rather the whole body of thought that argues about that thing using those concepts.
Raiden10

Is this the same thing as the motte and bailey argument?

1tut
Not the motte and bailey argument, a motte and bailey doctrine. But yeah, it sounds a lot like what is called a motte and bailey doctrine everywhere except in the Scottosphere.
Raiden00

You cite the language's tendency to borrow foreign terms as a positive thing. Wouldn't that require an inconsistent orthography?

3polymathwannabe
In its current state, English does tend to borrow terms without changing their spelling (e.g. plateau), but in my proposed system they would all have to be adapted. Many languages already do that: Spanish borrowed football and turned it into fútbol.
Raiden00

Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.

This is probably true. I think a lot of people feel uncomfortable with the possibility of us living in a simulation, because we'd be in a "less real" universe or we'd be under the complete control of the simulators, or various other complaints. But if such super-Turing machines are possible, then the simulated nature of the universe wou…

Raiden90

I always thought that the "most civilizations just upload and live in a simulated utopia instead of colonizing the universe" response to the Fermi Paradox was obviously wrong, because it would only take ONE civilization breaking this trend to be visible, and regardless of what the aliens are doing, a galaxy of resources is always useful to have. But I was reading somewhere (I don't remember where) about an interesting idea of a super-Turing computer that could calculate anything, regardless of time constraints and ignoring the halting problem. I…

8Viliam
Seems to me that this "obvious solution" has exactly the same problem as the original one... "it would only take ONE civilization breaking this trend to be visible".
9D_Malik
If humanity did this, at least some of us would still want to spread out in the real universe, for instance to help other civilizations. (Yes, the world inside the computer is infinitely more important than real civilizations, but I don't think that matters.) Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.
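The probability-1 step can be spelled out. A small worked version of the argument, where c and n are illustrative symbols not used in the comment:

```latex
% With c non-simulated observer-seconds (finite, since the real universe is
% assumed finite) and n simulated ones (unbounded, via the super-Turing
% machine), a uniform chance of being any given observer-second gives
\[
  P(\text{simulated}) \;=\; \lim_{n \to \infty} \frac{n}{n + c} \;=\; 1.
\]
```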
1Raziel123
This looks like Tipler's Omega Point, except that it's singular in the universe and, for reasons that aren't clear, it will resurrect us all in a simulated heaven.
Raiden00

I strongly suspect that the effectiveness of capitalism as a system of economic organization is proportional to how rational the agents participating in it are. I expect that capitalism only optimizes against the general welfare when people in a capitalist society make decisions that go against their own long-term values. The more rational a capitalist society is, the more it begins to resemble an economist's paradise.

Raiden30

Thank you! That's the first in-depth presentation of someone actually benefiting from MBTI that I've ever seen, and it's really interesting. I'll mull over it. I guess the main thing to keep in mind is that other people are different from me.

Raiden40

I've noticed that a lot of my desire to be rational is social. I was raised as the local "smart kid" and continue to feel associated with that identity. I get all the stuff about how rationality should be approached - "I have this thing I care about, and therefore I become rational to protect it" - but I just don't feel that way. I'm not sure how I feel about that.

Of the three reasons to be rational that are described, I'm most motivated by the moral reason. This is probably because of the aforementioned identity. I feel very offended at a…

4[anonymous]
Oooh, I have advice! I've gotten so much from this site in my first week or two here, and this is my first chance to potentially help someone else :) If you think MBTI personality typing has no value, don't bother with this. It sounds silly, but finding out about Myers-Briggs was actually life-changing for me. Knowing someone's type can help you develop realistic expectations for their behavior, communicate much more effectively, and empathize. Other people are no longer mysteries! Idk how familiar you are with MBTI, but there are 4 strict dichotomies, and of course some people fall on the borderline for some of them, but one of the more interesting to me is Thinking (not to be confused with intelligence) vs. Feeling (not to be confused with emotion). This gives a thorough explanation, which should help you understand "irrational" people a little better. And once you understand them, you'll be less likely to be offended by them and more likely to get along. If there's anyone in particular that this is a struggle with, I'd recommend trying to figure out their full personality and reading the profile on the personality page here. When I was little, my strong-willed, very rational ISTP personality conflicted with my mother's ESFJ type and led to many mutual frustrations; we just couldn't relate to each other. Maybe you have some ESFJ types in your life. These are their weaknesses:

  • May be unable to correctly judge what really is for the best

  • May become spiteful and extremely intractable in the face of clear, logical reasoning

  • May be unable to shrug off feelings that others are not "good people"

  • May be unable to acknowledge anything that goes against their certainty about the "correct" or "right" way to do things

  • May attribute their own problems to arbitrary and unprovable notions about the way people "ought" to behave

  • May be at a loss when confronted with situations that require basic technical expertise or clear thinking

  • May be oblivious to all but thei…
Raiden00

I'd have to be stronger than the group in order to get more food than the entire group, but depending on their ability to cooperate I may be able to steal plenty for myself, an amount that would seem tiny compared to the large amount needed for the whole group.

The example I chose was a somewhat bad one, I think, because the villagers would have a defender's advantage in protecting their food. You can substitute "abstract, uncontrolled resource" for "food" to clarify my point.

Raiden10

an armed group such as occupiers or raiders who kept forcibly taking resources from the native population would be high status among the population, which seems clearly untrue.

Maybe that's still the same kind of status, but it is with regard to a different domain. Perhaps an effective understanding of status acknowledges that groups overlap and may be formed around different resources. In your example, there is a group (raiders and natives) which forms around literal physical resources, perhaps food. In this group, status is determined by military might, s…

0Caue
You'd have to be stronger than the group of villagers.
Raiden00

When making Anki cards, is it more effective to ask the meaning of a term, or to ask what term describes a concept?

2gjm
Why not both?
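gjm's "both" maps directly onto how Anki notes work: one note with two card templates yields both directions automatically. A sketch using the third-party genanki library (the IDs, deck name, and example fields are arbitrary choices, not anything from the thread):

```python
# One note, two templates -> Anki generates both card directions.
import genanki

model = genanki.Model(
    1607392319,  # arbitrary but stable model ID
    'Term <-> Concept',
    fields=[{'name': 'Term'}, {'name': 'Concept'}],
    templates=[
        {'name': 'Term -> Concept',
         'qfmt': 'What does "{{Term}}" mean?',
         'afmt': '{{FrontSide}}<hr id="answer">{{Concept}}'},
        {'name': 'Concept -> Term',
         'qfmt': 'What term describes: {{Concept}}?',
         'afmt': '{{FrontSide}}<hr id="answer">{{Term}}'},
    ])

deck = genanki.Deck(2059400110, 'Vocabulary')  # arbitrary deck ID and name
deck.add_note(genanki.Note(model=model,
                           fields=['ugh field', 'a flinch away from a topic']))
genanki.Package(deck).write_to_file('vocab.apkg')
```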
Raiden40

Would a boxed AI be able to affect the world in any important way using the computer hardware itself? Like, make electrons move in funky patterns or affect air flow with cooling fans? If so, would it be able to do anything significant?

5JoshuaFox
See a recent MIRI paper. A narrow AI, "tasked with designing an oscillating circuit, re-purposed the circuit tracks on its motherboard to use as a radio which amplified oscillating signals from nearby computers."
7Houshalter
Possibly. You can send information through power lines, for example. There are consumer devices that use this for local network connections, and I think power companies have started using something like it to replace meter readers. There are multiple ways to transmit radio frequencies through computer monitors (e.g. this), and communication via ultrasonic sound, which we can't hear.
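For a feel of how the ultrasonic channel could work, here is a hedged sketch (not from the comment): encode bits as two near-ultrasonic tones, i.e. simple frequency-shift keying. The frequencies, bit duration, and filename are arbitrary assumptions:

```python
# Encode bits as two near-ultrasonic tones (simple FSK) that typical adult
# hearing misses but an ordinary microphone can still pick up.
import numpy as np
import wave

RATE, BIT_SECONDS = 44100, 0.05
F0, F1 = 18000, 19000                 # carrier frequencies (Hz) for "0" and "1"

def modulate(bits):
    t = np.arange(int(RATE * BIT_SECONDS)) / RATE
    tones = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    return np.concatenate(tones)

signal = modulate([1, 0, 1, 1, 0, 0, 1, 0])
with wave.open('covert.wav', 'wb') as f:
    f.setnchannels(1)
    f.setsampwidth(2)                 # 16-bit samples
    f.setframerate(RATE)
    f.writeframes((signal * 32767).astype(np.int16).tobytes())
```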
3Viliam_Bur
One of those things we can't know for sure, because there could hypothetically exist a new physical law we don't know yet. The AI could somehow learn about this law -- even if it cannot do experiments, it could somehow derive it from first principles... ahem, Solomonoff priors. But I guess that is rather unlikely.

Then, there is the possibility of using the familiar laws of physics and the existing hardware (which is what you asked). Seems to me that this kind of output would be too noisy. I can imagine the AI using the fans to create some resonance and maybe literally break the box... but since the AI is at that moment not a physical entity but only a pattern existing in the computer memory, that would be equivalent to suicide. Okay, with really insane computing power the AI could hypothetically measure the positions of particles in the air, and use the fans to manipulate them... now this would depend on whether the AI can gather information about the particles in its environment faster than that information is lost because of e.g. ventilation in the room, which keeps bringing in new particles with unpredictable positions and speeds. At some point the AI would be limited by the fact that it does not have literally infinite computing power.

(An AI with literally infinite computing power and infinite computing speed could probably do almost anything. It could use Solomonoff priors to model all possible universes, use all its data to select the most likely one, and for any possible action X it could make, it could calculate whether its desired goal is more likely to happen if it does X or if it does non-X. Thus the probability of the goal would keep increasing; the question is how fast, and whether that growth would have a limit lower than 100%. Maybe the AI would require millennia to strategically modify the particles in the air to bring about the desired outcome through e.g. a series of almost invisible social changes; but maybe it will be disassembled sooner, so it…
Raiden100

Regarding point 2, while it would be epistemologically risky and borderline dark arts, I think the idea is more about what to emphasize and openly signal, not what to actually believe.

6plex
True, perhaps I should have been clearer in how I dealt with the two, and explained how I think they can blur into each other unintentionally. I do think being selective with signals can be instrumentally effective, but I think it's important to be intentionally aware when you're doing that and not allow your current mask to bleed over and influence your true beliefs unduly. Essentially I'd like this post to come with a warning: "Do this sometimes, but be careful and mindful of the possible changes to your beliefs caused by signaling as if you have different beliefs."
Raiden30

Thank you to those who commented here. It helped!

Raiden00

Hmm, it seems obvious in retrospect, but it didn't occur to me that biochemistry would relate to nanotech. I suppose I compartmentalized "biological" away from "super-cool high-tech stuff." Thank you very much for that point!

Raiden50

I'm at that point in life where I have to make a lot of choices about my future. I'm considering doing a double major in biochemistry and computer science. I find both of these topics fascinating, but I'm not sure if that's the most effective way to help the world. I am comfortable in my skills as an autodidact, and I find myself interested in comp sci, biochemistry, physics, and mathematics. I believe that regardless of which I actually major in, I could learn any of the others quite well. I have a nagging voice in my head saying that I shou…

0IlyaShpitser
People with bio and algorithmic skills are in extremely high demand, but: (a) there might be a biotech bubble; (b) it might be worthwhile to go after difficult-to-learn meta-skills that help you learn other things more quickly (math, etc.), and just pick up whatever is in demand later.
[anonymous]120

I have a nagging voice in my head saying that I shouldn't bother learning biochemistry, because it won't be useful in the long term because everything will be based on nanotech and we will all be uploads. Is that a valid point?

Keeping in mind the biases (EDIT: but also the expertise) that my username indicates, I would say that is nearly exactly backwards - modification and engineering of biochemistry and biochemistry-type systems will actually occur (and already is occurring), while what most people around here think of when they say 'nanotech' is a pipe dream…

-1[anonymous]
This seems like the bottleneck question. Why don't you try to study that? After all, you should only prefer to be skilled and educated if you get this question right. If you get it wrong, it's either a matter of indifference, or actually better for everyone if you're as unskilled and uneducated as possible.
3drethelin
Nanotech without biochemistry won't be able to help anyone medically. That's like saying you don't need to know about biology because farming is all going to be done with machines these days. ALSO: Biochemistry and cell biology are the best existing examples we have of nanotech machines.
2polymathwannabe
Biochemistry has tremendous world-saving potential. With both computer science and biochemistry in your arsenal, you could work in molecule modeling. The design and simulation of molecules is a key part of the development of new drugs and vaccines. Besides, we're running out of usable antibiotics. And as healthcare continues to prolong our working life years, we will need to improve our understanding of degenerative diseases like arthritis and Alzheimer's.
5Squark
It seems likely we will have to learn more biochemistry to realize uploading.
6Shmi
Maybe, some day. And as a "double major in biochemistry and computer science" you will be well positioned to help bring said nanotech from the realm of SciFi to reality. Certainly you have plenty of time, nothing as revolutionary is likely to happen in the next few years, and you will have your degree by then. I'd actually bet that "nanotech and uploads" are decades away, even being optimistic.
5Lumifer
No. Think about the timelines involved.
Raiden00

I guess what I'm saying is that since simpler ones are run more, they are more important. That would be true if every simulation was individually important, but I think one thing about this is that the mathematical entity itself is important, regardless of the number of times it's instantiated. But it still intuitively feels as though there would be more "weight" to the ones run more often. Things that happen in such universes would have more "influence" over reality as a whole.

0Scott Garrabrant
I am saying that in order to make the claim "simple universes are run more," you first need the claim that "most universes are more likely to run simple simulations than complex simulations." In order to make that second claim, you need to start with a measure of what "most universes" means, which you do using simplicity. (Most universes run simple simulations more because running simple simulations is simpler.) I think there is a circular logic there that you cannot get past.
Raiden10

What I mean though, is that the more complicated universes can't be less significant, because they are contained within this simple universe. All universes would have to be at least as morally significant as this universe, would they not?

0Scott Garrabrant
If I have a world containing many people, I can say that the world is more morally significant than any of the individual people.
Raiden00

Another thought: wouldn't one of the simplest universes be a universal Turing machine that runs through every possible tape? All other universes would be contained within this universe, making them all "simple."

5Scott Garrabrant
Simple things can contain more complex things. The reason the more complex thing can be more complex is that it takes extra bits to specify what part of the simple thing to look at.
Raiden00

Instead of saying that you care about simpler universes more, couldn't a similar preference arise out of pure utilitarianism? Simpler universes would be more important because things that happen within them will be more likely to also happen within more complicated universes that end up creating a similar series of states. For every universe, isn't there an infinite number of more complicated universes that end up with the simpler universe existing within part of it?

0Scott Garrabrant
In order to make claims like that, you have to put a measure on your multiverse. I do not like doing that, for three reasons:

  1. It feels arbitrary. I do not think the essence of reality relies on something chunky like a Turing machine.

  2. It limits the multiverse to some set of worlds that I can put a measure on. The collection of all mathematical structures is not a set, and I think the multiverse should be at least that big.

  3. It requires some sort of inherent measure that is outside of any of the individual universes in the multiverse. It is simpler to imagine that there is just every possible universe, with no inherent way to compare them.

However, regardless of those very personal beliefs, I think that the argument that simpler universes show up in more other universes does not actually answer any questions. You are trying to explain why you have a measure which makes simpler universes more likely by starting with a collection of universes in which the simpler ones are more likely, and observing that the simple ones are run more. This just walks you in circles.
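For concreteness, the kind of measure being discussed here is typically a complexity prior. One standard instance (an illustration only; the comment argues against adopting any such measure) weights each universe by the length of its shortest description:

```latex
% A complexity prior over universes: K(u) is the Kolmogorov complexity of a
% description of universe u; shorter programs get exponentially more weight.
\[
  \mu(u) \;=\; \frac{2^{-K(u)}}{\sum_{v} 2^{-K(v)}}
\]
% Point 2 above: the normalizing sum only makes sense if the collection of
% universes being summed over is a set.
```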
Raiden10

Ones I've noticed are "lazy" or "stupid" or other words that are used to describe people. Sure, it can be good to have such models so that one can predict the behavior of a person, like "This person isn't likely to do his work." or "She might have trouble understanding that." The thing is, these are often treated as fundamental properties of an ontologically fundamental thing, which the human mind is not.

Why is this person lazy? Do they fall victim to hyperbolic discounting? Is there an ugh field related to their wo…

-2Lumifer
Being stupid is a fundamental property ("stupid" understood as having low g and not an inability to understand some particular issue).
Raiden30

I really would like a chronological order.

Raiden10

I really like the cute little story, as you say, but I agree that it isn't effective where it is. Maybe include it at the end as a sort of appendix?

Raiden00

Three shall be Peverell's sons and three their devices by which Death shall be defeated.

What is meant by the three sons? Harry, Draco, and someone else? Quirrell perhaps? Using the three Deathly Hallows?

0CAE_Jones
On Reddit, there seems to be a substantial number of users hoping for Harry, Draco and Hermione. Draco makes some degree of sense (ur jnf gur znfgre bs gur Ryqre Jnaq sbe zbfg bs pnaba Qrnguyl Unyybjf), though the Hermione ideas are pretty handwavy (still, the idea of Hermione somehow resurrecting herself and mastering the Resurrection Stone is awesome, if hard to believe possible). The main objection to Dumbledore as the master of the wand is his devout deathism; Quirrel participating as the master of the stone is much more believable.
Spurlock160

I interpreted this to mean that long ago, there were 3 Peverell brothers, each of whom created one of the Hallows. Harry is descended from this family. Note that it doesn't say that "Peverell's sons" will necessarily be the ones to use their devices to defeat Death, only that the devices are theirs.

0solipsist
I don't think they'll go this route, but the three heirs to Gryffindor and Slytherin (Fred, George, and Harry)?
Raiden00

I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I'm beginning to see that things aren't so simple.

-1ChristianKl
Do corporations that are legally persons count?
Raiden50

I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.

It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.

Raiden00

Well I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess and any attempts to figure it out just seem to add to the mess. Yet I still feel very strongly compelled to understand it. It's hard to resist the urge to just give up and wait until we have a detailed neurological model of a human brain and are able to construct a mathematical model from that which would explain exactly what I am asking when I ask what is right and what the answer is.

Raiden00

I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes "person" from "object" seems to belong to the latter.

0Baughn
Is that a normative 'should' or a descriptive 'should'? If the latter, where would it come from? :-)
Raiden60

My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?

0[anonymous]
I think you are confused in thinking that humans are somehow not just also running a program that reacts to pain and whatnot. You feel sympathy for animals, and more sympathy for humans. I don't think that requires any special explanation or justification, especially when seeking one results in preferences or assertions that are stupid: "I don't care about animals at all because animals and humans are ontologically distinct." Why not just admit that you care about both, just differently, and do whatever seems best from there? Perhaps, taking your apparent preferences at face value like that, you run into some kind of specific contradiction; or perhaps not. If you do, then you at least have a concrete muddle to resolve.
6simplicio
First thing to note is that "worthy of moral consideration" is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my books it has something to do with the extent to which a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.

Although I think ability to suffer is correlated with intelligence, it's difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn't make it obvious that it suffers more. Consider the presumed evolutionary functional purpose of suffering, as a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.

To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the "Darwinian holocaust."
3ChristianKl
Do you think that all humans are persons? What about unborn children? A 1-year-old? A mentally handicapped person? What are your criteria for granting personhood? Is it binary?
0Qiaochu_Yuan
Why do you assume you're confused?
2somervta
Three hypotheses, which may not be mutually exclusive:

  1. Some people disagree (with you) about whether or not some animals are persons.

  2. Some people disagree (with you) about whether or not being a person is a necessary condition for moral consideration - here you've stipulated 'people' as 'things subject to moral concern', but that word may be too connotatively laden for this to be effective.

  3. Some people disagree (with you) about 'person'/'being worthy of moral consideration' being a binary category.
-6blacktrance
3drethelin
Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that's different between people it's legitimate for some people to care about animals and others not to.
Raiden00

I would very much like to attend this, having never attended a meetup before. However, I am currently a minor who lacks transportation ability and have had little luck convincing my guardians to drive me to it. Is there anybody who is attending and is coming from the Birmingham, AL area who would be willing to drive me? I am willing to pay for the service.

0Nova_Division
Raiden, let's talk on IM and see if we can figure out a way to get you there. I'm not sure we have any members coming from AL, but there may be bus options we can explore. IM me on gchat at amidstawoken@gmail.com. -Katie
Raiden20

I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?

3Viliam_Bur
Not sure if related, but I often get angry at people doing things that make them look like idiots in my eyes, when I suspect they would impress a random bystander positively. As an example, imagine a computer programmer saying things that you as a fellow programmer recognize as complete bullshit, or at best as wild exaggerations of random things that impressed the person... but to someone who does not understand programming at all, they might (I am not sure) sound very knowledgeable, unlike silent types like me. -- I don't know if they really impress the outsiders positively or not. I can't easily imagine myself not having the knowledge I have, and I am also not good at guessing how other people react to the tone of voice or whatever other information they may collect from talk about a topic they don't understand. -- I just perceive the danger that the person may sound more impressive than me, and... well, as an employee, my quality of life depends on the impressions of people who can't measure my output separately from the output of the team that also contains the other person.

Also, again not sure if related: when I get angry at someone and analyze the situation, I usually find that they are better than me at something. In the specific situation above, it would be "an ability to impress people who completely don't understand my work". This is easy to miss if I remain focused only on the "they speak nonsense" part. But the truth is their speaking nonsense does not make me angry; it's relatively easy to ignore, and it would not bother me if I did not perceive a threat.

So, for your situation: are you afraid that the "people playing the status game with (supposedly) poor skill" might still win some status at your expense? If yes, the angry reaction is obvious: you are in a situation where you could lose, but you could also win, which is the best situation to invest your energy in. (Imagine an alternative universe where the person trying to pla…
5Vaniver
My suspicion: status games are generally seen as zero sum. Someone attempting to play the status game around you is a threat, and thus it probably helps to be angry with them, unless you expect them to be better than you at status games, in which case being angry with them probably reduces the chance that they'll be your ally, and they will be able to respond more negatively to your anger than a weaker opponent.
1niceguyanon
Not an explanation, but perhaps try to see this as a benefit to you? I have witnessed plenty of poker players get very angry at bad players. Over time, bad players lose money to good players, so one shouldn't complain about bad players. Someone who is ineffective at status signalling won't affect you; you already see through them. Personally, I find that I admire people with skill, even in things such as effective status signalling. When people lack a certain savoir-faire, it makes me upset, but then I remind myself that it shouldn't.
7drethelin
I think it's a very common trait, but any evo-psych explanation I know would probably just be a just-so story. Just-so story: the consequence of getting angry is treating someone badly, or, from a game-theoretic perspective, punishing them. Your perception of someone playing status games with low skill is a manifestation of the zero-sum nature of status in tribes: someone playing with low skill is a low-status person trying to act as, and receive the benefits of, someone higher status, and it behooves you to punish them in order to preserve or increase your own status. It's easier for evolution to select for emotional reactions to things than for game-theoretic calculations.
8ahh
I think this paper (while mathematically interesting!) is rather oversold. A positive result for their proposed experiment says one of the following is true:

  A) we're simulated on a cubic grid
  B) we're not simulated, but True Physics has cubic structure
  C) (other non-obvious cause of anisotropy)

Not only is it very difficult in my mind to distinguish between A and B; think about what a negative result means - one of:

  A) we're simulated on a non-cubic grid
  B) we're simulated with a more complex discretization that deals with anisotropy
  C) we're not simulated, and True Physics doesn't have a cubic structure

I think the only thing a cubic anisotropy can tell us about is the structure of True Physics, not whether or not that true physics is based on a simulation.
0Jayson_Virissimo
Thanks.
Raiden20

Is it probable for intelligent life to evolve?

2FiftyTwo
If we assume primates and other intelligent social mammals continue to exist, then yes: the transition from their level to the human level is minor compared to the steps needed to get that far.