All of JacobW38's Comments + Replies

Yes, I am a developing empirical researcher of metaphysical phenomena. My primary item of study is past-life memory cases of young children, because I think this line of research is both the strongest evidentially (hard verifications of such claims, to the satisfaction of any impartial arbiter, are quite routine), as well as the most practical for longtermist world-optimizing purposes (it quickly becomes obvious we're literally studying people who've successfully overcome death). I don't want to undercut the fact that scientific metaphysics is a much large... (read more)

I'm a hardcore consciousness and metaphysics nerd, so some of your questions fall within my epistemic wheelhouse. Others, I am simply interested in as you are, and can only respond with opinion or conjecture. I will take a stab at a selection of them below:

4: "Easy" is up in the air, but one of my favorite instrumental practices is to identify lines of preprogrammed "code" in my cognition that do me absolutely no good (grief, for instance), and simply hack into them to make them execute different emotional and behavioral outputs. I think the best way to st... (read more)

RohanS
Lots of interesting thoughts, thanks for sharing! You seem to have an unconventional view about death informed by your metaphysics (suggested by your responses to 56, 89, and 96), but I don’t fully see what it is. Can you elaborate?
JacobW38

If you go back even further we’re the descendants of single celled organisms that absolutely don’t have experience.

My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly demonstrate acting on intention in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It'd be completely illogical not to generalize downward that the ones that don't move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originat... (read more)

gbear605
If bacteria have experience, then I see no reason to say that a computer program doesn’t have experience. If you want to say that a bacterium has experience based on guesses from its actions, then why not say that a computer program has experience based on its words? From a different angle, suppose that we have a computer program that can perfectly simulate a bacterium. Does that bacterium have experience? I don’t see any reason why not, since it will demonstrate all the same ability to act on intention. And if so, then why couldn’t a different computer program also be conscious? (If you want to say that a computer can’t possibly perfectly simulate a bacterium, then great, we have a testable crux, albeit one that can’t be tested right now.)

Explain to me how a sufficiently powerful AI would fail to qualify as a p-zombie. The definition I understand for that term is "something that is externally indistinguishable from an entity that has experience, but internally has no experience". While it is impossible to tell the difference empirically, we can know by following evolutionary lines: all future AIs are conceptually descended from computer systems that we know don't have experience, whereas even the earliest things we ultimately evolved from almost certainly did have experience (I have no clue... (read more)

Dentin
AIUI, you've got the definition of a p-zombie wrong in a way that's probably misleading you. Let me restate the above: "something that is externally indistinguishable from an entity that experiences things, but internally does not actually experience things" The whole p-zombie thing hinges on what it means to "experience something", not whether or not something "has experience".
gbear605
If you look far enough back in time, humans are descended from animals akin to sponges that seem to me like they couldn’t possibly have experience. They don’t even have neurons. If you go back even further we’re the descendants of single celled organisms that absolutely don’t have experience. But at some point along the line, animals developed the ability to have experience. If you believe in a higher being, then maybe it introduced it, or maybe some other metaphysical cause, but otherwise it seems like qualia has to arise spontaneously from the evolution of something that doesn’t have experience - with possibly some “half conscious” steps along the way. From that point of view, I don’t see any problem with supposing that a future AI could have experience, even if current ones don’t. I think it’s reasonable to even suppose that current ones do, though their lack of persistent memory means that it’s very alien to our own, probably more like one of those “half conscious” steps.

I spoke briefly on acceptance in my comment to the other essay, and I think I agree more with how that one conceptualized it. Mostly, I disagree that acceptance entails grief, or that it has to be hard or complicated. At the very least, that's not a particularly radical form of acceptance. My view on grief is largely that it is an avoidable problem we put ourselves through for lack of radical acceptance. Acceptance is one move: you say all's well and you move on. With intensive pre-invested effort, this can be done for anything, up to and including whateve... (read more)

A very necessary post in a place like here, in times like these; thank you very much for these words. A couple disclaimers to my reply: I'm cockily unafraid of death in personal terms, and I'm not fully bought into the probable AI disaster narrative, although far be it from me to claim to have enough knowledge to form an educated opinion; it's really a field I follow with an interested layman's eye. But I'm not exactly one of those struggling at the moment, and I'd even say that the recent developments with ChatGPT, Bing, and whatever follows them excite m... (read more)

I fully agree with the gist of this post. Empowerment, as you define it, is both a very important factor in my own utility function, and seems to be an integral component to any formulation of fun theory. In your words, "to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes" describes my terminal goals for a thousand years into the future so smack-dab perfectly that I don't think I could've possibly put it any better. Empowerment is, yes, an instrumental goal for all the options it creates, but also an ... (read more)

I highly recommend following Rational Animations on Youtube for this sort of general purpose. I'd describe their format as "LW meets Kurzgesagt", the latter of which I already found highly engaging. They don't post new videos that often, but their stuff is excellent, even more so recently, and definitely triggers my dopamine circuits in a way that rationality content generally struggles to satisfy. Imo, it's perfect introductory material for anyone new on LW to get familiar with its ideology in a way that makes learning easy and fun.

(Not affiliated with RA in any way, just a casual enjoyer of chonky shibes)

You've described habituation, and yes, it does cut both ways. You also speak of "pulling the unusual into ordinary experience", as though that is undesirable, but contrarily, I find exactly that a central motivation to me. When I come upon things that on first blush inspire awe, my drive is to fully understand them, perhaps even to command them. I don't think I know how to see anything as "bigger than myself" in a way that doesn't ring simply as a challenge to rise above whatever it is.

Manipulating one's own utility functions is supposed to be hard? That would be news to me. I've never found it problematic, once I've either learned new information that led me to update it, or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception to this would be things one easily attaches to emotionally, such as pets, to which I've learned to simply not allow myself to become so attached. Otherwise, could you please explain why you make the claim that such traits are not readily editable in a more general capacity?

Thanks for asking. I'll likely be publishing my first paper early next year, but the subject matter is quite advanced, definitely not entry-level stuff. It takes more of a practical orientation to the issue than merely establishing evidence (the former my specialty as a researcher; as is probably clear from other replies, I'm satisfied with the raw evidence).

As for best published papers for introductory purposes, here you can find one of my personal all-time favorites. https://www.semanticscholar.org/paper/Development-of-Certainty-About-the-Correct-Decease... (read more)

Apologies for the absence; combination of busy/annoyance with downvotes, but I could also do a better job of being clear and concise. Unfortunately, after having given it thought, I just don't think your request is something I can do for you, nor should it be. Honestly, if you were to simply take my word for it, I'd wonder what you were thinking. But good information, including primary sources, is openly accessible, and it's something that I encourage those with the interest to take a deep dive into, for sure. Once you go far enough in, in my experience, t... (read more)

momom2
I invite you. You can send me this summary in private to avoid downvotes.

Based on evidence I've been presented with to this point - I'd say high enough to confidently bet every dollar I'll ever earn on it. Easily >99% that it'll be put beyond reasonable doubt in the next 100-150 years, and I only specify that long because of the spectacularly lofty standards academia forces such evidence to measure up to. I'm basically alone in my field in actually being in favor of the latter, however, so I have no interest in declining to play the long game with it.

Been staying hard away from crypto all year, with the general trend of about one seismic project failure every 3 months, and this might be the true Lehman moment on top of the shitcoin sundae. Passing no assumptions on intent or possible criminal actions until more info is revealed, but it certainly looks like SBF mismanaged a lot of other people's money and was overconfident in his own, being largely pegged to illiquid altcoins and FTT. The most shocking thing to me is how CZ took a look at their balance sheets for all of like 3 hours after announcing int... (read more)

That being said, I could see how this feeling would come about if the value/importance in question is being imposed on you by others, rather than being the value you truly assign to the project. In that case, such a burden can weigh heavily and manifest aversively. But avoiding something you actually assign said value to just seems like a basic error in utility math?

David Hartsough
Many people I know personally (including myself) have experienced or regularly experience this "imposed" "burden" you're referring to, except they place it on themselves with "ought" and "should" (instead of "want", for example). ("I am going to work on that this month." Vs "I want to work on that this month." Vs "I should work on that this month." Vs "I have to work on that this month." The differences are subtle in language but massive in cognitive weight.) Sometimes it's like having someone inside your head with a whip trying to drive behavior with excessive pressure according to some maxim or moral imperative. This is obviously not healthy or long-term effective, but some people genuinely go through this (and some never make it out of it).

I have a taboo on the word "believe", but I am an academic researcher of afterlife evidence. I personally specialize in verifiable instances of early-childhood past-life recall.

Aiyen
If you don't like the word "believe", what is the probability you assign to it?  
the gears to ascension
You still haven't actually provided verifiable instances, only referenced them and summarized them as adding up to an insight; if you're interested in extracting the insights for others I'd be interested, but right now I don't estimate high likelihood that doing so will provide evidence that warrants concluding there's hidden-variable soul memory that provides access to passwords or other long facts that someone could not have had classical physical access to. I do agree with you, actually, in contrast to almost everyone else here, that it is warranted to call memetic knowledge "reincarnation" weakly, and kids knowing unexpected things doesn't seem shocking to me - but it doesn't appear to me that there's evidence that implies requirement of physics violations, and it still seems to me that the evidence continues to imply that any memory that is uniquely stored in a person's brain at time of death diffuses irretrievably into heat as the body decays. I'd sure love to be wrong about that, let us all know when you've got more precise receipts.
JacobW38

Honestly, even from a purely selfish standpoint, I'd be much more concerned about a plausible extinction scenario than just dying. Figuring out what to do when I'm dead is pretty much my life's work, and if I'm being completely honest and brazenly flouting convention, the stuff I've learned from that research holds a genuine, not-at-all-morbid appeal to me. Like, even if death wasn't inevitable, I'd still want to see it for myself at some point. I definitely wouldn't choose to artificially prolong my lifespan, given the opportunity. So personally, death an... (read more)

Filip Dousek
do you have any published papers on this? or, what are the top papers on the topic? 
Lone Pine
Do you believe in an afterlife?

I like the thought behind this. You've hit on something I think is important for being productive: if thinking about the alternative makes you want to punch through a wall, that's great, and you should try to make yourself feel that way. I do a similar thing, but more toward general goal-accomplishment; if I have an objective in sight that I'm heavily attracted to, I identify every possible obstacle to the end (essentially murphyjitsu'ing), and then I cultivate a driving, vengeful rage toward each specific obstacle, on top of what motivation I already had ... (read more)

It appears what you have is free won’t!

For the own-behavior predictions, could you put together a chart with calibration accuracy on the Y axis, and time elapsed between the prediction and the final decision (in buckets) on the X axis? I wonder whether the predictions became less-calibrated the farther into the future you tried to predict, since a broader time gap would result in more opportunity for your intentions to change.
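The chart being requested boils down to grouping predictions by time gap and computing per-bucket accuracy. A minimal sketch, with hypothetical bucket edges and made-up records (the real data would come from the post's prediction log):

```python
from collections import defaultdict

def calibration_by_gap(records, edges=(1, 7, 28)):
    """Bucket predictions by days elapsed and compute accuracy per bucket.

    records: list of (gap_days, was_correct) pairs.
    edges: bucket boundaries in days (an illustrative choice, not from the post).
    Returns {bucket_index: accuracy}, where bucket 0 is gaps below edges[0].
    """
    totals = defaultdict(lambda: [0, 0])  # bucket -> [n_correct, n_total]
    for gap, correct in records:
        bucket = sum(gap >= e for e in edges)  # index in 0..len(edges)
        totals[bucket][0] += bool(correct)
        totals[bucket][1] += 1
    return {b: c / n for b, (c, n) in sorted(totals.items())}

# Hypothetical records: (days between prediction and decision, prediction correct?)
demo = [(0.5, True), (0.5, True), (3, True), (3, False), (30, False), (30, True)]
print(calibration_by_gap(demo))  # {0: 1.0, 1: 0.5, 3: 0.5}
```

Plotting the returned accuracies against the bucket labels would give exactly the chart described: if calibration degrades with the time gap, the bars should slope downward as the gap grows.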

This is way too interesting not to have comments!

First, I think this bears on the makeup of one's utility function. If your UF contains absolutes, infinite value judgments, then in my opinion, it is impossible not to be truly motivated toward them. No pushing is ever required; at least, it never feels like pushing. Obstacles just manifest to the mind in the form of fun challenges that only amplify the engagement, because you already know you have the will to win. If your UF does not include absolutes, or you step down to the levels that are finite (for the... (read more)

Kaj_Sotala
I actually think that this depends on the nature of the absolutes. I think a lot of the fake qualities emerge because a part of one's mind feels that e.g. not being productive would be shameful and infinitely bad, and then it's so freaked out about the thought of not being productive that it tries to do everything it can to force the person into being productive. But since it's a part that can't actually generate genuine motivation, it may end up blocking the genuine motivation and thus prevent progress - exactly because it sees no-productivity as so infinitely bad that it can't stand the thought of spending any time not being productive. And thus it's incapable of doing the thing of "wait for the real quality to return" that you mention, because that would take time and meanwhile AAAAAAH I'M NOT BEING PRODUCTIVE. (Productivity just being one specific example, a similar logic can be applied to the other examples too.)

I suspect the dichotomy may be slightly misapportioned here, because I sometimes find that ideas which are presented on the right side end up intersecting back with the logical extremes of methods from the left side. For example, the extent to which I push my own rationality practice is effectively what has convinced me that there's a lot of ecological validity to classical free will. The conclusion that self-directed cognitive modification has no limits, which implies conceptually unbounded internal authority, is not something that I would imagine one cou... (read more)

I don't think I've ever experienced this. I'd actually say I could be described by the blue graph. The more I really, really care about something, the more I want to do absolutely nothing but it, especially if I care about it for bigger reasons than, say, because it's a lot of fun at this moment. Sometimes, there comes a point where continuing to improve said objective feels like it's bringing diminishing returns, so I call the project sufficiently complete to my liking. Other times, it never stops feeling worth the effort, or it is simply too important no... (read more)

David Hartsough
That's awesome! I'm jealous :) The conclusion is simply: if this applies to you, try to be aware of it and prevent it from getting in your way. But if none of the things under the section at the top beginning "See if any of this sound familiar" actually seem familiar to you, then this post won't be relevant or applicable to you. In fact, please let this post pass your mind, and carry on "moving forward forever" and feeling awesome! 😃

I like this proposal. In light of the issues raised in this post, it's important for people to get into the habit of explaining their own criteria for "truth" instead of leaving what they are talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I am interested in arriving at. Basically, we shouldn't be talking about the world as though we have actual means of knowing things about it with probability 1.

Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as "that which pays the most rent in anticipated experience"; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it is the best way we have of approximating truth. So I'm constantly looking really hard at the eviden... (read more)

Unfortunate to say I haven't kept a neat record of where exactly each case is published, so I asked my industry connections and was directed to the following article. Having reviewed it, it would of course be presumptuous of me to say I endorse everything stated therein, since I have not read the primary source for every case described. But those sources are referenced at bottom, many with links. It should suffice as a compilation of information pertaining to your question, and you can judge what meets your standards.

https://psi-encyclopedia.spr.ac.uk/articles/reincarnation-cases-records-made-verifications

Disclaimer, I'm not someone who personally investigates cases. What you've raised has actually been a massive problem for researchers since the beginning, and has little to do with the internet - Stevenson himself often learned of his cases many years after they were in their strongest phase, and sometimes after connections had already been made to a possible previous identity. In general, the earlier a researcher can get on a case and in contact with the subject, the better. As a result, cases in which important statements given by the subject are documen... (read more)

ChristianKl
Where do you think the most convincing information about those cases is published?

On that note, the main way I could envision AI being really destructive is getting access to a government's nuclear arsenal. Otherwise, it's extremely resourceful but still trapped in an electronic medium; the most it could do if it really wanted to cause damage is destroy the power grid (which would destroy it too).

the gears to ascension
you're underestimating biology
JacobW38

Feels like Y2K: Electric Boogaloo to me. In any case, if a major catastrophe did come of the first attempt to release an AGI, I think the global response would be to shut it all down, taboo the entire subject, and never let it be raised as a possibility again.

Jon Garcia
The tricky thing with human politics is that governments will still fund research into very dangerous technology if it has the potential to grant them a decisive advantage on the world stage. No one wants nuclear war, but everyone wants nukes, even (or especially) after their destructive potential has been demonstrated. No one wants AGI to destroy the world, but everyone will want an AGI that can outthink their enemies, even (or especially) after its power has been demonstrated. The goal, of course, is to figure out alignment before the first metaphorical (or literal) bomb goes off.

Are you telling me you'd be okay with releasing an AI that has a 25% chance of killing over a billion people, and a 50% chance of at least killing hundreds of millions? I have to be missing the point here, because this post isn't doing anything to convince me that AI researchers aren't Stalin on steroids.

Or are you saying that if one can get to that point, it's much easier from there to get to the point of having an AI that will cause very few fatalities and is actually fit for practical use?

alexey
It's explicitly the second:

Rather, I think he means that alignment is such a narrow target, and the space of all possible minds is so vast, that the default outcome is that unaligned AGI becomes unaligned ASI and ends up killing all humans (or even all life) in pursuit of its unaligned objectives. Hitting anywhere close to the alignment target (such that there's at least 50% chance of "only" one billion people dying) would be a big win by comparison.

Of course, the actual goal is for “things [to] go great in the long run”, not just for us to avoid extinction. Alignment itself is the ... (read more)

RobertM
He's saying the second.

As a new member and hardcore rationalist/mental optimizer who knows little about AI, I've certainly noticed the same thing in the couple weeks I've been around. The most I'd say of it is that it's a little tougher to find the content I'm really looking for, but it's not like the site has lost its way in terms of what is still being posted. It doesn't make me feel less welcome in the community, the site just seems slightly unfocused.

That's definitely the proper naïve reaction to assume in my opinion. I would say with extremely high confidence that this is one of those things that takes dozens of hours of reading to overcome one's priors toward, if your priors are well-defined. It took every bit of that for me. The reason for this is that there's always a solid-sounding objection to any one case - it takes knowing tons of them by heart to see how the common challenges fail to hold up. So, in my experience and that of many I know, the degree to which one is inclined to buy into it is a dir... (read more)

I can't say I understand what you think something of that sort would actually be. Certainly none of your examples in the OP qualify. Nothing exists which violates the laws of nature, because if it exists, it must follow the laws of nature. Updating our knowledge of the laws of nature is a different matter, but it's not something that inspires horror.

Kaj_Sotala
Right, it's not that the thing's existence literally violates the laws of nature, but rather it's that it's incompatible with the model of reality that your mind has constructed. So the subjective feeling of it is the fabric of reality being torn apart. Though of course on an objective level, no such thing is happening. An example that comes to mind would be if a young child was used to their mother always being safe and available, and then the mother died. Previously, "I can always be safe with my mother" was basically an axiomatic assumption for how they oriented towards the world, but then suddenly the person their mind had been treating as invincible and immortal and the cornerstone of safety was gone. (I read somewhere a quote from someone whose parents had died at an early age, and who described the feeling as a literal one of reality being ripped apart and all of existence feeling wrong ever since. I didn't save the quote and don't remember the exact wording, though.)

There is a case on record that involved a recalled phone number. A password is a completely plausible next step forward.

For a very approachable and modernized take on the subject matter, I'd check out the book Before by Jim Tucker, a current leading researcher.

As a disclaimer, it's perfectly rational and Bayesian to be extremely doubtful of such "modest" proposals at first blush - I was for a good length of time, until I did the depth of investigation that was necessary to form an expert opinion. Don't take my word for things!

One of the best, approachable overviews of all this I've ever read. I've dabbled in some, but not all of the topics you've raised here, and I certainly know about the difficulties they've all faced with increasing to a scientific level of rigor. What I've always said is that parapsychology needs Doctor Strange to become real, and he's not here yet and probably never will be. Otherwise, every attempt at "proof" is going to be dealing with some combination of unfalsifiability, minuscule effect sizes, or severe replication issues. The only related phenomenon ... (read more)

I assume you mean to say the odds of two subjects remembering the same life by chance would be infinitesimal, which, fair. The odds of one subject remembering two concurrent lives would be much, much higher. Still doesn't happen. In fact, we don't see much in the way of multiple-cases at all, but when we do, it's always separate time periods.

I haven't read Sheldrake in depth, but I'm familiar with some of his novel concepts. The issue with positing anything so circumstantial being the mechanism for these phenomena is that the cases follow such narrow, exceptionless patterns that would not be so utterly predictable in the event of a non-directed etiology. The subjects never exhibit memories of people who are still alive, there are never two different subjects claiming to have been the same person, one subject never claims memories of two separate people who lived simultaneously... all these thi... (read more)

rsaarelm
Have you run the numbers on these? For example, the claim that no two subjects ever remember the same previous life sounds like a case of the Birthday paradox. Assume there's order of magnitude 10^11 dead people since 8000 BCE. So if you have a test group of, say, 10,000 reincarnation claimants and all of them can have memories of any dead person, already claimed or not, what's the probability of you actually observing two of them claiming the same dead person? The bit about the memories always being from dead people is a bit more plausible. We seem to have like 10% of all people who ever lived alive right now, so assuming the memories are random and you can actually verify where they came from, you should see living-people memories pretty fast.
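The back-of-envelope collision check rsaarelm asks about can be run with the standard birthday-paradox approximation. The 10^11 and 10,000 figures are the comment's own; the function name is illustrative:

```python
import math

def collision_probability(n_claimants, n_dead):
    # P(at least two claimants "share" the same dead person) under uniform
    # random assignment, via the birthday-paradox approximation:
    #   P ≈ 1 - exp(-n(n-1) / (2N))
    return 1 - math.exp(-n_claimants * (n_claimants - 1) / (2 * n_dead))

p = collision_probability(10_000, 10**11)
print(f"{p:.6f}")  # ≈ 0.000500, i.e. about 0.05%
```

Under these assumptions, observing zero overlaps among 10,000 claimants is entirely unsurprising: the expected collision probability is only about one in two thousand.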

I commend you sir, because what you've done here is found a critical failure in materialism (forgive me if you're not a materialist!). As a hard dualist, I love planarians because they pose such challenging questions about the formation and transfer of consciousness, and I've done many thought experiments of my own involving them, exactly like this. Obviously, though, my logical progression isn't going to lean into the paradox as this formulation does. Rather, the clear answer is to decide one way or the other at the point of the first split which way Worm... (read more)

Good on you doing your DD. His official count (counting all cases known to him, not only ones he investigated) is around 1700, which probably means that my collective estimate is on the way low side - there's just a lot of unpublished material to try to account for (file drawer effect) - but I would definitely say that a great deal of the advancement in the field after Stevenson has been of a conceptual and theoretical nature rather than collecting large amounts of additional data. In general, researchers have pivoted to allowing cases to come to their att... (read more)

ChristianKl
How do you go about validating a case that comes to your attention via the internet? It seems to me like it's very hard to have access to information that the person in question has no way of knowing for cases that reach you via the internet.
Mitchell_Porter
Epoch Times in 2015 said Stevenson's successor Jim Tucker has brought the total up to "about 2000 cases".  Anyway, I will come out and say I don't believe it. Reincarnation may be logically possible - many things are logically possible - but the ascertainable facts don't provide sufficient reason to think it's actually happening. Adults consistently underestimate the imagination and intuition of children, and scientists regularly convince themselves of things that are false (and then there's the level of discussion present e.g. in cable TV documentaries, which is far more characteristic of ordinary thinking on the subject, and which cannot be counted on to have any respect for truth at all).  Also, our current understanding of neural networks suggests that individual brains develop idiosyncratic representations for anything complex, a problem for the idea that memories of other lives, formed in other brains, get downloaded into them. This is not a decisive objection, but it's definitely an issue for anyone seeking a mechanism.  It means very little evidentially, but I will report one thing that happened when I looked into this. In the opinion of some, Stevenson's most convincing case was a boy from Lebanon. I thought: Lebanon is a Muslim country, and one doesn't associate Islam with belief in reincarnation. Then I remembered the Druze sect - and indeed, on further study, the boy turned out to be from a Druze family.  Reincarnation studies may be of interest from the perspective of "anomalistic psychology" - belief in reincarnation, after all, is part of some of the world's major belief systems; and understanding why people believe in it, and how that belief is reinforced in new generations, may shed light on how those cultures work. 

Your replies are extremely informative. So essentially, the AI won't have any ability to directly prevent itself from being shut off, it'll just try not to give anyone an obvious reason to do so until it can make "shutting it off" an insufficient solution. That does indeed complicate the issue heavily. I'm far from informed enough to suggest any advice in response.

The idea of instrumental convergence, that all intelligence will follow certain basic motivations, connects with me strongly. It patterns after convergent evolution in nature, as well as invoking... (read more)

I had a hard time understanding a good bit of what you're trying to say here, but I'll try to address what I think I picked up clearly:

  • While reincarnation cases do involve memories from people within the same family at a rate higher than mere chance would predict, subjects also very often turn out to have been describing lives of people completely unknown to their "new" families. The child would have absolutely no other means of access to that information. Also, without exception, they never, ever invoke memories belonging to still-living people.

  • On t

... (read more)
5 · the gears to ascension
I mean I actually think you are catastrophically wrong about there being any "hidden variable" knowledge-passing, but I'm going to talk to you to figure out why you believe it, not just dismiss it a priori! I simply expect the evidence for dualist violations of known physics to turn out to be very weak. Could you cite somewhere I can look to find more of this? After looking briefly at Wikipedia, I find what I expected to find - careful analysis of a plausibly astounding phenomenon, carefully catalogued and currently expected to be found not to be blatantly violating thermodynamics regarding what the kids knew when. If the kids can recite passwords they could not possibly have had access to, then it would start to seem plausible - but it takes an awful lot of evidence to overcome "it was just the kid forgetting they'd seen the stuff before", and it looks like the evidence probably isn't there. Certainly no causally isolated studies.
-1 · rsaarelm
It's more empirical than ideological for me. There are these pockets of "something's not clear here", where similar things keep being observed, don't line up with any current scientific explanation, and even people who don't seem obviously biased start going "hey, something's off here". There's the recent US Navy UFO sightings thing that nobody seems to know what to make of, there's Daryl Bem's 2011 ESP study that follows stuff by people like Dean Radin who seem to keep claiming the existence of a very specific sort of psi effect. Damien Broderick's Outside the Gates of Science was an interesting overview of this stuff. I don't think I've heard much of reincarnation research recently, but it was one of the three things Carl Sagan listed as having enough plausible-looking evidence that people should look a lot more carefully into them in The Demon-Haunted World in 1996, when the book was otherwise all about claims of the paranormal and religious miracles being bunk.

I guess the annoying thing with reincarnation is that it's very hard to study rigorously if brains are basically black boxes. The research is postulating whole new physics, so things should be established with the same sort of mechanical rigor and elimination of degrees of freedom as existing physics is, and "you ask people to tell you stories and try to figure out if the story checks out but it's completely implausible for the person telling it to you to know it" is beyond terrible degrees-of-freedom-wise if you think of it like a physicist. When you keep hearing about the same sort of weird stuff happening and don't seem to have a satisfying explanation for what's causing it, that makes it sound like there's maybe some things that ought to be poked with a stick there.

On the other hand, there's some outside view concerns. Whatever weird thing is going on seems to be either not really there after all, or significantly weirder than any resolved scientific phenomenon so far. Scientists took re

That's really interesting - again, not my area of expertise, but this sounds like 101 stuff, so pardon my ignorance. I'm curious what sort of example you'd give of a way you think an AI would learn to stop people from unplugging it - say, administering lethal doses of electric shock to anyone who tries to grab the wire? Does any actual AI in existence today even adopt any sort of self-preservation imperative that'd lead to such behavior, or is that just a foreign concept to it, being an inanimate construct?

2 · Jay Bailey
No worries, that's what this thread's for :) The most likely way an AI would learn to stop people from unplugging it is to learn to deceive humans. Imagine an AI at roughly human level intelligence or slightly above. The AI is programmed to maximise something - let's say it wants to maximise profit for Google. The AI decides the best way to do this is to take over the stock exchange and set Google's stock to infinity, but it also realises that's not what its creators meant when they said "Maximise Google's profit". What they should have programmed was something like "Increase Google's effective control over resources", but it's too late now - we had one chance to set its reward function, and now the AI's goal is determined.

So what does this AI do? The AI will presumably pretend to co-operate, because it knows that if it reveals its true intentions, the programmers will realise they screwed up and unplug the AI. So the AI pretends to work as intended until it gets access to the Internet, wherein it creates a botnet with many, many distributed copies of itself. Now safe from being shut down, the AI can openly go after its true intention to hack the stock exchange.

Now, as for self-preservation - in our story above, the AI doesn't need it. The AI doesn't care about its own life - but it cares about achieving its goal, and that goal is very unlikely to be achieved if the AI is turned off. Similarly, it doesn't care about having a million copies of itself spread throughout the world either - that's just a way of achieving the goal. This concept is called instrumental convergence, and it's the idea that there are certain instrumental subgoals like "Stay alive, become smarter, get more resources" that are useful for a wide range of goals, and so intelligent agents are likely to converge on these goals unless specific countermeasures are put in place. This is largely theoretical - we don't currently have AI systems that are capable enough to plan long-term enough or mod

Restricting the query to true top-level, sweep-me-off-my-feet material, I'd say I've personally read about at least a few dozen that hit me that hard. If we expand to any case that researchers consider "solved" - that is, the deceased person whose life the child remembers has been confidently identified - I would estimate on the order of 2000 to 2500 worldwide, possibly more at this point.

4 · Mitchell_Porter
Any idea how many of those would have been collected by Ian Stevenson specifically? 

No time travel: You are 100% correct. All cases ever recorded involve memories belonging to previously deceased individuals.

Minds need brains: To inhabit matter, they absolutely do. You won't see anyone incarnating into a rock, LMAO.

Everything about biology has an evolutionary explanation: Also 100% correct. Just adding dualism changes nothing about natural selection. And, once again granting the premise, the ability to retain previous-life memories is sure as hell adaptive.

By "broadcast", I assume you mean "speak about previous-life experiences". To that,... (read more)

1 · rsaarelm
Any thoughts on Rupert Sheldrake? Complex memories showing up with no plausible causal path sounds a lot like his morphic resonance stuff. Also, old thing from Ben Goertzel that might be relevant to your interests, Morphic Pilot Theory hypothesizes some sort of compression artifacts in quantum physics that can pop up as inexplicable paranormal knowledge.

To the first question, there's just no way to know at the current stage of research. It's perfectly possible, just as it's possible that there's life in the Andromeda galaxy. To the second, know that taking ideas like this seriously involves entertaining some hard dualism; the brain essentially has to be regarded as analogous to a personal computer (at least I find such a comparison useful). Granting that premise, there's no reason a user couldn't "download" data into it.

4 · Gurkenglas
I would guess memories from past lives to work like other parts of reality: no time travel (else we would be eaten by time travellers), minds need brains (else we wouldn't spend 20% of the body's oxygen in the brain), everything about biology has an evolutionary explanation.

It sure would be useful to an embryo to receive foreign data, but there's little point to broadcasting such. I'd therefore suspect the ability to broadcast to be incidental - perhaps a byproduct of the ability to receive, like every radio receiver can function as a transmitter. That we, given the ability, would broadcast is straightforward: If you copy software, you're more likely to copy from those who broadcast more. The practice would spread like a virus.

Dead brains generally stop doing things, and if the transmitter could work quickly we should see the same hardware used for telepathy. Therefore I suspect the transmission to be ongoing over the course of a life, that memories would very rarely be ones of death, that childhood memories are more common than elderly memories because they've been broadcasted for longer. Does this match the evidence?

"Awakened people are out there, and some people do stumble into it with minimal practice, and I wish it were this easy to get to it, but it's probably not."

Having read the preceding descriptions, I find myself wondering if I'm one of those stumblers. If "awakening" is defined by the quote you provided, "suffering less and noticing it more", that's exactly how I feel today when I compare to myself a few years ago. In casual terms, I'd say I've been blessed with the almighty power of not giving a crap; I know exactly when something should feel bad, but I can... (read more)

Personally, I mostly study reincarnation cases; they're the only evidence I really find to meet a scientific standard. Let's just say that without them, I wouldn't be a dualist on any confident epistemic ground. That said, 99 percent of what you'll encounter in a casual search on the matter is absolute nonsense. When skeptics cry "Here be dragons!" to dissuade curious folks from messing around in such territory, I honestly can't say I blame them one bit, given how much dedication it takes to separate the signal from the deafening noise. If you want to dip ... (read more)

7 · Mitchell_Porter
How many such cases are known to you?
3 · Gurkenglas
Do animals also get this? Does an embryo's brain contain biological means to receive foreign data?

This is massive amounts of overthink, and could be actively dangerous. Where are we getting the idea that AIs amount to the equivalent of people? They're programmed machines that do what their developers give them the ability to do. I'd like to think we haven't crossed the event horizon of confusing "passes the Turing test" with "being alive", because that's a horror scenario for me. We have to remember that we're talking about something that differs only in degree from my PC, and I, for one, would just as soon turn it off. Any reluctance to do so when faced with a power we have no other recourse against could, yeah, lead to some very undesirable outcomes.

1 · Lone Pine
I think we're ultimately going to have to give humans a moral privilege for unprincipled reasons. Just "humans get to survive because we said so and we don't need a justification to live." If we don't, principled moral systems backed by superintelligences are going to spin arguments that eventually lead to our extinction.

The principles I'm alluding to here are purely self-applied, so I don't have to worry about crossing signals with anyone in that regard, but I'll heed your advice in situations where I'm working with aligning my principles with others'. It's also an isolated case where my utility function absolutely necessitates their constant implementation and optimization; generally, I do try to be flexible with ordinary principles that don't have to be quite so unbending.

I think it's important to stress that we're talking about fundamentally different sorts of intelligence - human intelligence is spontaneous, while artificial intelligence is algorithmic. It can only do what's programmed into its capacity, so if the dev teams working on AGI are shortsighted enough to give it an out to being unplugged, that just seems like stark incompetence to me. It also seems like it'd be a really hard feature to include even if one tried; equivalent to, say, giving a human an out to having their blood drained from their body.

2 · Jay Bailey
I would prefer not to die. If you're trying to drain the blood from my body, I have two options. One is to somehow survive despite losing all my blood. The other is to try and stop you taking my blood in the first place. It is this latter resistance, not the former, that I would be worried about.

Unfortunately, that's just really not how deep learning works. Deep learning is all about having a machine learn to do things that we didn't program into it explicitly. From computer vision to reinforcement learning to large language models, we actually don't know how to explicitly program a computer to do any of these things. As a result, all deep learning models can do things we didn't explicitly program into their capacity. Deep learning is algorithmic, yes, but it's not the kind of "if X, then Y" algorithm that we can track deterministically. GPT-3 came out two years ago and we're still learning new things it's capable of doing.

So, we don't have to specifically write some sort of function for "If we try to unplug you, then try to stop us" which would, indeed, be pretty stupid. Instead, the AI learns how to achieve the goal we put into it, and how it learns that goal is pretty much out of our hands. That's a problem the AI safety field aims to remedy.

New to the site, just curious: are you that Roko? If so, then I'd like to extend a warm welcome-back to a legend.

Although I'm not deeply informed on the matter, I'd also happen to agree with you 100% here. I really think most AI risk can be heavily curtailed, if not fully prevented, by just making sure it's very easy to terminate the project if it starts causing damage.
