I'm a hardcore consciousness and metaphysics nerd, so some of your questions fall within my epistemic wheelhouse. Others I'm simply as interested in as you are, and can respond to only with opinion or conjecture. I'll take a stab at a selection of them below:
4: "Easy" is up in the air, but one of my favorite instrumental practices is to identify lines of preprogrammed "code" in my cognition that do me absolutely no good (grief, for instance), and simply hack into them to make them execute different emotional and behavioral outputs. I think the best way to st...
"If you go back even further we're the descendants of single-celled organisms that absolutely don't have experience."
My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly demonstrate intentional action in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It'd be completely illogical not to generalize downward that the ones that don't move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originat...
Explain to me how a sufficiently powerful AI would fail to qualify as a p-zombie. The definition I understand for that term is "something that is externally indistinguishable from an entity that has experience, but internally has no experience". While it is impossible to tell the difference empirically, we can know by following evolutionary lines: all future AIs are conceptually descended from computer systems that we know don't have experience, whereas even the earliest things we ultimately evolved from almost certainly did have experience (I have no clue...
I spoke briefly on acceptance in my comment to the other essay, and I think I agree more with how that one conceptualized it. Mostly, I disagree that acceptance entails grief, or that it has to be hard or complicated. At the very least, that's not a particularly radical form of acceptance. My view on grief is largely that it is an avoidable problem we put ourselves through for lack of radical acceptance. Acceptance is one move: you say all's well and you move on. With intensive pre-invested effort, this can be done for anything, up to and including whateve...
A very necessary post in a place like this, in times like these; thank you very much for these words. A couple disclaimers to my reply: I'm cockily unafraid of death in personal terms, and I'm not fully bought into the probable-AI-disaster narrative, although far be it from me to claim to have enough knowledge to form an educated opinion; it's really a field I follow with an interested layman's eye. But I'm not exactly one of those struggling at the moment, and I'd even say that the recent developments with ChatGPT, Bing, and whatever follows them excite m...
I fully agree with the gist of this post. Empowerment, as you define it, is both a very important factor in my own utility function and seems to be an integral component of any formulation of fun theory. In your words, "to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes" describes my terminal goals for a thousand years into the future so smack-dab perfectly that I don't think I could've possibly put it any better. Empowerment is, yes, an instrumental goal for all the options it creates, but also an ...
I highly recommend following Rational Animations on YouTube for this sort of general purpose. I'd describe their format as "LW meets Kurzgesagt", the latter of which I already found highly engaging. They don't post new videos that often, but their stuff is excellent, even more so recently, and definitely triggers my dopamine circuits in a way that rationality content generally struggles to. Imo, it's perfect introductory material for anyone new to LW to get familiar with its ideology in a way that makes learning easy and fun.
(Not affiliated with RA in any way, just a casual enjoyer of chonky shibes)
You've described habituation, and yes, it does cut both ways. You also speak of "pulling the unusual into ordinary experience" as though that were undesirable, but on the contrary, I find exactly that to be a central motivation of mine. When I come upon things that at first blush inspire awe, my drive is to fully understand them, perhaps even to command them. I don't think I know how to see anything as "bigger than myself" in a way that doesn't ring simply as a challenge to rise above whatever it is.
Manipulating one's own utility function is supposed to be hard? That would be news to me. I've never found it problematic once I've either learned new information that led me to update it or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception would be things one easily attaches to emotionally, such as pets, and I've learned to simply not allow myself to become so attached. Otherwise, could you please explain why you claim that such traits are not readily editable in a more general sense?
Thanks for asking. I'll likely be publishing my first paper early next year, but the subject matter is quite advanced, definitely not entry-level stuff. It takes more of a practical orientation to the issue than merely establishing evidence (the former being my specialty as a researcher; as is probably clear from other replies, I'm satisfied with the raw evidence).
As for best published papers for introductory purposes, here you can find one of my personal all-time favorites. https://www.semanticscholar.org/paper/Development-of-Certainty-About-the-Correct-Decease...
Apologies for the absence; combination of busy/annoyance with downvotes, but I could also do a better job of being clear and concise. Unfortunately, after having given it thought, I just don't think your request is something I can do for you, nor should it be. Honestly, if you were to simply take my word for it, I'd wonder what you were thinking. But good information, including primary sources, is openly accessible, and it's something that I encourage those with the interest to take a deep dive into, for sure. Once you go far enough in, in my experience, t...
Based on the evidence I've been presented with to this point, I'd say high enough to confidently bet every dollar I'll ever earn on it. Easily >99% that it'll be put beyond reasonable doubt in the next 100-150 years, and I only specify that long because of the spectacularly lofty standards academia forces such evidence to measure up to. I'm basically alone in my field in actually being in favor of those standards, however, so I have no interest in declining to play the long game with them.
Been staying hard away from crypto all year, with the general trend of about one seismic project failure every 3 months, and this might be the true Lehman moment on top of the shitcoin sundae. I'm making no assumptions about intent or possible criminal actions until more info is revealed, but it certainly looks like SBF mismanaged a lot of other people's money and was overconfident with his own, which was largely pegged to illiquid altcoins and FTT. The most shocking thing to me is how CZ took a look at their balance sheets for all of like 3 hours after announcing int...
That being said, I could see how this feeling would come about if the value/importance in question is being imposed on you by others, rather than being the value you truly assign to the project. In that case, such a burden can weigh heavily and manifest aversively. But avoiding something you actually assign said value to just seems like a basic error in utility math?
I have a taboo on the word "believe", but I am an academic researcher of afterlife evidence. I personally specialize in verifiable instances of early-childhood past-life recall.
Honestly, even from a purely selfish standpoint, I'd be much more concerned about a plausible extinction scenario than just dying. Figuring out what to do when I'm dead is pretty much my life's work, and if I'm being completely honest and brazenly flouting convention, the stuff I've learned from that research holds a genuine, not-at-all-morbid appeal to me. Like, even if death wasn't inevitable, I'd still want to see it for myself at some point. I definitely wouldn't choose to artificially prolong my lifespan, given the opportunity. So personally, death an...
I like the thought behind this. You've hit on something I think is important for being productive: if thinking about the alternative makes you want to punch through a wall, that's great, and you should try to make yourself feel that way. I do a similar thing, but more toward general goal-accomplishment; if I have an objective in sight that I'm heavily attracted to, I identify every possible obstacle to the end (essentially murphyjitsu'ing), and then I cultivate a driving, vengeful rage toward each specific obstacle, on top of what motivation I already had ...
It appears what you have is free won’t!
For the own-behavior predictions, could you put together a chart with calibration accuracy on the Y axis and time elapsed between the prediction and the final decision (in buckets) on the X axis? I wonder whether the predictions became less calibrated the farther into the future you tried to predict, since a broader time gap would leave more opportunity for your intentions to change.
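In case it helps, here's a minimal sketch of one way that chart could be built, assuming the predictions were logged as (stated probability, outcome, days-until-decision) tuples; the sample data, bucket edges, and the 1-minus-mean-absolute-error calibration score are all just illustrative choices:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical records: (predicted probability, outcome as 0/1, days between
# prediction and final decision). All names and numbers here are illustrative.
predictions = [
    (0.9, 1, 2), (0.7, 1, 10), (0.6, 0, 45), (0.8, 1, 5),
    (0.3, 0, 90), (0.55, 1, 30), (0.95, 1, 1), (0.4, 0, 180),
]

# Time-gap bucket edges in days; tune to however the predictions were logged.
bucket_edges = [0, 7, 30, 90, 365]

labels, calibration = [], []
for lo, hi in zip(bucket_edges[:-1], bucket_edges[1:]):
    in_bucket = [(p, o) for p, o, d in predictions if lo <= d < hi]
    if not in_bucket:
        continue  # skip empty buckets rather than plotting a gap
    # Crude per-bucket calibration score: 1 minus the mean absolute error
    # between stated probability and outcome (a Brier-style score works too).
    err = np.mean([abs(p - o) for p, o in in_bucket])
    labels.append(f"{lo}-{hi}d")
    calibration.append(1 - err)

plt.bar(labels, calibration)
plt.xlabel("Days between prediction and final decision")
plt.ylabel("Calibration (1 - mean absolute error)")
plt.title("Own-behavior prediction calibration by time gap")
plt.show()
```

A proper calibration curve within each bucket would need far more data points, but even a crude per-bucket score like this should reveal whether the longer-horizon predictions drift.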
This is way too interesting not to have comments!
First, I think this bears on the makeup of one's utility function. If your UF contains absolutes, infinite value judgments, then in my opinion, it is impossible not to be truly motivated toward them. No pushing is ever required; at least, it never feels like pushing. Obstacles just manifest to the mind in the form of fun challenges that only amplify the engagement, because you already know you have the will to win. If your UF does not include absolutes, or you step down to the levels that are finite (for the...
I suspect the dichotomy may be slightly misapportioned here, because I sometimes find that ideas which are presented on the right side end up intersecting back with the logical extremes of methods from the left side. For example, the extent to which I push my own rationality practice is effectively what has convinced me that there's a lot of ecological validity to classical free will. The conclusion that self-directed cognitive modification has no limits, which implies conceptually unbounded internal authority, is not something that I would imagine one cou...
I don't think I've ever experienced this. I'd actually say I could be described by the blue graph. The more I really, really care about something, the more I want to do absolutely nothing but it, especially if I care about it for bigger reasons than, say, because it's a lot of fun at this moment. Sometimes, there comes a point where continuing to improve said objective feels like it's bringing diminishing returns, so I call the project sufficiently complete to my liking. Other times, it never stops feeling worth the effort, or it is simply too important no...
I like this proposal. In light of the issues raised in this post, it's important for people to get into the habit of explaining their own criteria for "truth" instead of leaving what they're talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I'm interested in arriving at. Basically, we shouldn't talk about the world as though we have any actual means of knowing things about it with probability 1.
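To spell that last point out with the standard worked example: under Bayes' theorem, a hypothesis assigned probability 1 can never be revised by any evidence. With $P(H) = 1$,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)} = \frac{P(E \mid H)}{P(E \mid H) + 0} = 1,$$

so no observation, however surprising, could ever move the credence, which is exactly why probability 1 should be off-limits for empirical claims.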
Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as "that which pays the most rent in anticipated experience"; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it is the best way we have of approximating truth. So I'm constantly looking really hard at the eviden...
Unfortunately, I haven't kept a neat record of where exactly each case is published, so I asked my industry connections and was directed to the following article. Having reviewed it, it would of course be presumptuous of me to say I endorse everything stated therein, since I have not read the primary source for every case described. But those sources are referenced at the bottom, many with links. It should suffice as a compilation of information pertaining to your question, and you can judge what meets your standards.
https://psi-encyclopedia.spr.ac.uk/articles/reincarnation-cases-records-made-verifications
Disclaimer, I'm not someone who personally investigates cases. What you've raised has actually been a massive problem for researchers since the beginning, and has little to do with the internet - Stevenson himself often learned of his cases many years after they were in their strongest phase, and sometimes after connections had already been made to a possible previous identity. In general, the earlier a researcher can get on a case and in contact with the subject, the better. As a result, cases in which important statements given by the subject are documen...
On that note, the main way I could envision AI being really destructive is getting access to a government's nuclear arsenal. Otherwise, it's extremely resourceful but still trapped in an electronic medium; the most it could do if it really wanted to cause damage is destroy the power grid (which would destroy it too).
Feels like Y2K: Electric Boogaloo to me. In any case, if a major catastrophe did come of the first attempt to release an AGI, I think the global response would be to shut it all down, taboo the entire subject, and never let it be raised as a possibility again.
Are you telling me you'd be okay with releasing an AI that has a 25% chance of killing over a billion people, and a 50% chance of killing at least hundreds of millions? I have to be missing the point here, because this post isn't doing anything to convince me that AI researchers aren't Stalin on steroids.
Or are you saying that if one can get to that point, it's much easier from there to get to the point of having an AI that will cause very few fatalities and is actually fit for practical use?
Rather, I think he means that alignment is such a narrow target, and the space of all possible minds is so vast, that the default outcome is that unaligned AGI becomes unaligned ASI and ends up killing all humans (or even all life) in pursuit of its unaligned objectives. Hitting anywhere close to the alignment target (such that there's at least 50% chance of "only" one billion people dying) would be a big win by comparison.
Of course, the actual goal is for “things [to] go great in the long run”, not just for us to avoid extinction. Alignment itself is the ...
As a new member and hardcore rationalist/mental optimizer who knows little about AI, I've certainly noticed the same thing in the couple weeks I've been around. The most I'd say of it is that it's a little tougher to find the content I'm really looking for, but it's not like the site has lost its way in terms of what is still being posted. It doesn't make me feel less welcome in the community, the site just seems slightly unfocused.
That's definitely the proper naïve reaction, in my opinion. I would say with extremely high confidence that this is one of those things that takes dozens of hours of reading to overcome your priors on, if your priors are well-defined. It took every bit of that for me. The reason is that there's always a solid-sounding objection to any one case; it takes knowing tons of them by heart to see how the common challenges fail to hold up. So, in my experience and that of many I know, the degree to which one is inclined to buy into it is a dir...
I can't say I understand what you think something of that sort would actually be. Certainly none of your examples in the OP qualify. Nothing exists which violates the laws of nature, because if it exists, it must follow the laws of nature. Updating our knowledge of the laws of nature is a different matter, but it's not something that inspires horror.
There is a case on record that involved a recalled phone number. A password is a completely plausible next step forward.
For a very approachable and modernized take on the subject matter, I'd check out the book Before by Jim Tucker, a current leading researcher.
As a disclaimer, it's perfectly rational and Bayesian to be extremely doubtful of such "modest" proposals at first blush; I was for a good length of time, until I did the depth of investigation necessary to form an expert opinion. Don't take my word for things!
One of the best, most approachable overviews of all this I've ever read. I've dabbled in some, but not all, of the topics you've raised here, and I certainly know about the difficulties they've all faced in rising to a scientific level of rigor. What I've always said is that parapsychology needs Doctor Strange to become real, and he's not here yet and probably never will be. Otherwise, every attempt at "proof" is going to be dealing with some combination of unfalsifiability, minuscule effect sizes, or severe replication issues. The only related phenomenon ...
I assume you mean to say the odds of two subjects remembering the same life by chance would be infinitesimal, which, fair. The odds of one subject remembering two concurrent lives would be much, much higher. Still doesn't happen. In fact, we don't see much in the way of multiple-cases at all, but when we do, it's always separate time periods.
I haven't read Sheldrake in depth, but I'm familiar with some of his novel concepts. The issue with positing anything so circumstantial as the mechanism for these phenomena is that the cases follow narrow, exceptionless patterns that would not be so utterly predictable under a non-directed etiology. The subjects never exhibit memories of people who are still alive, there are never two different subjects claiming to have been the same person, one subject never claims memories of two separate people who lived simultaneously... all these thi...
I commend you, sir, because what you've done here is find a critical failure in materialism (forgive me if you're not a materialist!). As a hard dualist, I love planarians because they pose such challenging questions about the formation and transfer of consciousness, and I've done many thought experiments of my own involving them, exactly like this. Obviously, though, my logical progression isn't going to lean into the paradox as this formulation does. Rather, the clear answer is to decide one way or the other at the point of the first split which way Worm...
Good on you for doing your DD. His official count (counting all cases known to him, not only ones he investigated) is around 1700, which probably means that my collective estimate is on the low side; there's just a lot of unpublished material to try to account for (the file-drawer effect). But I would definitely say that a great deal of the advancement in the field after Stevenson has been of a conceptual and theoretical nature rather than collecting large amounts of additional data. In general, researchers have pivoted to allowing cases to come to their att...
Your replies are extremely informative. So essentially, the AI won't have any ability to directly prevent itself from being shut off, it'll just try not to give anyone an obvious reason to do so until it can make "shutting it off" an insufficient solution. That does indeed complicate the issue heavily. I'm far from informed enough to suggest any advice in response.
The idea of instrumental convergence, that all intelligence will follow certain basic motivations, resonates with me strongly. It patterns after convergent evolution in nature, as well as invoking...
I had a hard time understanding a good bit of what you're trying to say here, but I'll try to address what I think I picked up clearly:
While reincarnation cases do involve memories from people within the same family at a rate higher than mere chance would predict, subjects also very often turn out to have been describing lives of people completely unknown to their "new" families. The child would have absolutely no other means of access to that information. Also, without exception, they never, ever invoke memories belonging to still-living people.
On t
That's really interesting - again, not my area of expertise, but this sounds like 101 stuff, so pardon my ignorance. I'm curious what sort of example you'd give of a way you think an AI would learn to stop people from unplugging it - say, administering lethal doses of electric shock to anyone who tries to grab the wire? Does any actual AI in existence today even adopt any sort of self-preservation imperative that'd lead to such behavior, or is that just a foreign concept to it, being an inanimate construct?
Restricting the query to true top-level, sweep-me-off-my-feet material, I'd say I've personally read about at least a few dozen that hit me that hard. If we expand to any case that researchers consider "solved" - that is, the deceased person whose life the child remembers has been confidently identified - I would estimate on the order of 2000 to 2500 worldwide, possibly more at this point.
No time travel: You are 100% correct. All cases ever recorded involve memories belonging to previously deceased individuals.
Minds need brains: To inhabit matter, they absolutely do. You won't see anyone incarnating into a rock, LMAO.
Everything about biology has an evolutionary explanation: Also 100% correct. Just adding dualism changes nothing about natural selection. And, once again granting the premise, the ability to retain previous-life memories is sure as hell adaptive.
By "broadcast", I assume you mean "speak about previous-life experiences". To that,...
To the first question, there's just no way to know at the current stage of research. It's perfectly possible, just as it's possible that there's life in the Andromeda galaxy. To the second, know that taking ideas like this seriously involves entertaining some hard dualism; the brain essentially has to be regarded as analogous to a personal computer (at least I find such a comparison useful). Granting that premise, there's no reason a user couldn't "download" data into it.
"Awakened people are out there, and some people do stumble into it with minimal practice, and I wish it were this easy to get to it, but It's probably not."
Having read the preceding descriptions, I find myself wondering if I'm one of those stumblers. If "awakening" is defined by the quote you provided, "suffering less and noticing it more", that's exactly how I feel today when I compare to myself a few years ago. In casual terms, I'd say I've been blessed with the almighty power of not giving a crap; I know exactly when something should feel bad, but I can...
Personally, I mostly study reincarnation cases; they're the only evidence I really find to meet a scientific standard. Let's just say that without them, I wouldn't be a dualist on any confident epistemic ground. That said, 99 percent of what you'll encounter in a casual search on the matter is absolute nonsense. When skeptics cry "Here be dragons!" to dissuade curious folks from messing around in such territory, I honestly can't say I blame them one bit, given how much dedication it takes to separate the signal from the deafening noise. If you want to dip ...
This is massive amounts of overthink, and could be actively dangerous. Where are we getting the idea that AIs amount to the equivalent of people? They're programmed machines that do what their developers give them the ability to do. I'd like to think we haven't crossed the event horizon of confusing "passes the Turing test" with "being alive", because that's a horror scenario for me. We have to remember that we're talking about something that differs only in degree from my PC, and I, for one, would just as soon turn it off. Any reluctance to do so when faced with a power we have no other recourse against could, yeah, lead to some very undesirable outcomes.
The principles I'm alluding to here are purely self-applied, so I don't have to worry about crossing signals with anyone in that regard, but I'll heed your advice in situations where I'm working with aligning my principles with others'. It's also an isolated case where my utility function absolutely necessitates their constant implementation and optimization; generally, I do try to be flexible with ordinary principles that don't have to be quite so unbending.
I think it's important to stress that we're talking about fundamentally different sorts of intelligence: human intelligence is spontaneous, while artificial intelligence is algorithmic. It can only do what's programmed into its capacity, so if the dev teams working on AGI are shortsighted enough to leave it a way out of being unplugged, that just seems like stark incompetence to me. It also seems like it'd be a really hard feature to include even if one tried; equivalent to, say, giving a human a way out of having their blood drained from their body.
New to the site, just curious: are you that Roko? If so, then I'd like to extend a warm welcome-back to a legend.
Although I'm not deeply informed on the matter, I happen to agree with you 100% here. I really think most AI risk can be heavily curtailed, if not fully prevented, just by making sure it's very easy to terminate the project if it starts causing damage.
Yes, I am a developing empirical researcher of metaphysical phenomena. My primary item of study is past-life memory cases of young children, because I think this line of research is both the strongest evidentially (hard verifications of such claims, to the satisfaction of any impartial arbiter, are quite routine), as well as the most practical for longtermist world-optimizing purposes (it quickly becomes obvious we're literally studying people who've successfully overcome death). I don't want to undercut the fact that scientific metaphysics is a much large...