they said that they were no longer asexual (they never were),
I'm somewhat skeptical of the claim in parentheses. It certainly sounds like there was a state in which they demonstrated enough traits to think they were asexual, and that information tends to be worth tracking, even if only for self-diagnostics.
(epistemic status of source: mostly experiential and anecdotal, from a lay lucid dreamer who knows a few other lucid dreamers)
The common negative effects from my lucid dreaming experiences:
- If I'm not careful with how I exert the "influence" I have in the dream, I can "crash" the dream, usually resulting in me waking up and having trouble getting back to sleep for a bit
- When I use a lot of influence in a lucid dream, especially to extend the length of a dream, I find that I end up feeling far less rested than normal (but that has proven hard to try and quant...
- Learning about the trigger conditions for serotonin, oxytocin, dopamine, and cortisol, which allowed for more direct optimization away from cortisol activations
This idea started in 2020, when a coworker pointed me at an article: The DOCS Happiness Model. I then did some naturalist studies with that framing in mind, and managed to reduce the cortisol activations I considered "unhelpful" by a significant degree. I consider this of high value to people who have enough control over their environment to meaningfully optimize against cortisol trig...
The two failure modes I observe most often are not exclusive to rationality, but might still be helpful to consider.
The decision was generated by my intuition, since I've done the math on this question before, but it did not draw on a specific "gut feeling" beyond my querying that heavily-programmed intuition with the appropriate inputs.
Your question has brought to mind some specific deviations in my perspective that I have not explicitly mentioned yet:
Ok. So remember, your choices are:
- Lock away the technology for some time
- Release it now
You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.
Even with this context, my calculations come out the same. It appears that our estimates of the value (and possibly sacredness) of lives differ, as do the relative weights we allocate to such things. I don't know that I have anythin...
I'm not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches. (For example, early game console games depended on UDB to work at all.)
Source code and binary files would qualify as "immortal" by most definitions, but my experience using Linux and assisting in software rehosts has made me very dubious of the "immortality" of the software's usability.
Here's a brief summary of factors that contribute to that doubt:
Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?
I do not estimate the probability to be zero, but other than that my estimation metrics do not have any meaningful data to report.
Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now?
First, I'm not sure I agree that software systems are immortal. I've encountered quit...
First, a brief summary of my personal stance on immortality:
- Escaping the effects of aging for myself does not currently rate highly on my "satisfying my core desires" metrics
- Improving my resilience to random chances of dying rates as a medium priority on said metrics, but that puts it in the midst of a decently large group of objectives
- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread
- Personal gro...
1:15 with the use of some distraction and breathing techniques. Mid-20s male in decent health, but with asthma.
I remember pushing to 90 seconds at one point when experimenting with some body control techniques, but that was a couple years ago and I'd probably have to take some unhealthy measures to repeat that nowadays.
Circling back a few months later, I have some observations from trying out this idea:
My opinion is a bit mixed on LessWrong at the moment. I'm usually looking for one of two types of content whenever I peruse the site:
- Familiar Ideas Under Other Names: Descriptions of concepts and techniques I already understand that use language more approachable to "normal" people than the highly-niche jargon I use myself, which help me discuss them with others more conveniently
- Unfamiliar or Forgotten Ideas: Descriptions of concepts and techniques I haven't thought of recently or at all, which can be used as components for future projects
I've only bee...
I have had similar experiences with getting lost in the meta, as well as the isolated experience that it provides. In my case, it would manifest as me focusing on trying to improve my big-picture "system metaphor" for my IFS-esque mental multi-threading architecture (one of my most useful constructs), even when I was well past the point where it was worth trying to further refine the top-down granularity.
I did notice the trend eventually, and once I consciously acknowledged the problem I was able to visualize some fairly straightforward paths away from it....
Another idea if you want to push against the mental pressure that kills good ideas, from Paul Graham’s recent essay on how to do good work: “One way to do that is to ask what would be good ideas for someone else to explore. Then your subconscious won't shoot them down to protect you.” I don’t know of anyone using this technique, but it might work.
This angle of attack sounds worth investigating for myself, especially because it can circumvent self-censorship that happens for other reasons, such as resource availability or personal interests. I've had ideas before t...
I think naturalism can be directed even at things "contaminated by human design", if you apply the framing correctly. In a way, that's how I started out as something of a naturalist, so it is territory I'd consider a bit familiar.
The best starting point I can offer based on Raemon's comment is to look at changes in a field of study or technology over time, preferably one you already have some interest in (perhaps AI-related?). The naturalist perspective focuses on small observations over time, so I recommend embarking on brief "nature walks" where you find...
The goal of naturalism is to reach a point where you relate to a part of the world in such a way that perpetual learning is inevitable.
I use a stance that seems very similar, in spirit and in a number of details, to what is described here, and I would like to emphasize the value of frequent, small experiments for gathering knowledge and expanding awareness of options. I have found the practice valuable in reducing the complexity and investment requirements of experimentation, and it synchronizes well with the update speed of mental models and other "deep knowledge".
That's the one, thank you!
The most notable example of a Type 2 process that chains other Type 2 processes as well as Type 1 processes is my "path to goal" generator. But as I sit here to analyze it, I am surprised to notice that much of what used to be Type 2 processing in its chain has been replaced with fairly solid Type 1 estimators, with triggers for when I leave their operating scope. What I thought started as Type 2s that call Type 2s now looks more like Type 2s that set triggers via Type 1s to cause other Type 2s to get a turn on the processor later. It's ...
I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:
Under this model, then, Type 2 processing is a particular way of chaining together the outputs of various Type 1 subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents.
Something I have encountered in my own self-experiments and tinkering is Type 2 processes that chain together other Type 2 processes (and often some Type 1 subagents as well). This meshes well with persistent Type 2 subagents that get re-used for their practicality and sometimes end up resembling Type 1 subagents as their decision process becomes reflexive to repeat.
Have you encountered anything similar?
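To make the pattern I'm describing a bit more concrete, here is a minimal toy sketch (all names and numbers are hypothetical and purely illustrative): a shared working memory that the chain passes outputs through, a Type 1 estimator that answers reflexively but sets a trigger when it leaves its operating scope, and a toy scheduler that gives the triggered Type 2 process a turn on the processor later.

```python
# Purely illustrative sketch of the chaining pattern described above;
# all names and numbers are hypothetical.
from collections import deque

class WorkingMemory:
    """Shared scratch space that the chain passes subagent outputs through."""
    def __init__(self):
        self.slots = {}

def fast_estimator(wm):
    """Type 1 subagent: cheap, reflexive estimate, plus a trigger when out of scope."""
    estimate = 0.3  # stand-in for a cached/intuitive answer
    wm.slots["estimate"] = estimate
    if estimate < 0.5:  # outside this estimator's operating scope
        return [deliberate_replan]  # trigger: hand off to a Type 2 process later
    return []

def deliberate_replan(wm):
    """Type 2 process: slower, deliberate, and could chain further subagents itself."""
    wm.slots["plan"] = f"re-derive plan from estimate={wm.slots['estimate']}"
    return []

def run(processes):
    """Toy scheduler: each process gets a turn, and triggers enqueue later Type 2 work."""
    wm = WorkingMemory()
    queue = deque(processes)
    while queue:
        queue.extend(queue.popleft()(wm))
    return wm.slots

print(run([fast_estimator]))
# {'estimate': 0.3, 'plan': 're-derive plan from estimate=0.3'}
```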
I assign weights to terminal and instrumental value differently, with instrumental value growing higher for steps that are less removed from producing terminal value and/or for steps that won't easily backslide/revert without maintenance.
As far as uncertainty goes, my general formula is to keep plans composed of "sure bet" steps when the risk of failure is high, but I'll allow less-surefire steps to be attempted when there is more wiggle room in play. This sometimes results in plans that are overly circuitous, but resistant to common points of fa...
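If it helps to see the shape of that weighting heuristic, here is a toy version (the decay factor and maintenance penalty are made up purely for illustration, not numbers I actually use):

```python
def instrumental_weight(steps_from_terminal: int, backslides_without_maintenance: bool) -> float:
    """Toy weighting: steps closer to producing terminal value weigh more,
    and steps whose gains revert without maintenance weigh less."""
    proximity = 0.8 ** steps_from_terminal  # illustrative decay per step of removal
    durability = 0.5 if backslides_without_maintenance else 1.0  # illustrative penalty
    return proximity * durability

# A step two removed from terminal value that holds on its own (~0.64)
# can outweigh an adjacent step whose gains decay without upkeep (0.40).
print(instrumental_weight(2, False), instrumental_weight(1, True))
```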
Some bullet points from my list of "framing concepts" that make up my "world-viewing lens":
- If intelligent design is present in the universe, it's not something that most occupants of the universe (if any) can easily identify, if they can identify it at all. Where would data on the option space for universe design/construction come from? How could that data be verified or validated?
- I remain unconvinced that humanity (or any subdivision of it) is some "chosen" group by any definition beyond "advantages they currently possess". Such mythology is not always ill-intentioned in origi