All of SilverFlame's Comments + Replies

Answer by SilverFlame

Some bullet points from my list of "framing concepts" that make up my "world-viewing lens":

  • If intelligent design is present in the universe, it's not something that most occupants of the universe, if any, can easily identify, if they can identify it at all. Where would data on the option space for universe design/construction come from? How could that data be verified or validated?
  • I remain unconvinced that humanity (or any subdivision of it) is some "chosen" group by any definition beyond "advantages they currently possess". Such mythology is not always ill-intentioned in origi
... (read more)

they said that they were no longer asexual (they never were),

I'm somewhat skeptical of the claim in parentheses. It certainly sounds like there was a state in which they demonstrated enough traits to think they were asexual, and that information tends to be worth tracking, even if only for self-diagnostics.

G Wood
It seems quite plausible for someone to falsely believe they were asexual in this situation. I understand that if you are starving or nutrient-deficient (zinc, vitamin D, vitamin B12, and iron), your sex drive can be at zero. If it's like that for long enough, you may come to think it's inherent to who you are. You are wrong, but have no way of knowing that.

(source epistemic status: mostly experiential and anecdotal from a lay lucid dreamer who knows a few other lucid dreamers)

The common negative effects from my lucid dreaming experiences:
- If I'm not careful with how I exert the "influence" I have in the dream, I can "crash" the dream, usually resulting in me waking up and having trouble getting back to sleep for a bit
- When I use a lot of influence in a lucid dream, especially to extend the length of a dream, I find that I end up seeming way less rested than normal (but that has proven hard to try and quant... (read more)

  • Learning about the trigger conditions for serotonin, oxytocin, dopamine, and cortisol, which allowed for more direct optimization away from cortisol activations

This idea started when I read an article a coworker pointed me to in 2020: The DOCS Happiness Model. I then did some naturalist studies with that framing in mind, and managed to reduce cortisol activations that I considered "unhelpful" by a significant degree. I consider this of high value to people who have enough control over their environment to meaningfully optimize against cortisol trig... (read more)

Answer by SilverFlame
  • Learning about the trigger conditions for serotonin, oxytocin, dopamine, and cortisol, which allowed for more direct optimization away from cortisol activations
  • Using method acting and other mimicry skills to more quickly learn from experts I was already trying to learn from
  • Applying operating system architecture knowledge to my internal thinking patterns to allow more efficient multithreading and context switching
FinalFormal2
These sound super interesting. Could you expand on any of them, or direct me to your favorite resources that could help?
Answer by SilverFlame

The two failure modes I observe most often are not exclusive to rationality, but might still be helpful to consider.

  1. Over-reliance on System 2 processing for improvement and a failure to make useful and/or intentional adjustments to System 1 processing
  2. Failing to base value estimations upon changes you can actually cause in reality, often focusing upon "virtual" value categories instead of the ones you might systemically prefer (this is best presented in LoganStrohl's How to Hug the Query)

The decision was generated by my intuition since I've done the math on this question before, but it did not draw from a specific "gut feeling" beyond me querying the heavily-programmed intuition for a response with the appropriate inputs.

Your question has brought to mind some specific deviations in my perspective that I have not explicitly mentioned yet:

  • I spent a large amount of time tracing what virtues I value and what sorts of "value" I care about, and afterwards have spent 5-ish years using that knowledge to "automate" calculations that use such information
... (read more)

Ok.  So remember, your choices are:

  1.  Lock away the technology for some time
  2. Release it now

 

You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome.

Even with this context, my calculations come out the same. It appears that our estimations of the value (and possibly sacredness) of lives are different, as well as our allocations of relative weights for such things. I don't know that I have anythin... (read more)

[anonymous]
Do you think your process could be explained to others in an "external reasoning" way, or is this just kind of an internal gut feel, like you just value everyone on the planet being dead and you roll the dice on whoever is next?

I'm not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches (for example, early game console games depended on UDB to work at all).

Source code and binary files would qualify as "immortal" by most definitions, but my experience using Linux and assisting in software rehosts has made me very dubious of the "immortality" of the software's usability.

Here's a brief summary of factors that contribute to that doubt:

  • Source co
... (read more)
[anonymous]
Ok. So remember, your choices are:

  1. Lock away the technology for some time
  2. Release it now

1 doesn't mean forever - say the length of the maximum human lifespan. You are choosing to kill every living person because you hope that the next generation of humans is more moral/ethical/deserving of immortality than the present, but you get no ability to affect the outcome. The next generation, slightly after everyone alive is dead, will be immortal, and as unethical or not as you believe future people will be.

I am saying that I don't see how 1 is very justifiable; it's also genocide, even though in this hypothetical you will face no legal consequences for committing the atrocity.

I believe this made-up hypothetical is a fairly good model for actual reality. I think people working together, even by accident* - simply pretending that immortality is impossible, for example, and not allowing studies on cryonics to ever be published - could in fact delay human indefinite life extension for some time, maybe as long as the maximum human lifespan. But regardless of the length of the delay, there are 'assholes' today and 'future assholes', and it isn't a valid argument to say you should delay immortality in the hope that future people are less, well, bad.

*The reason this won't last forever is that the technology has immense instrumental utility. Even a small amount of reliable, proven-to-work life extension would have almost every person who can afford it purchasing it, and advances in other areas make achieving this more and more likely.

Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?

I do not estimate the probability to be zero, but other than that my estimation metrics do not have any meaningful data to report.

Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now?

First, I'm not sure I agree that software systems are immortal. I've encountered quit... (read more)

[anonymous]
I'm not sure your position is coherent. You, as a SWE, know that you can keep producing Turing-complete emulations and keep any possible software from the past working, with slight patches (for example, early game console games depended on UDB to work at all). It's irrelevant if it isn't economically feasible to do so. I think you and I can both agree that an "immortal" human is a human that will not die of aging or any disease that doesn't cause instant death. It doesn't mean that it will be economically feasible to produce food to feed them in the far future - they could die from that - but they are still biologically immortal. Similarly, software is digitally immortal and eternal... as long as you are willing to keep building emulators or replacement hardware from specs.

While I found your careful thought process here inspiring, the normal hypothetical assumption is to assume you have the authority to make the decision without any consequences or duty, and are immortal, meaning that none of these apply. You hypothetically can "click the mouse"* and choose no immortality until some later date, but you personally have no authority to influence how worthy future humans are.

*Such as in a computer game like Civilization.

Finally, the implicit assumption I make, and I think you should make given the existing evidence that software is immortal, is that there is a slightly less than 100% chance that within 1000 years, barring cataclysmic event, some kind of life with the cognitive abilities of humans+ will exist in the solar system that is immortal. There are large practical advantages to having this property, from being able to make longer-term plans to simply not losing information with time. Human lifespans were not evolved in an environment with modern tools and complex technology; they are completely unsuitable to an environment where it takes, say, years to transfer between planets on the most efficient trajectory, and possibly centuries to reac
Answer by SilverFlame

First, a brief summary of my personal stance on immortality:

- Escaping the effects of aging for myself does not currently rate highly on my "satisfying my core desires" metrics

- Improving my resilience to random chances of dying rates as a medium priority on said metrics, but that puts it in the midst of a decently large group of objectives

- If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread

- Personal gro... (read more)

dr_s
I mean... amazingly good people die too. Sure, a society of immortals would obviously be very weird, and possibly quite static, but I don't see how eventual random death is some kind of saving grace here. Awful people die and new ones are born anyway.
Andrew Burns
You cannot know a person is not secretly awful until they become awful. Humans have an interpretability problem. So suppose an awful person behaves aligned (non-awful) in order to get into the immortality program, and then does a treacherous left turn and becomes extremely awful and heaps suffering on mortals and other immortals. The risks from misaligned immortals are basically the same as the risks from misaligned AIs, except the substrate differences mean immortals operate more slowly at being awful. But suppose this misaligned immortal has an IQ of 180+. Such a being could think up novel ways of inflicting lasting suffering on other immortals, creating substantial s-risk. Moreover, this single misaligned immortal could, with time, devise a misaligned AI, and when the misaligned AI turns on the misaligned immortal and also on the other immortals and the mortals (if any are left), you are left with suffering that would make Hitler blanch.
[anonymous]
Do you think that some future generation of humans (or AI replacements) will become immortal, with the treatments being widely available?

Assuming they do - remember, every software system humans have ever built already is immortal, so AIs will all have that property - what bounds the awfulness of future people but not the people alive right now? Why do you think future people will be better people?

If you had some authority to affect the outcome - whether or not current people get to be immortal, or you can reserve the treatment for future people who don't exist yet - does your belief that future people will be better people justify this genocide of current people?
Answer by SilverFlame

1:15 with the use of some distraction and breathing techniques. Mid-20s male in decent health, but with asthma.

I remember pushing to 90 seconds at one point when experimenting with some body control techniques, but that was a couple years ago and I'd probably have to take some unhealthy measures to repeat that nowadays.

Circling back a few months later, I have some observations from trying out this idea:

  • I found myself tossing ideas to friends and acquaintances more often, which tended to improve my relationships with them somewhat
  • I noticed that some of the ideas I was preparing to hand off to someone else had glimmers of concepts I could use for other things, which had obvious benefits
  • I didn't notice any impact to my normal ideation/processing bandwidth as a result of the change in operating method
  • Sometimes ideas I handed off to someone else would circle back later and be
... (read more)
Henrik Karlsson
Thank you for this update!
Answer by SilverFlame

My opinion of LessWrong is a bit mixed at the moment. I'm usually looking for one of two types of content whenever I peruse the site:
- Familiar Ideas Under Other Names: Descriptions of concepts and techniques I already understand that use language more approachable to "normal" people than the highly-niche jargon I use myself, which help me discuss them with others more conveniently
- Unfamiliar or Forgotten Ideas: Descriptions of concepts and techniques I haven't thought of recently or at all, which can be used as components for future projects

I've only bee... (read more)

Answer by SilverFlame

I have had similar experiences with getting lost in the meta, as well as the isolated experience that it provides. In my case, it would manifest as me focusing on trying to improve my big-picture "system metaphor" for my IFS-esque mental multi-threading architecture (one of my most useful constructs), even when I was well past the point where it was worth trying to further refine the top-down granularity.

I did notice the trend eventually, and once I consciously acknowledged the problem I was able to visualize some fairly straightforward paths away from it.... (read more)

Another idea if you want to push against the mental pressure that kills good ideas, from Paul Graham’s recent essay on how to do good work: “One way to do that is to ask what would be good ideas for someone else to explore. Then your subconscious won't shoot them down to protect you.” I don’t know of anyone using this technique, but it might work.

This angle of attack sounds worth investigating for myself, especially because it can circumvent censorship arising for other reasons, such as resource availability or personal interests. I've had ideas before t... (read more)

SilverFlame
Circling back a few months later, I have some observations from trying out this idea:

  • I found myself tossing ideas to friends and acquaintances more often, which tended to improve my relationships with them somewhat
  • I noticed that some of the ideas I was preparing to hand off to someone else had glimmers of concepts I could use for other things, which had obvious benefits
  • I didn't notice any impact to my normal ideation/processing bandwidth as a result of the change in operating method
  • Sometimes ideas I handed off to someone else would circle back later and benefit one of my own projects, although I suspect the success rates for such second-order results will vary wildly

Overall, it seems to have been worth trying, and I'll probably keep it going.

I think naturalism can be directed even at things "contaminated by human design", if you apply the framing correctly. In a way, that's how I started out as something of a naturalist, so it is territory I'd consider a bit familiar.

The best starting point I can offer based on Raemon's comment is to look at changes in a field of study or technology over time, preferably one you already have some interest in (perhaps AI-related?). The naturalist perspective focuses on small observations over time, so I recommend embarking on brief "nature walks" where you find... (read more)

The goal of naturalism is to reach a point where you relate to a part of the world in such a way that perpetual learning is inevitable.

I utilize a stance that seems very similar, in spirit and in a number of details, to what is described here, and I would like to emphasize the value of frequent, small experiments to gather knowledge and expand awareness of options. I have found the practice valuable in reducing the complexity and investment requirements of experimentation, as well as synchronizing well with the update speed of mental models and other "deep knowledge".

The most notable example of a Type 2 process that chains other Type 2 processes as well as Type 1 processes is my "path to goal" generator. But as I sit here to analyze it, I am surprised to notice that much of what used to be Type 2 processing in its chain has been replaced with fairly solid Type 1 estimators, with triggers for when you leave their operating scope. I am noticing that what I thought started as Type 2s that call Type 2s now looks more like Type 2s that set triggers via Type 1s to cause other Type 2s to get a turn on the processor later. It's ... (read more)

Kaj_Sotala
At least Type 2 behavior turning into Type 1 behavior is a pretty common thing in skill learning; the classic example I've heard cited is driving a car, which at first is very effortful and requires a lot of conscious thought, but then gradually things get so automated that you might not even remember most of your drive home. But the same thing can happen with pretty much any skill; at first it's difficult and requires Type 2 processing, until it's familiar enough to become effortless.

I have a modest amount of pair programming/swarming experience, and there are some lessons I have learned from studying those techniques that seem relevant here:

  • General cooperation models typically opt for vagueness instead of specificity to broaden the audiences that can make use of them
  • Complicated/technical problems such as engineering, programming, and rationality tend to require a higher level of quality and efficiency in cooperation than more common problems
  • Complicated/technical problems also exaggerate the overhead costs of trying to harmonize though
... (read more)

Under this model, then, Type 2 processing is a particular way of chaining together the outputs of various Type 1 subagents using working memory. Some of the processes involved in this chaining are themselves implemented by particular kinds of subagents.

Something I have encountered in my own self-experiments and tinkering is Type 2 processes that chain together other Type 2 processes (and often some Type 1 subagents as well). These mesh well with persistent Type 2 subagents that get re-used due to their practicality and sometimes end up resembling Type 1 subagents as their decision process becomes reflexive to repeat.

Have you encountered anything similar?

Kaj_Sotala
Probably, but this description is abstract enough that I have difficulty generating examples. Do you have a more concrete example?

I assign weights to terminal and instrumental value differently, with instrumental value growing higher for steps that are less removed from producing terminal value and/or for steps that won't easily backslide/revert without maintenance.

As far as uncertainty goes, my general formula is to focus upon keeping plans composed of "sure bet" steps if the risk of failure is high, but I'll allow less surefire steps to be attempted if there is more wiggle room in play. This sometimes results in plans that are overly circuitous, but resistant to common points of fa... (read more)