Viktor Riabtsev

If you were dead in the future, you would be dead already. Because time travel is not ruled out in principle.

Danger is a fact about fact density and your degree of certainty. Stop saying things with the full confidence of being afraid, and start simply counting the evidence.

Go back a few years. Start there.

Why not?

Oh. I see your point.

Yeah, if you use religious or faith-based terminology, it might trigger negative signals (downvotes). Though whether that's because people disagreed with the information you meant to convey, or because the statements themselves are actually more ambiguous overall, would be harder to distinguish.

Some kinds of careful reasoning processes vibe with the community, and imo yours is that kind: questioning each step separately on its merits, being sufficiently skeptical of premises leading to conclusions.

Anyways, back to the subject of f and inferring its features. We are definitely having trouble drawing f out of the human brain in a systematic, falsifiable way.

Whether or not it is physically possible to infer it, or its features, or how it is constructed, i.e. whether it is possible at all, seems a little uninteresting to me as a subject. Humans are perfectly capable of pulling made-up functions out of their ass. I kind of feel like all the gold will go to the first group of people who come up with processes for constructing f in coherent, predictable ways, such that different initial conditions, when iterated over the process, produce predictably similar f.

We might then try to observe such a process throughout people's lifetimes, and sort of guess that a version of the same process is going on in the human brain. But nothing about how that will develop is readily apparent to me. This is just my own imagination producing what seems like a plausible way forward.
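To make that last idea slightly more concrete, here is a toy sketch (entirely hypothetical: the construction process, the linear form of f, and the shared pool of observations are all my own stand-ins, not anyone's actual proposal). It runs an iterative process that builds f from a common body of evidence, starting from two different initial conditions, and checks that the resulting f come out predictably similar.

```python
# Toy sketch (hypothetical, for illustration only): a "construction process"
# that builds a simple linear f from a shared pool of observed judgments,
# started from two different initial conditions. If the process is coherent,
# both runs should land on predictably similar f.

import random

def construct_f(seed, observations, steps=2000, lr=0.05):
    """Start from a random linear f and repeatedly nudge it toward
    agreement with the shared (input, judged value) observations."""
    rng = random.Random(seed)
    dim = len(observations[0][0])
    w = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    for _ in range(steps):
        x, target = rng.choice(observations)
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - target
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Shared "evidence" that both runs are iterated over.
observations = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]

f_a = construct_f(seed=1, observations=observations)
f_b = construct_f(seed=2, observations=observations)

# Distance between the two constructed f; small means "predictably similar".
distance = sum((a - b) ** 2 for a, b in zip(f_a, f_b)) ** 0.5
print(f_a, f_b, distance)
```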

Somehow, he has to populate the objective function whose maximum is what he will rationally try to do. How he ends up assigning those intrinsic values relies on methods of argument that are neither deductive nor observational.

In your opinion, does this relate in any way to the "lack of free will" arguments, like those made by Sam Harris? The whole: I can ask you what your favourite movie is, and some will come to mind. You will even try to justify your choices if asked, but ultimately you had no control over which movies popped into your head.

I feel like there are local optima, and that getting to a different stable equilibrium involves having to "get worse" for a period of time, to question existing paradigms and assumptions. I.e. performing the update feels terrible, in that you get periodic glimpses of "oh, my current methodology is clearly inadequate", which feels understandably crushing.

The "bad mental health/instability" is an interim step where you are trying to integrate your previous emotive models of certain situations, with newer models that appeal to you intelligently (i.e. feels like they ought to be the correct models). There is conflict when you try to integrate those, which is often meta discouraging.

If you're curious about what could possibly be happening in the brain when that process occurs, I would recommend Mental Mountains by Scott A., or even better the whole Multiagent Models of Mind sequence.

No, that's fair.

I was mostly having trouble digesting that 3-4-5 stage paradigm. I was afraid that it's not a very practically useful map, i.e. that it doesn't actually help you instrumentally navigate anywhere. But I realized halfway through composing that argument that it's very possible I'm just wrong. So I decided to ask for an example of someone using this framework to actually successfully orient somewhere.

So the premise is that there are goals you can aim for. Could you give an example of a goal you are currently aiming for?

Would it be okay to start some discussion about the David Chapman reading in the comments here?

Here are some thoughts that I had while reading.

When Einstein produced general relativity, the success criterion was "it reproduces Newton's laws of gravity as a special-case approximation". I.e. it had to produce the same models that had already been verified as accurate to a certain level of precision.

If more rationality knowledge produces depression and otherwise less stable equilibria within you, then that's not a problem with rationality. Quoting from a LessWrong post: We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property "systematically increases map-territory correspondence" (epistemic rationality) or "systematically finds a better path to goals" (instrumental rationality).

A happy, stable, productive you (or the previous stable version of you) is a necessary condition of using "more rationality". If it comes out otherwise, then it's not rationality; it's some other, confused phenomenon, like a crisis of self-consistency. Which, if it happens and feels understandably painful, should eventually produce a better you at the end. If it doesn't, then it actually wasn't worth starting on the entire adventure, or stressing much about it.

Just to make sure I am not miscommunicating, "a little rationality can actually be worse for you" is totally a real phenomenon. I wouldn't deny it.

I found the character sheet system to be very helpful. In short, it's just a ranked list of "features"/goals you're working towards, with a comment slot (it's just a Google Sheet).

I could list personal improvements I was able to gain from the regular use of this tool, like weight loss/exercise habits etc., but that feels too much like bragging. Also, I can't distinguish correlation from causation.

The cohort system provides a cool social way to keep yourself accountable to yourself.
