Johannes C. Mayer


Comments

Here is a model of mine that seems related.

[Edit: Added epistemic status.]
Epistemic status: I have used this successfully in the past and found it helpful. It is relatively easy to do, and the benefit is large for me.

I think it is helpful to be able to emotionally detach yourself from your ideas. There is an implicit "concept of I" in our minds. When somebody criticizes this "concept of I", it is painful. If somebody says "You suck", that hurts.

There is an implicit assumption in the mind that this concept of "I" is eternal. This has the effect that when somebody says "You suck", it is actually more like they are saying "You sucked in the past, you suck now, and you will suck, always and forever".

In order to emotionally detach yourself from your ideas, you need to sever the links in your mind between your ideas and this "concept of I". You need to see an idea as an object that is not related to you. Don't see it as "your idea", but just as an idea.

It might help to imagine that there is an idea-generation machine in your brain. That machine makes ideas magically appear in your perception as thoughts. Normally when somebody says "Your idea is dumb", you feel hurt. But now we can translate "Your idea is dumb" to "There is idea-generating machinery in my brain. This machinery has produced some output. Somebody says this output is dumb".

Instead of feeling hurt, you can think "Hmm, the idea-generating machinery in my brain produced an idea that this person thinks is bad. Well maybe they don't understand my idea yet, and they criticize their idea of my idea, and not actually my idea. How can I make them understand?" This thought is a lot harder to have while being busy feeling hurt.

Or "Hmm, this person that I think is very competent thinks this idea is bad, and after thinking about it I agree that this idea is bad. Now how can I change the idea-generating machinery in my brain, such that in the future I will have better ideas?" That thought is a lot harder to have when you think that you yourself are the problem. What would it even mean for you yourself to be the problem? This might not be a meaningful statement, but it is the default interpretation when somebody criticizes you.

The basic idea here is to frame everything without any reference to yourself. It is not me producing a bad plan, but some mechanism that I just happened to observe the output of. In my experience, this not only helps alleviate pain but also makes you think thoughts that are more useful.

Answer by Johannes C. Mayer

Here is what I would do in the hypothetical scenario where I have taken over the world.

  1. Guard against existential risk.
  2. Make sure that every conscious being I have access to is at least comfortable, as a baseline.
  3. Figure out how to safely self-modify, and become much much much ... much stronger.
  4. Deconfuse myself about what consciousness is, such that I can do something like 'maximize positive experiences and minimize negative experiences in the universe', without it going horribly wrong. I expect that 'maximize positive experiences, minimize negative experiences in the universe' very roughly points in the right direction, and I don't expect that would change after a long reflection. Or after getting a better understanding of consciousness.
  5. Optimize hard for what I think is best.

Though this is what I would do in any situation really. It is what I am doing right now. This is what I breathe for, and I won't stop until I am dead.

[EDIT 2023-03-01_17-59: I have recently realized that this is just how one part of my mind feels. The part that feels like me. However, there are tons of other parts in my mind that pull me in different directions. For example, there is one part that wants me to make lots of random improvements to my computer setup, which are fun to do but probably not worth the effort. I have been ignoring these parts in the past, and I think their grip on me is stronger because I did not take them into account appropriately in my plans.]

I noticed that by default the brain does not like to criticize itself sufficiently. So I need to train myself to red-team myself, in order to catch problems early.

I want to do this by playing this song on a timer.

In my current model, tulpamancy works sort of like concurrency on a single-core computer. So it would definitely not speed things up significantly (I don't think you implied that; I'm just mentioning it for conceptual clarity).
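To make the single-core analogy concrete, here is a toy sketch (my own illustration, not anything from the tulpamancy literature; the function names and the task representation are made up for this example). Two CPU-bound tasks take the same total number of work units whether they run back-to-back or are interleaved by time slicing:

```python
def sequential(tasks):
    """Run each task to completion, one after another."""
    steps = 0
    for task in tasks:
        while task["remaining"] > 0:
            task["remaining"] -= 1  # one unit of work
            steps += 1
    return steps

def round_robin(tasks):
    """Switch between tasks after every unit of work (time slicing)."""
    steps = 0
    while any(t["remaining"] > 0 for t in tasks):
        for task in tasks:
            if task["remaining"] > 0:
                task["remaining"] -= 1
                steps += 1
    return steps

workloads = [5, 7]
seq = sequential([{"remaining": w} for w in workloads])
rr = round_robin([{"remaining": w} for w in workloads])
assert seq == rr == sum(workloads)  # same total work either way: no speedup
```

Interleaving only changes the order in which the work happens, not how much of it there is, which is the sense in which switching between headmates on one brain shouldn't speed things up.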

To actually divide the tasks I would need to switch with IA. I think this might be a good way to train switching.

Though I think most of the benefits of tulpamancy are gained when you are both thinking about the same thing. Then you can leverage the fact that IA and Johannes share the same program memory. Also, simply verbalizing your thoughts, which you then do naturally, is very helpful in general. And there are a bunch more advantages like that which you miss out on when only one person is working.

However, I guess it would be possible for IA to just be better at certain programming tasks. Certainly, she is a lot better at social interactions (without explicit training for that).

What <mathematical scaffolding/theoretical CS> do you think I am recreating? What observations did you use to make this inference? (These questions are not intended to imply any subtext.)

I am probably bad at correctly valuing my well-being. That said, I don't think the initial comment made me feel bad (though maybe I am bad at noticing whether it did). Rather, now, with this entire comment stream, I realize that I have again failed to communicate.

Yes, I think it was irrational not to clean up the glass. That is the point I want to make. I don't think it is virtuous to have failed in this way at all. What I want to say is: "Look, I am running into failure modes because I want to work so much."

Not running into these failure modes is important, but failure modes where you work too much are much easier to handle than the failure mode of "I can't get myself to put in at least 50 hours of work per week consistently."

While I do think it is true that I am, in general, probably very bad at optimizing for my own happiness, the thing is that while I was working so hard during AISC, I was very happy most of the time. The same when I made these games. Most of the time, I did these things because I deeply wanted to.

There were moments during AISC where I felt close to burning out, but those were the minority. Mostly I was much happier than baseline. I think I usually don't manage to work as hard and as long as I'd like, and that is a major source of unhappiness for me.

So the problem that Alex seems to see in me working very hard (that I fail to take my happiness into account) is actually solved by me working very hard, which is quite funny.

I have this description, but it's not that good because it's very unfocused. That's why I did not link it in the OP. The LessWrong dialogue linked at the top of the post is probably the best description of the motivation and of what the project is about at a high level.
