All of Aditya's Comments + Replies

Aditya8-1

I highly recommend people watch Connor talk about his interpretation of this post.

He talks about how Eliezer is a person who managed to access many anti-memes that slid right off our heads.

What is an anti meme you might ask?

Anti-meme

By their very nature, anti-memes resist being known or integrated into your world model. You struggle to remember them. Just as memes are sticky and go viral, anti-memes are slippery and struggle to gain traction.

They could be extraordinarily boring. They could be facts about yourself that your ego protects you from really grasp... (read more)

Aditya10

maybe it's somewhat easier on account of how we have more introspective access to a working mind than we have to the low-level physical fields;

We have biased access, which makes things trickier: our introspection was not selected to be high fidelity or to correspond with reality, but rather for its utility to survival.

Aditya-1-1

It doesn't have to be the result of explicit metaphysical beliefs...it could be the result of vague guesswork, and analogical thinking.

 

Yeah, I could be wrong, but my claim is that implicit metaphysical beliefs have a big role here.

 

defining "agentic" as "possessing spooky metaphysical free will" rather than "not passive". It's perfectly possible to build an agent-in-the-sense-of-active out of mechanical parts.

 

I was just noting that people who are aware of the internal workings of AI will have to acutely face cognitive dissonance if they adm... (read more)

Aditya20

"topics about which philosophy is still concerned because we don't or can't get information that would enable us to have sufficient certainty of answers to allow those topics to transition into science".

 

I think that is quite close. I mean the implicit assumptions behind all these discussions, which go unquestioned: moral realism, computationalism, empiricism, and reductionism all come to mind. These topics cannot be tested or falsified with the scientific method.

but there's not really anything here that seems like an argument that would

... (read more)
Aditya0-1

Aren't non-academics and non-experts the majority,

 

I was talking about people who have not grokked materialism, which is the majority. People who are unaware of the technical details model AI as a black box and therefore seem more open to considering that it might be agentic, but that is them deferring to an outside view that sounds convincing rather than building their own model.

 

so maybe people there, who are working on AI and machine learning, more often have a religious or spiritual concept of human nature, compared to th

... (read more)
Aditya0-1

Thanks for the constructive criticism. I thought about it, and I guess I need to increase the legibility of what I wrote.

 

I will add a TLDR and update the post soon.

Aditya21

Some things don't make sense unless you really experience them. Personally, I have no words for the warping effects such emotions have on you. It's comparable to having kids or getting a brain injury.

It's a socially acceptable mental disorder.

The only thing you can do is notice when you are in that state and put very low credence on all the positive opinions you have about your limerent object. You cannot know anything about them with high confidence while in that state. Give it a few years.

Don't make decisions you can't undo, or entangle parts of your life that will be painful to detach later.

But it's a ride worth going on. No point in living life too safely. Have fun but stay safe out there.

Aditya0-1

Evolution failed at imparting its goal into humans, since humans have their own goals that they shoot for instead when given a chance.

 

To me, your framing of inner misalignment sounds like Goodharting itself: we evolved intrinsic motivations toward these measures because they were good measures in the ancestral environment, but once we got access to advanced technology we kept optimizing the measures (sex, sugar, beauty, etc.), so they stopped tracking the actual targets (kids, calories, health, etc.).
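As a toy sketch (my own illustration with made-up numbers, not anything from the original discussion), Goodharting in miniature: a proxy that tracks the target within the "ancestral" range stops tracking it once the optimizer can push the variable outside that range.

```python
# Toy model of Goodhart's law, purely for intuition.
# Within the ancestral range, more of the proxy (e.g. sugar eaten) tracked
# the target (fitness). An optimizer with new technology can push the
# variable far beyond that range, where the correlation breaks down.

def proxy(x):
    # The measure: reward keeps growing with x.
    return x

def target(x):
    # The actual target: peaks at a moderate x, then declines.
    return x * (10 - x)  # maximum at x = 5

# A proxy optimizer pushes x to the limit of what is achievable...
proxy_optimal = max(range(0, 21), key=proxy)    # x = 20
# ...while the target would have been best served much lower.
target_optimal = max(range(0, 21), key=target)  # x = 5

print(proxy_optimal, target(proxy_optimal))     # 20 -200
print(target_optimal, target(target_optimal))   # 5 25
```

The proxy-optimal point is actively bad for the target, which is the shape of the sugar/calories and sex/kids examples above.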

I think outer ali... (read more)

Aditya111

You should come to the Bangalore meet-up this Sunday, if you are near that part of India.

2AlphaAndOmega
I wasn't aware of the meet-up, but sadly it'll be rather far for me this time. Appreciate the heads up though! Hopefully I can make it another time.
Answer by Aditya45-2

I asked out my crushes. It worked out well for me.

I used to be really inhibited; now I have tried weed and alcohol and am really enjoying the moment.

Aditya00

Feels nice to see my name in a story. This fact about Romans is just so tasty.

It was hard to really imagine someone getting so emotionally caught up over a fact. I didn't expect to find it so hard.

Most fights are never about the underlying facts; they're tribal, about winning. If people cared about knowing the truth, these would be discussions, not debates.

Aditya*4-1

This is totally possible and valid. I would love for this to be true. It's just that we can still plan for the worst-case scenario.

I think it can help to believe that things will turn out OK: we are training the AI on human data, so it might adopt some of our values. Once you believe that, working on alignment can just be a matter of planning for the worst-case scenario, just in case. That seems better for mental health.

2Lone Pine
Very much so. I think there is also truth to the idea that if you believe you are going to succeed you are much more likely to succeed, and certainly if you believe you will fail, you almost certainly will. For those who are in the midst of a mental health crisis, I think it is important to emphasize that plenty of smart, reasonable people have thought about this and come to the conclusion that all this talk of AI-doom is just silly, because either it's going to be okay or because AI is actually centuries away (Francois Chollet, for example). Predicting the future also has a very poor track record, whether the prediction is doom or bloom. We should put significant credence on the idea that things will mostly continue in the way they have been, for better or worse, and that the future might look a lot like the present. Also, if you are someone who struggles a lot with ruminating on what might happen, and this causes you significant distress, I strongly encourage you to listen to the audiobooks The Power of Now and A New Earth.
Aditya32

Oh OK, I had heard this theory from a friend; looks like I was misinformed. Rather than evolution causing cancer, I think it is more accurate to say that evolution doesn't care if older individuals die off.

evolutionary investments in tumor suppression may have waned in older age.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3660034/

Moreover, some processes which are important for organismal fitness in youth may actually contribute to tissue decline and increased cancer in old age, a concept known as antagonistic pleiotropy

So thanks for clearing that up. I understand cancer better now.

Aditya10

When I talk to my friends, I start with the alignment problem. I found that this analogy to human evolution really drives home the point that it's a hard problem; we aren't close to solving it.

https://youtu.be/bJLcIBixGj8

At this point, questions come up about whether intelligence necessarily implies morality, so I talk about the orthogonality thesis. Then, on why the AI would care about anything other than what it was explicitly told to do: the danger comes from instrumental convergence.

Finally people tend to say, we can never do it, they talk about spirituality, uniqueness of... (read more)

Aditya*-11

I think this is how evolution selected for cancer: to ensure humans don't live too long, competing for resources with their descendants.

Internal time bombs are important to code in, but it's hard to integrate one into the AI in a way that the AI doesn't just remove it the first chance it gets. Humans don't like having to die, you know; an AGI would also not like the suicide bomb tied onto it.

Coding this (as part of training) into an optimiser such that it adopts it as a mesa-objective is an unsolved problem.

3Alexander Gietelink Oldenziel
No. Cancer almost surely has not been selected for in the manner you describe - this is extremely unlikely; the inclusive fitness benefits are far too low. I recommend Dawkins' classic "The Selfish Gene" to understand this point better. Cancer is the 'default' state of cells; cells "want to" multiply. The body has many cancer-suppression mechanisms, but especially later in life there is not enough evolutionary pressure to select for enough of them, and the body gradually loses out.
Aditya*40

Same; this post is what made me decide I can't leave it to the experts. It is just a matter of spending the required time to catch up on what we know and have tried. As Keltham said, diversity is in itself an asset. If we can get enough humans to think about this problem, we may get breakthroughs from angles others have not thought of yet.

 

For me, it was not demotivating. He is not a god, and it ain't over until the fat lady sings. Things are serious, and that just means we should all try our best. In fact, I am kinda happy to imagine we might see a ... (read more)

Aditya*40

Eliezer's latest fanfic is pretty fun to read; if any of you guys are reading it, I would love to discuss it. 

Aditya30

I found this very informative, but I think I can contribute to this discussion from the opposite direction. The problem of having too little frame control also exists; both extremes are bad.

On one end, you push your frame on a person without trying to account for their current value system. In fact, if you do it gently and slowly, and find a pathway they would want to take, then it becomes moral. If I know the right buttons to push, the right arguments, the evidence, the life experience that could get a friend to adopt the values, belief... (read more)

Aditya10

So in this interpretation of the term "free will", even an AI would have the same free will humans have?

Am I correct in thinking that I am not the computing machine but the computation itself? So if it were possible to predict my behaviour, they would have to simulate an approximation of me within themselves or within a computer?

I am interested in the implications this has for how hard or easy it is to manipulate other humans. With companies increasingly gaining access to vast data and computing power, can they start to manipulate people at ve... (read more)