AAA · 10

Out of curiosity, regarding "it's because Dustin is very active in the democratic party and doesn't want to be affiliated with anything that is right-coded": are the affected projects related to AI safety specifically, or is it more general? And what are some examples?

AAA · 10

1. It might differ for everyone, so it may be hard to have a standard formula for finding obsessions. Sometimes they come naturally through life events, observations, or experiences. If no such experience exists yet, or if one seems interested in multiple things, I have received the advice (which I agree with) to try different things and see what you like. Now that I think about it, it would also be fun to survey people and ask how they got their passion or came to do what they do, and to derive some standard formula or common elements if possible!

2. I think we can approach this with "the best of one's ability," and once we reach that, the rest may depend a lot on luck and other factors. Over time we could get better, or some accidental observation or insight might reveal a breakthrough point, given the right accumulation of prior experience and knowledge.

AAA · 10

https://arxiv.org/pdf/1803.10122 I have a similar question and found this paper (Ha and Schmidhuber's "World Models"). What I am not sure of is whether this is still the same concept (or close enough) that people currently talk about, or whether it is the origin of the term.
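
For what it's worth, the decomposition in that paper is fairly simple to state: a vision model V compresses each observation into a latent code z, a memory model M (an MDN-RNN) predicts the next z, and a small controller C picks actions from [z, h]. Below is a minimal numpy sketch of that agent loop; the dimensions and the placeholder encoders are invented purely for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
LATENT_DIM = 32    # size of the vision model's latent code z
HIDDEN_DIM = 256   # size of the memory model's hidden state h
ACTION_DIM = 3     # size of the action vector

rng = np.random.default_rng(0)

def vision_encode(observation):
    """Stand-in for the VAE encoder V: observation -> latent z."""
    return rng.standard_normal(LATENT_DIM)  # placeholder encoding

def memory_step(z, action, h):
    """Stand-in for the memory M: a real MDN-RNN would model
    p(z_{t+1} | z_t, a_t, h_t); here we just update a hidden state."""
    return np.tanh(0.5 * h + 0.1 * np.resize(np.concatenate([z, action]), HIDDEN_DIM))

def controller_act(z, h, W, b):
    """Linear controller C: action = tanh(W @ [z, h] + b)."""
    return np.tanh(W @ np.concatenate([z, h]) + b)

# Randomly initialized controller parameters (the paper evolves these with CMA-ES).
W = rng.standard_normal((ACTION_DIM, LATENT_DIM + HIDDEN_DIM)) * 0.1
b = np.zeros(ACTION_DIM)

h = np.zeros(HIDDEN_DIM)
for t in range(5):                    # a few steps of the V -> C -> M agent loop
    obs = rng.standard_normal(64)     # placeholder environment observation
    z = vision_encode(obs)
    a = controller_act(z, h, W, b)
    h = memory_step(z, a, h)
    print(f"step {t}: action = {np.round(a, 3)}")
```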

https://www.sciencedirect.com/science/article/pii/S0893608022001150 This paper seems to address at least multimodal perception in a reinforcement-learning/agent-style setup.
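
In case it helps, the usual shape of that multimodal-RL setup is: encode each modality separately, fuse the features, and feed the fused state to the policy. A toy numpy sketch, with all encoders, dimensions, and action counts invented for illustration (not taken from the linked paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def encode_vision(image):
    """Placeholder vision encoder (a real agent would use a CNN)."""
    return np.tanh(image.reshape(-1)[:16])     # hypothetical 16-d feature

def encode_audio(waveform):
    """Placeholder audio encoder (a real one might use a spectrogram net)."""
    return np.tanh(waveform[:8])               # hypothetical 8-d feature

def policy(state, W):
    """Toy linear policy over the fused multimodal state."""
    logits = W @ state
    return int(np.argmax(logits))              # discrete action index

# Fuse by concatenation -- the simplest multimodal strategy.
W = rng.standard_normal((4, 16 + 8)) * 0.1     # 4 hypothetical actions

image = rng.standard_normal((8, 8))            # fake camera frame
waveform = rng.standard_normal(100)            # fake audio snippet

state = np.concatenate([encode_vision(image), encode_audio(waveform)])
print("chosen action:", policy(state, W))
```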

AAA · 30

"A direction: asking if and how humans are stably aligned." I think this is a great direction, and the next step seems to be breaking down what humans are aligned to. The examples here seem to point at some internal value alignment, but I wonder whether it would also cover alignment to an external value system.