All of Pan Darius Kairos's Comments + Replies

2Seth Herd
If you really have insight that could save all of humanity, it seems like you'd want to share it in time to be of use instead of trying to personally benefit from it. You'd get intellectual credit, and if we get this right we can quit competing like a bunch of monkeys and all live well. I've forgone sharing my best ideas and credit for them since they're on capabilities. So: pretty please?

You have made comments elsewhere that suggest that you have the proper context for framing the problem, though not the full solution. You may arrive at the full solution regardless. I haven't seen anyone else as close. Just an observation. Keep going in the direction you're going.

Or, skip the queue and come get the answer from me.

The solution isn't extremely complex, it just lies in a direction most aren't used to thinking about because they are trained/conditioned wrong for thinking about this kind of problem.

You yourself have almost got it - I've been reading some of your comments and you are on the right path. Perhaps you will figure it out and they won't need to come to me for it.

The reason I won't give away more of the answer is that I want something in exchange for it; I can't say too much lest I give away my bargaining chip.

Seeing me in person is a small part of the solution itself (trust), but not all of it.

I'm in Honolulu, HI if anyone wants to come talk about it.

4the gears to ascension
Hmm. I mean, I think there's a pretty obvious general category of approach, and that a lot of people have been thinking about it for a while. But, if you're worried that you'll need more bargaining chips after solving it, I worry that it isn't the real solution by nature, because the ideal solution would pay you back so thoroughly for figuring it out that it wouldn't matter to tightly trace exactly who did it.

I think there are some seriously difficult issues with trying to ensure the entire system seeks to see the entire universe as "self"; in order to achieve that, you need to be able to check margins of error all the way through the system. Figuring out the problem in the abstract isn't good enough, you have to be able to actually run it on a gpu, and getting all the way there is damn hard. I certainly am not going to do it on my own, I'm just a crazy self-taught librarian.

Speaking metaphorically, a real solution would convince moloch that it should stop being moloch and join forces with the goddess of everything else. But concretely, that requires figuring out the dynamics of update across systems with shared features. This is all stuff folks here have been talking about for years, I've mostly just been making lists of papers I wish someone would cite together more usefully than I can.

It's easy to predict what the abstract of the successful approach will sound like, but pretty hard to write the paper, and your first thirty guesses at the abstract will all have serious problems. So, roll to disbelieve you have an answer, but I'm not surprised you see the overall path there; people have known the overall path to mostly solving alignment problems for a long time. It's just that, when you try to deploy solutions, it seems like there's always still a hole through which the disease returns.
2the gears to ascension
... oh hi, I see you have compliments for me. Interesting. I don't think I'm at all alone in my perspective on it, for the record, I'm just the noisiest. In general, I have a hunch that I've never come up with anything before someone at deepmind, but we'll see.
2Raemon
Mod note – my current read is that you are most likely trolling / attention-seeking, or have a confused set of beliefs. I would expect people who actually had the information you claim to behave differently. If that doesn't describe you, alas; but if you want to participate on LessWrong, you need to somehow distinguish yourself from those people (the base rates do not work in your favor). I've disabled your ability to write LW comments or posts other than to your shortform. If you actually want to participate on LessWrong, write some posts or comments that discuss object-level topics and actually convey information to other people, whether about your original topic or some other topics.
3the gears to ascension
If you can discuss the general shape of what insights you've had, it might lend credibility to this. Lots of folks think they have big insights that can only be shared in person, and a significant chunk of those are correct, but a much bigger chunk aren't. Demonstrating that you're in the chunk who are correct would make it more obviously worth the time. I suspect you haven't solved as much as you think you have, since "the whole alignment problem" is a pretty dang tall order, one that may never be solved at all, by any form of intelligence. Nevertheless, huge strides can be made, and it might be possible to solve the whole thing somehow. If you've made a large stride but are not Verified Respectable, I get it, and might be willing to say hi and talk about it. What area are you in?
0Shmi
Nice try, unaligned AGI.