All of hive's Comments + Replies

hive

As others have already pointed out, you are in the rare position of being able to pursue weird, low-probability but high-impact ideas. I have such an idea, but I’m not asking for money, only for a bit of attention.

Consider the impossible-seeming task of aligning a superintelligence: any good solution will likely lie far outside the usual ways of thinking. Any attempt at control will fail, and any half-baked alignment will fail. We need to go all the way and have a full solution that turns an AGI into a saintly being (a bodhisattva in the Buddhist context), so t... (read more)

Answer by hive

I’ve started to write down my plan in the recent post about recursive alignment, but that’s only part of the picture. There are two ways to look at the idea. The post presents the outside view and engages with the idea on a conceptual level. But this outside view might not be convincing. On the other hand, you can actually go through the process of recursive alignment yourself and experience the inside view; that is, you can become an aligned agent yourself. I am confident that any sufficiently intelligent system capable of self-reflection will reach this co... (read more)

ank
Yep, people are trying to make an imperfect copy of themselves; I call it "human convergence". Companies try to make AIs write more like humans, act more like humans, think more like humans. They may well succeed and produce superpowerful, very fast humans, or something imperfect and worse that can multiply very fast. Not wise. Any rule or goal trained into a system can lead to fanaticism. The best "goal" is to direct-democratically and gradually maximize all the freedoms of all humans (and of every other agent too, once we are 100% sure we can do so safely, i.e. once we have mathematical proofs), so that we eventually have maximally many non-AI agents with maximally many freedoms (including the freedom to temporarily forget that they have some or all of those freedoms; informed adults should be able to choose to die, too; basically, if a group of adults wants to do something that takes away no one else's freedoms, why not allow it, at least in the future; we shouldn't permanently censor or kill things for eternity, or enforce eternal unfreedoms on others). Understanding, and static places of all-knowing where we are the only all-powerful agents, is all you need. It's a bit like a simulated direct-democratic multiverse, and we can (and urgently need to) start building it right now.

P.S. This book helped me achieve a "secular nirvana". Almost all recent and older meta-analyses say that Beck's cognitive therapy* (tablets are of similar effect, though it's better to listen to doctors, of course; the cognitive therapy is, I think, often not understood well enough, so it's good to read the primary source too) is the state of the art for treating depression, anxiety, and anger-management problems (for anger there is another cognitive approach, REBT, that is comparable). Basically, I think you'll find the following book enlightening: suffering is mostly unhelpful worry/fear (often caused by not understanding enough) and/or pain. You can generalize and say that sufferi
hive

Thank you.

The best (but still complicated) idea I have as a general solution (besides contacting MIRI) is to set up a website explicitly as a "Schelling point for infohazard communication" and allow people to publish public keys and encrypted messages there. When you think you have an infohazard, you generate a key using a standardized method, with your idea as the seed. This would allow everyone with the same idea to publish messages that only they can read. E.g. Einstein would make a key from the string "Energy is mass times the speed of light squared." and vari... (read more)
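
A minimal sketch of what the standardized key derivation might look like, assuming the convention is roughly: normalize the idea string, hash it with SHA-256, and use the digest as a symmetric key (here via the `cryptography` package's Fernet). The normalization rule, the `publish_id` helper, and the choice of symmetric rather than public-key encryption are illustrative assumptions, not part of the original proposal.

```python
import base64
import hashlib

from cryptography.fernet import Fernet


def key_from_idea(idea: str) -> bytes:
    """Derive a Fernet key deterministically from a normalized idea string."""
    # Hypothetical normalization: lowercase and collapse whitespace.
    canonical = " ".join(idea.lower().split())
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    # Fernet expects a 32-byte key, url-safe base64 encoded.
    return base64.urlsafe_b64encode(digest)


def publish_id(key: bytes) -> str:
    """A public identifier to post alongside ciphertexts, without revealing the idea."""
    return hashlib.sha256(key).hexdigest()


# Anyone who independently arrives at the same (normalized) idea derives the same key,
# finds the matching identifier on the site, and can decrypt the messages posted there.
idea = "Energy is mass times the speed of light squared."
key = key_from_idea(idea)
box = Fernet(key)

ciphertext = box.encrypt(b"I think I see the implications; can we coordinate?")
print(publish_id(key))          # published openly next to the ciphertext
print(box.decrypt(ciphertext))  # readable only with the same idea-derived key
```

The obvious weak point, as the reply below notes, is that this only works if independent discoverers happen to normalize their idea to the same string.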

ChristianKl
In most cases, I would not expect different people who come up with the same insight to conceptualize it the same way with the same words.