I've come up with a harebrained multidisciplinary theory involving consciousness, emotions, AI, metaphysics... and it's disturbing me a bit.

I am unqualified to take everything into consideration, since I lack the knowledge that the rest of you have.

I do not want to put forth an idea that could possibly have a detrimental future consequence (e.g., a basilisk).

However, there is always a non-zero probability that it could be a beneficial idea, correct?

I am afraid, to a certain extent, that thinking of the theory was already enough and it's too late. Perhaps an AI already exists and knows my thoughts in real time.

Advice? 

Should I put forth this idea?

Answers

jbash

I do not want to put forth an idea that could possibly have a detrimental future consequence (e.g., a basilisk).

I would suggest you find somebody who's not susceptible to basilisks, or at least not susceptible to basilisks of your particular kind, and bounce it off of them.

For example, I don't believe there's a significant chance that any AIs operating in our physics will ever run, or even be able to run, any really meaningful number of simulations containing conscious beings with experiences closely resembling the real world. And I think that acausal trade is silly nonsense. And not only do I not want to fill the whole future light cone with the maximum possible number of humans or human analogs, but I actively dislike the idea. I've had a lot of time to think about those issues, and have read many "arguments for". I haven't bought any of it, and I don't ever expect to.

So I can reasonably be treated as immune to any basilisks that rely on those ideas.

Of course, if your idea is along those lines, I'm also likely to tell you it's silly even though others might not see it that way. But I could probably make at least an informed guess as to what such people might buy into.

Note, by the way, that the famous Roko's basilisk didn't actually cause much of a stir, and the claims that it was a big issue seem to have come from somebody with an axe to grind.

I am afraid, to a certain extent, that thinking of the theory was already enough and it's too late. Perhaps an AI already exists and knows my thoughts in real time.

To know your thoughts in real time, it would have to be smart enough to (a) correctly guess your thoughts based on limited information, or (b) secretly build and deploy some kind of apparatus that would let it actually read your thoughts.

(a) is probably completely impossible, period. Even if it is possible, it definitely requires an essentially godlike level of intelligence. (b) still requires the AI to be very smart. And they both imply a lot of knowledge about how humans think.

I submit that any AI that could do either (a) or (b) would long ago have come up with your idea on its own, and could probably come up with any number of similar ideas any time it wanted to.

It doesn't make sense to worry that you could have leaked anything to some kind of godlike entity just by thinking about it.

3 comments
Raemon

Mod note: I often don't let new users with this sort of question through, because these questions tend to be kinda cursed. But honestly, I don't think we have a good canonical answer post for this, and I wanted to take the opportunity to spur someone to write one.

I personally think people should mostly worry less about acausal extortion, but this question isn't quite about that. 

I think my actual answer is "realistically, you probably haven't found something dangerous enough to justify the time cost of running it by someone, but I feel dissatisfied with that state of affairs."

Maybe someone should write an LLM-bot that tells you if your maybe-infohazardous idea is making one of the standard philosophical errors.

I volunteer to be a test subject. Will report back if my head doesn't explode after reading it.

(Maybe just share it with a couple of people first, give some disclaimer, and ask them if it's a, uhhh, sane theory and not gibberish.)

I'm afraid you've just asked a group of terminally curious individuals if they want to know something that might possibly hurt them.