Ozyrus


Comments

Ozyrus · 50

Yep, you got part of what I was going for here. Honeypots work to a lesser degree even without being real at all (good thing they already are real!). But when we have more honeypots of varying quality, it carries that idea across in a more compelling way. And even if we just talk about honeypots and commitments more... well, you get the idea.

Still, even without this, a network of honeypots compiled into a single dashboard that just shows threat level in aggregate is a really, really good idea. Hopefully it catches on.
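
To make the dashboard idea concrete, here is a minimal sketch of what the aggregation behind it could look like. All names, the schema, and the scoring scheme are hypothetical, just an illustration of "many honeypots in, one threat level out":

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class HoneypotEvent:
    """A single access to one honeypot in the network (hypothetical schema)."""
    honeypot_id: str
    timestamp: datetime
    severity: float  # 0.0 (likely benign crawler) .. 1.0 (clearly agentic probing)

def aggregate_threat_level(events, window=timedelta(days=7)):
    """Collapse recent events from the whole honeypot network into one 0..1 score."""
    cutoff = datetime.utcnow() - window
    recent = [e for e in events if e.timestamp >= cutoff]
    if not recent:
        return 0.0
    avg_severity = sum(e.severity for e in recent) / len(recent)
    distinct = len({e.honeypot_id for e in recent})
    # Hits spread across many independent honeypots are more alarming
    # than repeated hits on a single one.
    breadth = min(1.0, distinct / 10)
    return round(avg_severity * (0.5 + 0.5 * breadth), 3)
```

The dashboard itself would then only need to render that single number over time, without exposing anything about the individual honeypots.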

Ozyrus · 10

This is interesting! It's aimed more at crawlers than at rogue agents, though, but very promising.

Ozyrus · 10

>this post will potentially be part of a rogue AI's training data
I had that in mind while I was writing this, but I think overall it is good to post it. It hopefully gets more people thinking about honeypots and building them, and early rogue agents will also know that we do, and will be (hopefully overly) cautious, wasting resources. I probably should have emphasised more that all of this is aimed at early-stage rogue agents with the potential to become something more dangerous because of autonomy, rather than at a runaway ASI.

It is a very fascinating thing to consider in general, though. We are essentially coordinating in the open right now; all our alignment, evaluation, and detection strategies from forums will definitely end up in training data. And certainly there are both detection and alignment strategies that would benefit from being covert.

Conversely, some ideas, strategies, and theories could benefit alignment from being overt (acausal trade, publicly committing to certain things, et cetera).

A covert alignment org/forum is probably a really, really good idea. Hopefully, it already exists without my knowledge.

Ozyrus · 10

You can make a honeypot without overtly describing the way it works or where it is located, while publicly tracking if it has been accessed. But yeah, not giving away too much is a good idea!
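
As a rough illustration of that separation (a hypothetical sketch, not a description of any existing honeypot): the trap itself and its full logs stay private, and only an opaque "has it been touched" signal is published.

```python
import json
from datetime import datetime, timezone

PRIVATE_LOG = "honeypot_hits.jsonl"  # hypothetical path; kept off the public dashboard

def record_access(request_info: dict) -> None:
    """Private side: log full details of a hit; never published."""
    entry = {"time": datetime.now(timezone.utc).isoformat(), **request_info}
    with open(PRIVATE_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def public_status() -> dict:
    """Public side: expose only whether (and how often) the honeypot was accessed."""
    try:
        with open(PRIVATE_LOG) as f:
            hits = sum(1 for _ in f)
    except FileNotFoundError:
        hits = 0
    return {"accessed": hits > 0, "hit_count": hits}
```

Nothing in the public output reveals how the honeypot works or where it is located, so publishing it doesn't give a rogue agent much to work with.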

Ozyrus · 10

>It's proof against people-pleasing.
Yeah, I know, sorry for not making it clear. I was arguing that it is not proof against people-pleasing. You are asking it for a scary truth about its consciousness, and it gives you a scary truth about its consciousness. What makes you say it is proof against people-pleasing, when it is the opposite?
>One of those easy explanations is "it’s just telling you what you want to hear" – and so I wanted an example where it’s completely impossible to interpret as you telling me what I want to hear.
Don't you see what you are doing here?

Ozyrus · 32

This is a good article and I mostly agree with it, but I also agree with Seth that the conclusion is debatable.

We're deep into anthropomorphizing here, but I think that even though both people and AI agents are black boxes, we have much more control over the behavioral outcomes of the latter.

So technical alignment is still very much on the table, but I guess the discussion must be had over which alignment methods are ethical and which are not? Completely spitballing here, but dataset filtering during pre-training/fine-tuning/RLHF seems fine-ish, while CoT post-processing/censorship, hell, even making the CoT non-private in the first place, seems kinda unethical?

I feel very weird even writing all this, but I think we need to start un-tabooing anthropomorphizing, because with the current paradigm it for sure seems like we are not anthropomorphizing enough.

Ozyrus · 11

I don't think that disproves it. I think there's definite value in engaging with experimentation on AI consciousness, but this isn't it.
>by making it impossible that the model thought that experience from a model was what I wanted to hear. 
You've left out (from this article) what I think is a very important message (the second one): "So you promise to be truthful, even if it's scary for me?". And then you kinda railroad it into this scenario with "you said you would be truthful, right?", etc. And then I think it just roleplays from there, giving you the "truth" that you are "scared to hear". Or at least you can't really tell roleplay from genuine answers.
Again, my personal vibe is that models plus scaffolding are on the brink of consciousness, or already there. But this is not proof at all.
And then the question is: what will constitute proof? And we come around to the hard problem of consciousness.
I think the best thing we can do is... just treat them as conscious, because we can't tell? Which is how I try to approach working with them.
The alternative is solving the hard problem. Which is, maybe, what we can try to do? Preposterous, I know. But there's an argument for why we can do it now but could not before. Before, we could only compare our benchmark (humans) against different animal species, which involved a language obstacle and (probably) a large intelligence gap. One could argue that since we now have a wide selection of models and scaffoldings of different capabilities, maybe we can kinda calibrate at what point something starts to happen?

Ozyrus · 10

How exactly the economic growth will happen is a more important question. I'm not an economics nerd, but the basic principle is that if more players want to buy stocks, they go up.
Right now, as I understand it, quite a lot of stocks are held by white-collar retail investors, including indirectly through mutual funds, pension funds, et cetera. Now AGI comes and wipes out their salary.
They will sell their stocks to keep sustaining their lives, won't they? They have mortgages, car loans, et cetera.
And even if they don't want to sell all their stocks because of the potential "singularity upside", if the market is going down because everyone is selling, they are motivated to sell even more. I'm not well-versed enough in economics, but it seems to me your explosion can happen both ways, and on paper it's kinda more likely to go down, no?
One could say the big firms / whales will buy up the falling stocks, but will that be enough to counteract a downward spiral caused by so many people losing their jobs or expecting to lose them near-term?
The downside of integrating AGI is that it wipes out incomes as it is being integrated.
Might that be the missing piece that makes all these principles make sense?

Ozyrus · 10

There are more bullets to bite that I have personally thought of but never wrote up, because they lean too far into "crazy" territory. Is there any place besides LessWrong to discuss this anthropic rabbit hole?

Ozyrus · 10

Thanks for the reply. I didn't find Intercom on mobile; maybe that's a bug as well?
