AnnoyedReader

Upon reflection, it seems I was focused on the framing rather than the mechanism, which in and of itself doesn't necessarily do all the bad things I described. The framing is important, though. I definitely think you should change the name.

FiveThirtyEight has done something similar in the past, which they called a chat.

I don't think debates really fit the ethos of LessWrong. Every time I write a comment, it tells me to explain, not persuade, after all. Debates have the effect of splitting people into camps, which is not great. And they put people in the frame of mind of winning, rather than truth-seeking. Additionally, people end up conflating "winning the debate" (which in people's minds is not necessarily even about who has the best arguments) with being correct. I remember reading an old post here on LessWrong where people discussed the problems with debates as a truth-seeking mechanism, but I can't seem to find it now.

It strikes me that anything that could be a debate would be better as a comment thread for these reasons. I think LessWrong moving in a more debate direction would be a mistake. (My point here is not that people shouldn't have debates, but that making debate a part of LessWrong specifically seems questionable.)

So given all that, I figured it was a joke, because it just doesn't quite fit. But I now see the prediction market, and I don't think I can guess better here. And the community response seems very positive, which I'm pretty sure isn't a joke. I feel like this always happens, though: someone comes up with a new idea to change something, people get excited and want it, but they fail to consider what it will be like when it is no longer new and exciting, just one more extra thing. Will conversations held through the debate format really be better than they would have been through a different, less adversarial method?

Let me explain my understanding of your model. An AI wants to manipulate you. To do that, it builds a model of you. It starts out with a probability distribution over the mind space, representing its understanding of what human minds are like. Then, as it gathers information on you, it updates those probabilities. The more data it is given, the more accurate the model gets. Then it can simulate how you would respond to a bunch of different stimuli and choose the one that gets the most desirable result.
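To make sure I'm following, here's a toy Python sketch of that loop. Everything in it (the candidate profiles, the likelihood, the payoff function) is made up purely for illustration, not something I'm attributing to your model:

```python
import numpy as np

# Toy version of the loop described above: the "mind space" is a handful of
# candidate profiles, the prior is uniform, and each observed message updates
# the posterior via Bayes' rule. All names and numbers here are made up.
profiles = ["anxious", "contrarian", "agreeable", "curious"]
prior = np.full(len(profiles), 1 / len(profiles))

def likelihood(message_features, profile_idx):
    """Made-up likelihood of these message features under a given profile."""
    return float(np.exp(-np.sum((message_features - profile_idx) ** 2) / 10))

def update(posterior, message_features):
    """One Bayesian update of the belief over the mind space."""
    weights = np.array([likelihood(message_features, i) for i in range(len(profiles))])
    posterior = posterior * weights
    return posterior / posterior.sum()

def pick_stimulus(posterior, candidate_stimuli, predicted_payoff):
    """Choose the stimulus with the highest expected payoff under the current belief."""
    expected = [sum(posterior[i] * predicted_payoff(s, i) for i in range(len(profiles)))
                for s in candidate_stimuli]
    return candidate_stimuli[int(np.argmax(expected))]

# Example: a few observed messages sharpen the belief, then a stimulus is chosen.
belief = prior
for msg in [np.array([1.2, 0.9]), np.array([1.1, 1.3])]:
    belief = update(belief, msg)
chosen = pick_stimulus(belief, ["flattery", "urgency", "appeal to evidence"],
                       lambda s, i: len(s) * (i + 1) % 7)  # placeholder payoff
```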

But if this model is like any learning process I know about, the chart of how much is learned over time will probably look vaguely logarithmic, so once it has seen half of the data, it will be way more than halfway through the improvement on the model. So if you're thirty now, have been using non-end-to-end-encrypted messaging your whole life, and all of that is sitting on some server and ends up in an AI, you've probably already thrown away more than 90% of the game, whatever you do today. Especially since, if you keep making public posts, it can track the changes in those to infer how you yourself have changed and keep its already good model up to date anyway.
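As a back-of-the-envelope illustration of the "way more than halfway" point, assuming model quality really does grow like log(1 + n) in the number of observed messages (a functional form I'm just positing for the sake of the example):

```python
import math

# Rough check of the "more than halfway there" intuition, assuming model
# quality grows like log(1 + n) in the number n of observed messages.
total_messages = 100_000          # made-up lifetime volume of unencrypted messages
halfway = total_messages // 2

quality = lambda n: math.log1p(n)
fraction_done = quality(halfway) / quality(total_messages)
print(f"{fraction_done:.1%} of the total improvement after half the data")
# -> roughly 94%, which is the shape of the ">90% of the game" intuition
```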

I keep going back and forth about whether your point is a good one or not. (Your point being that it's useful to keep non-public data about you from being easily accessible to AIs on big server farms like Google's, even if you haven't been doing that so far and you keep putting out public data.) Your idea sort of seems like a solution to a different problem.

I do think your public internet presence will reveal much more to a manipulative AI than to a manipulative human. AIs can make connections we can't see. And a lot of AIs will be trained on the internet as a whole, so while your private data may end up in a few AIs (or many, if they gain the ability to hack), your public data will be in tons and tons of AIs. For an LLM, say, to work like this, it has to be able to model your writing in order to optimize its reward. If you're sufficiently different from other people that you need a unique way to be manipulated, your writing probably needs unique parameters for an LLM to optimize. So if you already have many posts online, modern AIs probably already know you. And an AI optimized towards manipulation (either terminally or instrumentally) will, just by talking to you or hearing you talk, figure out who you are, or at least get a decent estimate of where you are in the mind space, and already be saying whatever is most convincing to you. So when you post publicly, you are helping every AI manipulate you; when you post privately, you are only helping a few.

Does this mean we should stop making posts and comments on LessWrong?

We were not on the same page. I thought you were suggesting changes to the new re-hosted version of hpmor.com. Thanks for clarifying.

I am not mad at the LessWrong team. The reason I framed the title as an accusation was that, since I was sent to your website, I figured you were likely responsible in some way, or at least aware of what was going on. I now understand I was mistaken.

As for "improvements" if/when hpmor.com comes back up, I would like to note that I am against them, for the same reasons described in the post. I don't think it's obvious at all that some change to the old site would not be bad, at least from the perspective of people who prefer the old site.

Yes! It's just that the feel of the two websites is so different. And part of it may be my imagination. But it feels like the old HPMOR site is a simple, elegant wrapper around the book, while here the book is dumped into a website that wasn't made for it. Like the difference between a person wearing clothes and someone inside a giant human-shaped suit that mimics their motions.