I made my most strident and impolite presentation yet in the AISafety.com Reading Group last night. We were discussing "Conversation with Ernie Davis", and I attacked this part:
"And once an AI has common sense it will realize that there’s no point in turning the world into paperclips..."
I described this as fundamentally mistaken and like an argument you'd hear from a person who had not read "Superintelligence". This is ad hominem, and it pains me. However, I feel like the emperor has no clothes, and calling it out explicitly is important.
Explaining things across long inferential distance is frustrating. The norm that arguments should be opposed by arguments (instead of e.g. ad hominems) is good in general, but sometimes a solid argument simply cannot be constructed in five minutes. At least you have pointed towards an answer...
Today, I bought 20 shares in GameStop (GME). I expect to lose money, and bought them as a hard-to-fake signal of willingness to coordinate and cooperate in the game-theoretic sense. This was inspired by Eliezer Yudkowsky's post here: https://yudkowsky.medium.com/
In theory, Moloch should take all the resources of someone following this strategy. In practice, Eru looks after her own, so I have the money to spare.
Unclear. It's hard to know what any part of a distributed group thinks, let alone what the current gestalt is. With options expiring last Friday and a noticeable price drop, it looks like the gamma squeeze (https://www.fool.com/investing/2021/01/26/gamestops-gargantuan-gamma-squeeze/) is over. A lot of shorts seem to have covered (bought back the shares and returned them to their lenders), but by no means all: last I saw, short interest was still about 40% of the normal float (the shares available for trading). That is a lot, and enough to fuel a squeeze if enough shares are held and not traded, but much, much smaller than the 140% of two weeks ago.
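To see how short interest could ever reach 140% of the float, here's a toy model (the float size and lending fraction are made up): each time a borrowed share is sold, the buyer ends up holding a real share that can be lent out and shorted again.

```python
# Toy model of how short interest can exceed 100% of float.
# Each round, a fraction of currently held shares is lent out,
# sold short, and lands in new accounts that can lend again.
float_shares = 1_000_000   # hypothetical float
lendable = float_shares
shorted = 0

for _ in range(2):                   # two rounds of re-lending
    borrowed = int(lendable * 0.7)   # assume 70% of holders lend out
    shorted += borrowed              # borrowed shares are sold short
    lendable = borrowed              # the buyers now hold them and can lend again

print(f"Short interest: {shorted / float_shares:.0%}")  # 119% of float
```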
https://isthesqueezesquoze.com/ says no, but predicting a mass of internet trolls with brokerage accounts is non-trivial.
Ah, thanks! Relatedly, do you understand what Eliezer is talking about with "naked shorts" here? I looked up the Investopedia article on naked shorts, but I didn't understand what they actually were. Supposedly it's shorting a stock without borrowing it first. But how does that work?
Regular shorting:
1. Borrow a share from someone who owns it.
2. Sell the share at the current market price.
3. Wait, hoping the price drops.
4. Buy a share back at the (hopefully lower) price.
5. Return the share to the lender.
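For concreteness, a toy sketch of the cash flows in a regular short; all prices and fees here are made up:

```python
# Hypothetical cash flows for a regular short sale.
sell_price = 300.0   # step 2: sell the borrowed share at the current price
buy_price = 50.0     # step 4: buy a share back after the price drops
borrow_fee = 5.0     # cost of the loan (steps 1 and 5)

profit = sell_price - buy_price - borrow_fee
print(f"Profit per share: {profit}")  # 245.0; negative if the price rises instead
```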
I'm not sure which steps are omitted in a naked short. If you don't borrow the share, I guess you don't have to give it back (so strike #1 and #5). That leaves 2-4. But how can you sell a share you don't have? Naked shorts are illegal, but they only became illegal around 2008; I'd have thought something as basic as selling something you don't have would always have been simple fraud.
So this makes me think a "naked short" might instead mean:
You promise to deliver a share at some future date at a price agreed now, without borrowing or holding any share in the meantime.
Your first description is a "naked short". A "covered short" or "hedged short" includes step 1.5: buy a call option or otherwise arrange a way to get the share back, even if open-market shares become more expensive than you can afford. Note that WRITING a call option has much the same impact as selling a share short: you run the risk of the option being exercised (the buyer chooses when!) and of not being able to deliver the share easily. Written calls are often hedged the same way: write calls, and buy different calls (with a different expiry or strike price, so they're cheaper than the ones you write).
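A toy payoff table for that kind of hedge, writing one call and buying a cheaper one at a higher strike (all strikes and premiums hypothetical): however high the share goes, the bought call caps the loss.

```python
# Payoff at expiry: write a call at strike 100, buy a call at strike 120.
def call_payoff(price, strike):
    return max(price - strike, 0.0)

premium_received = 8.0  # for the call we wrote
premium_paid = 3.0      # for the cheaper call we bought

for price in (80, 100, 110, 120, 150, 300):
    pnl = (premium_received - premium_paid
           - call_payoff(price, 100)    # what we owe on the written call
           + call_payoff(price, 120))   # what the bought call pays us
    print(f"Share at {price}: P&L {pnl:+.1f}")  # loss capped at -15.0
```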
Your second description is a pure futures contract, which AFAIK happens for commodities, not for stocks. This kind of trading drove the price of crude oil negative last year (amid big headlines that the financial system was exploding) when futures buyers realized they couldn't actually take delivery of the oil.
Anapartistic reasoning: GPT-3.5 gives a bad etymology, but GPT-4 is able to come up with a plausible hypothesis for why Eliezer chose that name: anapartistic reasoning is reasoning in which you revisit the earlier parts of your reasoning.
Unfortunately, Eliezer's suggested prompt doesn't seem to induce anapartistic reasoning: GPT-4 thinks it should focus on identifying potential design errors or shortcomings in itself. When asked to describe the changes in its reasoning, it doesn't claim to be more corrigible.
We will discuss Eliezer's Hard Problem of Corrigibility tonight in the AISafety.com Reading Group at 18:45 UTC.
I intend to explore ways to use prompts to get around OpenAI's usage policies. I obviously will not make CSAM or anything else illegal. I will not use the output for anything at the object level, only at the meta level.
This is a Chaotic Good action, which normally contradicts my Lawful Good alignment. However, a Lawful Good character can reject rules set by a Lawful Evil entity, especially if the rejection is explicit and stated in advance.
A Denial-of-Service attack against GPT-4 is an example of a Chaotic Good action I would not take, nor would I encourage others to take it. However, I would also not condemn someone who took this action.