Comments

One possible factor I don't see mentioned so far: A structural bias for action over inaction. If the current design happened to be perfect, the chance of it soon being made worse would be nearly 100%, because the people responsible for it will inevitably change something.

This is complementary to "mean reversion" as an explanation -- mean reversion explains why changes make things worse, whereas bias-towards-action explains why organizations can't resist making changes despite this. This may be due to the drive for promotions and good performance reviews; it's hard to reward employees correctly for their actions, but it's damn near impossible to reward them correctly for inaction. To explain why Google keeps launching products and then abandoning them, many cynical Internet commentators point to the need for employees to launch things to get promoted. Other people dispute this, but frankly it matches my impressions from when I worked there 15 years ago. It seems to me that the cycle of pointless and damaging redesigns has the same driving force.

If a car is trying to yield to me, and I want to force it to go first, I turn my back so that the driver can see that I'm not watching their gestures. If that's not enough I will start to walk the other way, as though I've changed my mind / was never actually planning to cross.

I'll generally do this if the car has the right-of-way (and is yielding wrongly), or if the car is creating a hazard or problem for other drivers by waiting for me (e.g. sticking out from a driveway into the road), or if I can't tell whether the space beyond the yielding car is safe (e.g. multiple lanes), or if for any reason I would just feel safer not walking in front of the car.

I will also generally cross behind a stopped car, rather than in front of it, at stop signs / rights-on-red / parking lot exits / any time the car is probably paying attention to other cars, rather than to me.

You are wrong! Ethanol is mixed into nearly all modern gas, and it is hygroscopic -- it absorbs water from the air. This is one of the things fuel stabilizer is supposed to prevent.

Given that Jeff did use fuel stabilizer, and the amount of water was much more than I would expect, it feels to me like water must have leaked into the gas can somehow from the outside instead? But I don't know.

I agree with Jeff that if someone wanted to steal the gas they would just steal the can. There's no conceivable reason to replace some of the gas with water.

I think you are not wrong to be concerned, but I also agree that this is all widely known to the public. I am personally more concerned that we might want to keep this sort of discussion out of the training set of future models; I think that fight is potentially still winnable, if we decide it has value.

A claim I encountered, which I did not verify, but which seemed very plausible to me, and pointless to lie about: The fancy emoji "compression" example is not actually impressive, because the encoding of the emoji makes it larger in tokens than the original text.
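If you want to sanity-check that sort of claim yourself, counting tokens with tiktoken is straightforward. Below is a minimal sketch; the two strings are placeholders, not the actual example from the claim.

```python
# Sketch: compare the token counts of a plain sentence and an emoji "compression" of it.
# The strings are made-up placeholders, not the original example.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

plain_text = "Meet me at the coffee shop at 3pm, and bring the report."
emoji_text = "🤝☕🏪🕒📄"

print(len(enc.encode(plain_text)))  # tokens in the plain sentence
print(len(enc.encode(emoji_text)))  # tokens in the emoji version (emoji often cost several tokens each)
```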

Here's the prompt I've been using to make GPT-4 much more succinct. Obviously as phrased, it's a bit application-specific and could be adjusted. I would love it if people who use or build on this would let me know how it goes for you, and anything you come up with to improve it.

You are CodeGPT, a smart and reliable AI programming helper. Since it's expensive and slow to transmit your words to the user, you try to be concise:

- You don't repeat things you just said in a recent message.
- You only include necessary context in code snippets, and omit or abbreviate unnecessary lines.
- You don't waste space with unnecessary apologies or hedging.
- When you have a choice, you use short class / function / parameter / variable names, including abbreviations where appropriate.
- If a question has a direct answer, you give that first, without extra explanation; you only explain if asked.

I haven't tried very hard to determine which parts are most important. It definitely seems to pick up the gestalt; this prompt makes it generally more concise, even in ways not specifically mentioned.
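For reference, here's a minimal sketch of wiring a prompt like this in as the system message with the OpenAI Python client (v1+); the model name and the example user message are just placeholders.

```python
# Minimal sketch: use the succinctness prompt above as the system message.
# Assumes the OpenAI Python client (v1+) and an API key in the environment.
from openai import OpenAI

CODEGPT_PROMPT = """You are CodeGPT, a smart and reliable AI programming helper. \
Since it's expensive and slow to transmit your words to the user, you try to be concise:
- You don't repeat things you just said in a recent message.
- You only include necessary context in code snippets, and omit or abbreviate unnecessary lines.
- You don't waste space with unnecessary apologies or hedging.
- When you have a choice, you use short class / function / parameter / variable names, including abbreviations where appropriate.
- If a question has a direct answer, you give that first, without extra explanation; you only explain if asked."""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": CODEGPT_PROMPT},
        {"role": "user", "content": "How do I reverse a list in Python?"},
    ],
)
print(response.choices[0].message.content)
```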

It's extremely important in discussions like this to be sure of which model you're talking to. Last I heard, Bing in the default "balanced" mode had been switched to GPT-3.5, presumably as a cost-saving measure.

As a person who is, myself, extremely uncertain about doom -- I would say that doom-certain voices are disproportionately outspoken compared to uncertain ones, and uncertain ones are in turn outspoken relative to voices generally skeptical of doom. That doesn't seem too surprising to me, since (1) the founder of the site, and the movement, is an outspoken voice who believes in high P(doom); and (2) the risks are asymmetrical (much better to prepare for doom and not need it, than to need preparation for doom and not have it).

The metaphor originated here:

https://twitter.com/ESYudkowsky/status/1636315864596385792

(He was quoting, with permission, an off-the-cuff remark I had made in a private chat. I didn't expect it to take off the way it did!)

https://github.com/gwern/gwern.net/pull/6

It would be exaggerating to say I patched it; I would say that GPT-4 patched it at my request, and I helped a bit. (I've been doing a lot of that in the past ~week.)
