All of 8e9's Comments + Replies

After playing around with it a bit, GPT-4o mini seems really fast and somewhat better at following instructions than GPT-3.5 Turbo, but uploading photos doesn’t seem to work in the app yet, and it seems less “thorough” than GPT-4o.

That's a little hard to quantify in an objective way. 😅 Looking at the emails side-by-side, though, the final version is about 40% shorter by byte count, warmer, and more specific (drawing on information only I know and didn't tell ChatGPT). Obviously, I'm biased, but I think the final version is far superior.

Yesterday, I read Neven’s blog post about the suckiness of outsourcing human connection to AI.

After that, I asked ChatGPT to draft a somewhat complex email I’d been wanting to send to a small company.

With that blog post in the back of my mind, I spent a lot of time and effort rewriting and refining it.

I hit send, and… received the response I was hoping for within minutes. On a Saturday afternoon.

Felt good.

Richard_Kennaway
How much did the final version owe to its ChatGPT origin?

Ava on looking for rejection:

There is no penalty for asking. You can apply to the same thing 10 times and no one’s gonna get mad at you. You can advertise something on the Internet and even if 99% of people think it’s dumb, 1% might think it’s really cool. You are always doing things for the one person who will give you the yes. And often one yes is enough. I’ve been trying to reframe my relationship with rejection from avoiding it to literally looking for rejection—going out there and risking the NOs. I’ve been doing it in really silly ways, like trying

[…]

Inspired by Concentration of Force, which introduced me to the concept, I'm trying to create a TAP (trigger-action plan) to answer the question "what specific task do I need to accomplish?" before I unlock my phone. If I can't answer the question, maybe that "mental speed bump" makes it easier to put my phone back down.

Some thoughts about the intro video: it's excellent but ends a little abruptly; it would be good to explain how AISafety.com plans to help. Also, I quite dislike flashing images (around the 45-second mark).

Note that the Brier score at the bottom is a few percentage points lower than what's shown in the chart; the probability distributions GPT outputs differ slightly between runs despite a temperature of 0.
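For anyone unfamiliar, the Brier score being reported is just the mean squared error between forecast probabilities and binary outcomes (lower is better); a quick sketch, with made-up values for illustration:

```python
# Brier score: mean squared error between forecast probabilities and
# binary outcomes. 0 is perfect; always forecasting 50% scores 0.25.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Three illustrative forecasts vs. what actually happened
print(brier_score([0.9, 0.2, 0.7], [1, 0, 1]))  # ≈ 0.0467
```

Small run-to-run differences in the output probabilities will therefore nudge this number by a few points even at temperature 0.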

It's now possible to get mostly deterministic outputs if you set the seed parameter to an integer of your choice, keep the other parameters identical, and the model hasn't been updated.
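For what it's worth, here's a minimal sketch of what such a request might look like, assuming the OpenAI Python SDK's chat-completions interface (the model name, prompt, and seed value are arbitrary examples):

```python
# Sketch of a reproducible chat-completions request. Only the parameter
# dict is built here; the actual API call is shown commented out.
request = dict(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Give P(rain tomorrow) as a number."}],
    temperature=0,  # minimize sampling randomness
    seed=1234,      # same seed + identical params -> mostly deterministic output
)
# response = client.chat.completions.create(**request)
# Comparing response.system_fingerprint across runs reveals whether the
# backend model changed, which would break determinism.
print(request["seed"])
```

The fingerprint check matters because determinism is only promised for the same underlying model snapshot.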

eggsyntax
Oh thanks, I'd missed that somehow & thought that only the temp mattered for that.

OpenAI is thinking about how to safely and responsibly allow its models to produce NSFW content that goes beyond answering sex-ed “birds and the bees” type questions.

I haven’t read the whole thing yet, but I’m glad they released this document (which deals with many thorny questions besides).

https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview

Sure!

Sam Altman was already trying to lead the development of human-level artificial intelligence. Now he has another great ambition: raising trillions of dollars to reshape the global semiconductor industry.

The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.

This article from the Wall Street Journal (linked in TFA) says developing human-level AI could cost trillions of dollars, which I believe is reasonable (it could even be a good deal), not that Mr. Altman expects to raise trillions of dollars in short order.

ChristianKl
It seems like the part that makes that claim is behind the paywall. Can you quote it?
Noosphere89
That's a much more reasonable claim, though it might still be too high.

I was really surprised to see a post like this on here. I read Eneasz’s original post to get more context, and I encourage others to do so as well (even if it’s a difficult read). I think your post offers a valuable and more hopeful perspective.