All of PeterH's Comments + Replies

There is no meeting where an actual rational discussion of considerations and theories of change happens; everything really is people flying by the seat of their pants, even at the highest level. Talk of ethics usually just gets you excluded from the power talk.

This seems overstated. E.g. Musk and Altman both read Superintelligence, and they both met Bostrom at least once in 2015. Sam published reflective blog posts on AGI in 2015, and it's clear that the OpenAI founders had lengthy, reflective discussions from the YC Research days onwards.

My personal experience was that Superintelligence made it harder to think clearly about AI by making lots of distinctions and few claims.

That's probably not what Page meant. On consideration, he would probably have clarified […]

[…]

A little more careful conversation would've prevented this whole thing.

The passage you quote earlier suggests that they had multiple lengthy conversations:

Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late into the night about AI safety […]

Quick discussions via email are not strong evidence of a lack of careful discussion and reflection in other contexts.

Seth Herd (4 karma)
I agree that there was a lot more to that exchange than that quick summary. My point was that there wasn't enough or it wasn't careful enough.

I made a change to my Twitter setup recently.

Initially, I discovered the "AI Twitter Recap" section of the AI News newsletter (example). It is good, but it doesn't actually include the tweet texts, and it isn't quite enough to make me feel fine about skipping my Twitter home screen.

So—I made an app that extracts all the tweet URLs that are mentioned in all of my favourite newsletters, and lists them in a feed. Then I blocked x.com/home (but not other x.com URLs, so I can still read and engage with particular threads) on weekdays.

This is just repackaging the curation work that is done by my favourite newsletters. But I'm enjoying having a single place to check, that feels more like the Twitter feed I want to have. It's helped me feel fine about blocking the normal Twitter home screen for a larger fraction of the week.

This setup has various obvious issues—in particular, it's still not sufficiently tailored to my interests. I could improve things by having an LLM classify search results from the Twitter API, but sadly the $100 / month plan only lets you read ~300 tweets / day. And then the next tier is $5000 / month...
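The URL-extraction step of an app like this can be sketched in a few lines. This is a minimal illustration, not the actual app: the regex and the sample newsletter text are assumptions, and a real version would also handle shortened links and fetch the tweet contents via the API.

```python
import re

# Hypothetical minimal pattern for tweet permalinks on twitter.com / x.com.
TWEET_URL = re.compile(r"https?://(?:twitter\.com|x\.com)/\w+/status/\d+")

def extract_tweet_urls(newsletter_text):
    """Return de-duplicated tweet URLs in order of first appearance."""
    seen, urls = set(), []
    for url in TWEET_URL.findall(newsletter_text):
        if url not in seen:
            seen.add(url)
            urls.append(url)
    return urls

# Illustrative newsletter snippet (made-up links):
sample = """Today's recap: https://x.com/sama/status/123456 and
https://twitter.com/ESYudkowsky/status/987654 (again: https://x.com/sama/status/123456)"""
print(extract_tweet_urls(sample))
# ['https://x.com/sama/status/123456', 'https://twitter.com/ESYudkowsky/status/987654']
```

Feeding each newsletter issue through something like this, then rendering the results as a single chronological list, is the whole trick.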

1. My current system

I check a couple of sources most days, at random times during the afternoon or evening. I usually do this on my phone, during breaks or when I'm otherwise AFK. My phone and laptop are configured to block most of these sources during the morning (LeechBlock and AppBlock).

When I find something I want to engage with at length, I usually put it into my "Reading inbox" note in Obsidian, or into my weekly todo list if it's above the bar.

I check my reading inbox on evenings and weekends, and also during "open" blocks that I sometimes schedule ... (read more)


Asya: is the above sufficient to allay the suspicion you described? If not, what kind of evidence are you looking for (that we might realistically expect to get)?

CNBC reports:

The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” stated the memo, which was viewed by CNBC.

The memo said OpenAI will also not enforce any other non-disparagement o

... (read more)

I'm not sure what to make of this omission.

OpenAI's March 2024 summary of the WilmerHale report included:

The firm conducted dozens of interviews with members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Based on the record developed by WilmerHale and following the recommendation of the Special Committee, the Board expressed its full confidence in Mr. Sam Altman and Mr. Greg Brockman’s ongoing leadership of OpenAI.

[...]

W

... (read more)

If we presume that Graham’s story is accurate, it still means that Altman took on two incompatible leadership positions, and only stepped down from one of them when asked to do so by someone who could fire him. That isn’t being fired. It also isn’t entirely not being fired.

According to the most friendly judge (e.g. GPT-4o) if it was made clear Altman would get fired from YC if he did not give up one of his CEO positions, then ‘YC fired Altman’ is a reasonable claim. I do think precision is important here, so I would prefer ‘forced to choose’ or perhaps ‘ef

... (read more)
Dana (8 karma)
These are the remarks Zvi was referring to in the post. Also worth noting Graham's consistent choice of the word 'agreed' rather than 'chose', and Altman's failed attempt to transition to chairman/advisor to YC. It sure doesn't sound like Altman was the one making the decisions here.

Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.

The key passages:

Helen Toner and Tasha McCauley, who left the board of OpenAI after its decision to reverse course on replacing Sam Altman, the CEO, last November, have offered comments on the regulation of artificial intelligence (AI) and events at OpenAI in a By Invitation piece in The Economist.

We do not accept the claims made by Ms Toner and Ms McCauley regarding events at OpenAI. Upon being asked by the former board (

... (read more)

The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.”

Note that Toner did not make claims regarding product safety, security, the pace of development, OAI's finances, or statements to investors (the board is not investors), customers, or business partners (the board are not business partners). She said he was not honest to the board.

we have found Mr Altman highly forthcoming

He was caught lying about the non-disparagement agreements, but I guess lying to the public is fine as long as you don't lie to the board?

Taylor's and Summers' comments here are pretty disappointing—it seems that they have no issue with, and maybe even endorse, Sam's now-publicly-verified bad behavior.

Flagging the most upvoted comment thread on EA Forum, with replies from Ozzie, which begins:

This post contains many claims that you interpret OpenAI to be making. However, unless I'm missing something, I don't see citations for any of the claims you attribute to them. Moreover, several of the claims feel like they could potentially be described as misinterpretations of what OpenAI is saying or merely poorly communicated ideas.

Nice. One thing: initially I couldn't figure out how to read this because I didn't see the key at the top. I think the key is a bit too easy to miss if you are zooming in to look at the image on mobile. Maybe make it more prominent?

Thanks for the heads up. Each of those code blocks is being treated separately, so the placeholder is repeated several times. We'll release a fix for this next week.

Usually the text inside codeblocks is not suitable for narration. This is a case where ideally we would narrate them. We'll have a think about ways to detect this.

I replaced it because it seemed like a less useful format.

  • Azure TTS cost per million characters = $16
  • Elevenlabs TTS cost per million characters = $180

1 million characters is roughly 200,000 words.

One hour of audio is roughly 9000 words.
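Putting the figures above together gives the per-hour comparison. A back-of-envelope sketch (the constants are the rough numbers quoted here, not exact vendor pricing):

```python
# Rough TTS cost per hour of narration, using the figures above.
AZURE_COST_PER_MCHAR = 16.0        # $ per million characters
ELEVENLABS_COST_PER_MCHAR = 180.0  # $ per million characters

WORDS_PER_MCHAR = 200_000  # rough: 1M characters ~ 200k words
WORDS_PER_HOUR = 9_000     # rough: one hour of audio ~ 9k words

# Characters narrated per hour of audio (~45k).
chars_per_hour = WORDS_PER_HOUR / WORDS_PER_MCHAR * 1_000_000

def cost_per_hour(cost_per_mchar):
    """Dollars per hour of narrated audio at a given per-million-char rate."""
    return cost_per_mchar * chars_per_hour / 1_000_000

print(f"Azure: ${cost_per_hour(AZURE_COST_PER_MCHAR):.2f}/hour")            # ~$0.72
print(f"ElevenLabs: ${cost_per_hour(ELEVENLABS_COST_PER_MCHAR):.2f}/hour")  # ~$8.10
```

So the ~10x gap ($10 vs $1 per hour) mentioned below is roughly an 11x gap on these numbers; either way, the order of magnitude is the point.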

Thanks! We're currently using Azure TTS. Our plan is to review every couple months and update to use better voices when they become available on Azure or elsewhere. Elevenlabs is a good candidate but unfortunately they're ~10x more expensive per hour of narration than Azure ($10 vs $1).

Yoav Ravid (2 karma)
I think the cost per million words measure from the previous version of your comment was also useful to know. Did you replace it because it's incorrect?

Thanks! We do have feature (2)—we remember whatever playback speed you last set. If you're not seeing this, please let me know what browser you're using.

Yoav Ravid (2 karma)
Oh, great! I didn't check if it exists before writing it down (whoops), so it probably works :)

Yep, if the pilot goes well then I imagine we'll do all the >100 karma posts, or something like that.

We'll add narrations for all >100 karma posts on the EA Forum later this month.

Yoav Ravid (2 karma)
Perhaps instead of, or in addition to, using a karma cutoff, it could be request-based? So you'd have that icon on all posts, and if someone clicks it on an old article that doesn't yet have a narration, it will ask them whether they want it to be narrated.
Yoav Ravid (3 karma)
How much would it cost to narrate all the posts on LessWrong? Or above various karma cutoffs? There are a lot of good posts under 100 karma (including many from the sequences), so I wonder what the tradeoff is.

It sounds like your story is similar to the one that Bernard Williams would tell.

Williams was in critical dialog with Peter Singer and Derek Parfit for much of his career.

This led to a book: Philosophy as a Humanistic Discipline.

If you're curious:

Edit 2020-03-08: I made a Google Sheet that makes it easy to view Johns Hopkins data for up to 5 locations of interest.

If you want to get raw data from the Johns Hopkins Github Repo into a Google Sheet, use these formulas:

=IMPORTDATA("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv")
=IMPORTDATA("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv")
=IM
... (read more)
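If you'd rather work with the same time-series files outside of Sheets, the wide format (one row per location, one column per date) is easy to parse. A minimal Python sketch, using made-up sample values in the JHU column layout rather than fetching the real CSV:

```python
import csv
import io

# The JHU time-series CSVs are "wide": one row per location, one column per date.
# Sample rows in that layout (values are illustrative, not real data):
SAMPLE_CSV = """\
Province/State,Country/Region,Lat,Long,3/1/20,3/2/20,3/3/20
,Italy,41.87,12.56,1694,2036,2502
Hubei,China,30.97,112.27,66907,67103,67217
"""

def series_for(csv_text, country, province=""):
    """Return {date: count} for one location, or None if not found."""
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    dates = header[4:]  # columns after Province/State, Country/Region, Lat, Long
    for row in reader:
        if row[1] == country and row[0] == province:
            return dict(zip(dates, (int(v) for v in row[4:])))
    return None

print(series_for(SAMPLE_CSV, "Italy"))
# {'3/1/20': 1694, '3/2/20': 2036, '3/3/20': 2502}
```

Pointing `csv.reader` at the downloaded file instead of `SAMPLE_CSV` gives you the same per-location series the Google Sheet formulas pull in.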
Eli Tyre (2 karma)
Excellent. I was just starting to figure out how to do this.