That's probably not what Page meant. On consideration, he would probably have clarified […]
[…]
A little more careful conversation would've prevented this whole thing.
The passage you quote earlier suggests that they had multiple lengthy conversations:
Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late into the night about AI safety […]
Quick discussions via email are not strong evidence of a lack of careful discussion and reflection in other contexts.
I made a change to my Twitter setup recently.
Initially, I discovered the "AI Twitter Recap" section of the AI News newsletter (example). It is good, but it doesn't actually include the tweet texts, and it isn't quite enough to make me feel fine about skipping my Twitter home screen.
So—I made an app that extracts all the tweet URLs that are mentioned in all of my favourite newsletters, and lists them in a feed. Then I blocked x.com/home (but not other x.com URLs, so I can still read and engage with particular threads) on weekdays.
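The extraction step is nothing fancy. Here is a rough sketch of it in Python (a sketch, not the actual code: the newsletters/ directory of saved plain-text issues and the feed.md output file are stand-in names):

```python
import re
from pathlib import Path

# Matches links to individual tweets on either domain.
TWEET_URL = re.compile(r"https?://(?:twitter\.com|x\.com)/\w+/status/\d+")

urls = []
for path in sorted(Path("newsletters").glob("*.txt")):
    text = path.read_text(encoding="utf-8", errors="ignore")
    urls.extend(TWEET_URL.findall(text))

# Deduplicate while keeping the order in which tweets were first mentioned.
seen = set()
feed = [u for u in urls if not (u in seen or seen.add(u))]

Path("feed.md").write_text("\n".join(feed), encoding="utf-8")
```

Order is preserved, so tweets show up roughly in the order the newsletters mentioned them.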
This is just repackaging th...
1. My current system
I check a couple of sources most days, at random times during the afternoon or evening. I usually do this on my phone, during breaks or when I'm otherwise AFK. My phone and laptop are configured to block most of these sources during the morning (LeechBlock and AppBlock).
When I find something I want to engage with at length, I usually put it into my "Reading inbox" note in Obsidian, or into my weekly todo list if it's above the bar.
I check my reading inbox on evenings and weekends, and also during "open" blocks that I sometimes schedule ...
CNBC reports:
...The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”
“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” stated the memo, which was viewed by CNBC.
The memo said OpenAI will also not enforce any other non-disparagement o
I'm not sure what to make of this omission.
OpenAI's March 2024 summary of the WilmerHale report included:
...The firm conducted dozens of interviews with members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; reviewed more than 30,000 documents; and evaluated various corporate actions. Based on the record developed by WilmerHale and following the recommendation of the Special Committee, the Board expressed its full confidence in Mr. Sam Altman and Mr. Greg Brockman’s ongoing leadership of OpenAI.
[...]
W
...If we presume that Graham’s story is accurate, it still means that Altman took on two incompatible leadership positions, and only stepped down from one of them when asked to do so by someone who could fire him. That isn’t being fired. It also isn’t entirely not being fired.
According to the most friendly judge (e.g. GPT-4o), if it was made clear that Altman would get fired from YC if he did not give up one of his CEO positions, then ‘YC fired Altman’ is a reasonable claim. I do think precision is important here, so I would prefer ‘forced to choose’ or perhaps ‘ef
Bret Taylor and Larry Summers (members of the current OpenAI board) have responded to Helen Toner and Tasha McCauley in The Economist.
The key passages:
...Helen Toner and Tasha McCauley, who left the board of OpenAI after its decision to reverse course on replacing Sam Altman, the CEO, last November, have offered comments on the regulation of artificial intelligence (AI) and events at OpenAI in a By Invitation piece in The Economist.
We do not accept the claims made by Ms Toner and Ms McCauley regarding events at OpenAI. Upon being asked by the former board (
The review’s findings rejected the idea that any kind of AI safety concern necessitated Mr Altman’s replacement. In fact, WilmerHale found that “the prior board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI's finances, or its statements to investors, customers, or business partners.”
Note that Toner did not make claims regarding product safety, security, the pace of development, OAI's finances, or statements to investors (the board is not an investor), customers, or business partners (the board is not a business partner). She said he was not honest with the board.
we have found Mr Altman highly forthcoming
He was caught lying about the non-disparagement agreements, but I guess lying to the public is fine as long as you don't lie to the board?
Taylor's and Summers' comments here are pretty disappointing—it seems that they have no issue with, and maybe even endorse, Sam's now-publicly-verified bad behavior.
Flagging the most upvoted comment thread on EA Forum, with replies from Ozzie, which begins:
This post contains many claims that you interpret OpenAI to be making. However, unless I'm missing something, I don't see citations for any of the claims you attribute to them. Moreover, several of the claims feel like they could potentially be described as misinterpretations of what OpenAI is saying or merely poorly communicated ideas.
Thanks for the heads up. Each of those code blocks is being treated separately, so the placeholder is repeated several times. We'll release a fix for this next week.
Usually the text inside codeblocks is not suitable for narration. This is a case where ideally we would narrate them. We'll have a think about ways to detect this.
It sounds like your story is similar to the one that Bernard Williams would tell.
Williams was in critical dialogue with Peter Singer and Derek Parfit for much of his career.
This led to a book: Philosophy as a Humanistic Discipline.
If you're curious:
This website is compiling links to datasets, dashboards, tools, etc.:
Edit 2020-03-08: I made a Google Sheet that makes it easy to view Johns Hopkins data for up to 5 locations of interest.
If you want to get raw data from the Johns Hopkins Github Repo into a Google Sheet, use these formulas:
=IMPORTDATA("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Confirmed.csv")
=IMPORTDATA("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_19-covid-Deaths.csv")
=IM
...
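If you then want just the rows for a particular location (roughly what my sheet does for up to 5 locations), a FILTER formula along these lines should work. It assumes the first IMPORTDATA result sits on a sheet named Confirmed and that the country name is in column B, which is how the Johns Hopkins CSV is currently laid out; "Italy" is just an example:

```
=FILTER(Confirmed!A2:ZZ, Confirmed!B2:B = "Italy")
```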
This seems overstated. E.g. Musk and Altman both read Superintelligence, and they both met Bostrom at least once in 2015. Sam published reflective blog posts on AGI in 2015, and it's clear that the OpenAI founders had lengthy, reflective discussions from the YC Research days onwards.
My personal experience was that Superintelligence made it harder to think clearly about AI by making lots of distinctions and few claims.