Your example of the janitor interrupting the scientist is a good demonstration of my point. I've organized over a hundred cybersecurity events featuring more than a thousand speakers, and I've never had a single janitor interrupt a talk. I have, on the other hand, had numerous "experts" whose inflated egos led them to pass off fiction as fact, draw conclusions from faulty data, and generally behave far worse than any janitor would.
In my conversations with computer science and philosophy professors, most of them not EA-affiliated and several who are, I've heard that their posts are frequently down-voted simply because they present opposing viewpoints.
Do the moderators of this forum regularly assess how they could improve the culture here so that there is more diversity of perspective?
I think you're too close to see this objectively. I haven't seen any room in this forum for policy discussions that stray from what the mods and active participants consider acceptable. A discussion that doesn't allow for opposing viewpoints is of little value. In my experience, and from what I've heard from others who tried posting here and quit, you have not succeeded in making this a forum where people with opposing viewpoints feel welcome.
Have you read this? https://www.politico.eu/article/rishi-sunak-ai-testing-tech-ai-safety-institute/
"“You can’t have these AI companies jumping through hoops in each and every single different jurisdiction, and from our point of view of course our principal relationship is with the U.S. AI Safety Institute,” Meta’s president of global affairs Nick Clegg — a former British deputy prime minister — told POLITICO on the sidelines of an event in London this month."
"OpenAI and Meta are set to roll out their next batch of AI models imminently. Yet neither has granted access to the U.K.’s AI Safety Institute to do pre-release testing, according to four people close to the matter."
"Leading AI firm Anthropic, which rolled out its latest batch of models in March, has yet to allow the U.K. institute to test its models pre-release, though co-founder Jack Clark told POLITICO it is working with the body on how pre-deployment testing by governments might work.
“Pre-deployment testing is a nice idea but very difficult to implement,” said Clark."
Yes, I like it! Thanks for sharing that analysis, Gunnar.
Good list. I think I'd use a triangle to organize them: consciousness at the base, then sentience, then, drawing from your list, phenomenal consciousness, followed by intentionality?
Thank you for asking.
Generalizing across disciplines, a critical aspect of human-level artificial intelligence, requires the ability to observe and compare. That is a feature of sentience. All sentient beings are conscious of their own existence. Non-sentient conscious beings exist, of course, but none of them could pass a Turing test or a coffee-making test; that requires both sentience and consciousness.
What happens if you cut power to the AWS or Azure infrastructure hosting the foundation model? Wouldn't that be the easiest way to test the various hypotheses associated with the Shutdown Problem and either confirm it or dismiss it as a problem not worth sinking further resources into?
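For concreteness, here is a minimal sketch of what "cutting power" could look like in practice, assuming the model happens to be served from a handful of AWS EC2 instances you control; the instance IDs and region below are hypothetical placeholders, and a managed inference service would need a different approach.

```python
# Minimal sketch: force-stopping the cloud instances assumed to host the model.
# Instance IDs and region are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical IDs of the machines serving the model.
model_hosts = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# Force=True skips the guest OS shutdown sequence -- the closest API
# analogue to pulling the power cord.
response = ec2.stop_instances(InstanceIds=model_hosts, Force=True)

for inst in response["StoppingInstances"]:
    print(inst["InstanceId"], inst["PreviousState"]["Name"],
          "->", inst["CurrentState"]["Name"])
```

Whether a deployed model could meaningfully resist that kind of shutdown is, of course, the empirical question at the heart of the Shutdown Problem.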
That's a good example of my point. Instead of a petition, a more impactful document would be a survey of the risks and the probability each of these notable public figures assigns to them.
In addition, it should disclose which of them have accepted research money from Open Philanthropy or any other EA-affiliated non-profit.
Which makes it an existential risk.
"An existential risk is any risk that has the potential to eliminate all of humanity or, at the very least, kill large swaths of the global population." - FLI
Fired from OpenAI's Superalignment team, Aschenbrenner now runs an investment fund backing AGI-focused startups, according to The Information.
"Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collision and Stripe president John Collision, according to his personal website.
In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”
“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said."
What happened to his concerns over safety, I wonder?