My name is Mikhail Samin (diminutive Misha, @Mihonarium on Twitter, @misha on Telegram).
Humanity's future can be enormous and awesome; losing it would mean our lightcone (and maybe the universe) losing most of its potential value.
My takes on technical AI notkilleveryoneism are what seems to me to be the very obvious, shallow stuff; but many AI safety researchers have told me our conversations improved their understanding of the alignment problem.
I'm running two small nonprofits: AI Governance and Safety Institute and AI Safety and Governance Fund. Learn more about our results and donate: aisgf.us/fundraising
I took the Giving What We Can pledge to donate at least 10% of my income for the rest of my life or until the day I retire (why?).
In the past, I've launched the most-funded crowdfunding campaign in the history of Russia (it was to print HPMOR! We printed 21,000 copies, three volumes each, so 63k physical books) and founded audd.io, which allowed me to donate >$100k to EA causes, including >$60k to MIRI.
[Less important: I've also started a project to translate 80,000 Hours, a career guide that helps people find a fulfilling career that does good, into Russian. Impact and effectiveness aside, for a year I was the head of the Russian Pastafarian Church: a movement claiming to be a parody religion, with 200,000 members in Russia at the time, trying to increase the separation between religious organisations and the state. I was a political activist and a human rights advocate. I studied relevant Russian and international law and wrote appeals that won cases against the Russian government in courts; I was able to protect people from unlawful police action. I co-founded the Moscow branch of the "Vesna" democratic movement, coordinated election observers in a Moscow district, wrote dissenting opinions for members of electoral commissions, helped Navalny's Anti-Corruption Foundation, helped Telegram with internet censorship circumvention, and participated in and organized protests and campaigns. The large-scale goal was to build a civil society and turn Russia into a democracy through nonviolent resistance. That goal wasn't achieved, but some of the more local campaigns were successful. It felt important and was also mostly fun, except for being detained by the police. I think it's likely the Russian authorities would imprison me if I ever visited Russia.]
I’m accumulating a small collection of spicy, previously unreported deets about Anthropic for an upcoming post. Some of them I sadly cannot publish because they might identify the sources. Others I can! Some of those will be surprising to staff.
If you can share anything that’s wrong with Anthropic and hasn’t previously been public, DM me, preferably on Signal (@ misha.09).
Yes, I’ve read their entire post. $14.4 of “social return” per $1 in the US seems incredibly unlikely to be comparable to the best GiveWell interventions or even GiveDirectly.
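For intuition on why that number looks uncompetitive, here's a minimal back-of-the-envelope sketch; every input except the 14.4 figure is an illustrative assumption of mine, not something from their post:

```python
# Back-of-the-envelope: $14.4 of US "social return" per $1 vs. a
# direct cash transfer to a much poorer recipient. All inputs except
# the 14.4 figure are my own illustrative assumptions.

us_consumption = 50_000        # assumed average US consumption, $/yr
recipient_consumption = 500    # assumed consumption of a GiveDirectly
                               # recipient, $/yr

# Under log utility, the welfare value of a marginal dollar scales as
# 1/consumption, so $1 to the recipient is worth roughly
# (us_consumption / recipient_consumption) US-consumption dollars.
cash_transfer_return = us_consumption / recipient_consumption  # ~100

us_program_return = 14.4       # the claimed US social return per $1

print(f"US program:         ~${us_program_return:.1f} per $1")
print(f"GiveDirectly-style:  ~${cash_transfer_return:.0f} per $1")
# Even before noting that GiveWell typically estimates its top
# charities at several times the value of cash transfers, the US
# program looks ~7x worse than simply giving the dollar away abroad.
```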
Is there a write-up on why the “abundance and growth” cause area is actually a relatively efficient way to spend money (instead of a way for OpenPhil to be(come) friends with everyone who’s into abundance & growth)? (These are good things to work on, but they seem many orders of magnitude worse than other ways to spend money.)
My understanding is that Habryka spent hours talking to the third party as a result of receiving the information.
I mistakenly assumed a pretty friendly/high-trust relationship in this context due to previous interactions with the Lightcone team and due to the nature of the project we (me and Habryka) were both helping with.
I think the interests of everyone involved (me, Habryka, and the third party) are all very aligned, but Habryka disagrees with my assessment (part of my reason for sharing the information was to get him to talk to the third party and figure out that they're a lot more aligned than he assumed).
I did not make the offer because, from the context of the DMs, I had assumed that Habryka's problem with the idea was keeping secrets at all, for deontology-related reasons, and not the personal cost of how complicated that is. I would've been happy to pay a reasonable price.
(Elsewhere, Habryka said the price would've been "If you recognize that as a cost and owe me a really small favor or something, I can keep it private, but please don't take this as a given".)
This would maybe work if you had to write >500 new words every day + publish >3,500 words every week, as two separate requirements.
Or maybe add explicit requirements for polishing things?
Working more on polishing is valuable, but I think most XP points are gained just from writing.
Yay!
The strips are normally sticky/have built-in tape.
I’m lazy, so I just put the LEDs on any high-up surface that points up, and I also have a strip in a large floor lamp from IKEA (I use transparent cable-management thingies to attach it to the rods inside the lamp). Be careful with some floor lamps: the LEDs emit a lot of heat, and some lamps won’t have enough airflow.
You should probably instead buy an aluminum rail + diffuser for LED strips that is wide enough for your specific strip, and attach the rail along the edge between the ceiling and the walls.
At Lighthaven, they just attach a long flat thing to the walls and add an LED strip on the side, pointing at the ceiling/the walls (though they also have some strips behind diffusers).
(I’m not doing that, because I’m lazy + occasionally move rooms.)
How many strips you can power depends on how much power they want to eat, how much your controller can handle, and how much your power supply adapter can provide. The power supply adapter, indeed, plugs into the wall, but you can also find ones that connect directly to the mains (some of those will be more powerful).
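If it helps, the budgeting I mean is just this arithmetic, sketched with made-up example numbers (the real watts-per-meter is on your strip's spec sheet, and the rating is on your power supply's label):

```python
# Power-budget sketch for LED strips, with made-up example numbers.

watts_per_meter = 14    # assumed draw of the strip at full brightness
strip_length_m = 5      # length of one strip
n_strips = 3            # strips running off the same supply

psu_watts = 150         # assumed power supply rating
derating = 0.8          # rule of thumb: don't run a PSU near 100%

total_draw = watts_per_meter * strip_length_m * n_strips
budget = psu_watts * derating

print(f"Strips want ~{total_draw} W; supply comfortably provides ~{budget:.0f} W")
if total_draw > budget:
    print("Over budget: get a beefier supply, or fewer/shorter/dimmer strips.")
```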
Yep, pretty much! (As an example of the motion you would want to be able to make in more general/less obvious settings.)
"man, please actually ask in advance next time, this is costly and makes me regret having that whole conversation in the first place. If you recognize that as a cost and owe me a really small favor or something, I can keep it private, but please don't take this as a given"
This would’ve worked!
(Other branches seem less productive to reply to, given this.)
Your message:
‘Hypothetical scenario (this has not happened and details are made up):
Me and [name] are discussing the landscape of [thing] as it regards to Lightcone strategy. [name] is like "man, I feel like if I was worried that other people soon try to jump into the space, then we really should probably just back [a thing] because probably something will soon cement itself in the space". I would be like "Oh, well, I think [third party] might do stuff". Rafe is like "Oh, fuck, hmm, that's bad". I am like "Yep, seems pretty fucked. Plausibly we should really get going on writing up that 'why [third party’s person] seems like a low-integrity dude' post we've been thinking about". [name] is like "Yeah, maybe. Does really seem quite bad if [third party’s person] tries to position himself here centrally. Actually, I think maybe [name] from CEA Comm health was working on some piece about [third party’s person]? Seems like she should know [third party’s person] is moving into the space, since it seems a bit more urgent if that's happening". I am like "Yep, seems right".’
If you had asked in advance I would have rejected your request
You didn’t say that when we were talking about it! You implied that since I didn’t ask in advance, you were not bound by anything; you did mention, “I can keep things confidential if you ask me in advance, but of course I wouldn't accept a request to receive private information about [third party] being sketchy that I can only use to their benefit?”
(“Being sketchy” is not how I’d describe the information. It was about an idea that Oliver is not okay with the third party working on, but is okay with others working on, because he doesn’t like the third party for a bunch of reasons and thinks it’s bad if they get more power, as per my understanding.)
I did not, and would not have, demanded that you somehow avoid propagating the information. If you had said, “sorry, I obviously can’t actually not propagate this information in my world-model and promise it won’t reflect on my plans, but I won’t actively try to use it outside of coordinating with the third party and will keep it confidential going forward”, that would’ve been great and expected and okay.
I asked you not to apply effort to using the information against the third party. I didn’t ask you to apply effort to keep the information out of your decision-making, to maintain separate world-models, or whatever. Keeping it confidential from people outside your team, and not going hard on figuring out how to strategically share or use it to damage the third party’s interests, would’ve been understandable and acceptable.
I'm very confused about how they're evaluating cost-effectiveness here. Like, no, spending $200 on vaccines in Africa to save lives seems like a much better deal than spending $200 to cause one more $400k apartment to exist.
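To make the exchange rate explicit, here's a sketch; the ~$5,000-per-life figure is my rough recollection of GiveWell-top-charity estimates, not a number from their analysis:

```python
# The implied tradeoff, granting the premise that $200 of spending
# causes one extra $400k apartment to exist.

cost_per_life_saved = 5_000   # assumed: roughly GiveWell top-charity territory
donation = 200

lives_saved = donation / cost_per_life_saved   # 0.04 lives per $200

# Break-even: preferring the housing spend means valuing one marginal
# $400k apartment above this many lives saved.
print(f"One extra apartment would need to be worth > {lives_saved:.2f} lives")
# i.e. more than 1/25 of a life saved per apartment, which seems like
# a very high bar for one marginal unit of US housing.
```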