Separately, I think it's good to invite people like Sam Altman to events like the Progress Conference, and would of course want Sam to be at important diplomatic meetings. If you think that's always bad, then I do think Lighthaven might be bad! I am definitely hoping for it to facilitate conversations between many people I think are causing harm for the world.
I think it's approximately always bad to invite Sam Altman. We know he lies and manipulates people. We know that he succeeded at stealing OpenAI from the non-profit. Inviting him to any high-trust space, where most people will by default assume good faith (which I would be very surprised is not the case at the Progress Conference), is in my judgment very bad. Inviting him to a negotiation where most people are already suspicious of each other might be worth it in some situations, maybe? I have no expertise here.
In general, I would like the incentive landscape to be such that if you steal OpenAI from the non-profit and work towards hastening the end of the world, you are socially shunned.
(I don't think stealing OpenAI was the most impactful thing from an x-risk perspective. But it's just so obviously evil from any worldview. I don't see any possibility of good-faith communication after that.)
My previous understanding of the situation was that the Progress Conference naively invited Sam Altman, and that Lightcone did not veto this and for some reason did not prioritise advising them against it. Knowing that you endorse this makes me update in a negative direction.
I wish someone would link the comment in question by habryka. I remember reading it, but I can't find it.
I think you said something like you "would not be surprised" or "expect it will happen" regarding renting Lighthaven to the labs, which did not give me the impression that the tax would be very high from the labs' perspective.
I do think anyone (including habryka) has the right to say "oops, that was badly written, here's what I actually meant."
But what was said in that original comment still matters for whether or not this was a reasonable thing to be concerned about, prior to the interactions in the comments here.
My impression after reading that old comment from you was much more in line with what Mikhail said. So I'm happy this got brought up and clarified.
Yes, I just remembered that I forgot to do this. Oops.
I choose my clothing based on:
The list is roughly in order of priority, and I don't wear anything that does not at least satisfy some base level on each of them.
Point 2 depends on the setting. E.g. I wouldn't go to a costume party without at least an attempt at a costume. Also, at a costume party a great costume scores better on 2 than an average one; this is an example of fitting in not being the same as blending in.
In general, 2 is not very constraining; there are a lot of different looks that qualify as fitting in at most places I hang out. But I would still probably experiment with more unusual looks if I were less conformist. And I would be naked a lot more, if that were normal.
I'm emotionally conformist. But I expect a lot of people I meet don't notice this, because I'm also bad at conforming. There is just so much else pulling in other directions.
I would recommend that anyone with dependents, or any other need for economic stability (e.g. no safety net from your family or country), should focus on earning money.
You can save up and fund yourself. Or, if that takes too long, you can give what you can (10%, or whatever works for you) to support someone else.
Definitely yes to more honesty!
However, I think it's unfair to describe all the various AI safety programs as "MATS clones". E.g. AISC is both older and quite different.
But no amount of "creative ways to bridge the gap" will solve the fundamental problem, because there isn't really a gap. It's not as if there are lots of senior jobs waiting, if only we could level people up faster. The simple fact is that there isn't enough money.
So the section headings are not about the transmission type investigated, but about which transmission type the studies pointed to as the leading one?
Datapoint: I found EAG to be valuable when I lived in Sweden. After moving to London, I completely lost interest. I don't need it anymore.
I'm confused by the section headings.
"The large particle test" and "The small particle test" you write about under "Fomites" seems to be about Aerosols.
The experiments described under "Aerosols" seems to be either about mixed transmission or Fomites only. Passing around cards and poker chips, etc.
Am I missunderstanding something?
Thanks for your questions!
If I understand you correctly, this is already what we are doing. Each on-indicator is distributed over S pairs of neurons in the Large Network, where we used S=6 for the results in this post.
I can't increase S beyond that, for the given D and T, without breaking the constraint that no two circuits should overlap in more than one neuron, and this constraint is an important assumption in the error calculations. However, it is possible that a more clever way to allocate neurons could help a bit.
See here: https://www.lesswrong.com/posts/FWkZYQceEzL84tNej/circuits-in-superposition-2-now-with-less-wrong-math#Construction
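In case it helps to make the constraint concrete, here is a minimal sketch (not the code from the post; the T, D and S values are just placeholders) of randomly assigning 2S neurons to each circuit and checking that no two circuits share more than one neuron:

```python
# Hypothetical sketch: assign each of T circuits S pairs (2*S neurons) out of D
# neurons, rejection-sampling until no two circuits overlap in more than one
# neuron. This only illustrates the constraint, not the post's construction.
import itertools
import random

def allocate(T, D, S, seed=0, max_tries=10_000):
    rng = random.Random(seed)
    for _ in range(max_tries):
        circuits = [set(rng.sample(range(D), 2 * S)) for _ in range(T)]
        if all(len(a & b) <= 1 for a, b in itertools.combinations(circuits, 2)):
            return circuits
    raise RuntimeError("no valid allocation found; try a smaller S or a larger D")

circuits = allocate(T=10, D=100, S=2)
print(max(len(a & b) for a, b in itertools.combinations(circuits, 2)))  # <= 1
```

Pushing S up while holding D and T fixed makes the rejection step fail more and more often, which is the same tension I'm describing above.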
We don't have T extra neurons lying around. This post is about superposition, which means we have fewer neurons than features.
If we assume that which circuits are active never changes across layers (which is true for the example in this post), there is another thing we can do: encode the on-indicators in superposition at the start, and then just copy this information from layer to layer. This prevents the parts of the network that are responsible for the on-indicators from seizing up. The reason we didn't do this is that we wanted to test a method that could be used more generally.
We don't need to encode the best guess; we can just encode the true value given in the input (up to the uncertainty that comes from compressing it into superposition). If we assume that these values stay constant, that is. But storing the value from early on also assumes that the values of the on-indicators stay fixed.
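To make "encode the true value in superposition and carry it along" concrete, here is a rough sketch (with made-up sizes and a plain random embedding, not the construction from the post):

```python
# Hypothetical sketch: store T binary on-indicators in D < T neurons using a
# random +-1 embedding, pass the representation along unchanged ("copying" it
# from layer to layer), and read the indicators back out with a threshold.
import numpy as np

rng = np.random.default_rng(0)
T, D = 200, 100                                        # more on-indicators than neurons
W = rng.choice([-1.0, 1.0], size=(D, T)) / np.sqrt(D)  # random embedding directions

x = (rng.random(T) < 0.03).astype(float)  # sparse ground-truth on-indicators
h = W @ x                                 # encode once, at the input

# Any later layer that needs the on-indicators decodes from the carried-along h.
decoded = (W.T @ h > 0.5).astype(float)
print((decoded == x).mean())  # recovery is good as long as x stays sparse
```

The readout noise grows with the number of simultaneously active on-indicators, which is why this only works when the active circuits are sparse and stay fixed across layers.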
Ooo, interesting! I will definitely have a look into those papers. Thanks!