Often you can compare your own Fermi estimates with those of other people, and that’s sort of cool, but what’s way more interesting is when they share what variables and models they used to get to the estimate. This lets you actually update your model in a deeper way.
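As a toy illustration of "sharing the variables, not just the number", here is the classic piano-tuners Fermi estimate written out with every input explicit. All the numbers are made-up assumptions; the point is that a reader can dispute a specific variable rather than the final figure.

```python
# Toy Fermi estimate with the variables made explicit, so a reader can
# update a specific input rather than just comparing final numbers.
# Every number below is an illustrative assumption, not a researched fact.

population = 9_000_000           # people in the metro area (assumption)
people_per_household = 2.5       # assumption
households_with_piano = 0.05     # fraction owning a piano (assumption)
tunings_per_piano_per_year = 1   # assumption
tunings_per_tuner_per_year = 2 * 5 * 50  # 2/day, 5 days/week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # the estimate itself is the least interesting output
```

Someone who thinks piano ownership is 1%, not 5%, can now say exactly where their model diverges from yours.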
5th-generation military aircraft are extremely optimised to reduce their radar cross-section. It is this ability above all others that makes the F-35 and the F-22 so capable: modern anti-aircraft weapons are very good, so the only safe way to fly over a well-defended area is not to be seen.
But wouldn't it be fairly trivial to detect a stealth aircraft optically?
This is what an F-35 looks like from underneath at about 10 by 10 pixels:
You and I can easily tell what that is (take a step back, or squint). So can GPT-4:
...The image shows a silhouette of a fighter jet in the sky, likely flying at high speed. The clear blue sky provides a sharp contrast, making the aircraft's dark outline prominent. The
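As a toy check that ~10×10 pixels can preserve the relevant signal, here is a minimal box-average downsampling sketch. The "image" is synthetic (a dark square on a bright background standing in for a silhouette against sky), not a real photo, and the resolution and brightness values are assumptions.

```python
# Sketch: box-average downsampling a grayscale image to 10x10, to show that
# a dark silhouette against a bright sky keeps its contrast even at that
# resolution. The image here is synthetic; all values are assumptions.

def downsample(img, out_w=10, out_h=10):
    h, w = len(img), len(img[0])
    out = []
    for oy in range(out_h):
        row = []
        for ox in range(out_w):
            # average the source block that maps onto this output pixel
            y0, y1 = oy * h // out_h, (oy + 1) * h // out_h
            x0, x1 = ox * w // out_w, (ox + 1) * w // out_w
            block = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# 100x100 bright "sky" (230) with a 40x40 dark "aircraft" (20) in the middle
sky = [[230] * 100 for _ in range(100)]
for y in range(30, 70):
    for x in range(30, 70):
        sky[y][x] = 20

small = downsample(sky)
# centre pixels stay dark, corners stay bright: the contrast survives
print(small[5][5], small[0][0])  # → 20.0 230.0
```

The high-contrast blob survives the downsampling intact, which is the property that makes the silhouette recognisable at a distance.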
Let's rephrase: if this were a major issue for the F-35, the USA wouldn't have invested trillions of dollars in stealth without addressing optical camouflage. All F-35s would have camouflage paint. There would be a lot of research into how to reduce the visibility of aircraft, just as there is for reducing RCS. Given that they don't do this, they clearly don't think optical detection is a major concern.
Previously: General Thoughts on Secular Solstice.
This blog post is my scattered notes and ramblings about the individual components (talks and songs) of Secular Solstice in Berkeley. Talks have their title in bold, and I split the post into two columns, with the notes I took about the content of the talk on the left and my comments on the talk on the right. Songs have normal formatting.
This feels like a sort of whig history: a history that neglects most of the complexities and culture-dependence of the past in order to advance a teleological narrative. I do not think that whig histories are inherently wrong (although the term has negative connotations). Whig histories should be held to a very strict standard because they make claims about how...
oh yeah my dispute isn't "the character in the song isn't talking about building AI" but "the song is not a call to accelerate building AI"
Abstract:
Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
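To make the abstract's core idea concrete, here is a minimal sketch of one KAN layer: each edge carries its own learnable univariate function (here a piecewise-linear interpolant standing in for the paper's splines), and a node simply sums its incoming edge outputs. The grid, the edge parameters, and the linear interpolation are all simplifications of mine, not the paper's exact construction.

```python
# Minimal sketch of the KAN idea: learnable univariate functions live on the
# EDGES (replacing linear weights), and nodes just sum. A piecewise-linear
# function on a fixed knot grid stands in for the paper's B-splines.

def edge_fn(x, knots, values):
    """Piecewise-linear function defined by (knot, value) pairs."""
    if x <= knots[0]:
        return values[0]
    if x >= knots[-1]:
        return values[-1]
    for i in range(len(knots) - 1):
        if knots[i] <= x <= knots[i + 1]:
            t = (x - knots[i]) / (knots[i + 1] - knots[i])
            return (1 - t) * values[i] + t * values[i + 1]

def kan_layer(inputs, edge_params, knots):
    # edge_params[j][i] = per-edge values (the learnable parameters)
    # for the edge running from input i to output node j
    return [sum(edge_fn(x, knots, edge_params[j][i])
                for i, x in enumerate(inputs))
            for j in range(len(edge_params))]

knots = [-1.0, 0.0, 1.0]
# 2 inputs -> 1 output node; training would adjust these per-edge values
params = [[[0.0, 0.0, 1.0],    # edge from input 0: roughly ReLU-shaped
           [1.0, 0.0, 1.0]]]   # edge from input 1: roughly |x|-shaped
print(kan_layer([0.5, -0.5], params, knots))  # → [1.0]
```

Contrast with an MLP, where the edge would multiply by a single scalar weight and the nonlinearity would sit on the node: here the nonlinearity itself is the trainable object on each edge.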
Is this a massive exfohazard?
Very Unlikely
Should this have been published?
Yes
Looking for blog platform/framework recommendations
I had a WordPress blog, but I don't like WordPress and I want to move away from it.
Substack doesn't seem like a good option because I want high customizability and multilingual support (my blog is going to be in English and Hebrew).
I would like something that I can use for free with my own domain (so not Wix).
The closest thing I found to what I'm looking for was MkDocs Material, but it's still geared too much towards documentation, and I don't like its blog functionality enough.
Other requirements: Da...
When I introduce people to plans like QACI, they often have objections like "How is an AI going to do all of the simulating necessary to calculate this?" or "If our technology is good enough to calculate this with any level of precision, we can probably just upload some humans." or just "That's not computable."
I think these kinds of objections are missing the point of formal goal alignment and maybe even outer alignment in general.
To formally align an ASI to human (or your) values, we do not need to actually know those values. We only need to strongly point to them.
AI will figure out our values. Whether it's aligned or not, a recursively self-improving AI will eventually get a very good model of our values, as part...
Yikes, I'm not even comfortable maximizing my own CEV.
What do you think of this post by Tammy?
Where is the longer version of this? I do want to read it. :)
Well perhaps I should write it :)
Specifically, what is it about the human ancestral environment that made us irrational, and why wouldn't RL environments for AI cause the same or perhaps a different set of irrationalities?
Mostly that thing where we had a lying vs lie-detecting arms race and the liars mostly won by believing their own lies and that's how we have things like overconfidence bias ...
A classic problem with Christianity is the so-called ‘problem of evil’—that friction between the hypothesis that the world’s creator is arbitrarily good and powerful, and a large fraction of actual observations of the world.
Coming up with solutions to the problem of evil is a compelling endeavor if you are really rooting for a particular bottom line re Christianity, or I guess if you enjoy making up faux-valid arguments for wrong conclusions. At any rate, I think about this more than you might guess.
And I think I’ve solved it!
Or at least, I thought of a new solution which seems better than the others I’ve heard. (Though I mostly haven’t heard them since high school.)
The world (much like anything) has different levels of organization. People are made of...
I'm so happy someone came up with this!
This is a companion piece to “Why I am no longer thinking about/working on AI safety.” I gave Claude a nearly-complete draft of the post, and asked it to engage with it with its intended audience in mind. I was pleasantly surprised at the quality of its responses. After a back-and-forth about the arguments laid forth in the post, I thought it might be interesting to ask Claude how it thought certain members of this community would respond to the post. I figured it might be interesting to post the dialogue here in case there’s any interest, and if Eliezer, Rohin, or Paul feel that the model has significantly misrepresented their would-be views on my post in its estimation, I would certainly be interested in learning their...
Asking ChatGPT to criticize an article often produces good suggestions too.
It certainly is possible! In more decision-theoretic terms, I'd describe this as "it sure would suck if agents in my reference class just optimized for their own happiness; it seems like the instrumental thing for agents in my reference class to do is to maximize everyone's happiness". Which is probably correct!
But as per my post, I'd describe this position as "not intrinsically altruistic" — you're optimizing for everyone's happiness because "it sure would suck if agents in my reference class didn't do that", not because you intrinsically value that everyone be happy, regardless of reasoning about agents and reference classes and veils of ignorance.