That's a good Coasian point. Talking out of my butt, but I think the airlines don't carry the risk. The sales channels (airlines, Expedia, etc.) take commissions for distributing an insurance product designed by another company (Travel Insured International, Seven Corners), which handles product design and compliance, with the actual claims being handled by another company and the insurance capital provided by yet another company (AIG, Berkshire Hathaway).
LLMs tell me the distributors get 30–50% commission, which tells you that it's not a very good product for consumers.
But fear of death does seem like a kind of value systematization
I don't think it's System 1 doing the systematization. Evolution beat fear of death into us in lots of independent forms (fear of heights, snakes, thirst, suffocation, etc.), but for the same underlying reason. Fear of death is not just an abstraction humans invented or acquired in childhood; it is a "natural idea" pointed at by our brain's innate circuitry from many directions. Utilitarianism doesn't come with that scaffolding. We don't learn to systematize Euclidean and Minkowskian spaces the same way either.
Quick takes are presented inline, posts are not. Perhaps posts could be presented as title + <80 (140?) character summary.
You may live in a place where arguments about the color of the sky are really arguments about tax policy. I don't think I live there? I'm reading your article saying "If Blue-Sky-ism is to stand a chance against the gravitational pull of Green-Sky-ism, it must offer more than talk of a redistributionist tax system" and thinking "...what on earth...?". This might be perceptive cultural insight about somewhere, but I do not understand the context. [This is my guess as to why you are being voted down.]
You might be[1] overestimating the popularity of "they are playing god" in the same way you might overestimate the popularity of woke messaging. Loud moralizers aren't normal people either. Messages that appeal to them won't have the support you'd expect given their volume.
Compare, "It's going to take your job, personally". Could happen, maybe soon, for technophile programmers! Don't count them out yet.
Not rhetorical -- I really don't know
Eliezer Yudkowsky wrote a story, Kindness to Kin, about aliens who love(?) their family members in proportion to Hamilton's "I'd lay down my life for two brothers or eight cousins" rule. It gives an idea of how alien that is.
Then again, Proto-Indo-European had detailed family words that correspond rather well to confidence of genetic kinship, so maybe it's a cultural thing.
Sure, I think that's a fair objection! Maybe, for a business, it's worth paying the marginal security costs of giving 20 new people admin accounts, but for the federal government that security cost is too high. Is that what people are objecting to? I'm reading comments like this:
Yeah, that's beyond unusual. It's not even slightly normal. And it is in fact very coup-like behavior if you look at coups in other countries.
And, I just don't think that's the case. I think this is pretty-darn-usual and very normal in the management consulting / private equity world.
I don't know anything about how things are done in management consulting or private equity.[1] Ever try it in a commercial bank?
Now imagine that you're in an environment where rules are more important than that.
Coups don't tend to start by bringing in data scientists.
Coups tend to start by bypassing and/or purging professionals in your government and "bringing in your own people" to get direct control over key lever...
Huh, I came at this with the background of doing data analysis in large organizations and had a very different take.
You're a data scientist. You want to analyze what this huge organization (US government) is spending its money on in concrete terms. That information is spread across 400 mutually incompatible ancient payment systems. I'm not sure if you've viscerally felt the frustration of being blocked, spending all your time trying to get permission to read from 5 incompatible systems, let alone 400. But it would take months or yea...
As someone who has been allowed access into various private and government systems as a consultant, I think the near mode view for classified government systems is different for a reason.
E.g., data is classified as Confidential when its release could cause damage to national security. It's Secret if it could cause serious damage to national security, and it's Top Secret if it could cause exceptionally grave damage to national security.
People lose their jobs for accidentally putting a classified document onto the wrong system, even if it's still...
Checking my understanding: for the case of training a neural network, would S be the parameters of the model (along with perhaps buffers/state like moment estimates in Adam)? And would the evolution of the state space be local in S space? In other words, for neural network training, would S be a good choice for H?
In a recurrent neural network doing in-context learning, would S be something like the residual stream at a particular token?
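If it helps make the question concrete, here is a minimal sketch (all details are my own assumptions, not from the original post) of the "S = parameters plus optimizer state" reading: one Adam update is a map S_t → S_{t+1} that depends only on the current state and the current batch, which is one sense in which the evolution is local in S.

```python
import numpy as np

# Hypothetical sketch: treat training as a dynamical system whose state S is
# (parameters, Adam first/second moments, step count). One update maps
# S_t -> S_{t+1} using only S_t and the current batch.

def adam_step(S, grad, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    w, m, v, t = S
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return (w, m, v, t)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

S = (np.zeros(3), np.zeros(3), np.zeros(3), 0)  # initial state
for _ in range(500):
    grad = 2 * X.T @ (X @ S[0] - y) / len(y)    # full-batch MSE gradient
    S = adam_step(S, grad)

print(np.round(S[0], 2))  # weights approach true_w
```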
I'll conjecture the following VERY SPECULATIVE, inflammatory, riff-on-vibes statements:
Thanks! I'm not a GPU expert either. The reason I want to spread the toll units inside the GPU itself isn't to turn the GPU off -- it's to stop replay attacks. If the toll thing is in a separate chip, then the toll unit must have some way to tell the GPU "GPU, you are cleared to run". To hack the GPU, you just copy that "cleared to run" signal and send it to the GPU. The same "cleared to run" signal must always make the GPU work, unless there is something inside the GPU to make sure it won't accept the same "cleared to run" signal tw...
I used to assume disabling a GPU in my physical possession would be impossible, but now I'm not so sure. There might be ways to make bypassing GPU lockouts on the order of difficulty of manufacturing the GPU (requiring nanoscale silicon surgery). Here's an example scheme:
Nvidia changes their business models from selling GPUs to renting them. The GPU is free, but to use your GPU you must buy Nvidia Dollars from Nvidia. Your GPU will periodically call Nvidia headquarters and get an authorization code to do 10^15 more floating point operatio...
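A minimal sketch of the anti-replay logic such a scheme needs (everything here is hypothetical: the key handling, message format, and counter scheme are made up for illustration). The vendor signs (gpu_id, counter, flops) with a per-device secret, and the on-die verifier keeps a monotonic counter so a captured "cleared to run" message is useless the second time:

```python
import hmac, hashlib

SECRET = b"burned-into-the-gpu-at-fab-time"  # per-GPU key, an assumption

def make_code(gpu_id: str, counter: int, flops: int) -> bytes:
    """Vendor side: sign an authorization for `flops` more operations."""
    msg = f"{gpu_id}|{counter}|{flops}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

class TollUnit:
    """On-die verifier with a monotonic counter (the anti-replay state)."""
    def __init__(self, gpu_id):
        self.gpu_id = gpu_id
        self.last_counter = -1

    def redeem(self, counter, flops, code) -> bool:
        msg = f"{self.gpu_id}|{counter}|{flops}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).digest()
        if counter <= self.last_counter:        # replayed or stale code
            return False
        if not hmac.compare_digest(code, expected):
            return False
        self.last_counter = counter
        return True

gpu = TollUnit("gpu-0")
code = make_code("gpu-0", counter=1, flops=10**15)
print(gpu.redeem(1, 10**15, code))  # True: fresh authorization
print(gpu.redeem(1, 10**15, code))  # False: same code replayed
```

The point of the sketch: the counter state has to live somewhere the attacker can't reset, which is the argument for putting it inside the GPU die rather than in a separate chip.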
Not a billion billion times. You need ≈2^100 presses to get any signal, and ≈O(2^200) presses to figure out which way the signal goes. 2^200 ≈ 10^60. The Planck time is about 10^-44 seconds. If you try to press the button more than about 10^44 times per second, the radiation the electrons in the button emit will have so high a frequency and so small a wavelength that it will collapse into a black hole.
Incentives for NIMBYism are an objection I've seldom seen stated. "Of course I don't want to up-zone my neighborhood to allow more productive buildings -- that would triple my taxes!"
You're being downvoted and nobody's telling you why :-(, so I thought I'd give some notes.
Conjunction Fallacy. Adding detail makes ideas feel more realistic, yet strictly less likely to be true.
Virtues for communication and thought can be diametrically opposed.
In a world where AI progress has wildly accelerated chip manufacture
This world?
What distinction are you making between "visualising" and "seeing"?
Good question! By "seeing" I meant having qualia, an apparent subjective experience. By "visualizing" I meant...something like using the geometric intuitions you get by looking at stuff, but perhaps in a philosophical zombie sort of way? You could use non-visual intuitions to count the vertices on a polyhedron, like algebraic intuitions or 3D tactile intuitions (and I bet blind mathematicians do). I'm not using those. I'm thinking about a wireframe image, drawn...
...I do not believe this test. I'd be very good at counting vertices on a polyhedron through visualization and very bad at experiencing the sensation of seeing it. I do "visualize" the polyhedra, but I don't "see" them. (Frankly I suspect people who say they experience "seeing" images are just fooling themselves based on e.g. asking them to visualize a bicycle and having them draw it)
Thanks for crossposting! I've highly appreciated your contributions and am glad I'll continue to be able to see them.
Quick summary of a reason why the constituent parts of super-organisms, like the ants of ant colonies, the cells of multicellular organisms, and endosymbiotic organelles within cells[1], are evolutionarily incentivized to work together as a unit:
Question: why do ants seem to care more about the colony than themselves? Answer: reproduction in an ant colony is funneled through the queen. If the worker ant wants to reproduce its genes, it can't do that by being selfish. It has to help the queen reproduce. Genes in ant workers have ...
EDIT: Completely rewritten to be hopefully less condescending.
There are lessons from group selection and the extended phenotype which vaguely reduce to "beware thinking about species as organisms". It is not clear from this essay whether you've encountered those ideas. It would be helpful for me reading this essay to know if you have.
Hijacking this thread, has anybody worked through Ape in the coat's anthropic posts and understood / gotten stuff out of them? It's something I might want to do sometime in my copious free time but haven't worked up to it yet.
Sorry, that was an off-the-cuff example I meant to help gesture towards the main idea. I didn't mean to imply it's a working instance (it's not). The idea I'm going for is:
This might be a reason to try to design AIs to fail safe and break without controlling units. E.g., before fine-tuning language models to be useful, fine-tune them to not generate useful content without approval tokens generated by a supervisory model.
I suspect experiments with almost-genetically-identical twins might advance our understanding of almost all genes except sex chromosomes.
Sex chromosomes are independent coin flips with huge effect sizes. That's amazing! Nature provided us with experiments everywhere! Most alleles are confounded (e.g., correlated with socioeconomic status for no causal reason) and have very small effect sizes.
Example: Imagine an allele which is common in East Asians, uncommon in Europeans, and makes people 1.1 mm taller. Even though the alle...
The player can force a strategy where they win 2/3 of the time (guess a door and never switch). The player never needs to accept worse.
The host can force a strategy where the player loses 1/3 of the time (never let the player switch). The host never needs to accept worse.
Therefore, the equilibrium has 2/3 win for the player. The player can block this number from going lower and the host can block this number from going higher.
I want to love this metaphor but don't get it at all. Religious freedom isn't a narrow valley; it's an enormous Schelling hyperplane. 85% of people are religious, but no majority is Christian or Hindu or Kuvah'magh or Kraẞël or Ŧ̈ř̈ȧ̈ӎ͛ṽ̥ŧ̊ħ or Sisters of the Screaming Nightshroud of Ɀ̈ӊ͢Ṩ͎̈Ⱦ̸Ḥ̛͑. These religions don't agree on many things, but they all pull for freedom of religion over the crazy *#%! the other religions want.
Suppose there were some gears in physics we weren't smart enough to understand at all. What would that look like to us?
It would look like phenomena that appear intrinsically random, wouldn't it? Like imagine there were a simple rule about the spin of electrons that we just. don't. get. Instead of noticing the simple pattern ("Electrons are up if the number of Planck timesteps since the beginning of the universe is a multiple of 3"), we'd only be able to figure out statistical rules of thumb for our measurements ("we measure electrons as up ...
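A toy version of that thought experiment, using the made-up spin rule above: the rule is simple and deterministic, but an observer limited to frequency statistics only recovers "up about 1/3 of the time".

```python
# The hidden rule is simple and deterministic, but frequency statistics
# alone make it look like a biased coin.

def spin(t):
    # the pattern we hypothetically "just don't get"
    return "up" if t % 3 == 0 else "down"

measurements = [spin(t) for t in range(30000)]
freq_up = measurements.count("up") / len(measurements)
print(freq_up)  # exactly 1/3 -- a "statistical rule of thumb"
```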
Humans are computationally bounded, Bayes is not. In an ideal Bayesian perspective:
Humans are computationally bounded and can't think this way.
(riffing)
"Ideas" find paradigms for modeling the univer...
Statements made to the media pass through an extremely lossy compression channel, then are coarse-grained, and then turned into speech acts.
That lossy channel has maybe one bit of capacity on the EA thing. You can turn on a bit that says "your opinions about AI risk should cluster with your opinions about Effective Altruists", or not. You don't get more nuance than that.[1]
If you have to choose between outputting the more informative speech act[2] and saying something literally true, it's more cooperative to get the output speech act corre...
If someone wants to distance themselves from a group, I don't think you should make a fuss about it. Guilt by association is the rule in PR and that's terrible. If someone doesn't want to be publicly coupled, don't couple them.
I think the classic answer to the "Ozma Problem" (how to communicate to far-away aliens what earthlings mean by right and left) is the Wu experiment. Electromagnetism and the strong nuclear force aren't handed, but the weak nuclear force is handed. Left-handed electrons participate in weak nuclear force interactions but right-handed electrons are invisible to weak interactions[1].
(amateur, others can correct me)
Like electrons, right-handed neutrinos are also invisible to weak interactions. Unlike electrons, neutrinos are also invisi
Can you symmetrically put the atoms into that entangled state? You both agree on the charge of electrons (you aren't antimatter annihilating), so you can get a pair of atoms into |↑,↑⟩, but can you get the entangled pair to point in opposite directions along the plane of the mirror?
Edit: Wait, I did that wrong, didn't I? You don't make a spin up atom by putting it next to a particle accelerator sending electrons up. You make a spin up atom by putting it next to electrons you accelerate in circles, moving the electrons in the direction your fingers point when a (real) right thumb is pointing up. So one of you will make a spin-up atom and the other will make a spin-down atom.
No, that's a very different problem. The matrix overlords are Laplace's demon, with god-like omniscience about the present and past. The matrix overlords know the position and momentum of every molecule in my cup of tea. They can look up the microstate of any time in the past, for free.
The future AI is not Laplace's demon. The AI is informationally bounded. It knows the temperature of my tea, but not the position and momentum of every molecule. Any uncertainties it has about the state of my tea will increase exponentiall...
Oh, wait, is this "How does a simulation keep secrets from the (computationally bounded) matrix overlords?"
I don't think I understand your hypothetical. Is your hypothetical about a future AI which has:
I think it's exponentially hard to retrodict the past. It's hard in a similar way as encryption is hard. If an AI isn't powerful enough to break encryption, it also isn't powerful enough to retrodict the past accurately enough to break secrets.
If you really want to keep something s...
I think a lot of people losing their jobs would probably do the trick, politics-wise. For most people the crux is "will AIs be more capable than humans", not "might AIs more capable than humans be dangerous".
I do this (with "sev") when counting to myself. Nice to see that other people chose the same Schelling point!
Yearly rent on the house is greater than yearly taxes on the house, right? As you give the government shares of your house, the shares you hand over as tax become shares you owe rent on, so you have to pay the government more and more. A death spiral ensues and you lose the house.
"What if the government doesn't charge rent on its shares?" Then everyone lets the government own 99% of their house to avoid the taxes.
A lot of the value of Georgism is incentivizing people who don't value a property to move out so people who do value the property can move in.
(off-the-cuff opinion)
I'm really not sure how to do this, but are there ways to collect some counteracting or unbiased samples about Sam Altman? Or to do another one-sided vetting of other CEOs to see what the base rate of being able to dig up questionable things is? Collecting evidence that points in only one direction just sets off huge warning lights 🚨🚨🚨🚨 that I can't quiet.
I think this is the sort of conversation we should be having! [Side note: I think restricting compute is more effective than restricting research because you don't need 100% buy-in.
The analogy to nuclear weapons is, I think, a good one. The science behind nuclear weapons is well known -- what keeps them fro...
I think the weakness with KL divergence is that the potentially harmful model can do things the safe model would be exponentially unlikely to do. Even if the safe model has a 1-in-1-trillion chance of stabbing me in the face, the KL penalty for stabbing me in the face is log(1 trillion) (and logs make even huge numbers small).
What about limiting the unknown model to choose among the safe model's cumulative 98% most likely actions? If the safe model never has more than a 1% chance of taking an action that will kill you, then the unknown model won't be able to take an action that kills you. This isn't terribly different from the top-k and nucleus (top-p) sampling many language models use in practice.
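A sketch of what that filter could look like (the probabilities and action indices are made up; this is essentially nucleus/top-p filtering applied to the safe model's distribution):

```python
import numpy as np

# Hypothetical filter: the untrusted model may only pick actions inside the
# safe model's top-98% cumulative probability mass, so actions the safe model
# considers near-impossible are unreachable regardless of what the untrusted
# model prefers.

def allowed_actions(safe_probs, p=0.98):
    order = np.argsort(safe_probs)[::-1]     # most likely actions first
    cum = np.cumsum(safe_probs[order])
    k = np.searchsorted(cum, p) + 1          # smallest prefix covering mass p
    return {int(i) for i in order[:k]}

# action 4 is the 1e-5 "stab me in the face" action
safe_probs = np.array([0.50, 0.30, 0.15, 0.04999, 1e-5])
allowed = allowed_actions(safe_probs)
print(allowed)  # -> {0, 1, 2, 3}: the 1e-5 action is excluded
```

Unlike a KL penalty, which charges only log(1/p) for an action of safe-probability p, this hard cutoff makes the tail actions unavailable at any price.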
Epistemic status: lukewarm take from the gut (not brain) that feels rightish
The "Big Stupid" of the AI doomers 2013-2023 was AI nerds' solution to the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs". Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche. When the public turned out to be somewhat receptive to the idea of regulating ...
There are some conversations about policy & government response taking place. I think there are a few main reasons you don't see them on LessWrong:
As recently as early 2023 Eliezer was very pessimistic about AI policy efforts amounting to anything, to the point that he thought anyone trying to do AI policy was hopelessly naive and should first try to ban biological gain-of-function research just to understand how hard policy is. Given how influential Eliezer is, he loses a lot of points here (and I guess Hendrycks wins?)
Then Eliezer updated and started e.g. giving podcast interviews. Policy orgs spun up and there are dozens of safety-concerned people working in AI policy. But this is not reflected in the LW frontpage. Is this inertia, or do we like thinking about computer science more than policy, or is it something else?
the problem "How do we stop people from building dangerous AIs?" was "research how to build AIs".
Not quite. It was to research how to build friendly AIs. We haven't succeeded yet. What research progress we have made points to the problem being harder than initially thought, and capabilities turned out to be easier than most of us expected as well.
Methods normal people would consider to stop people from building dangerous AIs, like asking governments to make it illegal to build dangerous AIs, were considered gauche.
Considered by whom? Rationalists? T...
Strong agree and strong upvote.
There are some efforts in the governance space and in the space of public awareness, but there should and can be much, much more.
My read of these survey results is:
AI Alignment researchers are optimistic people by nature. Despite this, most of them don't think we're on track to solve alignment in time, and they are split on whether we will even make significant progress. Most of them also support pausing AI development to give alignment research time to catch up.
As for what to actually do about it: There are a lot of options...
My comment here is not cosmically important and I may delete it if it derails the conversation.
There are times when I would really want a friend to tap me on the shoulder and say "hey, from the outside the way you talk about <X> seems way worse than normal. Are you hungry/tired/too emotionally close?". They may be wrong, but often they're right.
If you (general reader you) would deeply want someone to tap you on the shoulder, read on, otherwise this comment isn't for you.
If you burn at NYT/Cade Metz intolerable hostile garbage, are you hav...
FWIW, Cade Metz was reaching out to MIRI and some other folks in the x-risk space back in January 2020, and I went to read some of his articles and came to the conclusion in January that he's one of the least competent journalists -- like, most likely to misunderstand his beat and emit obvious howlers -- that I'd ever encountered. I told folks as much at the time, and advised against talking to him just on the basis that a lot of his journalism is comically bad and you'll risk looking foolish if you tap him.
This was six months before Metz caused SSC to shu...
I think it is useful for someone to tap me on the shoulder and say "Hey, this information you are consuming, it's from <this source that you don't entirely trust and have a complex causal model of>".
Enforcing social norms to prevent scapegoating also destroys information that is valuable for accurate credit assignment and causally modelling reality. I haven't yet found a third alternative, and until then, I'd recommend people both encourage and help people in their community to not scapegoat or lose their minds in 'tribal instincts' (as you put it), w...
I appreciate that you are not speaking loudly if you don't yet have anything loud to say.
Is it that your family's net worth is $100 and you gave up $85? Or that your family's net worth is $15 and you gave up $85?
Either way, hats off!
The latter. Yeah idk whether the sacrifice was worth it but thanks for the support. Basically I wanted to retain my ability to criticize the company in the future. I'm not sure what I'd want to say yet though & I'm a bit scared of media attention.
How close would this rank a program p with a universal Turing machine simulating p? My sense is not very close because the "same" computation steps on each program don't align.
My "most naïve formula for logical correlation" would be something like put a probability distribution on binary string inputs, treat and as random variables , and compute their mutual information.
Yes, it means figure out how the notation works.