This seems untrue. For one thing, high-powered AI is in a lot more hands than nuclear weapons. For another, nukes are well understood, and in a sense boring. They won’t provoke as strong a “burn it down for the lolz” response as AI will.
Even experts like Yann LeCun often don’t merely fail to understand the danger; they actively rationalize against understanding it. The risks are simply not understood or accepted outside of a very small number of people.
Remember the backlash around Sydney/Bing? Didn’t stop her creation. Also, the idea that gov
150,000 people die every day, roughly 55 million per year. That's not a small price for any delay to AGI development. Now, we need to do this right: AGI without alignment just kills everyone; it doesn't solve anything. But the faster we get aligned AI, the better. And trying to slow down capabilities research without much thought given to the endgame seems remarkably callous.
Eliezer has mentioned the idea of trying to invent a new paradigm for AI, outside of the conventional neural net/backpropagation model. The context was more "what would you ...
Well said! Though it raises a question: how can we tell when such defenses are serving truth vs defending an error?
As for an easier word for “memetic immune system”, Lewis might well have called it Convention, as convention is when we disregard memes outside our normal milieu. Can’t say for Chesterton or Aquinas; I’m fairly familiar with Lewis, but much less so with the others, apart from some of their memes like Chesterton’s Fence.
Good analogy, but I think it breaks down. The politician’s syllogism, and the resulting policies, are bad because they tend to make the world worse. I would say that Richard’s comment is an improvement, even if you think it a suboptimal one, and that pushing back against improvements tends to result in fewer improvements. “Don’t let the perfect be the enemy of the good” is a saying for very good reason.
The syllogism here is more like:
Something beneficial ought to be done.
This is beneficial.
Therefore I probably ought not to oppose this, though if I see a better option I’ll do that instead of doubling down on this.
How functional can our community be without pushing back against people like Ziz? Richard’s comment seems to be a way of doing so, and thus potentially useful. It’s fine if you disagree with him, but while I agree the comment was flag-planting, some degree of flag-planting is likely necessary for a healthy discussion. Consider the way well-kept gardens die by pacifism (can’t link on my phone, but if you’re not familiar with it, there’s an excellent Yudkowsky post of that name that seems relevant). Zizianism is something worth planting a few flags to stop.
"How functional can our community be without pushing back against people like Ziz? Richard’s comment seems to be a way of doing so, and thus potentially useful."
This is basically the politician's syllogism:
Something must be done.
This is something.
Therefore, this must be done.
In general, the politician's syllogism fails because not only must we do something, but we must do something that works and doesn't cause side effects that are worse than its benefits and doesn't have too high opportunity costs etc. In this case, it's valuable for people to "push ba...
The wanting vs liking distinction seems relevant here. Politics can be truly fun, especially when you're discussing it with someone who's clearly presenting their views in good faith, and when you can both learn something from the interaction. However, it's easy for the wanting to stay strong long after the liking has completely disappeared.
I wonder if that's a common trait of most or all addictive things, or at least of "non-physical" addictions (things where you don't suffer withdrawals, yet still may find yourself spending more time o...
It is, for a certain type of unstable person. Ziz would likely have come up with different crazy ideas without Less Wrong. Compare Deepak Chopra on quantum mechanics: he pushes all manner of “quantum” bullshit, yet you can hardly blame physics for this, and if physics weren’t known, Chopra would almost certainly just be pushing a different flavor of insanity.
Combating bad regulation isn’t a solution, but a description of a property you’d want a solution to have.
Or more specifically, while you could perhaps lobby against particular destructive policies, this article is pushing for “helping [government actors] take good actions”, but given the track record of government actions, it would make far more sense to help them take no action. Pushing for political action without a plan to steer that action in a positive direction is much like pushing for AI capabilities without a plan for alignment… which we both agre...
Regulation in most other areas has been counterproductive. In AI, it will likely be even more so: there's at least some understanding of e.g. medicine by both the public and our rulers, but most people have no idea about the details of alignment.
This could easily backfire in countless ways. It could drive researchers out of the field, it could mandate "alignment" procedures that don't actually help and get in the way of finding procedures that do, it could create requirements for AIs to say what is socially desirable instead of wha...
There's potentially an aspect of this dynamic that you're missing. Thinking an opponent is making a mistake is not the same as their not being your opponent (as you yourself rightly point out, people with the same terminal goals can still come into conflict over differences in beliefs about the best instrumental ways to attain them), and thinking someone is the enemy in a conflict is not the same as thinking they aren't making mistakes.
To the extent that Mistake/Conflict Theory is pointing at a real and useful dichotomy...
What is the specific difference between “regurgitated” information and the information a smart human can produce?
The human mind appears to use predictive processing to navigate the world, i.e. it has a neural net that predicts what it will see next, compares this to what it actually sees, and makes adjustments. This is enough for human intelligence because it is human intelligence.
What, specifically, is the difference between that and how a modern neural net functions?
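To make the parallel concrete, here is a minimal sketch of the loop both descriptions share (the toy linear model, the variable names, and the synthetic data are mine, purely illustrative, not how any production system is built): predict the next observation, compare it with what actually arrives, and adjust the weights to shrink the error.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))  # toy "model": a linear next-step predictor
stream = rng.normal(size=(100, 4))      # stand-in for a stream of observations
lr = 0.01                               # learning rate

for t in range(len(stream) - 1):
    x, target = stream[t], stream[t + 1]
    pred = W @ x                      # 1. predict what we expect to see next
    error = pred - target             # 2. compare with what we actually see
    W -= lr * np.outer(error, x)      # 3. adjust to reduce prediction error
```

A modern net swaps the linear map for a deep network and the squared error for a likelihood-based loss, but the predict-compare-adjust skeleton is the same one the predictive-processing account attributes to the brain.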
If we saw a human artist paint like modern AI, we’d say they were tremendously talented....
https://davidrozado.substack.com/p/what-is-the-iq-of-chatgpt
I would like to leave this here as evidence that the model stated above is not merely right on track, but arguably too conservative. I was expecting this level of performance in mid-2023, not to see it in January with a system from last year!
This is true, but it doesn’t answer the question of why not simply use nuclear blackmail on such states. And the answer is that the US wants to limit the destruction of war. Nuclear blackmail is great, right up until someone calls your bluff. But then it helps to have conventional forces if you do not wish to inflict massive losses on local civilians and local infrastructure, and on your own prestige.
"There are many animals which have what are called dominance contests. They rush at each other with horns - trying to knock each other down, not gore each other. They fight with their paws - with claws sheathed. But why with their claws sheathed? Surely, if they used their claws, they would stand a better chance of winning? But then their enemy might unsheathe their claws as well, and instead of resolving the dominance contest with a winner and a loser, both of them might be severely hurt." -Professor Quirell
Or to be more explicit, anything less than total...
Citation very much needed. What, specifically, do you disagree with?
Do you believe that the human mind is magical, such that no computer could ever replicate intelligence? (And never mind the abilities AI has already shown, from chemistry to StarCraft…)
Do you believe that intelligence cannot create better tools than already exist, such that an AI couldn’t use engineering to meaningful effect? How about persuasion?
Do you believe that automation taking over the economy wouldn’t be a big deal? How about taking over genetics research, which is often bottlenec...
Fair enough, but it is equally incomplete to pretend that that’s an argument against the possibility of singularity-grade technology emerging in the foreseeable future.
By analogy, there have been many people who had crazy beliefs about radioactivity: doctors who prescribed radium as medicine, seemingly on the grounds that it was cool, and anything cool has to be good for you, right? (A similar mentality led some of the ancient Chinese to drink mercury.) Atomic maximalists, who thought that anything and everything would get better with a reactor strapped ...
While true, that’s not actually relevant here. While LW does not have perfect agreement on exactly how morality works, we can generally agree that preventing vaccine waste is a good idea (at least insofar as we expect the vaccine to be net-beneficial, and any debates there are largely empirical disagreements, not moral ones). Nearly all consequentialists will agree (more people protected), as well as deontologists (it’s generally desirable to save lives, and there’s no rule against doing so by utilizing vaccines that would otherwise end up in the trash) ...
Strongly upvoted for clarification and much greater plausibility given that clarification.
"Back then it was called Czechoslovakia. I am puzzled about the disagreement votes, given that I have hedged my statement as "try to teach you, even if not very efficiently". Not sure how people do things on the other side of the planet, but I imagine that there are these things called textbooks, which are full of information, and they at least make you read them. I am not saying that the information is especially useful, or especially well explained; just that...
"Schools at least try to teach you."
I am curious where you went to school. That was not my experience, and I was in an unusually good school district by American standards. Some of my friends had noticeably worse experiences than I did. Are you conflating the nominal purpose of a school with its real-world actions? Alternatively, did you go through a good enough school system that it might be worth replacing a great many existing "educational" systems with yours as a stopgap along the way to school abolition?
"Jobs typi...
How is this different from adults having jobs?
To be clear, there are plenty of good reasons why one might not want children to work. You might want them to be able to enjoy childhood without the burden of a job, you might want them to focus on learning to be more productive later. But "the people paying them are motivated by profit" is equally true of adult jobs.
"Oh right, the whole world doesn't have education as a right."
Are you trying to argue from existing law to moral or practical value? That would be easier if the whole world hadn't had slavery and monarchy until fairly recently.
"That both destroy magic doesn't mean the destruction is it to the same degree."
That's a good point. But jobs ideally produce value. School often doesn't, and "learning" in a toxic setting specifically makes it harder to learn later. That's a harm specific to school; most jobs do not have it.
"And s...
That ignores systematic problems with schooling, which even good schools will tend to suffer from:
Teaching by class risks both losing the kids at the bottom and boring the kids at the top, whereas individual study doesn't have this problem.
Teaching by lecture is much slower than learning by reading. Yes, some students benefit from audio learning or need to do a thing themselves to grasp it, but those capable of learning from reading have massive amounts of time wasted, as potentially do the kinesthetic types who should really be taking ...
How is this any different from school, except that you could get paid rather than your parents losing money to pay the teachers? There are many valid arguments against child labor (though also many valid arguments that the child should be allowed to decide for themselves), but nearly all of them apply to schooling as well. School eliminates the time of childhood magic, actively makes it harder to be curious (many jobs would not have this effect), and you don't even get paid.
I don’t know how common loss of attention span is, but certainly reduced interest in learning occurs extremely often.
Also, potential evidence that more damage occurs than is commonly recognized: in the modern world, we generally accept that one needs to be in one’s late teens or even early twenties to handle adult life. Yet for most of human history, people took on adult responsibilities around puberty. Part of the difference may be the world becoming more complex. But how much of it is the result of locking people up in environments with very little social or intellectual stimulation until they’re 18?
The world looks exactly like one would expect it to if school stunted intellectual and emotional maturity.
I would think that it's valid, but a smaller effect than that of being taught a bundle of random things in a gratuitously unpleasant way, which leaves those who have been through school with a deep-seated fear of learning, not to mention other forms of damage. Prior to going to school, I had an excellent attention span, even by adult standards. After graduating high school, it took two years before I could concentrate on anything, and I still suffer from brain fog.
Should society eliminate schools?
That depends on what would replace them. One could imagine a scenario in which schools were eliminated, no other form of learning filled the gap, and mankind ended up worse off as a result. However, schooling in its present form seems net-negative relative to most realistic alternatives. Much of this will focus on the US, as that is the school system I'm most familiar with, but many of the lessons should transfer.
Much of the material covered has no conceivable use except as a wasteful signal. ...
Is that true? Isn't at least one clear difference that it's difficult to stop engaging in a bias, but heuristics are easier to set aside? For example, if I think jobs in a particular field are difficult to come by, that's a heuristic, and if I have reason to believe otherwise (perhaps I know a particular hiring agent and know that they'll give me a fair interview), I'll discard it temporarily. On the other hand, if I have a bias that a field is hard to break into, maybe I'll rationalize that even with my contact giving me a fair hearing it can't work. It's not impossible to decide to act against a bias, but it's harder not to overcorrect.
He cites the observation that socialized firms have not taken over the economy. That's clearly true and clearly relevant. Your response was that you'd already explained why socialized firms might not take over even if they were productive. What were those reasons again? Reviewing your post, it looks like it might be the difficulty of gaining investment and brain drain from the most productive workers leaving, but both of those reasons would be strong arguments against socialization. Rose Wrist's ideas for gaining investment an...
The specific handwave I'm referring to is Amartya Sen's.
"In the case of the free rider hypothesis, these 'rational fools' act based on such a narrow conception of self-interest that they don't take into account the obviously damaging long-term consequences of their behavior, both to the firm and ultimately to themselves. Normal, reasonable people - who are different to rational economic man - are usually happy to put efforts into a collective endeavor that will deliver benefits for them in the long run, even if that means foregoing some short-term ga...
Surely the good or bad effects of socialism are a function of policy? Whether a policy arises democratically or through revolution does not change the policy itself. This is a striking non sequitur.
The Scandinavian countries are indeed pretty good places to live. This likely has nothing whatsoever to do with democracy per se, but with the fact that the Scandinavian model does not regulate to anything resembling the degree that more strongly socialist nations do, despite famously having a large welfare system. There is n...
Hence the charitable reading that the OP might be calling for a different version of socialism that might conceivably be beneficial. My point isn’t that there’s zero chance that he’s right; my point is that there’s no way to say “hey, let’s do this thing that’s superficially similar to catastrophic policies” without either failing to convey useful information or requiring a long political debate to hash that information out. And that’s not appropriate for the “Politics is the mind-killer, let’s improve our rationality on easier cases” forum. I’d welcome the post and subsequent debate on e.g. a Scott Alexander forum or comment section. But this isn’t the place for it.
On the one hand, that's literally true. On the other, I feel like the connotations are dangerous. Existential risk is one of the worst possible things, and nearly anything is better than slightly increasing it. However, we should be careful that that mindset doesn't lead us into Pascal's Muggings and/or burnout. We certainly aren't likely to be able to fight existential risk if it drives us insane!
I strongly suspect that it's not self-sacrificing researchers who will solve alignment and bring us safely through the current crisis, but ones who are able to address the situation calmly and without freaking out, even though freaking out seems potentially justified.
Wouldn't it be relevant in that someone could recognize unproductive, toxic dynamics in their concerns about AI risk as per your point (if I understand you correctly), decide to process trauma first and then get stuck in the same sorts of traps? While "I'm traumatized and need to fix it before I can do anything" may not sound as flashy as "My light cone is in danger from unaligned, high-powered AI and I need to fix that before I can do anything", it's just as capable of paralyzing a person, and I speak both from my own past mistakes and from those of multiple friends.
Of course that's possible. I didn't mean to dismiss that part.
But… well, as I just wrote to Richard_Ngo:
...If you just go around healing traumas willy-nilly, then you might not ever see through any particular illusion like this one if it's running in you.
Kind of like, generically working on trauma processing in general might or might not help an alcoholic quit drinking. There's some reason for hope, but it's possible to get lost in loops of navel-gazing, especially if they never ever even admit to themselves that they have a problem.
But if it's targeted, the
Even if we assume that's true (it seems reasonable, though less capable AIs might blunder on this point, whether by failing to understand the need to act nice, failing to understand how to act nice, or believing themselves to be in a winning position before they actually are), what does an AI need to do to get into a winning position? And how easy is it to make those moves without them being seen as hostile?
An unfriendly AI can sit on its server saying "I love mankind and want to serve it" all day long, and unless we have solid neural net interpr...