You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode. Literally the only thing standing between you and nanotechnology is a good enough theory of proteins and their functions. Developing a good theory of proteins seems pretty much a pure-Reason problem.
You can make money by simply choosing a good product on Alibaba, making a website that appeals to people, using good marketing tactics, and drop-shipping, with no need for any physical interaction. The only thing you need is a good theory of consumer psychology. That seems like an almost-pure-Reason problem.
It seems completely obvious to me that reason is by far the dominant bottleneck in obtaining control over the material world.
You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode
Have you ever tried this? I have, and it comes with loads of asterisks.
Developing a good theory of proteins seems pretty much a pure-Reason problem
Only under the assumption that we already know all there is to know about proteins, a claim I've seen no one make. Current knowledge is limited and mostly in vitro, and doesn't generalize to "weird" families of proteins.
"Protein-based nanotechnology" requires:
I can't point to any single good canonical example, but this definitely comes up from time to time in comment threads. There's the whole issue that computers can't act in the world at all unless they're physically connected to hardware controllers that can interface with some physical system we actually care about being broken or misused. Usually, the workaround there is that the AI will be so persuasive that it can just get people with bodies to do the dirty work that requires being able to actually touch stuff in order to repurpose manufacturing plants or whatever it is we're worried they might do.
That does seem to leave a missing step in there somewhere. I don't think the bottleneck right now to building out a terrorist organization is that the recruiters aren't smart enough, but discussions of AI threat tend to just use "intelligence" as shorthand for being good at literally anything.
Strangely enough, actual AI doomsday fiction doesn't seem to do this. Usually, the rogue AI directly controls military hardware to begin with, or, in a case like Ex Machina, Eva is able to manipulate people at least in part because she can convincingly take the form of an attractive embodied woman. A sufficiently advanced AI could presumably figure out that being an attractive woman helps, but if the technology to create convincing artificial bodies doesn't exist, it can't use it. This tends to get handwaved away by assuming that a sufficiently advanced AI can invent whatever nonexistent technology it needs from scratch.
You don't need to be very persuasive to get people to take action in the real world.
Especially right now, when a lot of people work from home, take their orders from a computer, and trust it to give them good orders.
There's the whole issue that computers can't act in the world at all unless they're physically connected to hardware controllers that can interface with some physical system we actually care about being broken or misused. Usually, the workaround there is that the AI will be so persuasive that it can just get people with bodies to do the dirty work that requires being able to actually touch stuff in order to repurpose manufacturing plants or whatever it is we're worried they might do.
In those cases it probably wouldn't be very hard to get people to act in the w...
I don't necessarily think you have to take the "AI" example for the point to make sense though.
I think "reasoning your way to a distant inference", as a human, is probably a far less controversial example that could be used here. In that most people here seem to assume there are ways to make distant inferences (e.g. about the capabilities of computers in the far off future), which historically seems fairly far fetched, it almost never happens when it does it is celebrated, but the success rate seems fairly small and there doesn't seem to be a clear formula for it that works.
I've always thought the same thing regarding a couple of claims that are well accepted around here, like galactic-scale space travel and never-ending growth. I'm not sure enough of my knowledge of physics to try to write a big post about it, but I'd be interested if someone did it (or I may want to work with someone on it).
[EDITED to replace "time" with "space" in "galactic-scale space travel". I guess there is a Freudian explanation for this kind of lapse, which is certainly either funny or true.]
I don't see what you mean when you say galactic-scale time travel is a well-accepted claim here. I've never heard people talking about that as if it were something that obviously works (since, if I understand what you mean, it doesn't, unless it's just referring to simple relativistic effects, in which case it's trivial).
While something approximating never-ending growth may be a common assumption, I'm not sure what percentage of people here believe in genuinely unlimited growth (growth that never, at any point, stops) versus growth that goes on for a very long ex...
This, I assume, you'd base on a "hasn't happened before, no other animal or thing similar to us is doing it as far as we know, so it's improbable we will be able to do it" type assumption? Or something different?
claims that are well accepted around here, like galactic-scale space travel and never-ending growth.
I don't think anyone is claiming that never-ending growth is possible, even if measured in utility rather than mass/energy. Well, technically you have "never-ending growth" if you asymptotically approach the limit.
As for galactic-scale space travel, that is perfectly possible.
Even more so, I never understood why people believe that thinking about certain problems (e.g. AI Alignment) is more efficient than random at solving those problems, given no evidence of it being so (and no potential evidence, since the problems are in the future).
The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity.
A trade that makes us develop technology slower but increases the chances that humanity survives is worth it.
The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity.
Is "proper alignment" not a feature of an AI system, i.e. something that has to be /invented/discovered/built/?
This sounds like semantics vis-a-vis the potential stance I was referring to above.
Not only do I agree with you, but I think a pretty compelling argument can be made.
The insight came to me when I was observing my pet's behaviors. I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.
This led to a general realization. The animal has a finite set of actions it can take each timestep (finite control-channel outputs). It needs to choose, from the set of all the actions it can take, the one that will result in meeting the animal's goals.
Like any real control system, the actual actions taken are suboptimal. When the animal jumps when startled, the direction it bounds may not always be the perfect one. It may not use the best food-gathering behavior.
But if you could cram a bigger brain in and search more deeply for a better action, the gain might be very small. If the action the existing brain picks is already 95% as good as the best action, the better brain only gains you 5%.
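To make that arithmetic concrete, here is a minimal toy simulation (my own sketch, not from the comment above; all the numbers are made up for illustration). Each timestep offers a finite set of actions with random payoffs; one agent evaluates a small subset before acting, another evaluates a much larger one, and the extra search buys only a few percent per step:

```python
import random

def average_payoff(actions_evaluated: int, n_actions: int = 1000, steps: int = 2000) -> float:
    """Average payoff of an agent that evaluates a random subset of the
    available actions each timestep and takes the best one it finds."""
    total = 0.0
    for _ in range(steps):
        payoffs = [random.random() for _ in range(n_actions)]
        considered = random.sample(payoffs, actions_evaluated)
        total += max(considered)
    return total / steps

# Evaluating ~20 of 1000 actions already lands near the top of the distribution.
modest = average_payoff(actions_evaluated=20)   # expected value ~0.95
deep = average_payoff(actions_evaluated=100)    # expected value ~0.99
print(f"modest search: {modest:.3f}  deep search: {deep:.3f}  gain: {deep - modest:.3f}")
```

On this toy model the much deeper search improves the per-step result by only about 4%, which is the sense in which a bigger brain "might gain very little" on any single decision.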
This applies to "intelligence" in general. A smarter cave man may only be able to do slightly better than his competitors, not hugely better. Ultimate outcomes may be heavily governed by luck or factors intelligence cannot affect, such as susceptibility to disease.
This is true even if the intelligence is "infinite". An infinitely intelligent cave person is one whose every action is calculated to be the most optimal one they can make with the knowledge they have.
Another realization that comes out of this is that our modern world may only be possible because of stupid people. Why is that? Well, the most optimal action you can take as a human being is the one that gives you descendants who survive to mate. Agriculture, machinery, the printing press, the scientific method: the individual steps to reach these things were probably often taken by tinkerers who, individually, would have been better served by finding a way to murder their rivals for mates, or by spending the time on food gathering in the immediate term. For example, agriculture may not have paid off in the lifespan of the first cave person to discover it.
Anyway, an AI that is millions of times smarter is like a machine, given a task, that can pick the 99th-percentile action instead of the 95th-percentile action (humans). This isn't all that effective alone. The real power of AI would be that they don't need to sleep, can be deployed in vast arrays that coordinate better with each other, and always pick that 99th-percentile action; they don't get tired or bored or distracted. They can be made to coordinate with each other rationally, sharing data rather than arguing. And you can clone them over and over.
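A hedged back-of-the-envelope version of that claim (my numbers, purely illustrative): the per-decision edge is small, but consistency, uptime, and cheap copying multiply together, so they, rather than the 95-vs-99 gap, dominate the total output.

```python
def total_output(action_quality: float, uptime: float, copies: int, steps: int = 1_000) -> float:
    """Crude multiplicative model of useful work: per-step action quality,
    fraction of time actually spent working, and number of coordinated copies."""
    return action_quality * uptime * copies * steps

# Hypothetical numbers: a small human team vs. a pool of tireless clones.
humans = total_output(action_quality=0.95, uptime=0.3, copies=10)
ai_pool = total_output(action_quality=0.99, uptime=1.0, copies=10_000)
print(f"ratio: {ai_pool / humans:.0f}x")  # ~3,500x, almost none of it from action quality
```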
This should allow for concrete, near term goals we have as humans to be accomplished.
But I don't think, for the most part, the scary possibilities could be done invisibly. For example, in order for the AI to develop a bioweapon that can kill everyone, it would need to do it the way humans would, just more efficiently. As in, by building a series of mockups of human bodies (at least the lungs), comparable to what modern-day researchers do, and trying out small incremental changes to viruses that are known to work. Or trying out custom proteins on models of cell biology.
It needs the information to do it, and the only way to get that information requires a series of controlled experiments done by physical systems, controlled by the AI, in the real world.
Same with developing MNT (molecular nanotechnology) or any of the other technologies we are pretty sure physics allows but that we don't yet have the ability to exploit. I think these things are all possible, but making them real would take a large amount of physical resources to methodically work your way up the complexity chain.
I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.
I'm fairly skeptical of this claim. It seems to me that even moderate differences in animal intelligence in, e.g., dogs lead to things like tool use and a better ability to communicate things to humans.
I believe this echoes my thoughts perfectly; I might quote it in full if I ever do get around to reviving that draft.
The bit about "perfect" as not giving slack for development, I think, could be used even in the single individual scenario if you assume any given "ideal" action as lower chance of discovering something potential useful than a "mistake". I.e. adding:
This led to a general realization. The animal has a finite set of actions it can take each timestep (finite control-channel outputs). It needs to choose, from the set of all the actions it can take, the one that will result in meeting the animal's goals.
It seems that by having access to things like language, a computer, and programming languages, the problems of a finite problem space quickly get resolved and no longer pose an issue. Theoretically I could write a program to make me billions of dollars on the stock market tomorrow. So the sp...
The most basic argument is that it really doesn't take a lot of material resources to be very smart. Human brains run on roughly 20 watts, and we have more than enough easily available material resources in our environment to build much much much bigger brains.
Then, it doesn't seem like "access to material resources" is what distinguishes humanity's success from other animals' success. Sure seems like we pretty straightforwardly won by being smarter and better at coordinating.
Also, between groups of humans, it seems that development of better technologies has vastly outperformed access to more resources (i.e. having a machine gun doesn't take very many materials, but easily allows you to win wars against less technologically advanced civilizations). Daniel Kokotajlo's work has studied in depth the effect that better technology seems to have had on conquerors when trying to conquer the Americas.
Now, you might doubt the connection between intelligence and developing new technologies. To me, it seems really obvious that there are some properties of a mind that determine how good it is at developing new technologies, holding environmental factors constant. We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here. I don't see how the environmental effects would dominate, given that most technologies we are developing just involve the use of existing components we already have (like, writing a new computer program that is better at doing something doesn't require special new resources).
Now the risk is that you get an AI that is much better at solving problems and developing new technologies than humans. It seems that humans are really not great at it, and that the upper bound for competence is far above where we are. This makes sense both on priors (why would the first species to make use of extensive tool-making already be at the maximum?) and from an inside view (human minds sure don't seem very optimized for actually developing new technologies, given that we have a brain that only uses about 20 watts and has mostly been optimized for other constraints). I don't care whether you call it intelligence, and it definitely shouldn't be conflated with the concept of human intelligence. Like, humans are sometimes smarter than one another in a very specific and narrow way, and the variation between individual humans is overall pretty minimal. When I talk about machine intelligence I mean a much broader set of potential ways to be better at thinking.
We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here
Is there?
Writing, agriculture, animal husbandry, similar styles of architecture and most modern inventions from flight to nuclear energy to antibiotics seem to have been developed in a convergent way given some environmental factors.
But I guess it boils down to a question of studying history, which ultimately has no good data and mostly invites overfitting to one's biases. So I guess it may be that there's no way to actuall...
Your paragraph that outlines your position mixes multiple different things into the concept of reason.
There's the intelligence of individual scientists or engineers, there are conceptual issues and there's the quality of institutions.
An organization that's a heavily dysfunctional immoral maze is going to innovate less new technology than an organization with access to the same resources but with a better organizational setup.
When it comes to raw intelligence, a lot of the productive engineers have an IQ that far exceeds that of the average population.
Conceptual insights like the idea of running controlled trials heavily influence the medical technology that can be developed in our society. We might have had concepts that would have allowed us to produce a lot more vaccines against COVID-19 much earlier.
One thing I never understood in the internet sphere labelled "rationalists" (LW, OB, SSC, etc.) is a series of seemingly strong beliefs about the future and/or about reality, the main one being around "AI".
Even more so, I never understood why people believe that thinking about certain problems (e.g. AI Alignment) is more efficient than random at solving those problems, given no evidence of it being so (and no potential evidence, since the problems are in the future).
I've come to believe that I (and I'm sure many other people) differ from the mainstream (around these parts, that is) in a belief I can best outline as:
I've considered writing an article aimed solely at the LW/SSC crowd trying to defend something like the above proposition with historical evidence, but the few times I tried, it turned out rather tedious. I still want to do so at some point, but I'm curious if anyone has written this sort of article before: essentially something that boils down to "a defence of a mostly-sceptical take on the world which can easily be digested by someone from the rationalist-blogosphere demographic".
I understand this probably sounds insane to the point of trolling to people here, but please keep an open mind, or at least please grant me that I'm not trolling. The position outlined above would be fairly close to what an empiricist or skeptic would hold, heck, even a lightweight one, since a skeptic might be skeptical of us being able to gain more knowledge/agency over the outside world in the first place, at least in a non-random way.