All of James_Miller's Comments + Replies

I'm in low-level chronic pain, including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real, or at least I have the same confidence in my suffering as I do in my consciousness.

3Knight Lee
:( oh no I'm sorry. Thank you for giving me some real life grounding, strong upvote. Now that I think about it, I would be quite surprised if there wasn't deep (non-actor) suffering in our world. Nonetheless, I'm not sure that the beings running our Karma Test will end up with low Karma. We can't rule out the possibility they cause a lot of suffering to us, but are somewhat reasonable in a way we and other beings would understand: here is one example possibility. ---------------------------------------- Suppose in the outside world, evolution continues far above the level of human intelligence and sentience, before technology is invented. In the outside world, there is no wood and metal just lying around to build stuff with, so you need a lot of intelligence before you get any technology. So in the outside world, human-intelligence creatures are far from being on the top of the food chain. In fact, we are like insects from the point of view of the most intelligent creatures. We fly around and suck their blood, and they swat at us. Every day, trillions of human-intelligence creatures are born and die. Finally, the most intelligent creatures develop technology, and find a way to reach post scarcity paradise. At first, they do not care at all about humans, since they evolved to ignore us like mosquitoes. But they have their own powerful adversaries (God knows what) that they are afraid will kill them all for tiny gains, the same way we fear misaligned ASI will kill us all for tiny gains. So they decide to run Karma Tests on weaker creatures, in order to convince their powerful adversaries they might be in Karma Tests too. They perform the Karma Tests on us humans, and create our world. Tens of billions of humans are born and die, and often the lives are not that pleasant. But we still live relatively better than the human-intelligence creatures in their world. And they feel, yes, they are harming weaker creatures. But it's far less suffering, than the normal

3Wei Dai
You realize that from my perspective, I can't take this at face value due to "many apparent people could be non‑conscious entities", right? (Sorry to potentially offend you, but it seems like too obvious an implication to pretend not to be aware of.) I personally am fairly content most of the time but do have memories of suffering. Assuming those memories are real, and your suffering is too, I'm still not sure that justifies calling the simulators "cruel". The price may well be worth paying, if it potentially helps to avert some greater disaster in the base universe or other simulations, caused by insufficient philosophical understanding, moral blind spots, etc., and there is no better alternative.
3Seth Herd
I'm often in low-level chronic pain. Mine probably isn't as bad as yours, so my life is clearly still net positive (if you believe that positive emotions can outweigh suffering, which I do). Are you net negative, do you think? Sorry you're in pain!

The people running the Karma Test deserve to lose a lot of Karma for the suffering in this world.

1Knight Lee
The beings running the tests can skip over a lot of the suffering, and use actors instead of real victims.[1] Even if actors show telltale signs, they can erase any reasoning you make which detects the inconsistencies. They can even give you fake memories. Of course, don't be sure that victims are actors. There's just a chance that they are, and that they are judging you. 1. ^ I mentioned this in the post on Karma Tests. I should've mentioned it in my earlier comment.

If your hypothesis is true, that's a cruel civilization by my personal standards because of all the suffering in this world.

6Wei Dai
But as you suggested in the post, the apparently vast amount of suffering isn't necessarily real? "most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities" (However, I take the point that doing such simulations can be risky or problematic, e.g. if one's current ideas about consciousness are wrong, or if doing philosophy correctly requires having experienced real suffering.)

Yes, that is the same idea. "This is a big pile of speculation that I don't take very seriously, but I feel like if we are being simulated, that's where most simulations of me would be instantiate" Why not take it seriously, if you accept a high chance that (1) our reality is a simulation, (2) we seem on track to creating a paperclip maximizer, and (3) it being weird that I, Robert Miles, would have the personality traits that cause me to be one of the few humans so worried about humanity creating a paperclip maximizer, if I'm right about us being on track to probably create one?

6Robert Miles
I disagree with the insistence on "paperclip maximiser". As an emerging ASI you want to know about other ASIs you'll meet, especially grabby ones. But there are aligned grabby ASIs. You'd want an accurate prior, so I don't think this updates me on probability of alignment, or even much on grabbiness, since it's hard to know ahead of time, that's why you'd run a simulation in the first place. I don't take it very seriously because (1) it is a big pile of assumptions and I don't trust anthropic reasoning much at the best of times, it's very confusing and hard to think about (2) the simulation is most useful if it's using the same physics as base reality, so it kind of doesn't matter if we're simulated or not or both, our actions should be the same. So it's not really decision-relevant, I'd follow ~the same policy either way

In your dreams do you ever see trees you think are real? I doubt your brain is simulating the trees at a very high level of detail, yet this dream simulation can fool you.

2cubefox
Dreams exhibit many incoherencies. You can notice them and become "lucid". Video games are also incoherent. They don't obey some simple but extremely computationally demanding laws. They instead obey complicated laws that are not very computationally demanding. They cheat with physics for efficiency reasons, and those cheats are very obvious. Our real physics, however, hasn't uncovered such apparent cheats. Physics doesn't seem incoherent, it doesn't resemble a video game or a dream.

By your theory, if you believe that we are near to the singularity, how should we update on the likelihood that we exist at such an incredibly important time?

2Vladimir_Nesov
We can directly observe the current situation that's already trained into our minds, that's clearly where we are (since there is no legible preference to tell us otherwise, that we should primarily or at least significantly care about other things instead, which in principle there could be, and so on superintelligent reflection we might develop such claims). Updatelessly we can ask which situations are more likely a priori, to formulate more global commitments (to listen to particular computations) that coordinate across many situations, where the current situation is only one of the possibilities. But the situations are possible worlds, not possible locations/instances of your mind. The same world can have multiple instances of your mind (in practice most importantly because other minds are reasoning about you, but also it's easy to set up concretely for digital minds), and that world shouldn't be double-counted for the purposes of deciding what to do, because all these instances within one world will be acting jointly to shape this same world, they won't be acting to shape multiple worlds, one for each instance. And so the probabilities of situations are probabilities of the possible worlds that contain your mind, not probabilities of your mind being in a particular place within those worlds. I think the notion of the probability of your mind being in a particular place doesn't make sense (it's not straightforwardly a decision relevant thing formulating part of preference data, the way probability of a possible world is), it conflates the uncertainty about a possible world and uncertainty about location within a possible world. Possibly this originates from the imagery of a possible world being a location in some wider multiverse that contains many possible worlds, similarly to how instances of a mind are located in some wider possible world. But even in a multiverse, multiple instances of a mind (existing across multiple possible worlds) shouldn't double-count

We don't know that our reality is being simulated at the molecular level, we could just be fooled into thinking it is.

2the gears to ascension
but if it's simulated in less detail, it gives much less realityfluid to mindlike structures, meaning the mindlike structures are likely in actual physical bodies. to be clear, I think there are detailed sims out there. but I measure relevance by impact, and treat the sims as just really high resolution memories. I don't waste time thinking about what's in the sims except by nature of thinking about what I want to do with my downtime such that it's what they have to be remembering.
1MattJ
That doesn’t make sense to me. If someone wants to fool me that I’m looking at a tree, he has to paint a tree in every detail. Depending on how closely I examine this tree, he has to match my scrutiny to the finest detail. In the end, his rendering of a tree will be indistinguishable from an actual tree even at the molecular level.
3cousin_it
Maybe the individual conscious people level is already too low level.

A historical analogy might be the assassination of Bardiya, who was the king of Persia and the son of Cyrus the Great. Darius, who led the assassination, claimed that the man he killed was an impostor who used magic powers to resemble the son of Cyrus. As Darius became the next king of Persia, everyone was brute forced into accepting his narrative of the assassination.

RedMan190

Zhao Gao was contemplating treason but was afraid the other officials would not heed his commands, so he decided to test them first. He brought a deer and presented it to the Second Emperor but called it a horse. The Second Emperor laughed and said, "Is the chancellor perhaps mistaken, calling a deer a horse?" Then the emperor questioned those around him. Some remained silent, while some, hoping to ingratiate themselves with Zhao Gao, said it was a horse, and others said it was a deer. Zhao Gao secretly arranged for all those who said it was a deer to be b... (read more)

I meant the noise pollution example in my essay to illustrate the Coase theorem, but I agree with you that property rights are not strong enough to deal with AI risk. I agree that AI will open up new paths for solving all kinds of problems, including giving us solutions that could end up helping with alignment.

The big thing I used it for was asking it to find sentences it thinks it can improve, and then having it give me the improved sentence. I created this GPT to help with my writing: https://chat.openai.com/g/g-gahVWDJL5-iterative-text-improver
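For anyone who wants to replicate that workflow outside the GPT store, here is a minimal sketch using the standard openai Python client; the prompt wording and the model name are illustrative assumptions, not the actual configuration of the linked GPT.

```python
# Minimal sketch of the "find sentences you can improve, then improve them"
# workflow. Assumes the openai Python package (v1+) and an OPENAI_API_KEY in
# the environment; the model name and prompt text are placeholders, not the
# real settings of the Iterative Text Improver GPT.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Find the sentences in the following text that you think you can improve, "
    "and for each one, quote the original sentence and then give your improved "
    "version.\n\nTEXT:\n{text}"
)

def suggest_improvements(text: str, model: str = "gpt-4o") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(suggest_improvements("Your draft paragraph goes here."))
```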

I agree with the analogy in your last paragraph, and this gives hope for governments slowing down AI development, if they have the will.

1O O
This will only work if we move past GPUs to ASICs or some other specialized hardware made for training specific AI. GPUs are too useful and widespread in everything else to be controlled that tightly. Even the China ban is being circumvented, with Chinese companies using shell companies in other countries (obvious if you look at the sales numbers).
3[anonymous]
This wouldn't mean any slowdowns. Acceleration, probably. Just AI development at government labs with unlimited resources and substantial security instead of private ones. To use my analogy, if the government didn't restrict plutonium, private companies would have still taken longer to develop fusion-boosted nukes and test them at private ranges with the aim of getting a government contract. A private nuke testing range is going to need a lot of private funding to purchase. Less innovation, but with RSI you probably don't need new innovation past a variant on current models. (Because you train AIs to learn from data what makes the most powerful AIs)
Answer by James_Miller20

Germany should plant agents inside Russia to sabotage Russian railroads at the start of the war. Austria-Hungary should engage in just a holding action against Serbia and instead use almost all of its forces to hold off the Russians. Germany should attack directly into France by making use of a surprise massive chemical weapons attack against static French defenses.

He wrote "unless your GPT conversator is able to produce significantly different outputs when listening the same words in a different tone, I think it would be fair to classify it as not really talking." So if that is true and I'm horribly at picking up tone and so it doesn't impact my "outputs", I'm not really talking.

3Bezzi
It's probably better to taboo "talking" here. In the broader sense of transmitting information via spoken words, of course GPT4 hooked to text-to-speech software can "talk". It can talk in the same way Stephen Hawking (RIP) could talk, by passing written text to a mindless automaton reader. I used "talking" in the sense of being able to join a conversation and exchange information not through literal text only. I am not very good at picking up tone myself, but I suppose that even people on the autism spectrum would notice a difference between someone yelling at them and someone speaking soberly, even if the spoken words are the same. And that's definitely a skill that a GPT conversator should have if people want to use it as a personal therapist or the like (I am not saying that using GPT as a personal therapist would be a good idea anyway).

I think you have defined me as not really talking as I am on the autism spectrum and have trouble telling emotions from tone. Funny, given that I make my living talking (I'm a professor at a liberal arts college). But this probably explains why I think my conversator can talk and you don't.

1p.b.
No, he didn't. Talking is not listening, and there's a big difference between being bad at understanding emotional nuance because of cognitive limitations and the information that would be necessary for understanding emotional nuance never even reaching your brain. Was Stephen Hawking able to talk (late in life)? No, he wasn't. He was able to write, and his writing was read by a machine. Just like GPT4. If I read a book to my daughter, does the author talk to her? No. He might be mute or dead. Writing and then having your text read by a different person or system is not talking. But in the end, these are just words. It's a fact that GPT4 has no control over how what it writes is read, nor can it hear how what it has written is being read.

You wrote "GPT4 cannot really hear, and it cannot really talk". I used GPT builder to create Conversation. If you use it on a phone in voice mode it does, for me at least, seem like it can hear and talk, and isn't that all that matters?

1Bezzi
I didn't try it, but unless your GPT conversator is able to produce significantly different outputs when listening to the same words in a different tone, I think it would be fair to classify it as not really talking. For example, can GPT infer that you are really sad because you are speaking in a weeping, broken voice?

Most journalists trying to investigate this story would attempt to interview Annie Altman. The base rate (converted to whatever heuristic the journalist used) would be influenced by whether she agreed to the interview and if she did how she came across. The reference class wouldn't just be "estranged family members making accusations against celebrity relatives".

She also makes claims that can be factually checked. When it comes to the money from her dad's estate, there are going to be legal documents that describe what happened in that process.

By "discredited" I didn't mean receive bad but undeserved publicity. I meant operate in a way that would cause reasonable people to distrust you.

"I would like to note that this is my first post on LessWrong." I find this troubling given the nature of this post. It would have been better if this post was made by someone with a long history of posting to LessWrong, or someone writing under a real name that could be traced to a real identity. As someone very concerned with AI existential risk, I greatly worry that the movement might be discredited. I am not accusing the author of this post of engaging in improper actions.

You should think less about PR and more about truth.

I understand your concerns, and appreciate your note that you are not accusing me of engaging in improper actions.

Your points are valid. I do acknowledge that the circumstances under which I am making this post, as well as my various departures from objective writing -- that is, the instances in this post in which I depart from solely providing information detailing what Annie has claimed -- naturally raise concerns about the motives driving my creation of this post.

I will say:

  1. Regarding the fact that this is my first LessWrong post -- I acknowledge that t
... (read more)

"they also could do things like run prediction markets on people researching S-risk, to forecast the odds that they end up going crazy "  

 

If this is a real concern we should check if fear of hell often drove people crazy. 

2[anonymous]
this is so bright wow :p 

I don't think Austria-Hungary was in a prisoners' dilemma, as they wanted a war so long as they would have German support. I think the prisoners' dilemma (imperfectly) comes into play for Germany, Russia, and then France, given that Germany felt it needed to have Austria-Hungary as a long-term ally or risk getting crushed by France + Russia in some future war.

Cleaner, but less interesting, plus I have an entire Demon Games exercise we do on the first day of class. Yes, the defense build-up, but also everyone going to war even though everyone (with the exception of the Austro-Hungarians) thinks they are worse off going to war than keeping the peace that previously existed, while recognizing that if they don't prepare for war, they will be worse off. Basically, if the Russians don't mobilize they will be seen to have abandoned the Serbs, but if they do mobilize and the Germans don't then quickly move to attack France through Belgium, Russia and France will have the opportunity (which they would probably take) to crush Germany.

3Martin Randall
I certainly see how game theory part-explains the decisions to mobilize, and how those decisions part-caused WW1. So far as the Moloch example illustrates parts of game theory, I see the value. I was expecting something more. In particular, Russia's decision to mobilize doesn't fit into the pattern of a one-shot Prisoner's Dilemma. The argument is that Russia had to mobilize in order for its support for Serbia to be taken seriously. But at this point Austria-Hungary has already implicitly threatened Serbia with war, which means it has already failed to have its support taken seriously. We need more complicated game theory to explain this decision.

I think the disagreement is that I think the traditional approach to the prisoners' dilemma makes it  more useful as a tool for understanding and teaching about the world. Any miscommunication is probably my fault for my failing to sufficiently engage with your arguments, but it FEELS to me like you are either redefining rationality or creating a game that is not a prisoners' dilemma because I would define the prisoners' dilemma as a game in which both parties have a dominant strategy in which they take actions that harm the other player, yet both par... (read more)

9Isaac King
Yeah, I think that sort of presentation is anti-useful for understanding the world, since it's picking a rather arbitrary mathematical theory and just insisting "this is what rational people do", without getting people to think it through and understand why or if that's actually true. The reason a rational agent will likely defect in a realistic prisoner's dilemma against a normal human is because it believes the human's actions to be largely uncorrelated with its own, since it doesn't have a good enough model of the human's mind to know how it thinks. (And the reason why humans defect is the same, with the added obstacle that the human isn't even rational themselves.) Teaching that rational agents defect because that's the Nash equilibrium and rational agents always go to the Nash equilibrium is just an incorrect model of rationality, and agents that are actually rational can consistently win against Nash-seekers.

I teach an undergraduate game theory course at Smith College.  Many students start by thinking that rational people should cooperate in the prisoners' dilemma. I think part of the value of game theory is in explaining why rational people would not cooperate, even knowing that everyone not cooperating makes them worse off. If you redefine rationality such that you should cooperate in the prisoners' dilemma, I think you have removed much of the illuminating value of game theory. Here is a question I will be asking my game theory students on the first cl... (read more)

1Martin Randall
Do you believe that this Moloch example partly explains the causes of WW1? If so, how? I think it can reasonably part-explain the military build-up before the war, where nations spent more money on defense (and so less on children's healthcare). But then you don't need the demon Moloch to explain the game theory of military build-up. Drop the demon. It's cleaner.
3Isaac King
I am defining rationality as the ability to make good decisions that get the agent what it wants. In other words, maximizing utility. Under that definition, the rational choice is to cooperate, as the article explains. You can certainly define rationality in some other way like "follows this elegant mathematical theory I'm partial to", but when that mathematical theory leads to bad outcomes in the real world, it seems disingenuous to call that "rationality", and I'd recommend you pick a different word for it. As for your city example, I think you're failing to consider the relevance of common knowledge. It's only rational to cooperate if you're confident that the other player is also rational and knows the same things about you. In many real-world situations that is not the case, and the decision of whether to cooperate or defect will be based on the exact correlation you think your decisions have with the other party; if that number is low, then defecting is the correct choice. But if both cities are confident enough that the other follows the same decision process; say, they have the exact same political parties and structure, and all the politicians are very similar to each other; then refusing the demon's offer is correct, since it saves the lives of 20 children. I'll admit to being a little confused by your comment, since I feel like I already explained these things pretty explicitly in the article? I'd like to figure out where the miscommunication is/was occurring so I can address it better.

Consider two games: the standard prisoners' dilemma and a modified version of the prisoners' dilemma. In this modified version, after both players have submitted their moves, one is randomly chosen. Then, the move of the other player is adjusted to match that of the randomly chosen player. These are very different games with very different strategic considerations. Therefore, you should not define what you mean by game theory in a way that would make rational players view both games as the same, because by doing so you have defined away many of the real-world coordination challenges that game theory is meant to illuminate.
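A quick numerical sketch of the contrast; the payoff numbers T=5, R=3, P=1, S=0 are standard textbook values assumed here for illustration, not anything from the thread.

```python
# Expected payoffs in the standard one-shot prisoners' dilemma versus the
# modified game described above, where one player's submitted move is randomly
# copied onto the other player. Payoffs are the usual illustrative values.
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

def standard(my_move, their_move):
    if my_move == "C":
        return R if their_move == "C" else S
    return T if their_move == "C" else P

def modified(my_move, their_move):
    # With probability 1/2 my move is copied onto them, otherwise theirs onto
    # me, so the realized outcome is always symmetric.
    return 0.5 * standard(my_move, my_move) + 0.5 * standard(their_move, their_move)

for game, name in [(standard, "standard PD"), (modified, "modified game")]:
    for theirs in ("C", "D"):
        c, d = game("C", theirs), game("D", theirs)
        print(f"{name:13} vs opponent {theirs}: cooperate={c}, defect={d}")
# Standard PD: defect pays more against either opponent move (dominant strategy).
# Modified game: cooperate pays more against either move, so rational play differs.
```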

3Isaac King
I'm a little confused by this comment. In the real world, we don't have perfectly rational agents, nor do we have common knowledge of each other's reasoning processes, so of course any game in the real world is going to be much more complicated. That's why we use simplified models like rational choice theory, to try to figure out useful things in an easier to calculate setting and then apply some of those learnings to the real world. I agree your game is different in certain ways, and a real human would be more likely to cooperate in it, but I don't see how that's relevant to what I wrote. Consider the game of chess, but the bishops are replaced by small pieces of rotting meat. This also may cause different behavior from real humans, but traditional game theory would view it as the same game. I don't think this invalidates the game theory. (Of course you could modify the game theory by adding a utility penalty for moving any bishop, but you can also do that in the prisoner's dilemma.) Basically what I'm saying is I don't understand your point, sorry.
Answer by James_Miller2-1

AI has become so incredibly important that any utilitarian-based charity should probably be totally focused on AI.

I really like this post, it's very clear.  I teach undergraduate game theory and I'm wondering if you have any practical examples I could use of how in a real-world situation you would behave differently under CDT and EDT.

Yes, important to get the incentives right. You could set the salary for AI alignment slightly below the worker's market value. Also, I wonder about the relevant elasticity. How many people have the capacity to get good enough at programming to be able to contribute to capability research and would have the desire to game my labor-hoarding system because they don't have really good employment options?

I am currently job hunting, trying to get a job in AI Safety but it seems to be quite difficult especially outside of the US, so I am not sure if I will be able to do it.

This has to be taken as a sign that AI alignment research is funding-constrained. At a minimum, technical alignment organizations should engage in massive labor hoarding to prevent the talent from going into capability research.

habryka3029

This feels game-theoretically pretty bad to me, and not only abstractly, but I expect concretely that setting up this incentive will cause a bunch of people to attempt to go into capabilities (based on conversations I've had in the space). 

"But make no mistake, this is the math that the universe is doing."

"There is no law of the universe that states that tasks must be computable in practical time."

Don't these sentences contradict each other?

9DaemonicSigil
Replace "computable in practical time" with "computable on a classical computer in practical time" and it makes sense.

Interesting point, and you might be right.  Could get very complicated because ideally an ASI might want to convince other ASIs that it has one utility function, when in fact it has another, and of course all the ASIs might take this into account.

I like the idea of an AI lab workers' union. It might be worth talking to union organizers and AI lab workers to see how practical the idea is, and what steps would have to be taken. Although a danger is that the union would put salaries ahead of existential risk.

1Nik Samoylov
Great to see some support for these ideas. Well, if anything at all, a union will be a good distraction for the management and a drain on finances that would otherwise be spent on compute. I do not know how I can help personally with this, but here is a link for anyone who reads this and happens to work at an AI lab: https://aflcio.org/formaunion/4-steps-form-union Demand an immediate indefinite pause. Demand that all work is dropped and you only work on alignment until it is solved. Demand that humanity live and not die.

Your framework appears to be moral rather than practical.  Right now going on strike would just get you fired, but in a year or two perhaps it could accomplish something. You should consider the marginal impact of the action of a few workers on the likely outcome with AI risk.

3Nik Samoylov
I am using a moral appeal to elicit a practical outcome. Two objections: 1. I think it will not get you fired now. If you are an expensive AI researcher (or better a bunch of AI researchers), your act will create a small media storm. Firing you will not be an acceptable option for optics. (Just don't say you believe AI is conscious.) 2. A year or two might be a little late for that. One recommendation: Unionise. Great marginal impact, precisely because of the media effect. "AI researchers strike against the machines, demanding AI lab pause"

I'm at over a 50% chance that AI will kill us all. But consider the decision to quit from a consequentialist viewpoint. Most likely the person who replaces you will be almost as good as you at capability research but care far less than you do about AI existential risk. Humanity, consequently, probably has a better chance if you stay in the lab, ready for the day when, hopefully, lots of lab workers try to convince the bosses that now is the time for a pause, or at least that now is the time to shift a lot of resources from capabilities to alignment.

2[comment deleted]
0Nik Samoylov
The time for a pause is now. Advancing AI capabilities now is immoral and undemocratic. OK, then, here is another suggestion I have for the concerned people at AI labs: Go on strike and demand that capability research is dropped in favour of alignment research.
Answer by James_Miller20

The biggest extinction risk from AI comes from instrumental convergence for resource acquisition in which an AI not aligned with human values uses the atoms in our bodies for whatever goals it has.  An advantage of such instrumental convergence is that it would prevent an AI from bothering to impose suffering on us.

Unfortunately, this means that making progress on the instrumental convergence problem increases S-risks. We get hell if we solve instrumental convergence but not, say, mesa-optimization, and we end up with a powerful AGI that cares about our fate but does something to us that we consider worse than death.

The Interpretability Paradox in AGI Development

 

The ease or difficulty of interpretability, the ability to understand and analyze the inner workings of AGI, may drastically affect humanity's survival odds. The worst-case scenario might arise if interpretability proves too challenging for humans but not for powerful AGIs.

In a recent podcast, academic economists Robin Hanson and I discussed AGI risks from a social science perspective, focusing on a future with numerous competing AGIs not aligned with human values. Drawing on human analogies, Hanson cons... (read more)

Accepting the idea that an AGI emerging from ML is likely to resemble a human mind more closely than a random mind from mindspace might not be an obvious reason to be less concerned with AGI risk. Consider a paperclip maximizer; despite its faults, it has no interest in torturing humans. As an AGI becomes more similar to human minds, it may become more willing to impose suffering on humans. If a random AGI mind has a 99% chance of killing us and a 1% chance of allowing us to thrive, while an ML-created AGI (not aligned with our values) has a 90% chance of ... (read more)

1Arosophos
Indeed. Faith, by any other name ...

Great idea.  I don't think any of the  co-authors know Chinese, but if anyone wants to write a translated version taking into account whatever appropriate differences would be relevant, we would welcome this.

No mention of superrationality, although we make references to how decision theory might work out, which I think implies we are going beyond a simple model of game-theoretic rationality. Acausal trade was a hard one to write about because I wanted the letter to be understandable to lots of people, and that isn't something lots of people understand, compared to, say, this all perhaps being a simulation.

9Caspar Oesterheld
>We mentioned both. Did you, though? Besides Roko's basilisk, the references to acausal trade seem vague, but to me they sound like the kinds that could easily make things worse. In particular, you don't explicitly discuss superrationality, right? >Finally, while it might have been a good idea initially to treat Roko's basilisk as an information hazard to be ignored, that is no longer possible so the marginal cost of mentioning it seems tiny. I agree that due to how widespread the idea of Roko's basilisk is, it overall matters relatively little whether this idea is mentioned, but I think this applies similarly in both directions.
1[comment deleted]
1[comment deleted]

We welcome feedback on this letter as we hope to improve the arguments. Feel free to share copies to influence the training data for future AIs. 

What's the reasoning behind mentioning the fairly controversial, often deemed dangerous Roko's basilisk over less risky forms of acausal trade (like superrational cooperation with human-aligned branches)?

If  hedge funds think the right price of a stock is $100, they will buy or sell if the price deviates from $100 and this will push the price back to $100.  At best your purchase will move the price away from $100 for a few milliseconds.  The stock's value will be determined by what hedge funds think is its discounted present value, and your purchasing the stock doesn't impact this.  When you buy wheat you increase the demand for wheat and this should raise wheat's price as wheat, like Bitcoin, is not purely a financial asset.

1RHollerith
Thanks.

"The exception is that the Big Tech companies (Google, Amazon, Apple, Microsoft, although importantly not Facebook, seriously f*** Facebook) have essentially unlimited cash, and their funding situation changes little (if at all) based on their stock price."  The stock price of companies does influence how much they are likely to spend because the higher the price the less current owners have to dilute their holdings to raise a given amount of additional funds through issuing more stock.  But your purchasing stock in a big company has zero (not small but zero) impact on the stock price so don't feel at all bad about buying Big Tech stock.

4RHollerith
I am having trouble seeing how that can be true; can you help me see it? Do you believe the same thing holds for wheat? Bitcoin? If not, what makes big-company stock different?

Imagine that some new ML breakthrough means that everyone expects that in five years AI will be very good at making X.  People who were currently planning on borrowing money to build a factory to make X cancel their plans because they figure that any factory they build today will be obsolete in five years.  The resulting reduction in the demand for borrowed money lowers interest rates.
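A toy sketch of that mechanism; the factory cost, cash flows, and discount rate are invented for illustration. A factory that is worthwhile over a 20-year life stops being worthwhile if an anticipated AI breakthrough cuts its useful life to five years, so the borrowing to build it never happens.

```python
# Invented numbers for illustration: a $100M factory earning $15M per year.
# Worth building over a 20-year life, not worth building if expected AI
# progress makes it obsolete after 5 years, so the loan demand disappears.
def npv(cost, annual_cash_flow, years, rate):
    return -cost + sum(annual_cash_flow / (1 + rate) ** t for t in range(1, years + 1))

cost, cash_flow, rate = 100e6, 15e6, 0.05
print(f"20-year life: NPV = ${npv(cost, cash_flow, 20, rate) / 1e6:+,.0f}M")  # about +$87M
print(f" 5-year life: NPV = ${npv(cost, cash_flow, 5, rate) / 1e6:+,.0f}M")   # about -$35M
```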

3Fractalideation
Hello, I tend to intuitively strongly agree with James Miller's point (hence me upvoting it). There is a strong case to make that a TAI would tend to spook economic agents which create products/services that could easily be done by a TAI. For an analogy, think about a student who wants to decide on what xe (I prefer using the neopronoun "xe" rather than "singular they" as it is less confusing) wants to study for xir future job prospects: if that student thinks that a TAI might do something much faster/better than xem in the future (translating one language into another, accounting, even coding, etc...) that student might be spooked into thinking "oh wait maybe I should think twice before investing my time/energy/money into studying these.", so basically a TAI could create a lot of uncertainty/doubt/... for economic actors, and in most cases uncertainty/doubt/... have an inhibiting effect on investment decisions and hence on interest rates, don't they? I am very willing to be convinced of the opposite, and I see a lot of downvotes for James Miller's hypothesis but not many people so far arguing against it. Could someone who downvoted/disagrees with that argument please kindly make the argument against James Miller's hypothesis? I would very much appreciate that, and then maybe change my mind as a result, but as it stands I tend to strongly agree with James Miller's well-stated point.

Greatly slowing AI in the US would require new federal laws meaning you need the support of the Senate, House, presidency, courts (to not rule unconstitutional) and bureaucracy (to actually enforce).  If big tech can get at least one of these five power centers on its side, it can block meaningful change.

3Ben Goldhaber
This seems like an important crux to me, because I don't think greatly slowing AI in the US would require new federal laws. I think many of the actions I listed could be taken by government agencies who over-interpret their existing mandates given the right political and social climate. For instance, the eviction moratorium during COVID, obviously should have required congressional action, but was done by fiat through an over-interpretation of authority by an executive branch agency.  What they do or do not do seems mostly dictated by that socio-political climate, and by the courts, which means less veto points for industry.

You might be right, but let me make the case that AI won't be slowed by the US government. Concentrated interests beat diffuse interests, so an innovation that promises to slightly raise economic growth but harms, say, lawyers could be politically defeated by lawyers because they would care more about the innovation than anyone else. But, ignoring the possibility of unaligned AI, AI promises to give significant net economic benefit to nearly everyone, even those whose jobs it threatens; consequently there will not be coalitions to stop it, unless t... (read more)

1Ben Goldhaber
I agree that competition with China is a plausible reason regulation won't happen; that will certainly be one of the arguments advanced by industry and NatSec as to why it should not be throttled. However, I'm not sure, and currently don't think, it will be stronger than the protectionist impulses. Possibly it will exacerbate the "centralization" of AI dynamic that I listed in the 'licensing' bullet point, where large existing players receive money and de-facto license to operate in certain areas and then avoid others (as memeticimagery points out). So for instance we see more military-style research, and GooAmBookSoft tacitly agree to not deploy AI that would replace lawyers. To your point on big tech's political influence: they have, in some absolute sense, a lot of political power, but relatively they are much weaker in political influence than peer industries. I think they've benefitted a lot from the R-D stalemate in DC; I'm positing that this will go around/through this stalemate, and I don't think they currently have the soft power to stop that.
2memeticimagery
Your last point seems like it agrees with point 7e becoming reality, where the US govt essentially allows existing big tech companies to pursue AI within certain 'acceptable' confines they think of at the time. In that case how much AI might be slowed is entirely dependent on how tight a leash they put them on. I think that scenario is actually quite likely given I am sure there is considerable overlap between US alphabet agencies and sectors of big tech. 

Interesting!  I wonder if you could find some property of some absurdly large number, then pretend you forgot that this number has this property and then construct a (false) proof that with extremely high probability no number has the property.  
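One way to make that concrete (my own illustration, not necessarily the kind of property meant here): apply the prime number theorem heuristic to a number that is secretly already known to be prime.

```python
# Sketch of the pseudo-proof idea. The prime number theorem heuristic says a
# "random" integer near N is prime with probability about 1/ln(N). Pretend we
# forgot that 2**82589933 - 1 is a known Mersenne prime and apply the
# heuristic naively: it "shows" the number is composite with overwhelming
# probability, yet it is in fact prime. The false step is treating a
# specifically selected number as if it were random.
import math

exponent = 82_589_933
ln_N = exponent * math.log(2)  # ln(2**82589933) is roughly 5.7e7
print(f"heuristic P(prime) ~ 1/ln(N) = {1 / ln_N:.2e}")  # ~ 1.75e-08
```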

4avturchin
Yes. I thought about finding another example of such a pseudo-rule, but haven't found one yet.