All of whpearson's Comments + Replies

Depending on the agent implementation, you may find that it is demotivated to achieve any useful outcome if it is power-limited. Half-assing things seems pointless and futile; they aren't sane actions in the world. E.g. trying to put out a fire when all you have is a squirt gun.

2TurnTrout
The power limitation isn't a hard cap, it's a tradeoff. AUP agents do not have to half-ass anything. As I wrote in another comment, if "unnecessary" is too squishy of a word for your tastes, I'm going to get quite specific in the next few posts.
Answer by whpearson*30

I'm someone who is mainly moving in the opposite direction (from AI to climate change). I see AGI as a lot harder to achieve than most people do, mainly because the potential political ramifications will slow development, and because I think it will need experiments with novel hardware, making it more visible than just coding. So I see it as relatively easy to stop, at least inside a country. Multi-nationally would be trickier.

Some advice: I would try to frame your effort as "Understanding AGI risk". While you think there is risk currently, having an open mind abo... (read more)

1otto.barten
Also another thought. (Partially) switching careers comes with a large penalty, since you don't have as much previous knowledge, experience, credibility, and network for the new topic. The only reason I'm thinking about it, is that I think AGI risk is a lot more important to work on than climate risk. If you're moving in the opposite direction: 1) Do you agree that such moving comes with a penalty? 2) Do you think that climate risk is a lot more important to work on than AGI risk? If so, only one of us can be right. It would be nice to know who that is, so we don't make silly choices.
1otto.barten
Hi WH, thank you for the reply! I find it really heartening and encouraging to learn what others are thinking. Could you explain what hardware you think would be needed? It's kind of the first time I'm hearing someone talk about that, so I'm curious of course to learn what you think it would take. I agree with your point that understanding risks of AI projects is a good way of framing things. Given the magnitude of AGI risks (as I understand it now, human extinction), an alarmist tone of a policy report would still be justified in my opinion. I also agree that we should keep an open mind: I see the benefits of AI, and even more the benefits of AGI, which would be biblical if we could control the risks. Climate adaptation could indeed be carried out a lot better, as could many other tasks. However, I think that we will not be able to control AGI, and we may therefore go extinct if we still develop it. But agreed: let's keep an open mind about the developments. Do you know any reliable overview of AGI risks? It would be great to have a kind of IPCC equivalent that's as uncontroversial as possible to convince people that this problem needs attention. Or papers stating that there is a nonzero chance of human extinction, from a reliable source. Any such information would be great! If I can help you by the way with ideas on how to fight the climate crisis, let me know!

A theory I read in "Energy and Civilisation" by Vaclav Smil is that we could get a big brain by developing tools and techniques (like cooking) that reduced the need for a complicated gut by providing a higher-quality diet.

This is connected to the Principled Intelligence hypothesis, because things like hunting or maintaining a fire require cooperation and communication. Maintaining the knowledge of those things across a tribe also required consistent communication. If you don't all have the same word for 'hot' and use it in the same wa... (read more)

My view is that you have to build an AI with a bunch of safeguards to stop it destroying *itself* while it doesn't have great knowledge of the world or the consequences of its actions. So some of the arguments around companies/governments skimping on safety don't hold in the naive sense.

So, things like: how do you

  • Stop a robot jumping off something too high
  • Stop an AI DOSing its own network connection
  • Stop a robot disassembling itself

when it is not vastly capable? Solving these things would give you a bunch of knowledge of safeguards and how to... (read more)

4MichaelA
I might be misunderstanding you, but I feel like this is sort of missing a key point. It seems like there could be situations in which the AI does indeed, as you point out, require "a bunch of safeguards to stop it destroying *itself*", in order to advance to a high level of capabilities. These could be built by its engineers, or developed by the AI itself, perhaps through trial and error. But that doesn't seem to mean it'd have safeguards to not destroy other things we value, or in some more abstract sense "destroy" our future potential (e.g., by colonising space and "wasting" the resources optimising for something that we don't/barely care about, even if it doesn't harm anything on Earth). It seems possible for an AI to get safeguards like how to not have its robotic manifestation jump off things too high or disassemble itself, and thereby be "safe enough" itself to become more capable, but to not have the sort of "safeguards" that e.g. Russell cares about. Indeed, this seems to relate to the core point of ideas like instrumental convergent subgoals and differential progress. We or the AI might get really good at building its capabilities and building safeguards that allow it to become more capable or avoid harm to itself or its own current "goals", without necessarily getting good at building safeguards to protect "what we truly value". But here are two things you might have meant that would be consistent with what I've said: * It is only when you expect a system to radically gain capability without needing any safeguards to protect a particular thing that it makes sense to expect there to be a dangerous AI created by a team with no experience of safeguards to protect that particular thing or how to embed them. This may inform LeCun's views, if he's focusing on safeguards for the AI's own ability to operate in the world, since these will have to be developed in order for the AI to become more capable. But Russell may be focusing on the fact that a system rea
9Steven Byrnes
One thing you can do to stop a robot from destroying itself is to give it more-or-less any RL reward function whatsoever, and get better and better at designing it to understand the world and itself and act in the service of getting that reward (because of instrumental convergence). For example, each time the robot destroys itself, you build a new one seeded with the old one's memory, and tell it that its actions last time got a negative reward. Then it will learn not to do that in the future. Remember, an AGI doesn't need a robot body; a prototype AGI that accidentally corrupts its own code can be recreated instantaneously for zero cost. Why then build safeguards? Safeguards would be more likely if the AGI were, say, causing infrastructure damage while learning. I can definitely see someone, say, removing internet access, after mishaps like that. That's still not an adequate safeguard, in that when the AGI gets intelligent enough, it could hack or social-engineer its way through safeguards that were working before.
2Kaj_Sotala
That sounds right to me. Also worth noting that much of what parents do for the first few years of a child's life is just trying to stop the child from killing/injuring themselves, when the child's own understanding of the world isn't sufficiently developed yet.

As a data point for why this might be occurring: I may be an outlier, but I've not had much luck getting replies or useful dialogue from x-risk-related organisations in response to my attempts at communication.

My expectation, currently, is that if I apply I won't get a response and I will have wasted my time trying to compose an application. I won't get any more information than I previously had.

If this isn't just me, you might want to encourage organisations to be more communicative.

My view is more or less the one Eliezer points to here:
The big big problem is, “Nobody knows how to make the nice AI.” You ask people how to do it, they either don’t give you any answers or they give you answers that I can shoot down in 30 seconds as a result of having worked in this field for longer than five minutes.

There are probably no fire alarms for "nice AI designs" either, just like there are no fire alarms for AI in general.

Why should we expect people to share "nice AI designs"?

For longer time frames where there might be visible development, the public needs to trust that the political regulators of AI have their interests at heart. Otherwise they may try to make it a party-political issue, which I think would be terrible for sane global regulation.

I've come across pretty strong emotions when talking about AGI, even when talking about safety, which I suspect will come bubbling to the fore more as time goes by.

It may also help the morale of the thoughtful people trying to make safe AI.

I think part of the problem is that corporations are the main source of innovation, and they have incentives to insert themselves into the things they invent so that they can act as toll-collecting trolls and sustain their business.

Compare email and Facebook Messenger for two different types of invention, with different abilities to extract tolls. However, if you can't extract a toll, it is unlikely you can create a business around innovation in an area.

I had been thinking about metrics for measuring progress towards shared agreed outcomes as a method of co-ordination between potentially competitive powers to avoid arms races.

I passed around the draft to a couple of the usual suspects in the AI metrics/risk mitigation space in hopes of getting collaborators. But no joy. I learnt that Jack Clark of OpenAI is looking at that kind of thing as well and is a lot better positioned to act on it, so I have hopes around that.

Moving on from that, I'm thinking that we might need a broad base of support from people (de... (read more)

1David Scott Krueger (formerly: capybaralet)
This sounds like it would be useful for getting people to support the development of AGI, rather than effective global regulation of AGI. What am I missing?

To me, closed-loop living is impossible not due to taxes but due to the desired technology level. I could probably go buy a plot of land and try to recreate Iron Age technology. But most likely I would injure myself, need medical attention, and have to re-enter society.

Taxes also aren't an impediment to closed-loop living as long as the waste from the tax is returned. If you have land with a surplus of sunlight or other energy, you can take in waste and create useful things with it (food etc.). The greater loop of taxes has to be closed as well as the lesser loop.

From an infosec point of view, you tend to rely on responsible disclosure. That is, you tell the people who will be most affected or who can solve the problem for other people; they create countermeasures, and then you release those countermeasures to everyone else (which gives away the vulnerability as well), who should be in a position to quickly update/patch.

Otherwise you are relying on security via obscurity. People may be vulnerable and not know it.

There doesn't seem to be a similar pipeline for non-computer security threats.

2ryan_b
The non-computer analog for bug fixes is product recalls. I point out that recalling defective hardware is hideously expensive; so much so that even after widespread public outcry, it often requires lawsuits or government intervention to motivate action. As for the reporting channel, my guess is warranty claims? Physical things come with guarantees that they will not fail in unexpected ways. Although I notice that there isn’t much of a parallel for bug searches at the physical level.
3Dagon
Even for responsible infosec disclosure, it's always a limited time, and there are lots of cases of publishing before a fix, if the vendors are not cooperating, or if the exploit gains attention through other channels. And even when it works, it's mostly limited to fairly concrete proven vulnerabilities - there's no embargo on wild, unproven ideas. Nor is there anyone likely to be able to help during the period of limited-disclosure, nor are most of the ideas concrete and actionable enough to expect it to do any good to publish to a limited audience before full disclosure.

Similarly, it is not irrational to want to form a cartel or political ingroup. Quite the opposite. It's like the concept of an economic moat, but for humans.

And so you get the patriarchy and, in reaction to it, feminism. This leads to the culture wars that we have today. So it is locally optimal but leads to problems in the greater system.

How do we escape this kind of trap?

I'm reminded of the quote by George Bernard Shaw.

“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

I think it would be interesting to look at the reasons and occasions not to follow "standard" incentives.

I've been re-reading a sci-fi book which has an interesting existential risk scenario in which most people are going to die, but some may survive.

If you are a person on Earth in the book, you have the choice of helping out people and definitely dying, or trying desperately to be one of the ones to survive (even if you personally might not be the best person to help humanity survive).

In that situation I would definitely be in the "helping people better suited for surviving" camp. Following orders because the situation was too complex to keep i... (read more)

She asked my advice on how to do creative work on AI safety, on Facebook. I gave her advice as best I could.

She seemed earnest and nice. I am sorry for your loss.

Dulce et Decorum Est Pro Humanitate Mori?

As you might be able to tell from the paraphrased quote, I've been taught about some bad things that can happen when this is taken too far.

Therefore the important thing is how we, personally, would engage with that decision if it came from outside.

For me it depends on my opinion of the people on the outside. There are four things I weigh:

  • Epistemic rigour. With lots of crucial considerations around existential risk, do I believe that the outside has good views on the state of the world? If they do not, they/I may be d
... (read more)
2ryan_b
These look like good criteria, but I wonder how many organizations are satisfactory in this regard. My expectation would be ~0. The only ones I can think of which are even cognizant of epistemic considerations at the executive level are places like the Federal Reserve and the CDC. I can think of more organizations that think about equilibria, for liberal interpretations of the word, but they are mainly dedicated to preventing us from falling into a worse one (national defense). Moral uncertainty seems like the hardest hurdle to clear; most organizations are explicit in either their amorality or the scope of their morality, and there is very little discretion to change. Happily feedback mechanisms seem to do alright; though I come up short of examples where the feedback mechanisms improve things at the meta level. All that aside, we can surely start with a simple case and build up from there. Suppose all of these criteria were met to your satisfaction, and a decision was made which was very risky for you personally. How would you think about this? What would you do?

I'm interested in seeing where you go from here. With the old lesswrong demographic, I would predict you would struggle, due to cryonics/life extension being core to many people's identities.

I'm not so sure about current LW though. The fraction of the EA crowd that is total utilitarian probably won't be receptive.

I'm curious what it is that your intuitions do value highly. It might be better to start with that.

1ryan_b
I am also uncertain. But it appears to me that even an informed rejection will still be valuable. Follow up here.

Has anyone done work on an AI readiness index? This could track many things, like the state of AI safety research and the roll-out of policy across the globe. It might have to be a bit Doomsday Clock-ish (going backwards and forwards as we understand more), but it might help to have a central place to collect the knowledge.

Out of curiosity what is the upper bound on impact?

Do you think the AI-assisted humanity is in a worse situation than humanity is today?

Lots of people involved in thinking about AI seem to be in a zero sum, winner-take-all mode. E.g. Macron.

I think there will be significant founder effects from the strategies of the people that create AGI. The development of AGI will be used as an example of what types of strategies win in the future during technological development. Deliberation may tell people that there are better equilibria. But empiricism may tell people that they are too hard to reach.

Currently the... (read more)

Interesting. I didn't know Russia's defences had degraded so much.

1ryan_b
I feel the need to add an important caveat: MAD as a strategic situation may not apply, but MAD as a defense policy still does. Moving away from the defense policy is what the Foreign Affairs article warns against, and it is what the book I am reading right now concludes is the right course. In historical terms, it argues in favor of Herman Kahn over Schelling.

I'm curious what type of nuclear advantage you think America has. It is still bound by MAD due to nukes on submersibles.

I think that the US didn't have sufficient intelligence capability to know where to inspect. Take Israel as an example.

The CIA was saying in 1968 that "...Israel might undertake a nuclear weapons program in the next several years", when Israel had already built a bomb in 1966.

3ryan_b
As of about 10 years ago MAD conditions no longer apply. I don't have a source for this because it was related to me directly by someone who was present at the briefing, but around 2006-07 our advanced simulations concluded that the United States had something like a ~60% chance of completely eliminating Russia's second strike capacity if they had no warning, and still a ~20% chance if they did. I was able to find a Foreign Affairs article that discusses some of the reasons for this disparity here. The short version is that we were much more successful in maintaining our nuclear forces than Russia. I am not certain how the intervening years have affected this calculus, but based on the reaction to Russia's recent claims of nuclear innovation, I suspect they are not changed much. I am reading a book called The Great American Gamble: Deterrence Theory and Practice from the Cold War to the Twenty-First Century, by Keith B. Payne, which I expect will shed considerably more light on the subject.

While I think the US could have threatened the Soviets into not producing nuclear weapons at that point in time, I have trouble seeing how the US could have put in the requisite controls/espionage to prevent India/China/the UK etc. from developing nuclear weapons later on.

1ryan_b
Why would the controls the United States counterfactually put in place to maintain nuclear monopoly be less effective than the ones which are actually in place to maintain nuclear advantage? There would be no question of where or when nuclear inspectors had access, and war would’ve been a minimal risk.

I think the generalised flinching away from hypocrisy is, in itself, mainly a status thing. Of the explanations for hypocrisy given:

  • Deception
  • Lack of will power
  • Inconsistent thinking

None of them are desirable traits to have in allies (at least when visible to other people).

2abramdemski
Yeah. Hypocrisy can't be an ideal situation -- it always signals that something unfortunate must be going on. I might even agree with the status version of flinching away from hypocrisy? Particularly in the case that the hypocrite was saying something that made an implicit status claim initially.

I might take this up at a later date. I want to solve AI alignment, but I don't want to solve it now. I'd prefer it if our society's institutions (both governmental and non-governmental) were a bit more prepared.

Differential research that advances safety more than AI capability still advances AI capability.

1David Scott Krueger (formerly: capybaralet)
FWIW, I think I represent the majority of safety researchers in saying that you shouldn't be too concerned with your effect on capabilities; there's many more people pushing capabilities, so most safety research is likely a drop in the capabilities bucket (although there may be important exceptions!) Personally, I agree that improving social institutions seems more important for reducing AI-Xrisk ATM than technical work. Are you doing that? There are options for that kind of work as well, e.g. at FHI.

Gambling on your knowledge might work, rather than on your luck (at least in a rationalist setting).

It is interesting to think about what this looks like as a societal norm. Physical risk gets you adrenaline junkies; social standing can get you many places (Burning Man culture is one, pushing the boundaries of social norms). Good ol' Goodhart.

Another element of the exciting-ness of risk is the novelty. We are making risky choices every day. To choose to go to university is a risky choice: sometimes you make a good network/grow as a person or lea... (read more)

2cousin_it
I thought about this some more, and it seems like my idea is wrong. Taking risks can help you become more exciting, but it's neither necessary nor sufficient. It's more about communication skills, we're back to square one :-/

It is Fear, and the many ways it is used in society to make a potential problem seem bigger than it is. In general, things like FUD; a concrete example of that being the Red Scare. Often it seems to have an existence bigger than any individual, which is why it got made a member of the pantheon, albeit a minor one.

With regard to the Group, people have found fear of the Other easier to form. Obligatory sociology potential non-replicability warning.

I personally wouldn't fetishize being exciting too much. Boring stability is what allows civilisation to keep doing whatever functioning it somehow, against all the odds, manages to do. Too much excitement is just chaos.

That said, I would like more excitement in the world. One thing I've learnt from working on a live service is that any attempt at large-scale change, no matter how well planned/prepared for, has an element of risk.

what kinds of risks should we take?

It might be worth enumerating the things we can risk. Your example covers a... (read more)

2cousin_it
Well, gambling addicts can look pretty pathetic, not exciting at all. Same for people who talk about their feelings too much. I suspect that physical risk is the only kind that works.

I didn't/don't have time to do the science justice, so I just tried my hand at the esoteric. It was scratching a personal itch; if I get time I might revisit this.

2JenniferRM
I see below that you're aiming for something like "fear in political situations". This calls to mind, for me, things like the triangle hypothesis, the Richardson arms race model, and less rigorously but clearly in the same ambit also things like confidence building measures. These are tough topics and I can see how it might feel right to just "publish something" rather than sit on one's hands. I have the same issue myself (minus the courage to just go for it anyway) which leads me mostly to comment rather than top post. My sympathy... you have it!

I'm reminded of this Paul Graham essay. So maybe it is not all Western cities, but rather the focus of the elite in those cities.

What happened? More generally, what makes a social role exciting or boring at a certain point in time?

So I think the question is what qualities are incentivised in the social role. For lots of bankers, the behaviour that is incentivised is reliability and trustworthiness. It is not just the state that likes people to be predictable and boring; the people giving lots of money to someone to keep safe will also select for pr... (read more)

5cousin_it
That makes sense, thanks! It seems like the most exciting quality in a person is taking risks. For example, tech entrepreneurs in the West don't take much risk, because they can always go back to a comfortable job. That's probably why tech entrepreneurs, like bankers, are also boring to the article's author and to me. That explains another fact that has puzzled me for a while. Since moving to the West, I've talked with a few "voluntourists" who travel to poor countries a lot. But somehow they tend to be not very exciting people, even though they have all sorts of crazy stories! The reason is that they can always fly back to the West, so they don't take as much risk as natives. Can we make this idea useful? If risk-taking makes you a more exciting person, what kinds of risks should we take? (For example, bungee jumping from 100+ meters feels scary to me, but isn't dangerous at all, so I recommend it to everyone.)

I like arguing with myself, so it is fun to make the best case. But yup, I was going beyond what people might actually say. I think I find arguments against naive views less interesting, so I spice them up some.

In Accelerando, the participants in Economy 2.0 had a treacherous turn because of the pressure of being in a sharply competitive, resource-hungry environment. This could have happened if they were EMs or even AGIs aligned to a subset of humanity, if they don't solve co-ordination problems.

This kind of evolutionary problem has not been talked about for a bit... (read more)

I feel that this post is straw-manning "I don't think superintelligence is worth worrying about because I don't think that a hard takeoff is realistic" a bit.

A steelman might be:

I don't feel superintelligence is worth worrying about at this point, as in a soft takeoff scenario we will have lots of small AGI-related accidents (people wireheading themselves with AI). This will provide financial incentives for companies to concentrate on safety, both to stop themselves getting sued and, if they are using it themselves, to stop the damage... (read more)

4Raemon
I think there's a difference between "Steelmanning something to learn the most you can from it" (for your own benefit), and accurately engaging with what people actually think and mean. (For example, I think it's common for consequentialists to "steelman" a deontological argument into consequentialism... but the actual reasons for a deontologist's beliefs just have nothing to do with consequentialism) In the case of people saying "I don't think superintelligence is worth worrying about because I don't think that a hard takeoff is realistic" and then leaving it at that, I honestly just don't think they're thinking about it that hard, and rounding things off to vague plausibilities without a model. (Sometimes they don't just leave it at that – I think Robin Hanson generally has some kind of model, maybe closer to what you're saying here. But in that case you can engage with whatever they're actually saying without as much need to steelman it yourself) (I also think my OP here is roughly as good an answer to your steelman here – the issue still remains that there doesn't have to be a sharply treacherous turn, to result in things just eventually snowballing in a way similar to how powerful empires snowball, long after it's too late to do anything about it)

But when you're an adult, you are independent. You have choice to decline interactions you find unpleasant. You don't need everyone you know to like you to have a functioning life. There are still people and institutions to navigate, but they aren't out to get you. They won't thwart your cookie quests. You are free.

I think this depends a lot on the context. The higher profile you are, the more people might be out to get you, because they can gain something by dragging you down. See Twitter mobs etc.

Similarly, if you want to do somethin... (read more)

Ah, makes sense. I saw something on Facebook by Robert Wiblin arguing against unnamed people in the "evidence-based optimist" group, and thought I was missing something important going on, for both you and cousin_it to react to. You have not been vocal on takeoff scenarios before. But it seems it is just coincidence.

Thanks for the explanation.

I have to say I am a little puzzled. I'm not sure who you and cousin_it are talking to with these moderate takeoff posts. I don't see anyone arguing that a moderate takeoff would be okay by default.

Even more mainstream places like MIT seem to be saying it is too early to focus on AI safety, rather than never to focus on AI safety. I hope that there would be conversation around when to focus on AI safety. While there is no default fire alarm, that doesn't mean you can't construct one. Get people working on AGI science to say what they expect t... (read more)

2Kaj_Sotala
Like Raemon's comment suggests, people don't necessarily say it explicitly, but it's implicit whenever someone just says something along the lines of "I don't think superintelligence is worth worrying about because I don't think that a hard takeoff is realistic", and just leaves it at that. And I at least have seen a lot of people say something like that.
6Raemon
Also, cousin_it is specifically talking to Robin Hanson, so as far as "who are we talking to", anyone who takes him seriously. (Although Robin Hanson also has some additional things going on like "the Accelerando scenario seems good [or something?]". I'm not sure I understand that, it's just my vague impression)
7Raemon
This was primarily a response to an in-person conversation, but it was also (in part) an answer to Calvin Ho on the "Taking AI Seriously" thread. They said: And I guess I'll take this slot to answer them directly: This post isn't precisely an answer to this question, but points at how you could get an AI who looked pretty safe, and that honestly was pretty safe – as safe as an empathetic human who makes a reasonable effort to avoid killing bugs – and so during the year when you could have done something, it didn't look like you needed to. And then a couple decades later you find that everything is computronium and only minds that are optimized for controlling the solar system get to control the solar system.

I suppose there is the risk that the AGI or IA is suffering while helping out humanity as well.

I didn't know that!

I do still think there is a difference in strategy, though. In the foom scenario you want to keep the number of key players, or people that might become key players, small.

In the non-foom scenario, you have the unhappy compromise between trying to avoid too many accidents and building up defense early, vs. practically everyone in time being a key player and needing to know how to handle AGI.

FWIW lesswrong has rarely felt like a comfortable place for me. Not sure why. Maybe I missed the fandom stage.

I did have a laugh now and again back in the day. Even then I think I came here more for the "taking ideas seriously" thing that rationalists can do than for the community.

I've argued before that we should understand the process of science (how much analysis vs data processing vs real-world tests), in order to understand how likely it is that AGI will be able to do science quickly, which impacts the types of threats we should expect. We should also look at the process of programming with a similar lens to see how much a human-level programmer could be improved upon. There is a lot of non-human-bounded activity in the process of industrial-scale programming; much of it is in running automated test suites. Will AIs need... (read more)

I would add in animals if you are asking questions about the nature of general intelligence. For example, people claim monkeys are better at certain tasks than humans. What does that mean for the notion of general intelligence, if anything?

What are the questions you are trying to answer about the first AGIs?

  • How they will behave?
  • What they will be capable of?
  • What is the nature of the property we call intelligence?

I find the second one much more interesting, with more data to be acquired. For the second one I would include things like modern computer hardware and what we have managed to achieve with it (and the nature and structure of those achievements).

3Scott Garrabrant
All of these, and general orientation around the problem, and what concrete things we should do.

I've got a bit more time now.

I agree "Things need to be done" in a rising tide scenario. However different things need to be done to the foom scenario. The distribution of AI safety knowledge is different in an important way.

Discovering AI alignment is not enough in the rising tide scenario. You want to make sure the proportion of aligned AIs vs misaligned AIs is sufficient to stop the misaligned AIs outcompeting the aligned AIs. There will be some misaligned AIs due to parts wear, experiments gone wrong, AIs aligned with insane people tha... (read more)

2cousin_it
Yes, secrecy is a bad idea in a rising tide scenario. But I don't think it's a good idea in a winner-take-all scenario either! I argued against it for years and like to think I swayed a few people.
3zulupineapple
I have a neat idea. If there were two comparable AGIs, they would effectively merge into one, even if they have unaligned goals. To be more precise, they should model how a conflict between them would turn out and then figure out a kind of contract that reaches a similar outcome without wasting the resources for a real conflict. Of course, if they are not comparable, then the stronger one could just devour the weaker one.
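
A minimal toy sketch of that bargaining intuition (an editorial illustration, not from the comment; the power values, the 30% destruction figure, and the proportional-win model are all assumptions): each agent compares the expected payoff of an actual conflict, which destroys part of the contested resources, against a contract that divides the intact pool in the same proportions the conflict would be expected to produce.

```python
# Toy illustration only: two AGIs decide between fighting over a resource pool
# and signing a contract that mirrors the expected outcome of the fight.
# The power values, the 30% destruction figure, and the proportional-win
# model are made-up assumptions for the sake of the example.

def expected_fight_payoffs(power_a, power_b, pool, destruction=0.3):
    """Expected shares if the agents actually fight.

    Win probability is assumed proportional to relative power, and a
    `destruction` fraction of the pool is wasted by the conflict itself.
    """
    p_a = power_a / (power_a + power_b)
    surviving = pool * (1 - destruction)
    return p_a * surviving, (1 - p_a) * surviving


def contract_payoffs(power_a, power_b, pool):
    """Split the undamaged pool in the proportions the fight would be expected to yield."""
    p_a = power_a / (power_a + power_b)
    return p_a * pool, (1 - p_a) * pool


if __name__ == "__main__":
    fight = expected_fight_payoffs(3.0, 2.0, pool=100.0)
    deal = contract_payoffs(3.0, 2.0, pool=100.0)
    print("fight:", fight)  # (42.0, 28.0) -- 30 units burned by the conflict
    print("deal: ", deal)   # (60.0, 40.0) -- both agents strictly better off
```

Under these assumptions both agents prefer the contract, which is the sense in which "comparable" AGIs would effectively merge; if one agent is vastly stronger, the modelled fight gives it nearly everything anyway, matching the "devour" case.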

I've been trying to think about historical examples. Marxism, while in some ways being strongly about conflict theory, still wanted to keep the veneer of reasoned debate to get the backing of academics.

A quote from Wikipedia, from Popper:

Hegel thought that philosophy develops; yet his own system was to remain the last and highest stage of this development and could not be superseded. The Marxists adopted the same attitude towards the Marxian system. Hence, Marx's anti-dogmatic attitude exists only in the theory and not in the practice of ortho
... (read more)
2cousin_it
Thank you! I was trying to give an econ-centric counterargument to Robin's claim, but AI-centric strategic thinking (of which I've read a lot) is valuable too.

This is why I've always insisted, for example, that if you're going to start talking about "AI ethics", you had better be talking about how you are going to improve on the current situation using AI, rather than just keeping various things from going wrong.  Once you adopt criteria of mere comparison, you start losing track of your ideals—lose sight of wrong and right, and start seeing simply "different" and "same".

From: Guardians of the Truth

I'd put some serious time into that as well, if you can. If you think ... (read more)

Some of your comments appear to be hidden. I shall reply here with a question they brought to mind.

"then they take off like a rocket,"

I think it is worth talking about whether it is sustainable, and whether they can do what needs to be done at the current time while going at that high speed, before people go too far down that path. Basically I'm asking, "But at what cost?"

The trust you have to have is that the person you are building with won't take the partially built rocket and finish it for themselves to go off to a gambling den. That they too actually want to get groceries and aren't just saying that they do to gain your cooperation. You want to avoid cursing their sudden but inevitable betrayal.

You do want to get the groceries, right?

The view of the world where things trickle down from religio to ops seems to de-emphasise intelligence gathering.

I think a big part of operations is making sure you have the correct information to do the things you need to do at the correct time. As such, the information gathered regularly from ops informs strategy and tactics.
