and more specifically you should not find yourself personally living in a universe where the history of your experience is lost. I say this because it is evidence that we will likely avoid a failure in AI alignment that destroys us, or at least that we will not find ourselves in a universe where AI destroys us all, because alignment will turn out to be easier in practice than we expect it to be in theory.
Can you elaborate on this idea? What do you mean by 'the history of your experience is lost'? Can you supply some links to read on this whole theory?
An AI will have a utility function. What utility function do you propose to give it?
What values would we give an AI if not human ones? Giving it human values doesn't necessarily mean giving it the values of our current society. It will probably mean distilling our most fundamental moral beliefs.
If you take issue with that, all you are saying is that you want an AI to have your values rather than humanity's as a whole.
Developing an AGI (and then an ASI) will likely involve a series of steps through lower intelligences. There's already an AI arms race between several large technology companies, and keeping your nose in front is already standard practice because there's a lot of utility in having the best AI so far.
So it isn't true to say that it's simply a race without important intermediate steps. You don't just want to get to the destination first; you want to make sure your AI is the best for most of the race, for a whole heap of reasons.
That's a partial list. It also takes good universities, a culture that produces a willingness to take risks, a sufficient market for good products, and I suspect a litany of other things.
I think once you've got a genuinely innovative society started, it can be hard to kill that off, but it can be done and has been done. The problem is, as you mentioned, that very few societies have ever been particularly innovative.
It's easy to use established technology to build a very prosperous first-world society. For example: Australia, Canada, Sweden. But it's much harder ...
I think it's an interesting point about innovation actually being very rare, and I agree. It takes a special combination of things for it to happen, and that combination doesn't come around much. Britain was extremely innovative a few hundred years ago. In fact, they started the Industrial Revolution, literally revolutionising humanity. But today they do not strike me as particularly innovative, even with that history behind them.
I don't think America's ability to innovate is coming to an end all that soon. But even if America continues to prosper, will that mean...
You have failed to answer my question. Why does anything at all matter? Why does anything care about anything at all? Why don't I want my dog to die? Obviously, when I'm actually dead, I won't want anything at all. But there is no reason I cannot have preferences now regarding events that will occur after I am dead. And I do.
In Australia we currently produce enough food for 60 million people. This is without any intensive farming techniques at all. It could be scaled up by a factor of ten if it were really necessary, but quality of life per capita would suffer.
I think smaller nations are as a general rule governed much better, so I don't see any positives in increasing our population beyond the current 24 million people.
I really like that you mention world government as an existential risk. It's one of the biggest ones. Competition is a very good risk-reduction process. It has been said before that if we all lived in North Korea, it may well be that the future of humanity would be quite bleak indeed. North Korea is less stable now than it would be if it were the world's government, because all sorts of outside pressures contribute to its instability (technology created by freer nations, pressure from foreign governments, etc.).
No organisation can ever get it right all th...
There have been wars over land for as long as humans have existed. And non-interaction, even if initially widespread, clearly stopped once it became clear the world wasn't infinite and that particular parts had special value and were contested by multiple tribes. Australia being huge and largely empty didn't stop European tribes from having a series of wars of increasing intensity until we had WW1 and WW2, which were unfathomably violent and huge clashes over ideology and resources. This is what happened in Europe, where multiple tribes of comparable st...
Extremely low. I have never believed any sort of pathogen could come close to wiping us out. They can be defeated by basic respirator and biohazard technology. But the main point is that with improved and more accessible biotechnology, our ability to create vaccines and other defence mechanisms against pathogens is greatly enhanced. I actually think the better biotechnology gets, the less likely any pathogen is to wipe us out, even given the fact that terrorists will be able to misuse it more easily.
The development of Native Americans has been stunted, and they now simply exist within the controlled conditions imposed by the new civilisation. They aren't all dead, but they can't actually control their own destiny as a people. Native American reservations seem like exactly the sort of thing aliens might put us in: very limited control over our own affairs, in desolate parts of the universe, with welfare payments added to give us some sort of quality of life.
Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least not permanently stunting our continued development, should it become aware of us?
As has come to light with research on superintelligences, an actor does not have to hate us to destroy us; it merely has to realise that we conflict, even in a very minor way, with its goals. Since we are a rapidly advancing intelligent civilisation, it is likely our continued growth and existence will hamper the goals of other intelli...
This doesn't seem very coherent.
As it happens, a perfect and truthful predictor has declared that you will choose torture iff you are alone.
OK. Then that means if I choose torture, I am alone. If I choose the dust specks, I am not alone. I don't want to be tortured, and don't really care about 3 ^^^ 3 people getting dust specks in their eyes, even if they're all 'perfect copies of me'. I am not a perfect utilitarian.
A perfect utilitarian would choose torture though, because one person getting tortured is technically not as bad, from a utilitarian point of view, as 3 ^^^ 3 people getting dust specks in their eyes.
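For a sense of the scale involved, here is the up-arrow notation unpacked step by step (my own sketch, not from the original exchange):

```latex
% Knuth's up-arrow notation, unpacked:
% 3^3   = 27
% 3^^3  = 3^(3^3) = 3^27 = 7,625,597,484,987
% 3^^^3 = 3^^(3^^3) = a power tower of 3s of height 7,625,597,484,987
\begin{align*}
3\uparrow 3 &= 3^{3} = 27 \\
3\uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987 \\
3\uparrow\uparrow\uparrow 3 &= 3\uparrow\uparrow(3\uparrow\uparrow 3)
  = \underbrace{3^{3^{\cdot^{\cdot^{\cdot^{3}}}}}}_{7{,}625{,}597{,}484{,}987\ \text{threes}}
\end{align*}
```

The utilitarian case is just that any fixed, non-zero disutility per dust speck, multiplied by a number that large, outweighs the disutility of one person being tortured.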
I think a very interesting trait of humans is that we can for the most part collaboratively truth-seek on most issues, except those defined as 'politics', where a large proportion of the population, with varying IQs, some extremely intelligent, believe things that are quite obviously wrong to anyone who has spent any amount of time seeking the truth on those issues without prior bias.
The ability of humans to totally turn off their rationality, to organise the 'facts' as they see them to confirm their biases, is nothing short of incredible. If humans t...
(1) and (2) are not premises. The conclusion of his argument is that either (1), (2) or (3) is very likely true. The argument is not supposed to show that we are living in a simulation.
The negations of (1) and (2) are premises if the conclusion is (3). So when I say they are "true" I mean, for example, in the first case, that humans WILL reach an advanced level of technological development. Probably a bit confusing; my mistake.
...You seem to be saying that (2) is true -- that it is very unlikely that our post-human descendants will create a
We could have random number generators that choose the geometry an agent in our simulation finds itself in every time it steps into a new room. We could make the agent believe that when you put two things together and group them, you get three things. We could add random bits to an agent's memory.
There is no limit to how perverted a view of the world a simulated agent could have.
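As a toy illustration of the kind of manipulation I mean, here is a minimal sketch (the structure is entirely made up; the point is only that the simulator, not the agent, decides what the agent perceives and remembers):

```python
import random

class SimulatedAgent:
    def __init__(self):
        self.memory = bytearray(16)  # the agent's entire "memory", for illustration

    def observe_room(self, geometry):
        # The agent takes whatever geometry the simulator hands it as ground truth.
        return f"I am standing in a {geometry} room."

def step_into_new_room(agent):
    # The simulator picks the room's geometry at random each time...
    geometry = random.choice(["Euclidean", "hyperbolic", "spherical", "impossible"])
    perception = agent.observe_room(geometry)

    # ...and can silently flip random bits in the agent's memory.
    index = random.randrange(len(agent.memory))
    agent.memory[index] ^= 1 << random.randrange(8)

    return perception

agent = SimulatedAgent()
for _ in range(3):
    print(step_into_new_room(agent))
```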
I am taking issue with the conclusion that we are living in a simulation even given premises (1) and (2) being true.
So I am struggling to understand his reply to my argument. In some ways it simply looks like he's saying either we are in a simulation or we are not, which is obviously true. The claim that we are probably living in a simulation (given a couple of assumptions) relies on observations of the current universe, which are either not reliable (if we are in a simulation) or support a conclusion that is obviously wrong (if we aren't).
If I conclude that there ar...
The "simulation argument" by Bostrom is flawed. It is wrong. I don't understand why a lot of people seem to believe in it. I might do a write up of this if anyone agrees with me, but basically, you cannot reason about without our universe from within our universe. It doesn't make sense to do so. The simulation argument is about using observations from within our own reality to describe something outside our reality. For example, simulations are or will be common in this universe, therefore most agents will be simulated agents, therefore we are s...
First, Bostrom is very explicit that the conclusion of his argument is not "We are probably living in a simulation". The conclusion of his argument is that at least one of the following three claims is very likely to be true -- (1) humans won't reach the post-human stage of technological development, (2) post-human civilizations will not run a significant number of simulations of their ancestral history, or (3) we are living in a simulation.
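For reference, the disjunction falls out of a simple fraction in the paper (notation paraphrased from memory, so check the original):

```latex
% f_P     : fraction of human-level civilisations that reach a posthuman stage
% \bar{N} : average number of ancestor-simulations a posthuman civilisation runs
% \bar{H} : average number of individuals who lived before the posthuman stage
f_{\mathrm{sim}} \;=\; \frac{f_P \,\bar{N}\, \bar{H}}{f_P \,\bar{N}\, \bar{H} \;+\; \bar{H}}
               \;=\; \frac{f_P \,\bar{N}}{f_P \,\bar{N} + 1}
```

If the product of f_P and N-bar is astronomically large, the simulated fraction is close to 1; so either f_P is close to zero (claim 1), N-bar is close to zero (claim 2), or almost all observers with experiences like ours are simulated, which via the indifference principle yields claim (3).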
Second, Bostrom has addressed the objection you raise here (in his Simulation Argument FAQ, among other pla...
I think I agree with what you're saying for the most part. If your goal is, say, reducing suffering, then you have to consider the best way of convincing others to share your goal. If you started killing people who ran factory farms, you're probably going to turn a lot of the world against you, and so fail in your goal. And you have to consider the best way of convincing yourself to continue pursuing your goal, now and into the future, since human goals can change depending on circumstances and experiences.
In terms of guilt, finding little tricks to r...
You have to consider that humans don't have perfect utility functions. Even if I want to be a moral utilitarian, it is a fact that I am not. So I have to structure my life around keeping myself as morally utilitarian as possible. Brian Tomasik talks about this. It might be true that I could reduce more suffering by not eating an extra donut, but I'm going to give up on the entire task of being a utilitarian if I can't allow myself some luxuries.
Why would us launching a simulation use more processing power? It seems more likely that the universe does a set amount of information processing and all we are doing is manipulating that in constructive ways. Running a computer doesn't process more information than the wind blowing against a tree does; in fact, it processes far less.
So, the graph model of identity sort of works, but I feel it doesn't quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked, because I don't think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn't mesh with intuition. For example, a person's brain is a complex machine; imagine it were (using some extremely advanced technology) seriously modified while a person was still conscious...
Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.
So, go back to the scenario: you're killed, and there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing 'you', so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?
So, let's say you die, but a super intelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that 'you'?
If it is, let's say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is 'you'?
Let's say the second one is 'you', and the first one isn't. What happens when the computer reconstructs yet another exact copy of your brain?
If the computer told you it was going to torture the slightly-wrong cop...
I think consciousness arises from physical processes (as Dennett says), but that's not really solving the problem or proving it doesn't exist.
Anyway, I think you are right in that if you think being mind-uploaded does or does not constitute continuing your personal identity or whatever, it's hard to say you are wrong. However, what if I don't actually know whether it does, yet I want to be immortal? Then we have to study it, to figure out which of the things we could do would keep the real 'us' existing and which wouldn't.
What if the persistence of personal identity is a meaningless pursuit?
If there's no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of 'you' is not actually 'you', would seeking immortality mean we can't upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?
If we found out that there's a new 'you' every time you go to sleep and wake up, wouldn't it make sense to abandon the quest for immortality as we already die every night?
(Note, I don't actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)
The thing is, I'm just not sure it's even reasonable to talk about 'immortality', because I don't know what it means for one personal identity ('soul') to persist. I couldn't be sure that if a computer simulated my mind it would be 'me', for example. Immortality will likely involve serious changes to the physical form our mind takes, and once you start talking about that you get into the realm of thought experiments like the idea that if you put someone under a general anaesthetic, take out one atom from their brain, then wake them up, you have a simi...
If you define yourself by the formal definition of a general intelligence then you're probably not going to go too far wrong.
That's what your theory ultimately entails. You are saying that you should go from specific labels ("I am a democrat") to more general labels ("I am a seeker of accurate world models") because it is easier to conform to a more general specification. The most general label would be a formal definition of what it means to think and act on an environment for the attainment of goals.
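To make that most general label concrete, here is a minimal sketch of the agent-environment loop that formal definitions of intelligence are usually built on (the class and function names are my own, not from any particular framework):

```python
from abc import ABC, abstractmethod

# A bare-bones version of "think and act on an environment for the attainment
# of goals": the agent repeatedly receives an observation and a reward signal
# and returns an action; the environment applies the action and responds.
# All names here are illustrative.

class Agent(ABC):
    @abstractmethod
    def act(self, observation, reward):
        """Choose the next action given the latest observation and reward."""

class Environment(ABC):
    @abstractmethod
    def step(self, action):
        """Apply the action; return the next observation and reward."""

def interact(agent, environment, steps, observation=None, reward=0.0):
    # The whole loop, stripped to its bones.
    for _ in range(steps):
        action = agent.act(observation, reward)
        observation, reward = environment.step(action)
```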
I don't think your theory is particularly useful.
If an exact copy of you were to be created, it would have to be stuck in the hole as well. If the 'copy' is not in the hole, then it is not you, because it is experiencing different inputs and has a different brain state.