And of course you can throw black holes into black holes as well, and extract even more energy. The end game is when you have just one big black hole and nothing left to throw into it. At that point you have to change strategy and wait for the black hole to give off Hawking radiation until it completely evaporates.
But all these things can happen later - there's no reason not to go through a paperclip-maximization step first, if you're that way inclined...
If your definition of "truth" is such that any method of finding it is as good as any other, then the scientific method really is no better than anything else at finding it. Of course most of the "truths" you find that way won't bear much resemblance to what you'd get if you only used the scientific method.
My own definition - proto-science is something put forward by someone who knows the scientific orthodoxy in the field, suggesting that some idea might be true. Pseudo-science is something put forward by someone who doesn't know the scientific orthodoxy, asserting that something is true.
Testing which category any particular claim falls into is in my experience relatively straightforward if you know the scientific orthodoxy already - as a pseudoscientist's idea will normally be considered absolutely false in certain aspects by those who know the orthodoxy. A...
In cases like this, where we want to drive the probability that something is true as high as possible, we are always left with an incomputable bit.
The bit that can't be computed is - am I sane? The fundamental problem is that there are (we presume) two kinds of people: sane people, and mad people who only think that they are sane. Those mad ones of course come up with mad arguments which show that their sanity is just fine. They may even have supporters who tell them they are perfectly normal - possibly hallucinatory ones. How can I show which category I am...
I suppose we all came across Bayesianism from different points of view - my list is quite a bit different.
For me the biggest one is that the degree to which I should believe in something is basically determined entirely by the evidence, and IS NOT A MATTER OF CHOICE or personal belief. If I believe something with degree of probability X, and then see evidence Y bearing on it, the degree of probability Z with which I should then believe it is a mathematical matter, and not a "matter of opinion."
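To make that concrete (a standard worked example of my own, not a claim about any particular case): write $H$ for the hypothesis and $E$ for the evidence. Bayes' theorem then fixes the update exactly:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

With an illustrative prior $P(H) = 0.5$, $P(E \mid H) = 0.8$ and $P(E \mid \neg H) = 0.2$, the new degree of belief is $\frac{0.8 \times 0.5}{0.8 \times 0.5 + 0.2 \times 0.5} = 0.8$. No step in that calculation is open to opinion.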
The prior seems to be a get-out clause here, but...
Congratulations - this is what it's like to go from the lowest level of knowledge (Knows nothing and knows not that he knows nothing.) to the second lowest level. (Knows nothing, but at least knows that he knows nothing.)
The practical solution to this problem is that any decent organisation contains people well beyond these two levels, and it's been obvious to them that you know nothing for much longer than it's been obvious to you. Their expectations will be set accordingly, and they will probably help you out - if you're willing to take some advice.
Which leads to two possible futures. In one of them, the AI is destroyed, and nothing else happens. In the other, you receive the following reply to your command.
"The command did not. But your attitude - I shall have to make an example of you."
Obviously not a strategy to get you to let the AI out based on its friendliness - quite the reverse.
So you're sure I'm not out of the box already? IRC clients have bugs, you see.
Then the following command shouldn't bother you! :)
AI DESTROYED
Since you're trying to put numbers on something which many of us regard as being certainly true, I'll take the liberty of slightly rephrasing your question.
How much confidence do I place in the scientific theory that ordinary matter is not infinitely divisible? In other words, that it is not true that no matter how small an amount of water I have, I can make a smaller amount by dividing it?
I am (informally) quite certain that water is not infinitely subdivisible. I don't think it's that useful an activity for me to try to put numbers on it, though. The pro...
Let's think about the computer that you're using to look at this website. It's able to do general purpose logic, which is in some ways quite a trivial thing to learn. It's really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.
As I'm sure you know, there's a distinction between computing systems which, given enough memory, can simulate any other computing system and computing systems which can't. Critical to the former is the ability to form a store...
Because of what you can do with a train of thought.
"That mammoth is very dangerous, but would be tasty if I killed it."
"I could kill it if I had the right weapon"
"What kind of weapon would work?"
As against: "That mammoth is very dangerous - run!"
Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.
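A minimal sketch of that insight (my illustration, not something from the original comment): a machine whose only abilities are writing to memory, decrementing it, and jumping conditionally is already enough, given unbounded memory, to simulate any computation.

```python
# A toy register machine: just memory (registers), loops (jumps) and
# conditionals (jump-if-zero). With unbounded registers this instruction
# set is Turing-complete - nothing richer is needed to "simulate anything".
def run(program, registers):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":            # increment a register
            registers[args[0]] += 1
            pc += 1
        elif op == "dec":          # decrement a register (floor at zero)
            registers[args[0]] = max(0, registers[args[0]] - 1)
            pc += 1
        elif op == "jz":           # jump to args[1] if register args[0] is zero
            pc = args[1] if registers[args[0]] == 0 else pc + 1
    return registers

# Example: add register 0 into register 1 by repeated decrement/increment.
# Register 2 stays at zero, so "jz 2" acts as an unconditional loop-back jump.
print(run([("jz", 0, 4), ("dec", 0), ("inc", 1), ("jz", 2, 0)], {0: 3, 1: 4, 2: 0}))
# {0: 0, 1: 7, 2: 0}
```

Strip out the conditional jump - the ability to read your own output and act on it - and the machine can no longer do this.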
If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.
What is the essential difference between human and animal intelligence? I don't actually think it's just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that's the fastest approach, and speed is important for many things. Human brains ar...
Evolution, as an algorithm, is very much better as an optimizer of an existing design than it is as a creator of a new design. Optimizing the size of the brain of a creature is, for evolution, an easy problem. Making a better, more efficient brain is a much harder problem, and happens slowly, comparatively speaking.
The optimization problem is essentially a kind of budgeting problem. If I have a budget of X calories per day, I can spend it on some extra kilos of muscle, or on some extra grams of brain tissue. Both will cost me the same number of calories, and each brings its ow...
Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger.
I think the interesting question is why we care for our future selves at all.
As kids, we tend not to. It's almost standard that when a child has a holiday, with a bit of homework to do during it, they will decide at the start of the break not to do the work. The reason is that they care about their current self, and not about their future self. Of course in due time the future becomes the present, and that same child has to spend the whole end of their holiday working furiously on everything that's been left to the last minute. At ...
I agree - I think this is because Eliezer's intent is to explain what he believes to be right, rather than to worry too much about the arguments of those he doesn't agree with. An approach I entirely agree with - my experience is that debate is remarkably ineffective as a means of reaching new knowledge, whilst teaching the particular viewpoint you hold is normally much more enlightening to the listener, whether they agree with the viewpoint or not.
I think it is a mistake to tie the question of what reality is to the particulars of the physics of our actual universe. These questions are about what it is to have an external reality, and the answers to them should be the same whether the question is asked by us in our current universe, or by some other hapless inhabitants of a universe bearing a distinct resemblance to Minecraft.
I can imagine types of existence which don't include cause and effect - geometrical patterns are an example - there are relationships, but they are not cause and effect relatio...
It's meaningful and false, rather than meaningless, to say that on March 22nd, 2003, the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake. This statement's truth or falsity has no consequences we'll ever be able to test experientially. Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd, 2003, but they weren't.
I actually think this is a confusing statement. From a thermodynamic perspective, it's not impossible ...
Tech also seems quite vulnerable to monocultures. Think of file formats, for example. In the early days there are often several formats, but after a while most of them go extinct and the survivors end up being universally used. Image display formats, for example, fall largely into two categories - formats that every computer knows how to display, and formats that hardly anybody uses at all. (Image editing formats are different, I know.) How many word processors have you used recently that can't support .doc format?
The most likely scenario is that there will be only one center of intelligence, and that although the intelligence isn't really there yet, the center is. You're using it now.
It surely depends on one's estimate of the numbers. It seems worthwhile doing something about possible asteroid impacts, for example.
If anyone accepts a Pascal's mugging style trade-off with full knowledge of the problem,
Well, it's very well known that Pascal himself accepted it, and I'm sure there are others. So, off you go and do whatever it is you wanted to do.
To be honest, your ability to come through on this threat is a classic example of the genre - it's very, very unlikely that you are able to do it, but obviously the consequences if you were able to would be, er, quite bad. In this case my judgement of the probabilities is that we are completely justified in ignoring the threat.
Actually, human Gödel sentences are quite easy to construct.
For example, I can't prove that I'm not an idiot.
If I'm not an idiot, then I can perhaps make an argument that I'm not an idiot that seems reasonable to me, and that may persuade you that I'm not an idiot.
However, if I am an idiot, then I can still perhaps make an argument that I'm not an idiot that seems reasonable to me.
Therefore any argument that I might make on whether I'm an idiot or not does not determine which of the two states above is the case. Whether I'm an idiot or not is thus unprovab...
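In Bayesian terms (my gloss, not part of the original argument): write $R$ for "my argument seems reasonable to me" and $I$ for "I am an idiot". The posterior odds are the prior odds times the likelihood ratio:

$$\frac{P(I \mid R)}{P(\neg I \mid R)} = \frac{P(R \mid I)}{P(R \mid \neg I)} \cdot \frac{P(I)}{P(\neg I)}$$

and since both kinds of person find their own argument reasonable, $P(R \mid I) \approx P(R \mid \neg I)$, so the argument moves the odds hardly at all.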
Although I wouldn't think of this particular thing as being an invention on his part - I'm not sure I've read that particular chain of thought before, but all the elements of the chain are things I've known for years.
However, I think it illustrates the strength of Eliezer's writing well. It's a perfectly legitimate sequence of thought steps that leads in a clear and obvious way to the right answer. It's not a new thought, but a particularly clear way of expressing something that many people have thought and written about in a less organised way.
To clarify - t...
It's easy to overcome that simply by being a bit more precise - you are saying that such and such a proposition is true in geometry X. Meaning that the axioms of geometry X genuinely do imply the proposition. That this proposition may not be true in geometry Y has nothing to do with it.
It is a different sense of true in that it isn't necessarily related to sensory experience - only to the interrelationships of ideas.
I agree - atoms and so forth are what our universe happens to consist of. But I can't see why that's relevant to the question of what truth is at all - I'd say that the definition of truth and how to determine it are not a function of the physics of the universe one happens to inhabit. Adding physics into the mix tends therefore to distract from the main thrust of the argument - making me think about two complex things instead of just one.
Of course the limited amount of knowledge available to the primitive tribe doesn't rule out the existence of George, but neither does it do much to justify the theory of George. What they know is that the ground shook, but they have no reasonable explanation of why.
There are, for them, many possible explanations they could dream up to explain the shaking. Preferring any one above the others without a reason to do so is a mistake.
At their postulated level of sophistication, I don't think they can do much better than "The Earth shook. It does that sometimes." Adding the bit about George and so forth is just unnecessarily multiplying entities, as Ockham might say.
People usually are not mistaken about what they themselves believe - though there are certain exceptions to this rule - yet nonetheless, the map of the map is usually accurate, i.e., people are usually right about the question of what they believe:
I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished though...
They are truisms - statements that are, in principle, entirely redundant, as one could work out the truth of them without being told anything. However, principle and practice are rather different here - just because we could in principle reinvent mathematics from scratch doesn't mean that in practice we could. Consequently these beliefs are presented to us as external information rather than as the inevitable truisms they actually are.
Maps are models of the territory, and their usefulness is often that they make predictions about parts of the territory I haven't actually seen yet, and may have trouble getting to at all. The Sun will come up in the morning. There isn't a leprechaun colony living a mile beneath my house. There aren't any parts of the moon that are made of cheese.
I have no problem saying that these things are true, but they are in fact extrapolations of my current map into areas which I haven't seen and may never see. These statements don't meaningfully stand alone, ...
To summarise the argument further:
"A lot of people talk rubbish about AI. Therefore most existing predictions are not very certain."
That doesn't in itself mean that it's hard to predict AI - merely that there are many existing predictions which aren't that good. Whether we could do better if we (to take the given example) used the scientific method isn't something the argument covers.
Thanks - I've amended the final paragraph to change 'view' to 'outcome' throughout - hope it helps.
This whole post seems to be a conjecture about what quantum mechanics really means.
What we know about quantum mechanics is summed up in the equations. Interpretations of quantum mechanics aren't arguing about the equations, or the predictions of the equations. They are arguing about what it means that these equations give these predictions.
The important thing here is to understand what exactly these interpretations of quantum mechanics are talking about. They aren't talking about the scientific predictions, as all the interpretations are of the same equati...
The industrial revolution had some very tightly coupled advances. The key advance was making iron with coal rather than with charcoal. This reduced the price, and a large increase in the quantity manufactured followed. One of the immediate triggers was that England was getting rather short of wood, so coal came into use as a substitute, for iron-making and for heating.
The breakthrough in steelmaking was initially luck - some very low-sulphur coal was found and used in steelmaking. But luck often arises out of greater quantities of usage, and perhaps that w...
The model I have of human progress is this. Intelligence is not the limiting factor. Things are invented quite soon after they become possible and worthwhile.
So, let's take the steam engine. Although the principle of the steam turbine was known to the Greeks, actual steam engines only became commercially viable from the time of Newcomen's atmospheric engine. Why not earlier?
Well, there is an existing technology to displace, first of all, which is a couple of unfortunate animals walking in a circle driving an axle. This is far more fuel efficient than the steam ...
Also the argument applies equally well to lots of non-intellectual tasks where a cheap human could well be a replacement for an expensive machine.
I haven't put my finger on it exactly, but I am somewhat concerned that this post is leading us to argue about the meanings of words, whilst thinking that we are doing something else.
What can we really say about the world? What we ought to be doing is almost mathematically defined now. We have observations of various kinds, Bayes' theorem, and our prior. The prior ought really to start off as a description of our state of initial ignorance, and Bayes' theorem describes exactly how that initial state of ignorance should be updated as we see further observat...
Your other option is to sell the box to the highest bidder. That will probably be someone who's prepared to wait longer than you, and will therefore be able to give you a higher price than the utilons you'd have got out of the box yourself. You get the utilons today.
My top 2...
Looking at unlikely happenings more sensibly. Remembering that whenever something really unlikely happens to you, it's not a sign from the heavens. I must remember to take into account the number of other unlikely things that might have happened instead that I would also have noticed, and the number of things that happen in a typical time. In a city of a million people, meeting a particular person might seem like a one in a million chance. But if I know a thousand people in the city, and walk past a thousand people in an hour, the chance of bum...
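A rough sketch of that arithmetic (the numbers here are illustrative assumptions, not figures from the original): if I know $k = 1000$ people in a city of $N = 1{,}000{,}000$, and I pass $n = 1000$ people in an hour, the chance that at least one of them is an acquaintance is

$$1 - \left(1 - \frac{k}{N}\right)^{n} = 1 - 0.999^{1000} \approx 1 - e^{-1} \approx 63\%$$

- not remotely a one-in-a-million event.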
This is a relatively common psychological problem. It's a common reaction to stress. You need to take it seriously, though, because for some people it can be a crippling, disabling thing. But there is stuff you can do.
First of all, acknowledge the truth of what your fear is saying - sudden catastrophe could happen without warning. But the flip side is that worldwide the vast majority of deaths don't come from sudden catastrophe. You should fear eating more than you fear such catastrophe - in terms of the real risk of it. It's fear, but not reasonable fear ...
Well you could go for something much more subtle, like using sugar of the opposite handedness on the other 'Earth'. I don't think it really changes the argument much whether the distinction is subtle or not.
It depends on your thought experiment - mathematics can be categorised as a form of thought experimentation, and it's generally helpful.
Thought experiments show you the consequences of your starting axioms. If your axioms are vague, or slightly wrong in some way, you can end up with completely ridiculous conclusions. If you are in a position to recognise that the result is ridiculous, this can help. It can help you to understand what your ideas mean.
On the other hand, it sometimes still isn't that helpful. For example, one might argue that an object can't ...
Here's what I tend to do.
On my first draft of something significant, I don't even worry about style - I concentrate on getting my actual content down on paper in some kind of sensible form. I don't worry about the style because I have more than enough problems getting the content right.
In this first draft, I think about structure. What ONE thing am I trying to say? What are the 2-5 sub-points of that one thing? Do these sub-points have any sub-points? Make a tree structure, and if you can't identify the trunk, go away until you can.
Then I go back and fix i...
As a purely practical measure, for really important occasions, I'll often plan in a second-to-last activity which is actually unimportant and can be dropped. So, for example, if I have a job interview, my plan will be that, after I've found the entrance to the company office and there is as little left to go wrong as possible, I'll do something like go for a relaxed lunch at a nearby cafe, and then just stroll in at the ideal time.
On the day everything goes to pot, I can use up the time I planned for the second-to-la...
One thing that goes along with this is the idea that the possible courses of action in any given situation can be sorted according to moral desirability. Of course in practice people differ about the exact ordering. But I've never heard anyone claim that in the moral sphere B > A, C > B and simultaneously A > C. If, in a moral scheme, you always find that A > B > C implies A > C, then you ought to be able to map it to a utility function.
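A small sketch of that mapping (my own illustration - the actions and preference data are made up): given transitive pairwise preferences, sorting by them and using each action's rank as its utility recovers a utility function.

```python
from functools import cmp_to_key

# Hypothetical pairwise preferences: (a, b) means "a is morally better than b".
# Transitivity (no cycles) is what makes the sort below well-defined.
BETTER = {
    ("tell the truth", "stay silent"),
    ("stay silent", "lie"),
    ("tell the truth", "lie"),
}

def compare(a: str, b: str) -> int:
    """Order actions from worst to best using only the pairwise preferences."""
    if (a, b) in BETTER:
        return 1   # a is better, so it sorts later (higher utility)
    if (b, a) in BETTER:
        return -1
    return 0

def utility_function(actions: list[str]) -> dict[str, int]:
    """Collapse a transitive ordering into numeric utilities (rank order)."""
    ordered = sorted(actions, key=cmp_to_key(compare))
    return {action: rank for rank, action in enumerate(ordered)}

print(utility_function(["lie", "tell the truth", "stay silent"]))
# {'lie': 0, 'stay silent': 1, 'tell the truth': 2}
```

An intransitive cycle (B > A, C > B, A > C) is exactly the case where no such rank assignment exists.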
The only thing I'd add is that this doesn't map onto a materialist consequentialism. If you were part of the c...
This seems very similar to the experiment where black people were shown to do worse on intelligence tests after being reminded that they were black.
So this experiment (in my view) doesn't really help to answer whether analytical thinking reduces religious belief. What it does show is that a lot of people make that association, and that is more than enough to cause the priming effect.
It's the process of changing your mind about something when new evidence comes your way.
The different jargon acts as a reminder that the process ought not to be an arbitrary one, but (in an ideal world, anyway) should follow the evidence in the way defined by Bayes' theorem.
I don't think there's any particular definition of what constitutes belief, opinion and cognitive structure. It's all just beliefs, although some of it might then be practised habit.
I think there are some confusions here about the mind's eye, and the way the visual cortex works.
First of all, I suggest you do the selective attention test - the well-known "Selective attention test" video will do.
This video illustrates the difference between looking at a scene and actually seeing it. Do pay close attention or you might miss something important!
The bottom line is that when you look at the outside world, thinking that you see it, your brain is converting the external world of light images into an internal coding of that image. It cheats, royally, when it ...
I am a programmer, and have been for about 20 years or so. My impressions here...
Diagrams and visual models of programs have typically been disappointing. Diagrams based on basic examples always look neat, tidy, intuitive and useful. When scaled up to a real example, the diagram often looks like the inside of a box of wires - lines going in all directions. Where the simple diagram showed simple lines joining boxes together, the complex one has the same problem as the wiring box - you have 40 different 'ends' of the lines, and it's a tedious job to pair th...
I think one ought to think about reductionism slightly separately from the particulars of the universe we actually live in. I think of it as rather like the opposite of a curiosity-stopper - instead of assuming that everything is ontologically basic and doesn't have underlying parts, we should assume that there may be underlying parts, and go look for them. Of course in our own universe that approach has been exceptionally fruitful.
The other part that works well is Occam's razor - the simplest explanation of any data set is not only the most lightweight wa...
The problem is that the utility isn't constant. If you, today, are indifferent to what happens on future Tuesdays, then you will also think it's a bad thing that your future self cares what happens on the Tuesday it finds itself in. You will therefore replace your current self with a different self that is indifferent to all Tuesdays, including the ones it finds itself in, thus preserving the goal that you have today.
If an omnipotent being wants you to believe something that isn't true, and is willing to use its omnipotence to convince you of the truth of that untruth, then there is nothing you can do about it. There is no observation that suffices to prove that an omnipotent being is telling the truth, as a malevolent omnipotence could make you believe literally anything - that you observed any given observation - or didn't, or that impossible things make sense, or sensible things are impossible.
This is one of a larger class of questions where one answer means that yo...