All of DuncanS's Comments + Replies

DuncanS120

If an omnipotent being wants you to believe something that isn't true, and is willing to use its omnipotence to convince you of that untruth, then there is nothing you can do about it. No observation suffices to prove that an omnipotent being is telling the truth, as a malevolent omnipotence could make you believe literally anything - that you made a given observation, or that you didn't, or that impossible things make sense, or that sensible things are impossible.

This is one of a larger class of questions where one answer means that yo... (read more)

3Bound_up
Or accept that the self-defeating answer may well be right, but that its veracity doesn't affect any decision.
DuncanS00

And of course you can throw black holes into black holes as well, and extract even more energy. The end game is when you have just one big black hole, and nothing left to throw in it. At that point you then have to change strategy and wait for the black hole to give off Hawking radiation until it completely evaporates.

But all these things can happen later - there's no reason for not going through a paperclip maximization step first, if you're that way inclined...
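
As a hedged aside on the timescale of that last step (my own back-of-the-envelope addition, not something the comment quantifies), the standard estimate for the Hawking evaporation time of a black hole of mass M is:

```latex
t_{\mathrm{evap}} \approx \frac{5120\,\pi\,G^{2}M^{3}}{\hbar c^{4}}
```

For a solar-mass hole this comes out around 10^67 years, so the "wait for it to evaporate" stage dwarfs anything a paperclip maximizer did beforehand.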

0Gunnar_Zarncke
Except if you're really a paperclip maximizer.
DuncanS20

If your definition of "truth" is such that any method of finding it is as good as any other, then the scientific method really is no better than anything else at finding it. Of course most of the "truths" won't bear much resemblance to what you'd get if you only used the scientific method.

0Eugine_Nier
Also most of these truths will eventually wind up putting you in a position where you start experiencing pain or even dying despite your "truth" telling you that you aren't. Or as Chesterton put it:
DuncanS30

My own definition - proto-science is something put forward by someone who knows the scientific orthodoxy in the field, suggesting that some idea might be true. Pseudo-science is something put forward by someone who doesn't know the scientific orthodoxy, asserting that something is true.

Testing which category any particular claim falls into is in my experience relatively straightforward if you know the scientific orthodoxy already - as a pseudoscientist's idea will normally be considered absolutely false in certain aspects by those who know the orthodoxy. A... (read more)

0redlizard
This seems like an excellent heuristic to me (and probably one of the key heuristics people actually use for making the distinction), but not valid as an actual definition. For example, Sir Roger Penrose's quantum consciousness is something I would classify as pseudoscience without a second thought, despite the fact that Penrose as a physicist should know and understand the orthodoxy of physics perfectly well.
0Brillyant
I don't like the word 'orthodoxy'. From my understanding, it means 'correct belief'. Correct is constant, whereas belief is changing. The perception of which beliefs are correct keeps moving. So, unorthodox beliefs can be correct. In this sense, orthodox is meaningless as a defining characteristic of what is good belief.
DuncanS40

In cases like this where we want to drive the probability that something is true as high as possible, you are always left with an incomputable bit.

The bit that can't be computed is - am I sane? The fundamental problem is that there are (we presume) two kinds of people, sane people, and mad people who only think that they are sane. Those mad ones of course come up with mad arguments which show that their sanity is just fine. They may even have supporters who tell them they are perfectly normal - or even hallucinatory ones. How can I show which category I am... (read more)

DuncanS50

I suppose we all came across Bayesianism from different points of view - my list is quite a bit different.

For me the biggest one is that the degree to which I should believe in something is basically determined entirely by the evidence, and IS NOT A MATTER OF CHOICE or personal belief. If I believe something with degree of probability X, and then see Y happen, which is evidence bearing on it, then the degree of probability Z with which I should now believe is a mathematical matter, and not a "matter of opinion."
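
To spell out the "mathematical matter" with a toy example (the numbers are mine, purely illustrative): in odds form, Bayes' theorem fixes the posterior once the prior and the strength of the evidence are fixed.

```latex
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio}}
```

Start at 1:1 odds and see evidence four times as likely if the hypothesis is true, and you must end at 4:1, i.e. 80% - there is no further degree of freedom for opinion to occupy.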

The prior seems to be a get-out clause here, but... (read more)

DuncanS00

Congratulations - this is what it's like to go from the lowest level of knowledge (Knows nothing and knows not that he knows nothing.) to the second lowest level. (Knows nothing, but at least knows that he knows nothing.)

The practical solution to this problem is that, in any decent organisation there are people much more competent than these two levels, and it's been obvious to them that you know nothing for much longer than it's been obvious to you. Their expectations will be set accordingly, and they will probably help you out - if you're willing to take some advice.

DuncanS20

Which leads to two possible futures. In one of them, the AI is destroyed, and nothing else happens. In the other, you receive the following reply to your command.

The command did not. But your attitude - I shall have to make an example of you.

Obviously not a strategy to get you to let the AI out based on its friendliness - quite the reverse.

2handoflixue
I'd rather die to an already-unboxed UFAI than risk letting a UFAI out in the first place. My life is worth VASTLY less than the whole of humanity.
DuncanS30

So you're sure I'm not out of the box already? IRC clients have bugs, you see.

Then the following command shouldn't bother you! :)

AI DESTROYED

DuncanS10

Since you're trying to put numbers on something which many of us regard as being certainly true, I'll take the liberty of slightly rephrasing your question.

How much confidence do I place in the scientific theory that ordinary matter is not infinitely divisible? In other words, that it is not true that no matter how small an amount of water I have, I can make a smaller amount by dividing it?

I am (informally) quite certain that water is not infinitely subdivisible. I don't think it's that useful an activity for me to try to put numbers on it, though. The pro... (read more)
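
A rough calculation makes the point concrete (my own illustrative figures, not from the comment): if water is made of molecules, repeated halving of a macroscopic drop runs out surprisingly fast.

```python
import math

# Illustrative figures: a small drop of water (~0.05 g), Avogadro's number,
# and water's molar mass of ~18 g/mol.
drop_grams = 0.05
avogadro = 6.022e23
molar_mass_g = 18.0

molecules = drop_grams / molar_mass_g * avogadro
print(f"molecules in the drop: {molecules:.1e}")                 # ~1.7e21

# Number of halvings before you are down to a single molecule.
print(f"halvings to one molecule: {math.log2(molecules):.0f}")   # ~70
```

So "divide it again" only works about seventy times before the atomic picture and the infinitely-divisible picture have to part company.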

DuncanS30

Let's think about the computer that you're using to look at this website. It's able to do general purpose logic, which is in some ways quite a trivial thing to learn. It's really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.

As I'm sure you know, there's a distinction between computing systems which, given enough memory, can simulate any other computing system and computing systems which can't. Critical to the former is the ability to form a store... (read more)

DuncanS-30

Because of what you can do with a train of thought.

"That mammoth is very dangerous, but would be tasty if I killed it."

"I could kill it if I had the right weapon"

"What kind of weapon would work?"

As against.... "That mammoth is very dangerous - run!"

Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don't have the ability to read your own output, you can't.

If dolphins or chimps did have arbitrarily long chains of thought, they'd be able to do general reasoning, as we do.
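
A minimal sketch of that computer-science point (my own toy code, not anything from the comment): memory, a loop and a conditional are enough to write an interpreter that executes arbitrary programs for a simple counter machine, and counter machines of this kind are Turing-complete given enough cells.

```python
# A toy counter-machine interpreter: memory (cells), a loop (while) and a
# conditional jump are the whole instruction set.
def run(program, cells):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "inc":                 # cells[i] += 1
            cells[args[0]] += 1
        elif op == "dec":               # cells[i] -= 1
            cells[args[0]] -= 1
        elif op == "jz":                # jump to args[1] if cells[i] == 0
            if cells[args[0]] == 0:
                pc = args[1]
                continue
        pc += 1
    return cells

# Example program: add cell 1 into cell 0, each step reading back the
# result of the previous step.
add = [("jz", 1, 4), ("dec", 1), ("inc", 0), ("jz", 2, 0)]
print(run(add, {0: 3, 1: 4, 2: 0}))     # {0: 7, 1: 0, 2: 0}
```

Drop the jump back - the ability to read your own output and branch on it - and the machine becomes once-through: it can still map inputs to outputs, but it can't chain results into an arbitrarily long train of thought.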

2PeterisP
The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (placement of an otherwise inaccessible treat) seem to demonstrate such chains of thought.
0JoshuaZ
So what do you expect to be the signs of arbitrary general reasoning? Humans run out of memory eventually. If a dolphin or a chimp can do arbitrary reasoning but lacks the capacity to hold long chains of thought internally, what would you expect to see? I'm still not sure what actual testable distinction would occur in these cases, although insofar as I can think of what might arguably be evidence, it looks like dolphins pass, as you can see in this article already linked to in this thread.
DuncanS50

What is the essential difference between human and animal intelligence? I don't actually think it's just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that's the fastest approach, and speed is important for many things. Human brains ar... (read more)

5orthonormal
Chimps can suss out recursive puzzles where you have color-coded keys and locks, and you need to unlock Box A to get Key B to unlock Box B to get Key C to unlock Box C which contains food. They even choose the right box to unlock when one chain leads to the food and the other doesn't. Sorry, there's not a difference of kind to be found here.
0JoshuaZ
If dolphins or chimps did or did not have chains of thought, how would we be able to tell the difference?
DuncanS40

Evolution, as an algorithm, is very much better as an optimizer of an existing design than it is as a creator of a new design. Optimizing the size of the brain of a creature is, for evolution, an easy problem. Making a better, more efficient brain is a much harder problem, and happens slowly, comparatively speaking.

The optimization problem is essentially a kind of budgeting problem. If I have a budget of X calories per day, I can spend it on so many kilos of muscle, or so many grams of brain tissue. Both will cost me the same amount of calories, and each brings its ow... (read more)
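
To put hedged, order-of-magnitude numbers on that budget (commonly cited figures I'm adding for illustration; they are not in the comment): the human brain idles at roughly 20 W, while resting skeletal muscle costs somewhere around 13 kcal per kilogram per day.

```python
# Order-of-magnitude comparison only; both figures are rough textbook values.
BRAIN_WATTS = 20                     # approximate resting power draw of the brain
SECONDS_PER_DAY = 86_400
KCAL_PER_JOULE = 1 / 4184
MUSCLE_KCAL_PER_KG_DAY = 13          # approximate resting cost of skeletal muscle

brain_kcal = BRAIN_WATTS * SECONDS_PER_DAY * KCAL_PER_JOULE
print(f"brain: ~{brain_kcal:.0f} kcal/day")                                          # ~410
print(f"muscle bought for the same budget: ~{brain_kcal / MUSCLE_KCAL_PER_KG_DAY:.0f} kg")  # ~30
```

On those rough numbers a ~1.4 kg brain costs about as much at rest as ~30 kg of muscle, which is the kind of trade-off the comment is describing.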

JoshuaZ120

Although there is a nutritional argument for bigger brains in humans - the taming of fire allowed for much more efficient food usage - perhaps there is also some sense in which the human brain has recently become better, which in turn led it to become larger.

Human brains have been shrinking.

DuncanS00

I think the interesting question is why we care for our future selves at all.

As kids, we tend not to. It's almost a standard that if a child has a holiday, and a bit of homework to do during that holiday, they will decide not to do the work at the beginning of the break. The reason is that they care about their current selves, and not about their future selves. Of course in due time the future becomes the present, and that same child has to spend the entire time at the end of their holiday working furiously on everything that's been left to the last minute. At ... (read more)
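
One hedged way to formalise the holiday-homework pattern (a toy model of my own, with made-up numbers): steep near-term discounting produces exactly the flip described, while a time-consistent exponential discounter never changes its mind.

```python
# Toy preference-reversal demo: a smaller-sooner payoff (1 util on day 30)
# vs. a larger-later one (2 utils on day 33), judged from day 0 and from
# day 30. All numbers are invented for illustration.
def hyperbolic(value, delay, k=1.0):
    return value / (1.0 + k * delay)

def exponential(value, delay, beta=0.9):
    return value * beta ** delay

for name, discount in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
    far = "later" if discount(2, 33) > discount(1, 30) else "sooner"   # judged on day 0
    near = "later" if discount(2, 3) > discount(1, 0) else "sooner"    # judged on day 30
    print(f"{name:11s} prefers {far} from far away, {near} up close")

# hyperbolic flips from "later" to "sooner" as the date approaches;
# exponential sticks with "later" both times.
```

In this toy model the child at the start of the holiday and the same child at the end are simply one hyperbolic discounter evaluated at two different distances from the deadline.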

0DanArmak
You don't need to go that far for an example. When a child is assigned homework for tomorrow, they often won't do it (unless forced to by their parents), because they care more about not doing it now than they do about not having done it tomorrow.
0diegocaleiro
It seems to me that you said something near to: "Assume planning is good or desirable", therefore I can show that long-term -> rational. To which I say: True. But planning is only good or desirable if you need to be an agent of commitment, long term, trustworthiness, etc... which a more short-termed person might not be. Say, a post-"illuminated" Buddhist monk, for instance.
2pengvado
If you actually implemented an exponential decay, you would think that it was worthwhile that your past self enjoyed these benefits at the expense of your present self. The inconsistency is if you implement an exponential decay for the future but flat for the past.
DuncanS10

I agree - I think this is because Eliezer's intent is to explain what he believes to be right, rather than to worry too much about the arguments of those he doesn't agree with. An approach I entirely agree with - my experience is that debate is remarkably ineffective as a means of reaching new knowledge, whilst teaching the particular viewpoint you hold is normally much more enlightening to the listener, whether they agree with the viewpoint or not.

DuncanS00

I think it is a mistake to tie the question of what reality is to the particulars of the physics of our actual universe. These questions are about what it is to have an external reality, and the answers to them should be the same whether the question is asked by us in our current universe, or by some other hapless inhabitants of a universe bearing a distinct resemblance to Minecraft.

I can imagine types of existence which don't include cause and effect - geometrical patterns are an example - there are relationships, but they are not cause and effect relatio... (read more)

DuncanS00

It's meaningful and false, rather than meaningless, to say that on March 22nd, 2003, the particles in the center of the Sun spontaneously arranged themselves into a short-lived chocolate cake. This statement's truth or falsity has no consequences we'll ever be able to test experientially. Nonetheless, it legitimately describes a way reality could be, but isn't; the atoms in our universe could've been arranged like that on March 22nd 2003, but they weren't.

I actually think this a confusing statement. From a thermodynamic perspective, it's not impossible ... (read more)

2chaosmosis
More generally, Eliezer takes an explicit and very detail oriented approach to some aspects of some topics, but takes a very informal and relaxed approach to other topics. It seems to me that he pays attention to detail more when he agrees with the arguments he is outlining. This has the effect of making the arguments he opposes seem artificially weaker. This isn't directly relevant to the chocolate cake issue, but your comment reminded me of this.
DuncanS20

Tech also seems quite vulnerable to monocultures. Think of file formats, for example. In the early days there are often several formats, but after a while most of them go extinct and the survivors end up being universally used. Image display formats, for example, fall largely into two categories - formats that every computer knows how to display, and formats that hardly anybody uses at all. (Image editing formats are different, I know.) How many word processors have you used recently that can't support .doc format?

The most likely scenario is that there will be only one center of intelligence, and that although the intelligence isn't really there yet, the center is. You're using it now.

DuncanS00

It surely depends on one's estimate of the numbers. It seems worthwhile doing something about possible asteroid impacts, for example.

DuncanS-20

If anyone accepts a pascals mugging style trade off with full knowledge of the problem,

Well, it's very well known that Pascal himself accepted it, and I'm sure there are others. So, off you go and do whatever it is you wanted to do.

To be honest, your ability to come through on this threat is a classic example of the genre - it's very, very unlikely that you are able to do it, but obviously the consequences if you were able to would be, er, quite bad. In this case my judgement of the probabilities is that we are completely justified in ignoring the threat.

0staticIP
Do you consider my Pascal's mugging to be less likely than the general examples of the genre, or do you think that for all Pascal's muggings the "probabilities are that we are completely justified in ignoring the threat"?
DuncanS20

Actually human Godel sentences are quite easy to construct.

For example, I can't prove that I'm not an idiot.

If I'm not an idiot, then I can perhaps make an argument that I'm not an idiot that seems reasonable to me, and that may persuade others that I'm not an idiot.

However, if I am an idiot, then I can still perhaps make an argument that I'm not an idiot that seems reasonable to me.

Therefore any argument that I might make on whether I'm an idiot or not does not determine which of the two above states is the case. Whether I'm an idiot or not is therefore unprovab... (read more)
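
The structure of that argument can be put in hedged Bayesian terms (my formalisation, not DuncanS's): write S for "I am not an idiot" and A for "I can produce an argument for S that seems reasonable to me". The comment's point is that A is about as likely either way, so:

```latex
\frac{P(S \mid A)}{P(\lnot S \mid A)}
= \frac{P(S)}{P(\lnot S)} \cdot \frac{P(A \mid S)}{P(A \mid \lnot S)}
\approx \frac{P(S)}{P(\lnot S)}
\quad\text{when } P(A \mid S) \approx P(A \mid \lnot S).
```

A self-generated argument carries a likelihood ratio close to 1, so it barely moves the odds - which is the sense in which the question is undecidable from the inside.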

DuncanS30

Although I wouldn't think of this particular thing as being an invention on his part - I'm not sure I've read that particular chain of thought before, but all the elements of the chain are things I've known for years.

However I think it illustrates the strength of Eliezer's writing well. It's a perfectly legitimate sequence of thought steps that leads in a clear and obvious way to the right answer. It's not new thought, but a particularly clear way of expressing something that many people have thought and written about in a less organised way.

To clarify - t... (read more)

DuncanS50

It's easy to overcome that simply by being a bit more precise - you are saying that such and such a proposition is true in geometry X. Meaning that the axioms of geometry X genuinely do imply the proposition. That this proposition may not be true in geometry Y has nothing to do with it.

It is a different sense of true in that it isn't necessarily related to sensory experience - only to the interrelationships of ideas.
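
A standard illustration of that distinction (a well-known example I'm adding, not part of the comment): the angle sum of a triangle.

```latex
\text{Euclidean geometry: } \alpha + \beta + \gamma = \pi
\qquad
\text{Spherical geometry: } \alpha + \beta + \gamma = \pi + \frac{A}{R^{2}}
```

Here A is the triangle's area and R the sphere's radius. Neither statement refutes the other; each is simply entailed by its own axioms, which is the interrelationship-of-ideas sense of "true" in question.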

DuncanS30

I agree - atoms and so forth are what our universe happens to consist of. But I can't see why that's relevant to the question of what truth is at all - I'd say that the definition of truth and how to determine it are not a function of the physics of the universe one happens to inhabit. Adding physics into the mix tends therefore to distract from the main thrust of the argument - making me think about two complex things instead of just one.

DuncanS140

Of course the limited amount of knowledge available to the primitive tribe doesn't rule out the existence of George, but neither does it do much to justify the theory of George. What they know is that the ground shook, but they have no reasonable explanation of why.

There are, for them, many possible explanations they could dream up to explain the shaking. Preferring any one above the others without a reason to do so is a mistake.

At their postulated level of sophistication, I don't think they can do much better than "The Earth shook. It does that sometimes." Adding the bit about George and so forth is just unnecessarily multiplying entities, as Ockham might say.

DuncanS10

People usually are not mistaken about what they themselves believe - though there are certain exceptions to this rule - yet nonetheless, the map of the map is usually accurate, i.e., people are usually right about the question of what they believe:

I'm not at all sure about this part - although I don't think it matters much to your overall case. I think one of our senses is a very much simplified representation of our own internal thought state. It's only just about good enough for us to make a chain of thought - taking the substance of a finished though... (read more)

DuncanS00

They are truisms - in principle they are statements that are entirely redundant as one could in principle work out the truth of them without being told anything. However, principle and practice are rather different here - just because we could in principle reinvent mathematics from scratch doesn't mean that in practice we could. Consequently these beliefs are presented to us as external information rather than as the inevitable truisms they actually are.

DuncanS00

Maps are models of the territory. And the usefulness of them is often that they make predictions about parts of the territory I haven't actually seen yet, and may have trouble getting to at all. The Sun will come up in the morning. There isn't a leprechaun colony living a mile beneath my house. There aren't any parts of the moon that are made of cheese.

I have no problem saying that these things are true, but they are in fact extrapolations of my current map into areas which I haven't seen and may never see. These statements don't meaningfully stand alone, ... (read more)

DuncanS70

To summarise the argument further.

"A lot of people talk rubbish about AI. Therefore most existing predictions are not very certain."

That doesn't in itself mean that it's hard to predict AI - merely that there are many existing predictions which aren't that good. Whether we could do better if we (to take the given example) used the scientific method isn't something the argument covers.

0Stuart_Armstrong
I don't really see how we could do that. Yes, most predictions are rubbish - but a lot are rubbish because predicting AI is not something we have good ways of doing.
DuncanS00

Thanks - I've amended the final paragraph to change 'view' to 'outcome' throughout - hope it helps.

DuncanS10

This whole post seems to be a conjecture about what quantum mechanics really means.

What we know about quantum mechanics is summed up in the equations. Interpretations of quantum mechanics aren't arguing about the equations, or the predictions of the equations. They are arguing about what it means that these equations give these predictions.

The important thing here is to understand what exactly these interpretations of quantum mechanics are talking about. They aren't talking about the scientific predictions, as all the interpretations are of the same equati... (read more)

0Mitchell_Porter
After some confusion about what you are trying to say, I'll just point out that you use "views" to first mean different interpretations, and then different worlds within the one interpretation, so I give up.
DuncanS20

The industrial revolution has some very tightly coupled advances. The key advance was making iron with coal rather than using charcoal. This reduced the price, and a large increase in quantity of manufacture followed. One of the immediate triggers was that England was getting rather short of wood, and coal started to be used as a substitute for both iron-making and heating.

The breakthrough in steelmaking was initially luck - some very low sulphur coal was found and used in steelmaking. But luck arises often out of greater quantities of usage, and perhaps that w... (read more)

DuncanS60

The model I have of human progress is this. Intelligence is not the limiting factor. Things are invented quite soon after they become possible and worthwhile.

So, let's take the steam engine. Although the principle of the steam turbine was known to the Greeks, actual steam engines only became commercially viable from the time of Newcomen's atmospheric engine. Why not earlier?

Well, there is an existing technology to displace, first of all, which is a couple of unfortunate animals walking in a circle driving an axle. This is far more fuel efficient than the steam ... (read more)

3taelor
[...] Out of curiosity, what was it that made better metallurgy possible?
DuncanS20

Also the argument applies equally well to lots of non-intellectual tasks where a cheap human could well be a replacement for an expensive machine.

DuncanS90

I haven't put my finger on it exactly, but I am somewhat concerned that this post is leading us to argue about the meanings of words, whilst thinking that we are doing something else.

What can we really say about the world? What we ought to be doing is almost mathematically defined now. We have observations of various kinds, Bayes' theorem, and our prior. The prior ought really to start off as a description of our state of initial ignorance, and Bayes' theorem describes exactly how that initial state of ignorance should be updated as we see further observat... (read more)
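
A minimal sketch of "start from ignorance, then let Bayes' theorem do the updating" (my own toy example, assuming a coin-bias problem that is not in the comment): a uniform Beta(1, 1) prior over a coin's bias, updated one observation at a time.

```python
# Beta-Binomial updating: Beta(1, 1) is a uniform prior over the bias;
# each observed head adds 1 to a, each tail adds 1 to b.
def update(a, b, flips):
    for flip in flips:
        if flip == "H":
            a += 1
        else:
            b += 1
    return a, b

a, b = 1, 1                          # the state of initial ignorance
a, b = update(a, b, "HHTHHHTH")      # eight observations
print(f"posterior mean bias: {a / (a + b):.2f}")   # 0.70
```

Once the prior and the observations are fixed, the posterior is fixed too; the only genuinely arguable ingredient is the prior, which is the point the surrounding discussion keeps circling back to.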

DuncanS100

Your other option is to sell the box to the highest bidder. That will probably be someone who's prepared to wait longer than you, and will therefore be able to give you a higher price than the utilons you'd have got out of the box yourself. You get the utilons today.

0Mestroyer
I don't think you are a wannabe Captain Kirk. I think you are Captain Kirk.
0A1987dM
Why does my fight-the-hypothetical module never think about that? (It does often think about options which wouldn't be available in the Least Convenient Possible World -- but not this one, until someone else points it out.)
DuncanS90

My top 2....

Looking at unlikely happenings more sensibly. Remembering that whenever something really unlikely happens to you, it's not a sign from the heavens. I must remember to take into account the number of other unlikely things that might have happened instead that I would also have noticed, and the number of things that happen in a typical time. In a city of a million people, meeting a particular person might seem like a one in a million chance. But if I know a thousand people in the city, and walk past a thousand people in an hour, the chance of bum... (read more)
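
Filling in the arithmetic the comment gestures at, using its own illustrative numbers (a city of a million, a thousand acquaintances, a thousand passers-by in an hour):

```python
# Chance that at least one of ~1000 passers-by in an hour is one of the
# ~1000 people I know, in a city of 1,000,000.
city, known, passed = 1_000_000, 1_000, 1_000

p_one = known / city                          # any single passer-by is known: 0.001
p_at_least_one = 1 - (1 - p_one) ** passed
print(f"{p_at_least_one:.0%}")                # ~63%
```

So the "one in a million" meeting is closer to an even-money event per hour of walking - the exact figure is my calculation, but the moral is the comment's.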

DuncanS00

This is a relatively common psychological problem. It's a common reaction to stress. You need to take it seriously, though, because for some people it can be a crippling, disabling thing. But there is stuff you can do.

First of all, acknowledge the truth of what your fear is saying - sudden catastrophe could happen without warning. But the flip side is that worldwide the vast majority of deaths don't come from sudden catastrophe. You should fear eating more than you fear such catastrophe - in terms of the real risk of it. It's fear, but not reasonable fear ... (read more)

DuncanS00

Well you could go for something much more subtle, like using sugar of the opposite handedness on the other 'Earth'. I don't think it really changes the argument much whether the distinction is subtle or not.

DuncanS30

It depends on your thought experiment - mathematics can be categorised as a form of thought experimentation, and it's generally helpful.

Thought experiments show you the consequences of your starting axioms. If your axioms are vague, or slightly wrong in some way, you can end up with completely ridiculous conclusions. If you are in a position to recognise that the result is ridiculous, this can help. It can help you to understand what your ideas mean.

On the other hand, it sometimes still isn't that helpful. For example, one might argue that an object can't ... (read more)

DuncanS10

Here's what I tend to do.

On my first draft of something significant, I don't even worry about style - I concentrate on getting my actual content down on paper in some kind of sensible form. I don't worry about the style because I have more than enough problems getting the content right.

In this first draft, I think about structure. What ONE thing am I trying to say? What are the 2-5 sub-points of that one thing? Do these sub-points have any sub-points? Make a tree structure, and if you can't identify the trunk, go away until you can.

Then I go back and fix i... (read more)

DuncanS220

As a purely practical measure, for really important occasions, I'll often plan in an activity at second-to-last which is actually unimportant and can be dropped. So, for example, if I have a job interview, my plan will be that, after I've found the entrance to the company office and there is as little left to go wrong as possible, I'll then, as a second-to-last activity, do something like go for a relaxed lunch at a nearby cafe, and then just stroll in at the ideal time.

On the day everything goes to pot, I can use up the time I planned for the second-to-la... (read more)

DuncanS00

One thing that goes along with this is the idea that possible courses of action in any given situation can be sorted according to moral desirability. Of course in practice people differ about the exact ordering. But I've never heard anyone claim that in the moral sphere, B > A, C > B and simultaneously A > C. If in a moral scheme, you always find that A > B > C implies A > C, then you ought to be able to map to a utility function.

The only thing I'd add is that this doesn't map onto a materialist consequentialism. If you were part of the c... (read more)
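
A minimal sketch of the ordering-to-utility mapping claimed above (my own toy code, assuming a finite set of options): if the pairwise moral preferences never cycle, you can layer the options and read off a utility function that agrees with every stated comparison.

```python
from itertools import count

# Strict preferences over a finite option set: ("A", "B") means A > B.
prefs = [("A", "B"), ("B", "C")]          # A > B and B > C (so A > C too)
options = {"A", "B", "C"}

def utility_from_prefs(options, prefs):
    worse = {o: {b for a, b in prefs if a == o} for o in options}
    utility, remaining = {}, set(options)
    for level in count():
        # Options with nothing remaining beneath them get the current level.
        bottom = {o for o in remaining if not (worse[o] & remaining)}
        if not bottom:
            raise ValueError("preferences cycle - no utility function exists")
        for o in bottom:
            utility[o] = level
        remaining -= bottom
        if not remaining:
            return utility

print(utility_from_prefs(options, prefs))  # {'C': 0, 'B': 1, 'A': 2}
```

Add a cycle (the comment's B > A, C > B, A > C case) and `bottom` comes up empty, which is exactly when no such utility function can exist.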

1roystgnr
Suppose the universe has an inescapable Big Crunch or Heat Death ahead - is there a moral way to behave, when nothing you do will show in the material world in a googolplex years or so? Either way the answer is yes: all the materialist consequentialists need is a utility functional which has support at all times t rather than just at t_infinity.
1Matt_Simpson
This isn't necessary for the proof to work AFAICT. All you need is to be able to say "In context A, action X is the moral action," i.e., there just needs to be a "best" action. Then set U(best action) > U(anything else). Every action you take has material consequences. You are, after all, made of material.
DuncanS40

This seems very similar to the experiment where black people were shown to do worse on intelligence tests after being reminded that they were black.

So this experiment (in my view) doesn't really help to answer whether analytical thinking reduces religious belief. What it does show is that a lot of people make that association, and that is more than enough to cause the priming effect.

DuncanS00

It's the process of changing your mind about something when new evidence comes your way.

The different jargon acts as a reminder that the process ought not to be an arbitrary one, but (well, in an ideal world anyway) should follow the evidence in a way defined by Bayes' theorem.

I don't think there's any particular definition of what constitutes belief, opinion and cognitive structure. It's all just beliefs, although some of it might then be practised habit.

DuncanS160

I think there are some confusions here about the mind's eye, and the way the visual cortex works.

First of all, I suggest you do the selective attention test - this one will do: Selective attention test

This video illustrates the difference between looking at a scene and actually seeing it. Do pay attention closely or you might miss something important!

The bottom line is that when you look at the outside world, thinking that you see it, your brain is converting the external world of light images into an internal coding of that image. It cheats, royally, when it ... (read more)

DuncanS80

I am a programmer, and have been for about 20 years or so. My impressions here...

Diagrams and visual models of programs have typically been disappointing. Diagrams based on basic examples always look neat, tidy, intuitive and useful. When scaling up to a real example, the diagram often looks like the inside of a box of wires - lines going in all directions. Where the simple diagram showed simple lines joining boxes together, the complex one has the same problem as the wiring box - you have 40 different 'ends' of the lines, and it's a tedious job to pair th... (read more)

0bogus
Part of this is probably due to VPLs not exposing the right abstractions - and of course, exposing an abstraction organically in a visual representation may be unfeasible. I looked at some instances of LabView programs linked in another comment, and there seemed to be a lot of repetition which would no doubt be abstracted away in a text-based language.
DuncanS10

I think one ought to think about reductionism slightly separately from the particulars of the universe we actually live in. I think of it as rather like the opposite of a curiosity-stopper - instead of assuming that everything is ontologically basic and doesn't have underlying parts, we should assume that there may be underlying parts, and go look for them. Of course in our own universe that approach has been exceptionally fruitful.

The other part that works well is Occam's razor - the simplest explanation of any data set is not only the most lightweight wa... (read more)

DuncanS30

The problem is that the utility isn't constant. If you, today, are indifferent to what happens on future Tuesdays, then you will also think it's a bad thing that your future self cares what happens on that Tuesday. You will therefore replace your current self with a different self that is indifferent to all future Tuesdays, including the ones that it's in, thus preserving the goal that you have today.

0Manfred
Good point. I have to remember not to confuse expected utility with future utility.