
Comment author: linkhyrule5 25 October 2013 07:15:56PM 1 point [-]

... Um, the default assumption is that any given hypothesis is wrong, you can't get your priors to converge otherwise. Omnipotent intelligent beings are sufficiently complex that I'd need a few megabits in their favor before I gave them parity with physics.

Comment author: deepthoughtlife 25 October 2013 10:42:10PM 0 points [-]

No, that is not the default way to handle a hypothesis. The default is to ask: why should I believe this? If they have reasons, you look into it, assuming you care. If the reasons are false, expound upon that. If they are not false, you cannot simply claim that, since the proof is insufficient, it is false.

The point of my argument was that there was very little evidence either way. I implied that the truth of the hypothesis would have no certain effect upon the world. Thus it is untestable, and completely unrelated to science. Therefore, any statement that it is false needs either a logical proof (covering all possible worlds), or must rest on faith.

The physics analogy was about the other subject: where we set the thresholds. In this case, even if we set them very low, we can say nothing. Your response makes more sense as an answer to the question of whether it is a belief you should personally adopt, not whether it is true.

Side Note: A few megabits? Really? You think you are that close to infallible? I know I'm not, even on logical certainties.

Comment author: AlanCrowe 23 October 2013 06:58:38PM 3 points [-]

Each compartment has its own threshold for evidence.

The post reminded me of Christians talking bravely about there being plenty of evidence for their beliefs. How does that work?

  • When evidence is abundant we avoid information overload by raising the threshold for what counts as evidence. We have the luxury of taking our decisions on the basis of good quality evidence and the further luxury of dismissing mediocre evidence as not evidence at all.

  • Evidence is seldom abundant. Usually we work with a middling threshold for evidence, doing the best we can with the mediocre evidence that the middle threshold admits to our councils, and accepting that we will sometimes do the wrong thing due to misleading evidence.

  • When evidence is scarce we turn our quality threshold down another notch, so we still have evidence, even if it is just a translation of a copy of an old text that is supposed to be eyewitness testimony but was written down one hundred years after the event.

I think that the way it works with compartmentalization is that we give each compartment its own threshold. For example, an accountant is doing due diligence work on the prospectus for The Plastic Toy Manufacturing Company. It looks like a good investment: they have an exclusive contract with Disney for movie tie-ins. Look, it says so, right there in the prospectus. Naturally the accountant writes to Disney to confirm this. If Disney do not reply, that is a huge red flag.

On Sunday the accountant goes to Church. They have a prospectus, called the Bible, which makes big claims about their exclusive deal with God. When you pray to God to get confirmation, He ignores you. Awkward!

People have a sense of what it is realistic to expect by way of evidence which varies between the various compartments of their lives. In every compartment their beliefs are comfortably supported by a reasonable quantity and quality of evidence relative to the standard expected for that compartment.

Should we aim at a uniform threshold for evidence across all compartments? That idea seems too glib. It is good to be more open and trusting in friendship and personal relationships than in business. One will not get far in artistic creation if one doubts one's own talent to the extent of treating it like a dodgy business partner.

Or maybe having a uniform threshold is exactly the right thing to do. That leaves you aware that in important areas of your life you have little evidence and your posterior distributions have lots of entropy. Then you have to live courageously, trusting friends and lovers despite poor evidence and the risk of betrayal, trusting your talent and finishing your novel despite the risk that it is 1000 pages of unpublishable dreck.

Comment author: deepthoughtlife 25 October 2013 06:31:45PM 1 point [-]

A uniform threshold is in fact a very bad idea, because different areas legitimately do have different amounts of available evidence. For instance, the threshold in physics is vastly higher than in neurology, even though both are tremendously complicated, because it is much easier to perform the testing in physics, where we can simply devote more money to the task (building things such as the LHC just to check a few loose ends). If there is limited evidence, we still often have to come to a conclusion, and we need that conclusion to be right.

If we are talking about certain religious matters, there is virtually no evidence on either side. In fact, it may be that there cannot be a sufficient amount of evidence to determine the truth, no matter what threshold we set. I believe that this is true, which is why I am strongly agnostic.

In many ways, this is similar to being an atheist (I definitely do not believe in any specific god or religion), but strong atheism requires even more faith than being religious. An omnipotent being is not a logical contradiction, and such a being could produce any result whatsoever in your testing, so there is absolutely no way to prove the nonexistence of an omnipotent being. It is perhaps possible to disprove that the omnipotent being does certain kinds of things regularly, but then the apologists have the right to point out why your formulation doesn't apply to their god. At least the religious tend to admit the lack of evidence, and that they go by their faith.
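
To put that in numbers, here is a minimal Python sketch of my own (the figures and function name are made up for illustration, not anything from the discussion): if we treat the omnipotence hypothesis as assigning the same likelihood to every possible observation as its negation does, Bayes' rule in odds form never moves the odds, no matter how many tests we run.

    # Minimal sketch: Bayes' rule in odds form. If a hypothesis "explains"
    # every possible observation just as well as its negation, the likelihood
    # ratio is 1 and observing never shifts the odds. Numbers are arbitrary.

    def update_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
        """Posterior odds = prior odds * likelihood ratio."""
        return prior_odds * (p_obs_given_h / p_obs_given_not_h)

    prior = 0.01  # arbitrary prior odds for the hypothesis

    # An omnipotent being could have produced whatever we actually observe,
    # so the likelihood ratio is taken to be 1:
    print(update_odds(prior, 0.5, 0.5))  # 0.01 -- unchanged, however often we test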

Comment author: deepthoughtlife 28 June 2012 08:09:33AM 0 points [-]

I've always considered induction to be the most important part of reasoning. Without induction, it is impossible to actually have a premise in any way except the most hypothetical. While I enjoy coming up with unsupported premises and seeing where they go deductively, they are not at all useful if you can't figure out at some point what is actually the case. An argument is only sound if it is valid and the premises are true. Induction allows you to actually stop at some point and say which premises are true.

Occam's razor is an example of a good inductive tool. It isn't strictly correct, but it works pretty well. I agree strongly with the idea that you use the best of your current capability to determine how to improve. You just have to hope you are starting at a good enough position that it ends up going in the right direction.
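
As a rough illustration of the razor as an inductive tool, here is a minimal Python sketch of my own (the scoring rule, the penalty weight, and the numbers are illustrative assumptions, not a canonical formulation): score each hypothesis by how well it fits the data minus a penalty for its complexity, such as its description length, and prefer the highest score.

    # Minimal sketch of Occam's razor as a complexity penalty.
    # The scoring rule and penalty weight are illustrative assumptions.

    def occam_score(log_likelihood, description_length_bits, penalty_per_bit=1.0):
        """Higher is better: fit to the data minus a complexity penalty."""
        return log_likelihood - penalty_per_bit * description_length_bits

    # Two hypotheses that fit the observed data equally well:
    simple_h  = occam_score(log_likelihood=-10.0, description_length_bits=20)
    baroque_h = occam_score(log_likelihood=-10.0, description_length_bits=2000)

    print(simple_h > baroque_h)  # True: prefer the simpler one, all else being equal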

Comment author: deepthoughtlife 25 June 2012 07:49:40AM *  0 points [-]


As a strong agnostic, I must say I find the numbers given here amusing. Simply put, there is very little evidence either way, and it is highly likely that it is IMPOSSIBLE to have decent evidence either way about an omnipotent god. I believe that there is an infinitesimal possibility that there is any significant way to tell which way you should lean.[1] Therefore, I find these probabilities meaningless (but not uninteresting).

Now, probabilities on whether or not Zeus exists are much more doable. Within reason, the less powerful the god, the easier it should be to get evidence. At the extreme low end, a sufficiently advanced alien could truly be Prometheus.[2] We could prove he exists just by finding him.[3] Though that level of technology advantage would mean we might need to doubt the evidence anyway.

There are certain theologies that rule themselves out, but this is hardly a convincing argument against the remainder of them. It is true you should not unduly elevate religious beliefs merely because of the possibility that a particular one is true, but there is actual evidence in favor of them, such that people are not necessarily crazy to come to the exact opposite conclusion from the one the readers of this blog favor. The weighting of scant evidence can drastically skew the results, and the weighting is probably not rational on either side.[4] Additionally, it is clear that Christianity as a whole is not logically contradictory, because it has direct postulates that contradict the postulates used to show it is logically contradictory, which means you cannot add those postulates in if you want a deductive proof.[5]

[1] Honestly, I believe that the idea of having any sufficient evidence of whether or not an omnipotent being exists is a logical contradiction. There is no state of the universe which an omnipotent being would be unable to implement, and thus no state of the universe is evidence that an omnipotent being does not exist. (Technically, evidence for an omnipotent being could be gathered due to an extremely unlikely configuration of things, but similar evidence could be created by any sufficiently advanced being).

[2] That means he gave humans the advancement that was control over fire, and got punished for it by other sufficiently advanced beings. The punishment described in the legends could simply occur in VR, or in some grisly fashion.

[3] Evidence against him would be much harder, but perhaps sufficiently advanced aliens have been recording all of humanity the entire time, and have proof we discovered fire independently.

[4] This may be a slightly self-serving way of seeing things, but you can always include me in that statement if you think agnostics incorrectly weight the evidence.

[5] Their postulate: P. Your postulate: ~P. The contradiction therefore only means one of the postulates is wrong, not that the conclusion is wrong. One such postulate would be that an omnibenevolent being would not allow such suffering if they could help it, but theirs is that they would. There is no proof either way; suffering might sometimes be beneficial. We cannot even eliminate certain forms of suffering as possibly being beneficial overall. This is usually where the free will arguments start being brought up, but chaos theory can explain it as well.

Comment author: deepthoughtlife 25 June 2012 05:25:53AM 2 points [-]

It seems likely that the reason people dislike markets is simpler than has been postulated. Every person has pet issues, and the market will simply ignore most of them.* Thus, people blame the market for not getting what they want. They will wish to distort the market to get their specific item, that is, to destroy the market.** They won't agree on how to change it, but the destruction will be agreed upon.

*The market does not care about your issue unless there is an outlandish amount of money you are willing to pay, either alone or in aggregate.

**Even small distortions can vastly change the outcome in an unfavorable (to efficiency) direction. The recent quasi-depression can be blamed at least in part on such a distortion in the housing market.

Comment author: deepthoughtlife 27 May 2012 07:17:50AM 1 point [-]

I have no interest in evaluating languages based on how quickly they lead to money; only how they affect your ability to program. Additionally, I am not particularly experienced. I've only been programming for three or four years. Take my words with a grain of salt.

I remember one semester in college where I was using three separate programming languages at the same time: VB, Java, and Assembly (16-bit x86). Sometimes it led to a small amount of confusion, but it was still a good experience. I would suggest beginning a second language soon after achieving minimal competence with your first, but continuing to learn the first as well. Sticking too long to a single language encourages you to get into a rut and think that its way is the only way to do things. Go down as many paths as you are able to.

Java is not the most amazing language in the programming world, but it is an extremely useful one to learn. It has a C-style syntax without some of the more difficult aspects; in other words, you learn how to program in the traditional way, but more easily. It is relatively verbose and strict on some matters, but that teaches a methodical way of programming that is necessary for making good programs and that less strict languages do not teach. You can do anything in Java, and most of it is of reasonable difficulty. While you will need C++ to round out your C style, knowing Java is a very good stepping stone toward it. For learning how to program, it is an advantage that Java is compiled rather than interpreted (ignore that it compiles to bytecode), since that means you must have a coherent thought before writing it down. When just learning, winging it is not the best strategy. Complexity should be approached by the newbie programmer as quickly as they are able to handle it.

Prolog might be the most amazing programming language, but I wouldn't recommend it to someone just learning how to program, and it isn't particularly transferable. It does give a person a way to think about the problem directly in a logical manner. Algorithms can often be expressed in a purer form than in other languages, since Prolog uses a search procedure to actually get the answer rather than a series of commands. I would have liked to learn some algorithms in Prolog. As an example, factorial as a relation between N and its factorial F:

    factorial(0, 1).
    factorial(1, 1).
    factorial(N, F) :- N > 1, N1 is N - 1, factorial(N1, F1), F is N * F1.

This looks almost exactly like the algorithm as it would be described. In fact, it could be used to intuitively explain the algorithm perfectly to someone who didn't know what factorial was. Notes: Prolog operates on declared relations. The first line states the fact that the factorial of 0 is 1, and the second that the factorial of 1 is 1; the last clause says that for N > 1, F is N times the factorial of N - 1, and Prolog's search finds the values that make the relation hold. Disclaimer: As a (hopefully) just graduated CS student, I have written two different term papers and given a 36 minute presentation related to Prolog, but was only exposed to it these last few months, and have not yet had the chance to do a significant amount of programming in it. (19 units ftw.) I wish I had taken the opportunity to use it when I first heard of it a couple years ago; it was that or Lisp and I did Lisp.

On that note, Lisp. Definitely a good language, but it can be very difficult to learn, especially because the overabundance of parentheses obscures the fundamentally different approach to problems. I freely admit that my first attempt to learn Lisp went down in flames. Make sure you understand Lisp's fundamentals, because they are not the same as those of a more conventional programming language. It takes real dedication to properly learn Lisp. Still, Lisp is one of those languages I wish I had spent more time on. It is a different and excellent way to think about problems, and a very good language for expanding your horizons after you already know how to program. While it is possible to write in a C style in Lisp (Common Lisp at least), avoid that temptation if you really want to understand it. It is especially good for recursion, and you are right about the whole code-as-lists thing being an interesting perspective. I didn't really learn much about Lisp macros. Okay, fine. I wish I had done Prolog and Lisp.

If you want to be a good programmer, then you must learn assembly of some form. It would be utterly insane to try to learn programming with assembly (of course, some did, and I wouldn't be here on my computer if they hadn't), but understanding how the machine actually (logically) works is a huge step toward being a good programmer. A user can afford to know only what is done; a programmer needs to learn how it is done, and why. High-level languages will not make you a good programmer, even if they are what you end up using for your entire career. The two semesters I spent learning how the computer operates on a logical level were of extreme importance to me as a programmer, and assembly is an important part of that.

Comment author: TimFreeman 08 July 2011 11:13:35PM 7 points [-]

I have a fear that becoming skilled at bullshitting others will increase my ability to bullshit myself. This is based on my informal observation that the people who bullshit me tend to be a bit confused even when manipulating me isn't their immediate goal.

However, I do find it very useful to be able to authoritatively call out someone who is using a well-known rhetorical technique, and therefore I have found reading "Art of Controversy" to be very useful. The obviously useful skill is to be able to recognize each rhetorical technique and find a suitable retort in real time; the default retort is to name the rhetorical technique.

Comment author: deepthoughtlife 09 July 2011 07:32:54AM 2 points [-]

Why shouldn't you want to bullshit yourself? You'll get to believe you are the most successful man on earth, even after getting evicted. Your children will all be geniuses who will change the world, even after flunking out of high school. Your arguments will be untouchable, even after everyone else agrees you lost. Obviously, I believe said fear is highly legitimate, if the premise is true.

People who are talking bullshit do generally seem to be confused in my experience as well, and it seems highly likely that the BS is caused, at least in part, by that confusion. Some things done in an external setting do affect similar internal processes, but not all.

A (quick and dirty) inductive argument follows:

Premise 1: It is far easier to BS than to logically analyze and respond.
Premise 2: It is far faster to BS than to logically analyze and respond.
Premise 3: People prefer to do things that are easier, ceteris paribus.
Premise 4: People prefer to do things that are faster, ceteris paribus.
Premise 5: People very strongly do not want to be wrong.
Premise 6: Losing the argument is a significant proxy for being wrong.
Premise 7: Winning the argument is a significant proxy for being right.

(Intermediate) Conclusion 1: If BS wins you the argument, you will prefer BS to logical analysis and response.
(Intermediate) Conclusion 2: If BS loses you the argument, you will regard BS far more poorly as an option.
(Intermediate) Conclusion 3: Being good enough at BS to consistently win (necessarily avoid losing) arguments drastically increases the chance you will not resort to logical analysis and response at all.
Final Conclusion: If you BS to others, you will BS to yourself.


On the idea that it is useful to know when another is using one of the devices of blowing smoke, you are obviously correct, but it can be very tempting to misuse such knowledge simply to browbeat your opponent when they haven't actually done it. In a similar vein (though not directly on topic), sometimes a fallacy isn't really a fallacy in the precise context it appears in (i.e., sometimes the appeal to authority is legitimate in an argument, especially to settle a minor point).

I must say one thing on the idea behind all this. While the ends occasionally justify the means, the idea that rational ends are best served via irrational means is extraordinarily likely to be incorrect. More likely, an inability to properly argue your point should have you questioning your point instead.

Comment author: deepthoughtlife 07 April 2011 06:58:59AM 2 points [-]

So far as I can tell, the real issue with telling someone that the only important thing is quality is that it leads to a phenomenon known in some circles as "paralysis by analysis." For instance, a writer could spend a day debating whether a particular place needs a comma, and miss that the whole page is rubbish. In sports, it is often what is meant when someone is accused of "thinking too much." In football, a receiver might spend his time thinking about how to get around a defender once he has the ball, and forget to catch the ball.

Like Jeff Atwood, I am a programmer. Unlike Jeff Atwood, I do not have a Wikipedia entry - rightfully so. Also, unlike Jeff, I'm pretty new: unseasoned. So, unlike Jeff Atwood, I still remember the process of learning how to be a programmer.

As far as I can tell, this entry fits with my experiences so far in improving myself as a programmer. I didn't get better at it by theorizing about how to make a beautiful program; in fact, when I tried, I found out the basic truth every good programmer knows: "If you're just barely smart enough to write it, you are, by definition, not smart enough to debug it." I spent weeks thinking about it, getting nowhere, before I used a brute force technique to fix the trouble spot within hours, and still ended up with a pretty nice program.

I must take issue with what Jeff Atwood wrote, though. The vast majority of time in a nontrivial program is spent thinking, whether beforehand or while you're trying to parse the kludge that steadfastly refuses to work. The kludge, insoluble mass that it is, can be immensely harder to fix than to replace, but the natural impulse is always to fix. It isn't immediately obvious, but the solution you have written has an immense hold on your mind, especially since the actual act of entering it took considerable time and effort, so some due diligence in the initial decision is highly warranted.

Many programmers love iteration, which could be described analogically as follows. Take a lump of clay that is to be a statue of a man. Make it the size of the man. First iteration complete. Form arms and legs and a head roughly where they go. Iteration two complete. Carefully delineate the broad details, such as knees, elbows, wrists, neck, ankles, and the shape of the torso. Iteration three complete. Make clear the placement of medium details, such as fingers, toes, ears, eyes, mouth, and nose. Iteration four complete. Add the fine details, roughly: delineate joints, add nails, add scars, add hair. Iteration five complete. Make the joints, and nails, and scars, and hair, and all the other little details just about right. Iteration six complete. Decide what to improve and do so. Iteration seven complete. Check for errors. Fix them. Repeat until done. Iteration eight complete.

The analogy is actually pretty close. Only problem? Each iteration listed above could, and usually would, involve several sub-iterations and testing steps.

The other major solution is far more straightforward. Make the clay look like a man. Step one complete. Is it correct? If no, repeat. Done.

The second way has a much greater quantity of results, because it is a simpler and quicker way to make a figure of a man. The superiority of the iterative approach comes in when it is clear you will not get it right in any one try. If I make no mistakes, I may take 24 steps in the iterative approach where I would take 2 in the all-at-once approach. The steps are, however, not equal in length (the iterative steps are much shorter). Let us make it 8 to 2.

This still looks like a slam dunk for the all-at-once approach. With errors, it stays that way under a low quality standard: if any of the first three attempts is good enough, all at once is still better. Once an extremely high quality standard for the end result is used, however, it is quite likely that not one of the first 25 all-at-once clay men will be good enough. Even with a high standard, iteration is unlikely to need an amplification of more than 2 or 3. 50 versus 24 now makes it a slam dunk in favor of iteration.

In the end, both methods would actually give about the same amount of experience (though I don't have space to justify that here, and good arguments against it could be made), almost regardless of the number of steps necessary. Somewhere in the middle there must be a crossover point between iteration and doing it all at once. It behooves us to figure out where we are in relation to it.
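
To make the crossover concrete, here is a minimal Python sketch of my own using the figures from the paragraphs above (the cost units are arbitrary; only the ratios matter):

    # Minimal sketch of the iteration vs. all-at-once comparison, using the
    # figures quoted above. Cost units are arbitrary.

    def total_cost(clean_pass_cost, repetitions):
        """Total effort = cost of one clean pass times how many passes are needed."""
        return clean_pass_cost * repetitions

    # No mistakes: all at once wins easily (8 vs. 2, in the units above).
    print(total_cost(8, 1), total_cost(2, 1))    # 8 vs. 2

    # Extremely high quality bar: iteration needs roughly 3x amplification,
    # while all at once may need ~25 attempts before one is good enough.
    print(total_cost(8, 3), total_cost(2, 25))   # 24 vs. 50 -- iteration wins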

The long-winded point here is that quantity (iteration) can produce high quality quicker than quality (getting it right straight up), but only some of the time. A low quality standard is analogous to a low cost of failure, and a high quality standard to a high cost of failure. For beginners, the standard is low, and just doing it is probably the best way (though they still need to understand in order to make a real attempt).

For pure personal learning, among the experienced, it is far trickier to be sure. True failure is highly costly, as you learn the wrong thing, but it also is less likely, and minor failures can be learned from.

I'm relatively sure Jeff Atwood understands all of this, of course, but it isn't immediately obvious from his writing here. I'm not some guru, and this isn't divine wisdom either, but it is always a good idea to keep in mind what is so basic that the expert has forgotten to mention it exists. After all, he is here to push for the deviation, not simply the status quo.

Comment author: deepthoughtlife 02 February 2011 08:02:46AM 0 points [-]

Information that cannot be understood is not information at all to the person in question. Sometimes that simply means that the person who doesn't understand needs to learn how to understand it, but often it equates to simple fraud, ethically and economically speaking. A company with a thousand lawyers can always write something that you will not understand, and do it intentionally, just to screw people seeking a fair deal. I would posit that this is why so many people hate lawyers.

Comment author: deepthoughtlife 02 February 2011 07:36:46AM 0 points [-]

There are a few major problems with any certainty about the singularity. First, we might be too stupid to create a human-level AI. Second, it might not be possible, for some reason of which we are currently unaware, to create a human-level AI. Third, and importantly, we could be too smart.

How would that last one work? Maybe we can push technology to its limits ourselves, and no AI can be smart enough to push it further. We don't even begin to have enough knowledge to know whether this is likely. In other words, maybe it will all be perfectly comprehensible to us as we are now, and therefore not a singularity at all.

Is it worth considering? Of course. Is it worth pursuing? Probably (we need to wait for hindsight to know better than that), particularly since it will matter a great deal if and when it occurs. We simply can't assume that it will.

Johnicholas made a good comment on this point, I think. What we have done (and are doing) is very reminiscent of what Chalmers claims will lead to the singularity. I would go so far as to say that we are a singularity of sorts, beyond which the face of the world could never be the same. The last century especially: by analogy, we went from the iron age to the beginning of the renaissance, or even further. Cars, relativity, quantum mechanics, planes, radar, microwaves, two world wars, nukes, the collapse of the colonial system, interstates, computers, a massive cold war, countless conflicts and atrocities, entry into and study of space, the internet; and that is just a brief survey, off the top of my head. We've had so many changes that I'm not sure superhuman AI would be all that difficult to accept, so long as it was super morally speaking as well - which is, of course, not a given.

Any true AI that could not, with 100% accuracy, be called friendly should not exist.
