
Comment author: Lumifer 01 October 2014 04:38:46PM 0 points

Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it.

That is just not true. The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things.

Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime.

You are mistaken. Suppressing your own doubts is not actus reus -- you need an action in physical reality. And, legally, there is a LOT of difference between an act and an omission, failing to act.

Comment author: RichardKennaway 01 October 2014 07:49:04PM 1 point

The author of the quote certainly knew how to say "the ship was not seaworthy" and "the ship sank because it was not seaworthy". The author said no such things.

The author said:

He knew that she was old, and not over-well built at the first; that she had seen many seas and climes, and often had needed repairs...

and more, which you have already read. This is clear enough to me.

Suppressing your own doubts is not actus reus -- you need an action in physical reality.

In this case, an inaction.

And, legally, there is a LOT of difference between an act and an omission, failing to act.

In general there is, but not when the person has a duty to perform an action, knows it is required, knows the consequences of not doing it, and does not. That is the situation presented.

Comment author: Lumifer 30 September 2014 07:23:19PM 2 points

An interesting quote. It essentially puts forward the "reasonable person" legal theory. But that's not what's interesting about it.

The shipowner is pronounced "verily guilty" solely on the basis of his thought processes. He had doubts, he extinguished them, and that's what makes him guilty. We don't know whether the ship was actually seaworthy -- only that the shipowner had doubts. If he were an optimistic fellow and never even had these doubts in the first place, would he still be guilty? We don't know what happened to the ship -- only that it disappeared. If the ship met a hurricane that no vessel of that era could survive, would the shipowner still be guilty? And, flipping the scenario, if solely by improbable luck the wreck of the ship did arrive unscathed at its destination, would the shipowner still be guilty?

Comment author: RichardKennaway 30 September 2014 11:07:06PM 2 points

The shipowner is pronounced "verily guilty" solely on the basis of his thought processes.

Part of the scenario is that the ship is in fact not seaworthy, and went down on account of it. Part is that the shipowner knew it was not safe and suppressed his doubts. These are the actus reus and the mens rea that are generally required for there to be a crime. These are legal concepts, but I think they can reasonably be applied to ethics as well. Intentions and consequences both matter.

if solely by improbable luck the wreck of the ship did arrive unscathed at its destination, would the shipowner still be guilty?

If the emigrants do not die, he is not guilty of their deaths. He is still morally at fault for sending to sea a ship he knew was unseaworthy. His inaction in reckless disregard for their lives can quite reasonably be judged a crime.

Comment author: ShannonFriedman 29 September 2014 03:47:02PM 1 point

Umm... why does this need to be pointed out?

To me, I was being nice and empathizing with the point made. This feels like I expressed vulnerability and you decided to sink your teeth in and/or rub my nose in shit to tell me what I've done wrong, except I don't actually understand what you're even trying to show me.

Comment author: RichardKennaway 30 September 2014 02:24:49PM 0 points

Sorry, I didn't mean to be so abrasive. It's just that communication is, practically by definition, communication with people who are not oneself. It seemed to me that you were surprised to come up against this.

As for the original post itself, it seems to me, as it has to some others who have commented, that it talks around something that sounds like it might be interesting, but never says the thing itself.

Comment author: aberglas 30 September 2014 07:58:53AM 0 points

A rock has no goal because it is passive.

But a worm's goal is most certainly to exist (or, more precisely, for its genes to exist), even though it is not intelligent.

Comment author: RichardKennaway 30 September 2014 11:45:58AM 0 points

Is a volcano passive? Is water, as it flows downhill?

I'm trying to find where you are dividing things that have purposes from things that do not. Genes seem far too complicated and contingent to be that point. What do you take as demonstrating the presence or absence of purpose?

Comment author: eli_sennesh 29 September 2014 09:15:00PM 1 point

I accidentally wrote the following as a Facebook comment to a friend, and am blatantly saving it here on the grounds that it might become the core of a personal statement/research statement for the previously-mentioned PhD application:

Many or even most competently-made proposals for AGI currently rely on Bayesian-statistical reasoning to handle the inherent uncertainty of real-world data and provide learning and generalization capabilities. Despite this, I find that the foundation of probability theory is still done using Kolmogorov's frequentist axiomatization in terms of measure theory for spaces of total measure 1.0, while Bayesian statistics still justifies itself in terms of Cox's Theorem -- even as it pitches itself as the only reasonable extension of logic into real-valued uncertainty.

Problem: you can't talk about "rational agents believe according to Bayes" if you're trying to build a rational agent, because real-world agents have to be able to generalize above the level of propositional logic to quantified formulas (formulas in which forall and exists quantifiers can appear), both first-order and sometimes higher-order. First-order and higher-order logics of certainty (in various kinds: classical, intuitionistic, relevance, linear, substructural) have been formalized in terms of model theory for definitions of truth and in terms of computation for definitions of proof -- this then gets extended into different mathematical foundations like ZFC, type theory, categorical foundations, etc.

I have done cursory Google searches to find extensions of probability into higher-order logics, and found nothing on the same level of rigor as the rest of logic. This is a big problem if we want to rigorously state a probabilistic foundation for mathematics, and we want to do that because a "foundation for mathematics" in terms of a logic, its model(s), and its proof system(s) is really a description of how to mechanize (computationally encode) knowledge. If your mathematical foundation can't encode quantified statements, that's a limit on what your agent can think. If it can't encode uncertainty, that's a limit on what your agent can think. If it can't encode those two together, THAT'S A LIMIT ON WHAT YOUR AGENT CAN THINK.
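As a rough sketch of the gap in question (assuming, for illustration, a countable first-order language whose sentences are written \varphi, \psi, with closed terms t_1, t_2, \ldots): a propositional-style probability assignment only has to satisfy rules such as

P(\varphi \lor \psi) = P(\varphi) + P(\psi) - P(\varphi \land \psi), \qquad P(\varphi) = P(\psi) \ \text{whenever} \ \vdash \varphi \leftrightarrow \psi,

while a quantified extension must additionally relate P(\forall x.\, \varphi(x)) to the instance probabilities P(\varphi(t_1)), P(\varphi(t_2)), \ldots. It is exactly that quantifier-linking condition that a purely propositional result like Cox's Theorem does not supply.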

Which means that most AI/AGI work actually doesn't bother with sound reasoning.

Comment author: RichardKennaway 30 September 2014 07:01:04AM 2 points

I find that the foundation of probability theory is still done using Kolmogorov's frequentist axiomatization in terms of measure theory for spaces of total measure 1.0, while Bayesian statistics still justifies itself in terms of Cox's Theorem

Can you expand on that? The connection between Kolmogorov's and Cox's foundations and frequentist vs. Bayesian interpretations is not clear to me. The only mathematical difference is that Cox's axioms don't give you countable additivity, but that doesn't seem to be a frequentist vs. Bayesian point of dispute.
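For reference, a bare statement of that contrast, as I understand the standard formulations: Kolmogorov's axioms require countable additivity of the measure P,

P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint } A_1, A_2, \ldots,

whereas a Cox-style derivation delivers only the finitely additive sum and product rules for plausibilities,

P(A \mid C) + P(\lnot A \mid C) = 1, \qquad P(A \land B \mid C) = P(A \mid C)\, P(B \mid A \land C).

That is the one mathematical difference referred to above.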

Comment author: AspiringRationalist 29 September 2014 05:08:11PM 1 point

What are the favorite programming languages of people here, for what applications, and why?

Comment author: RichardKennaway 29 September 2014 07:25:19PM 8 points

In all the substantial programming projects I've undertaken, what I think of the language itself has never been a consideration.

One of these projects needed to run (client-side) in any web browser, so (at that time) it had to be written in Java.

Another project had to run as a library embedded in software developed by other people and also standalone at the command line. I wrote it in C++ (after an ill-considered first attempt to write it in Perl), mainly because it was a language I knew and performance was an essential requirement, ruling out Java (at that time).

My current employment is developing a tool for biologists to use; they all use Matlab, so it's written in Matlab, a language for which I even have a file somewhere called "Reasons I hate Matlab".

If I want to write an app to run on OSX or iOS, the choices are limited to what Apple supports, which as far as I know is Objective C, C++, or (very recently) Swift.

For quick pieces of text processing I use Perl, because that happens to be the language I know that's most suited to doing that. I'm sure Python would do just as well, but knowing Perl, I don't need Python, and I don't care about the Perl/Python wars.

A curious thing is that while I've been familiar with functional languages and their mathematical basis for at least 35 years, I've never had occasion to write anything but toy programs in any of them.

The question I always ask myself about a whizzy new language is, "Can this be used to write an interactive app for [pick your intended platform] and have it be indistinguishable in look and feel from any app written in whatever the usual language is for that platform?" Unless the answer is yes, I won't take much interest.

A programming language, properly considered, is a medium for thinking about computation. I might be a better programmer for knowing the functional or the object-oriented ways of thinking about computation, but in the end I have to express my thoughts in a language that is available in the practical context.

Comment author: RichardKennaway 29 September 2014 12:41:47PM 1 point

Is it a rock's goal to exist?

Comment author: ShannonFriedman 29 September 2014 01:44:43AM 1 point

Yes. I will come back to this and fill in the missing piece, as I said to hairyfigment when they brought it to my attention.

To me the conclusion is obvious, but I can see how it is not to people who are not me, now that this has been pointed out to me. I want to take my time to figure out how to word it properly, and have been very busy with work. I will be getting to it either later tonight or tomorrow.

That said, I personally find it laughable that hairyfigment linked a piece that is clearly advertising propaganda IMHO after claiming that my post sounded like advertisement. Perhaps if I call myself an executive director this would not bother people? :) I had better be careful or I'm going to get this post entirely deleted... ;)

Comment author: RichardKennaway 29 September 2014 12:15:08PM 0 points

To me the conclusion is obvious, but I can see how it is not to people who are not me, now that this has been pointed out to me.

Well then, I would like to point out a more general fact.

Everyone that you will ever deal with, in any way, is someone who is not you.

Comment author: Cyan 29 September 2014 12:12:42AM 1 point

My conclusion: there might be an interesting and useful post to be written about how epistemic rationality and techniques for coping with ape-brain intersect, and ShannonFriedman might be capable of writing it. Not there yet, though.

Comment author: RichardKennaway 29 September 2014 08:09:35AM 3 points

Entire subject of this site, surely?

Comment author: Punoxysm 23 September 2014 08:58:49PM *  0 points

I was pondering the whole mass-downvote kerfuffle a while back, and even though I generally agree with the end result from gut instinct reasoning, I'm struck by the following:

The downvoter had an objective, and rationally used the tool of downvoting to achieve it rather than constraining himself arbitrarily. If HPJEV were a forum-dweller instead of a wizard, he would do the very same.

Comment author: RichardKennaway 27 September 2014 07:59:07AM 0 points

If HPJEV were a forum-dweller instead of a wizard, he would do the very same.

Given the strong ethical view that HPJEV takes of lying, that would be grossly against his character. He might also say, as would I, that it's a short step from mass downvoting to what Yvain reports here.
