Comment author: timtyler 29 November 2013 03:02:57PM *  0 points [-]

What is the most similar work that you know of?

For other non-fictional, fear-mongering analyses of tech trends (outside MIRI/FHI), perhaps check out:

  • Hugo de Garis
  • Bill Joy
  • Martin Ford
  • Martin Rees
  • Unabomber manifesto

In most of these cases, the end of the world is used as an attention-attracting superstimulus.

Of course, there are many well-known cases of marketing the apocalypse involving fiction as well.

Comment author: passive_fist 29 November 2013 01:17:31AM 8 points [-]

Reputational concerns apply to psychopaths too, and that's why not all of them turn violent. However, those concerns don't prevent all of them from turning violent.

Comment author: timtyler 29 November 2013 01:20:18PM *  4 points [-]

The point I was trying to make was more that choosing which parameters to model allows you to control the outcome you get. Those who want to recruit people to causes associated with preventing the coming robot apocalypse can selectively include competitive factors and ignore factors leading to cooperation - in order to obtain their desired outcome.

Today, machines are instrumental in killing lots of people, but many of them also have features like air bags and bumpers, which show that the manufacturers and their customers are interested in safety features - and not just retail costs. Skipping safety features has disadvantages - as well as advantages - to the manufacturers involved.

Comment author: timtyler 29 November 2013 12:50:40AM 2 points [-]

People have predicted that corporations will be amoral, ruthless psychopaths too. This is what you get when you leave things like reputations out of your models.

Skimping on safety features can save you money. However, a reputation for privacy breaches, security problems and accidents doesn't do you much good. Why model the first effect while ignoring the second one? Oh yes: the axe that needs grinding.

Comment author: Daniel_Burfoot 24 November 2013 06:26:45PM *  6 points [-]

I’ve never seen any good general justification for parsimony...

This is a strange statement for a Bayesian to make. Perhaps he means that there is no reason to require absolute parsimony, which is true; sometimes, if you have enough data, you can justify the use of complex models. But Bayesian methods certainly require relative parsimony, in the sense that the model complexity needs to be small compared to the quantity of information being modeled. Formally, let A be the entropy of the prior distribution, and B be the mutual information between the observed data and the model parameter(s). Unless A is small compared to B (relative parsimony), Bayesian updates won't substantially shift belief away from the prior: the posterior will be only a minor modification of the prior, and the whole process of obtaining data and performing inference will have produced no real change in belief.
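
To make the A-versus-B condition concrete, here is a minimal numerical sketch - my illustration, assuming a toy coin-bias problem with a uniform prior over a discrete grid:

```python
# Toy illustration of "relative parsimony": when the prior's entropy A is
# large compared to the information B that the data carries about the
# parameter, the posterior is only a minor modification of the prior.
import numpy as np
from scipy.stats import binom

thetas = np.linspace(0.01, 0.99, 99)         # discrete grid of coin biases
prior = np.ones_like(thetas) / len(thetas)   # uniform prior over the grid
A = -np.sum(prior * np.log2(prior))          # prior entropy, in bits

def mutual_information(n):
    """B = I(theta; D) for n flips, where D is the number of heads."""
    ks = np.arange(n + 1)
    lik = binom.pmf(ks[None, :], n, thetas[:, None])   # P(D | theta)
    marg = prior @ lik                                 # P(D)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = prior[:, None] * lik * np.log2(lik / marg[None, :])
    return np.nansum(terms)

for n in (1, 10, 1000):
    print(f"n={n:5d}  A={A:.2f} bits  B={mutual_information(n):.2f} bits")
# With n=1, B << A: the data barely shifts belief away from the prior.
# Only as B approaches A does inference produce a real change in belief.
```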

The difference between the MDL philosophy and the Bayesian philosophy is actually quite minor. There are some esoteric technical arguments about things like whether one method or the other converges in the limit of infinite data, but at the end of the day the two philosophies say almost exactly the same thing.
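
To see the closeness concretely: the maximum-a-posteriori model is exactly the model with the shortest two-part MDL code, since -log2 P(M|D) = -log2 P(M) - log2 P(D|M) + log2 P(D) and the last term is constant across models. A sketch with made-up numbers:

```python
# Sketch with made-up numbers: choosing the MAP model is the same decision as
# choosing the shortest two-part MDL code, because
#   -log2 P(M | D) = -log2 P(M) - log2 P(D | M) + log2 P(D)
# and the final term is the same constant for every model.
import numpy as np

priors = {"simple": 0.7, "complex": 0.3}          # P(M), assumed values
likelihoods = {"simple": 0.01, "complex": 0.04}   # P(D | M), assumed values

for m in priors:
    bits = -np.log2(priors[m]) - np.log2(likelihoods[m])
    print(f"{m:8s} two-part code length = {bits:.2f} bits")

map_model = max(priors, key=lambda m: priors[m] * likelihoods[m])
mdl_model = min(priors, key=lambda m: -np.log2(priors[m] * likelihoods[m]))
assert map_model == mdl_model   # the two philosophies pick the same model
```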

Comment author: timtyler 26 November 2013 11:03:39AM -1 points [-]

Bayesian methods certainly require relative parsimony, in the sense that the model complexity needs to be small compared to the quantity of information being modeled.

Not really. Bayesian methods can model random noise; in that case, the model is the same size as the data being modeled.
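
A toy check, assuming the data is a bit string and the "noise model" assigns every n-bit string probability 2^-n:

```python
# Toy check: a maximum-entropy "noise" model over n-bit strings assigns
# probability 2^-n to every string, so its code for the data is exactly as
# long as the data itself - the model compresses nothing.
import math

data = "1011001110001011"           # any bit string
p = 0.5 ** len(data)                # uniform noise model: P(string) = 2^-n
assert -math.log2(p) == len(data)   # 16 bits of data, 16 bits of code
```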

Comment author: timtyler 26 November 2013 10:51:21AM *  0 points [-]

I often use simple models - because they are less effort to fit and, especially, to understand. But I don’t kid myself that they’re better than more complicated efforts!

Recommended reading: Boyd and Richerson's Simple Models of Complex Phenomena.

Comment author: Strilanc 24 November 2013 06:02:01PM 5 points [-]

In practice, I often use simple models - because they are less effort to fit and, especially, to understand. But I don’t kid myself that they’re better than more complicated efforts!

Parsimony is a prior, not an end goal. At least, that's how it's used in Solomonoff induction.

The reason the Solomonoff prior doesn't apply to social sciences is that knowing the area of applicability gives you more information. Once you take that into account - along with the fact that you don't have the input data or the computational power to recompute the cumulative process that spat humans out, so the simple low-level theories are out of reach - your prior is skewed towards more complex models.
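
A minimal sketch of that updating story (the hypothesis names and complexity values below are invented for illustration): weight each hypothesis by 2^-complexity, then condition on the domain knowledge; the surviving mass can end up concentrated on more complex hypotheses:

```python
# Sketch (hypothesis names and complexities invented for illustration):
# a Solomonoff-style prior weights hypothesis h by 2^-K(h). "Knowing the
# area of applicability" just zeroes out the inconsistent hypotheses and
# renormalizes - the prior still applies; it has merely been updated.
hypotheses = {                    # name: (complexity in bits, fits domain?)
    "h_simple":  (5,  False),    # elegant, but ruled out by domain knowledge
    "h_medium":  (20, True),
    "h_complex": (60, True),
}

prior = {h: 2.0 ** -k for h, (k, _) in hypotheses.items()}
z = sum(prior.values())
prior = {h: p / z for h, p in prior.items()}

posterior = {h: (p if hypotheses[h][1] else 0.0) for h, p in prior.items()}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}

print(prior)      # mass concentrated on h_simple
print(posterior)  # after updating, mass sits on the more complex survivors
```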

Comment author: timtyler 26 November 2013 10:46:25AM 0 points [-]

The reason the Solomonoff prior doesn't apply to social sciences is that knowing the area of applicability gives you more information.

That doesn't mean it doesn't apply! "Knowing the area of applicability" is just some information you can update on after starting with a prior.

Comment author: drethelin 25 November 2013 07:33:41PM 0 points [-]

Probably not. I'm not exactly sure what you mean by this question, since I don't fully understand Hamilton's rule, but in general evolutionary stuff only needs to be close enough to correct rather than actually correct.

Comment author: timtyler 26 November 2013 10:42:19AM 2 points [-]

Losing information isn't a crime. The virtues of simple models go beyond Occam's razor. Often, replacing a complex world with a complex model barely counts as progress - since complex models are hard to use and hard to understand.

Comment author: drethelin 24 November 2013 05:57:41PM 16 points [-]

Eh, this just seems like a repeat of the arguments against greedy reductionism. Parsimony is good except when it loses information, but if you're losing information you're not being parsimonious correctly.

Comment author: timtyler 25 November 2013 11:30:56AM 0 points [-]

Parsimony is good except when it loses information, but if you're losing information you're not being parsimonious correctly.

So: Hamilton's rule is not being parsimonious "correctly"?
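
For readers following along: Hamilton's rule says an altruistic trait is favored when rb > c, i.e. relatedness times benefit to the recipient exceeds the cost to the actor - the whole model in one inequality. A minimal check with assumed values:

```python
# Hamilton's rule: an altruistic trait is favored when r * b > c.
# The numeric values below are assumptions for illustration only.
def hamilton_favored(r: float, b: float, c: float) -> bool:
    """True when relatedness-weighted benefit exceeds the actor's cost -
    the entire model; everything else is deliberately left out."""
    return r * b > c

print(hamilton_favored(r=0.5, b=3.0, c=1.0))    # full siblings: 1.5 > 1.0
print(hamilton_favored(r=0.125, b=3.0, c=1.0))  # cousins: 0.375 < 1.0
```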

Comment author: knb 15 November 2013 06:30:58AM 4 points [-]

It's a really well-made video. However, they make the claim that the doubling period for Moore's Law is shrinking, which I think is false.

Comment author: timtyler 15 November 2013 11:31:16AM 2 points [-]

Shane Legg prepared this graph.

It was enough to convince him that there was some super-exponential synergy.
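
For anyone who wants to test the "shrinking doubling period" claim themselves, here is a minimal sketch - the data points are placeholders, not real transistor counts: fit log2(count) against time with linear and quadratic models; a positive quadratic term means the doubling period is shrinking.

```python
# Sketch with placeholder data (NOT real transistor counts): if log2(count)
# grows linearly in time, the doubling period is constant; a positive
# quadratic term means the doubling period is shrinking.
import numpy as np

years = np.array([1971, 1980, 1990, 2000, 2010], dtype=float)  # assumed
log2_counts = np.array([11.2, 15.1, 20.3, 25.4, 31.1])         # assumed

t = years - years[0]                  # center the time axis for a stable fit
lin = np.polyfit(t, log2_counts, 1)   # slope = doublings per year
quad = np.polyfit(t, log2_counts, 2)  # curvature tests super-exponentiality

print(f"constant doubling period ~ {1 / lin[0]:.2f} years")
print(f"quadratic coefficient = {quad[0]:.2e} (positive => shrinking period)")
```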

Comment author: ChrisHallquist 10 November 2013 05:40:28PM -1 points [-]

It's been suggested that the Flynn effect is mostly a matter of people learning a kind of abstract reasoning that's useful in the modern world, but wasn't so useful previously.

There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, that involves the cognitive niche hypothesis. Hmmm... I may need to do a post on the cognitive niche hypothesis at some point.

Comment author: timtyler 10 November 2013 06:40:49PM *  4 points [-]

There's also a broader point to be made about why evolution would've built humans to be able to benefit from better software in the first place, that involves the cognitive niche hypothesis.

I think we understand why humans are built like that. Slow-reproducing organisms often use rapidly-reproducing symbionts to help them adapt to local environments. Humans using cultural symbionts to adapt to local regions of space-time is a special case of this general principle.

Instead of the cognitive niche, the cultural niche seems more relevant to humans.
