Comment author: othercriteria 03 June 2012 03:23:46PM 2 points [-]

Yes, I think that was better, because the ground truth is Kepler's third law and jimrandomh pointed out your method actually recaptures a (badly obfuscated and possibly overfit) variant of it.

"Dimensionality" is relevant in any approach to supervised learning, and it matters even before you consider the bias/variance trade-off, etc.

Imagine that you have a high-dimensional predictor, of which one dimension completely determines the outcome and the rest are noise. Your shortest possible generating algorithm is going to have to pick out the relevant dimension. So as the dimensionality of the predictor increases, the algorithm length will necessarily increase, just for information-theoretic reasons.
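The information-theoretic point can be made concrete: merely naming which of d dimensions carries the signal costs at least ceil(log2(d)) bits, so the shortest generating program grows at least logarithmically in the dimensionality. A minimal sketch (the function name and the sample dimensionalities are illustrative, not from the thread):

```python
import math

def bits_to_name_dimension(d):
    """Minimum number of bits needed to specify which one of d
    dimensions is the relevant (outcome-determining) one."""
    return math.ceil(math.log2(d))

# As the predictor's dimensionality grows, any generating program
# must pay at least this many bits just to point at the signal.
for d in (2, 16, 1024, 10**6):
    print(d, bits_to_name_dimension(d))
```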

Comment author: Miller 03 June 2012 06:29:44PM *  0 points [-]

recaptures a (badly obfuscated and possibly overfit) variant of it.

How do you overfit Kepler's law?

edit: Retracted. Looking at the actual link, I see the result wasn't just obfuscated but wrong, and a result that is wrong in that manner can of course overfit (which matches the results).

Comment author: othercriteria 03 June 2012 02:18:05PM 5 points [-]

Using a high-powered black-box technique to regress a one-dimensional continuous outcome against a one-dimensional continuous predictor seems misguided.

If you want to characterize how well your evolutionary learning idea works, try it on data that you've generated, where you know the "underlying math". See if you can recover the program that generated the data or one that's equivalent to it. Or try it on really big, messy data where no one knows the right answer and see if you/it can do better than the obvious competitors like SVM, k-NN, CART, etc.
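A sketch of the first suggestion above. Here a hand-rolled k-nearest-neighbours regressor stands in for whatever learner is being evaluated, and the linear generating function is an arbitrary choice; the point is only that with self-generated data you can score the learner against the known "underlying math" rather than a noisy hold-out:

```python
import random

def generator(x):
    """The known 'underlying math' used to produce the benchmark data."""
    return 2.0 * x + 1.0

def knn_predict(train, x, k=3):
    """Plain k-NN regression, standing in for any black-box learner
    we want to evaluate against the known ground truth."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

random.seed(0)
# Training data: the true generator plus a little observation noise.
train = [(x, generator(x) + random.gauss(0, 0.01))
         for x in (i / 10.0 for i in range(100))]

# Because we built the data ourselves, we can score predictions
# against the true generator directly.
errors = [abs(knn_predict(train, x) - generator(x)) for x in (2.5, 5.0, 7.5)]
print(max(errors))
```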

The middle ground of working on an easy/messy problem, where any sane method will give you an adequate answer but there's no known ground truth, is not going to make a very compelling story.

Comment author: Miller 03 June 2012 06:25:57PM 1 point [-]

Using a high-powered black-box technique to regress a one-dimensional continuous outcome against a one-dimensional continuous predictor seems misguided.

I don't get this. You could have a rather complicated generator for this data set. A simple regression would imply the data points were independent, but the value at time T may have [likely has] a relation to the value at T-3. So it seems a good problem to me.
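A toy illustration of that point (the lag-3 recurrence and its coefficients are made up for illustration, not the data set under discussion): if each value is generated from the value three steps back, the points are not independent draws, and a model that assumes independence throws that structure away.

```python
import random

random.seed(1)

# Generate a series where each value depends on the value 3 steps
# earlier, plus noise -- the points are not independent draws.
series = [1.0, 2.0, 3.0]
for t in range(3, 200):
    series.append(0.9 * series[t - 3] + random.gauss(0, 0.1))

def lag_correlation(xs, lag):
    """Sample correlation between x[t] and x[t - lag]."""
    a, b = xs[lag:], xs[:-lag]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

# The lag-3 correlation is strong; treating the points as
# independent would miss it entirely.
print(lag_correlation(series, 3))
```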

Comment author: Miller 02 June 2012 06:13:14PM 0 points [-]

I had an $80 Logitech keyboard (the illuminated, short-stroke, notebook-style variety), and when it began to deteriorate I swapped it for a $10 Walmart special, a slightly curved Microsoft model. I had been playing around on this typing-speed site and was surprised to find that on the 5th attempt I had beaten my previous record with the new keyboard.

If I had a variety of keyboards at my disposal, I think it would be an interesting exercise to test them in this fashion.

Comment author: TheOtherDave 02 June 2012 05:21:47PM 13 points [-]

What is your position on Will Newsome?

I frequently find Will's contributions obscurantist.

In general, I find obscurantism at best tedious, and more often actively upsetting, so I mostly ignore it when I encounter it. Occasionally I engage with it, in a spirit of personal social training.

That said, I accept that one reader's obscurantism is another reader's appropriate level of indirection. If it's valuable to other people, great... the cost to me is low.

At this point the complaining about it by various frustrated people has cost me more than the behavior itself, by about an order of magnitude.

Comment author: Miller 02 June 2012 05:59:27PM 1 point [-]

I frequently find Will's contributions obscurantist.

The same word came to mind, and it's common to his history of interactions, so seeing it here means I ascribe it to him rather than to the logic of whatever underlying purpose he may have on this occasion.

Comment author: Miller 02 June 2012 05:54:53PM 6 points [-]

If your goal is to lower your credibility, why do that in the context of talking about credibility?

Comment author: Miller 19 May 2012 05:07:27PM 11 points [-]
Comment author: Miller 16 May 2012 02:04:44AM 0 points [-]

I believe I have a few results of this nature in my 23andme profile, but, like most results there, they indicate e.g. that I might gain an extra 0.5 pounds compared to average on a high-fat diet.

I got a kick out of it when I logged in there and it said something to the effect of 'see how your genes affect your weight', and after entering height, age, and weight it told me that my genes were responsible for 2 lb (whatever that meant).

It does also note lactose tolerance, alcohol and caffeine enzymes, coeliac disease risk, etc.

Comment author: Wei_Dai 13 May 2012 02:59:52AM 4 points [-]

No, I think the idea is to do coarse-grained scans, which the superintelligence will have to heavily process in order to infer the original brain structure. (Yeah, it's not clear this is possible even with a whole universe worth of computing power and whatever algorithmic breakthroughs a superintelligence might come up with.)

Comment author: Miller 13 May 2012 03:43:01AM *  5 points [-]

you periodically take neuroimaging scans of your brain and save them to multiple backup locations (10^10 bits is only about 1 gigabyte)

I think I understand, but I'm lost as to why that 10^10 is showing up here. Wouldn't it be whatever the scan happens to be, rather than a reference to the compressed size of a human's unique experiences? We might plausibly have a 10^18-bit scan that is detailed in the wrong ways (like carrying 1024 bits per voxel of color-channel info :p).

eta: In case it's not clear, I can't actually help you answer the question of just how useful a scan is.

Comment author: Miller 13 May 2012 02:49:22AM 3 points [-]

OK, so I'm presuming that an extremely fine-grained scan stored with some naive compression is massively more than 10^14 synapse-bits. In order to store all that at the information-theoretic minimum, don't we need some kind of incredibly awesome compression algorithm NOW that we simply don't have?
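The gap being discussed can be put in numbers, using only figures already in the thread (the 10^18-bit raw scan is the hypothetical from above, not a measured quantity): going from a naive 10^18-bit scan down to the ~10^10-bit estimate would require roughly a hundred-million-fold compression.

```python
# Figures from the thread: a hypothetical fine-grained raw scan,
# one bit per synapse, and Wei_Dai's ~10^10-bit estimate of a
# person's informational content.
raw_scan_bits = 10**18        # hypothetical naively-stored scan
synapse_bits = 10**14         # one bit per synapse, for comparison
informational_bits = 10**10   # claimed information-theoretic minimum

# Compression ratio needed to reach the claimed minimum.
compression_needed = raw_scan_bits // informational_bits
print(compression_needed)
```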

In response to comment by lukeprog on AGI Quotes
Comment author: James_Miller 02 November 2011 08:15:19PM *  3 points [-]

EY changed it in the published version to:

"The AI neither hates you, nor loves you, but you are made out of atoms that it can use for something else."

In response to comment by James_Miller on AGI Quotes
Comment author: Miller 03 November 2011 05:42:06AM *  1 point [-]

Whether the AI loves -- or hates, you cannot fathom, but plans it has indeed for your atoms.
