Comments

This site isn't too active - maybe email someone from CFAR directly?

Man, this interviewer sure likes to ask dense questions. Bostrom sort of responded to them, but things would have gone a lot smoother if LARB guy (okay, Andy Fitch) had limited himself to one or two questions at a time. Still, it's kind of shocking the extent to which Andy "got it," given that he doesn't seem to be specially selected - instead he's a regular LARB contributor and professor in an MFA program.

Hm, the format is interesting. The end product is, ideally, a tree of arguments, with each argument having an attached relevance rating from the audience. I like that they didn't try to use the pro and con arguments to influence the rating of the parent argument, because that would be too reflective of audience composition.
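
To make that concrete, here's a minimal sketch of the data structure I'm picturing - the field names are my own guesses for illustration, not the site's actual schema:

```python
# A toy model of the format: a tree of arguments, each carrying its own
# audience relevance rating, with children never feeding back into the parent's rating.
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    relevance: float                 # audience-supplied rating, stored per node
    stance: str = "pro"              # "pro" or "con" relative to the parent
    children: list["Argument"] = field(default_factory=list)

root = Argument("Main claim", relevance=0.9)
root.children.append(Argument("Supporting evidence", relevance=0.7, stance="pro"))
root.children.append(Argument("Counterexample", relevance=0.4, stance="con"))
# root.relevance stays 0.9 no matter how the children are rated.
```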

Infinity minus one isn't smaller than infinity, so that kind of subtraction doesn't do the work you want it to here.

The thing being added or subtracted is not the mere number of hypotheses, but a measure of how likely those hypotheses are. We might suppose an infinitude of mutually exclusive theories of the world, but most of them are extremely unlikely - for any degree of unlikeliness, there are infinitely many theories less likely than that! Each individual theory is so unlikely to be true that when you add up the likelihoods of every single theory, the total is still finite.

This is why it works to divide our hypotheses into "something likely" and "everything else": "everything else" contains infinitely many possibilities, but only a finite amount of likelihood.
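
As a toy illustration of that point (my own numbers, assuming a simple geometric prior rather than anything from the discussion above):

```python
# Infinitely many hypotheses, finite total likelihood: give hypothesis k the
# probability 2^-(k+1). The sum over all of them converges to 1.
total = sum(2 ** -(k + 1) for k in range(1000))
print(total)  # ~1.0

# "Everything else" past the first ten hypotheses: infinite possibilities,
# but only a tiny finite amount of likelihood.
tail = 1 - sum(2 ** -(k + 1) for k in range(10))
print(tail)  # ~0.001
```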

I think this neglects the idea of "physical law," which says that theories can be good when they capture the dynamics and building-blocks of the world simply, even if they are quite ignorant about the complex initial conditions of the world.

Can't this be modelled as uncertainty over functional equivalence (or over input-output maps)?

Hm, that's an interesting point. Is what we care about just the brute input-output map? If we're faced with a black-box predictor, then yes, all that matters is the correlation, even if we don't know the method. But I don't think representing computations as input-output maps actually helps account for how we should learn about or predict that correlation - we learn about and predict the predictor in a way that looks like updating a distribution over computations. Nor does it seem to help when trying to understand to what extent two agents are logically dependent on one another. So I think the computational representation is going to be more fruitful.
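
Here's a toy sketch of what I mean by "updating a distribution over computations" - the candidate programs and the noise model are invented for illustration:

```python
# Treat a black-box predictor as a posterior over candidate computations,
# updated from observed input-output pairs.
candidates = {
    "parity": lambda x: x % 2,
    "always_one": lambda x: 1,
    "threshold": lambda x: int(x > 5),
}
posterior = {name: 1 / len(candidates) for name in candidates}  # uniform prior

def update(x, observed_y, noise=0.05):
    """Bayes update: computations that reproduce the observation gain weight."""
    for name, f in candidates.items():
        posterior[name] *= (1 - noise) if f(x) == observed_y else noise
    total = sum(posterior.values())
    for name in posterior:
        posterior[name] /= total

# A few observations concentrate belief on one computation, even though many
# distinct programs share the same input-output behaviour so far.
for x, y in [(3, 1), (4, 0), (7, 1)]:
    update(x, y)
print(posterior)
```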

Interesting that ResNets still seem to be state of the art. I was expecting them to have been replaced by something more heterogeneous by now. But I might be overrating the usefulness of discrete composition because it's easy to understand.
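
For what I mean by "discrete composition", here's a bare-bones residual block - my own illustration, not any particular published architecture:

```python
import numpy as np

def residual_block(x, weight):
    """y = x + f(x), with f here just a ReLU'd linear map for illustration."""
    return x + np.maximum(0, x @ weight)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))
weights = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(4)]
for w in weights:  # stacking blocks = composing small, easy-to-reason-about refinements
    x = residual_block(x, w)
print(x.shape)  # (1, 8): shape is preserved, so blocks compose freely
```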

Plausibly? LW2 seems to be doing okay, which is gonna siphon off posts and comments.

The dust probably is just dust - blue light scattering more than red is the same reason the sky is blue and the sun looks red at sunset (Rayleigh scattering / Mie scattering). It comes from scattering off particles smaller than a few times the wavelength of the light - so if visible light is being scattered less than UV, we know that lots of the particles are smaller than ~2 µm. This is about the size of a small bacterium, so dust with interesting structure isn't totally out of the question, but still... it's probably just dust.
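
Rough numbers behind that bound (my own back-of-the-envelope, with illustrative wavelengths):

```python
# Rayleigh scattering (particles well below the wavelength) scales as 1/wavelength^4,
# so shorter wavelengths scatter much more strongly.
uv_nm, blue_nm, red_nm = 300, 450, 650
print((red_nm / blue_nm) ** 4)  # ~4.4: blue scatters a few times more than red
print((blue_nm / uv_nm) ** 4)   # ~5.1: UV scatters several times more than blue

# That strong wavelength dependence only holds while particles are no bigger than
# a few visible wavelengths, which is where the rough ~2 um cutoff comes from.
visible_wavelength_um = 0.5
size_bound_um = 4 * visible_wavelength_um  # "a few times" the wavelength, taken as 4x here
print(size_bound_um)  # ~2 um
```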
