CronoDAS

Comments


"many theoretical computer scientists think our conjecture is false"

Does that mean that you (plural) are either members of a theoretical computer science community or have discussed the conjecture with people who are? (I have no idea who you are or what connections you may or may not have with academia in general.)

Looking good to his superiors is the one thing the Pointy-Haired Boss is actually good at. Several strips show that some of his ridiculous-seeming decisions make perfect sense from that perspective.

<irony>Robustly generalizable like noticing that bacteria aren't growing next to a certain kind of mold that contaminated your petri dish, or that photographic film is getting fogged when there's no obvious source of light?</irony>

Elaborating on The Very General Helper Strategy: the first thing you do when planning a route by hand is find some reasonably up-to-date maps.

One thing that almost always generalizes robustly is improving the tools that people use to gather information and make measurements. And this also tends to snowball in unexpected ways - would anyone have guessed beforehand that the most important invention in the history of medicine would turn out to be a better magnifying glass? (And tools can include mathematical techniques, too - being able to run statistical analysis on a computer lets you find a lot of patterns you wouldn't be able to find if it were 1920 and you had to do it all by hand.)

With regards to AI, that might mean interpretability research?

Hmmm. Taking this literally, if I didn't know where I was going, one thing I might do is look up hotel chains and find out which ones suit my needs with respect to price and features and which don't. Then, once I know what city I want to travel to, I can find out whether my top choices of hotel chain have a hotel in a convenient location there.

Meta-strategy: try to find things that are both relevant to what you want and mostly independent of the things you don't know about?

For some reason, this story generated a sense of dread in me - I kept waiting for the proverbial other shoe to drop.

Well, you could start by looking at the cosmetic differences achieved by dog breeders as a lower limit on what it is possible to achieve by tinkering with a genome...

Straight-up diminishing marginal utility of wealth, then?

Well, that's the Bay Area for you - ground zero for both computer-related things and the hippie movement.

The answer to your specific question about the Fermi Paradox is that, after an AI destroys its creators, the AI itself would presumably still be there to do whatever it wanted, which could include plans for the rest of the universe outside its solar system. So "AI that kills its creators" still leaves us with the question of why we haven't seen any AIs spreading through our galaxy either.
