Comments

nothing has 100% certainty, nothing can have a 0% uncertainty

That's my understanding as well. I was trying to say that, if you were to formalize all this mathematically and take the limit as the number of Bayesian updates n goes to infinity, uncertainty would go to zero.

Since we don't have infinite time to do an infinite number of updates, in practice there is always some level of uncertainty > 0%.
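To make that concrete, here is a minimal sketch, assuming a simple Beta-Bernoulli model (the model, the "true" probability of 0.7, and the uniform prior are all illustrative choices of mine, not anything from the discussion above). Each observation is one Bayesian update, and the posterior standard deviation (the "uncertainty") shrinks toward zero as the number of updates grows, but stays above zero for any finite n:

```python
import random

random.seed(0)
true_p = 0.7             # hypothetical ground truth we are trying to learn
alpha, beta_ = 1.0, 1.0  # Beta(1, 1) uniform prior

checkpoints = {10, 100, 1_000, 10_000}
for n in range(1, 10_001):
    # One new Bernoulli observation = one Bayesian update of the Beta posterior.
    if random.random() < true_p:
        alpha += 1.0
    else:
        beta_ += 1.0
    if n in checkpoints:
        mean = alpha / (alpha + beta_)
        var = (alpha * beta_) / ((alpha + beta_) ** 2 * (alpha + beta_ + 1.0))
        # The standard deviation keeps shrinking but never actually hits 0%.
        print(f"n={n:>6}  posterior mean={mean:.3f}  sd={var ** 0.5:.4f}")
```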

Comment 1: If anyone wants to comment or reply here, but can’t afford the karma hit, feel free to PM me and I’ll comment for you without listing your name. I have 169 karma to burn (97% positive!), from comments going back to Feb 2015. However, I’ve been wanting to switch to a different username anyway, so I don’t mind destroying this one.


Comment 2: It might be wise not to discuss tactics where Eugine can read it. (Also, causing lots of discussion might be his goal, but so far we haven’t talked about it much and it’s just been a background annoyance.)

Is there interest in a Skype call or some other private forum to discuss possible solutions?

[pollid:1160]

I believe CronoDAS is referring to Algernon's Law. Gwern describes the issues pretty well here, including several classes of "loopholes" we might employ to escape the general rule.

The classification of the different types of loopholes is still pretty high-level, and I'd love to see some more concrete and actionable proposals. So, don't take this as saying "this is old hat", but only as a jumping-off point for further discussion.

This may not be a generalized solution, but it looks like you have rigorously defined a class of extremely common problems. I suspect deriving a solution from game theory would be the formalized version of John Stuart Mill trying to derive various principles of Liberty from Utilitarianism.

Meta: 4.5 hours to write, 30 minutes to take feedback and edit.

I always find this sort of info interesting. Same for epistemic status. It's nice to know whether someone is spit-balling a weird idea they aren't at all sure of, versus trying to defend a rigorous thesis. That context is often missing in online discussions, and I'd love for it to become the norm here. I suppose knowing how much time you spent writing something only gives me a lower bound on how much total thought has gone into the idea, and some ideas really can be fleshed out completely in a couple of hours while others may take generations.

I was surprised to see mention of MIRI and Existential Risk. That means that they did a little research. Without that, I'd be >99% sure it was a scam.

I wonder if this hints at their methodology. Assuming it is a scam, I'd guess they find small but successful charities, then find small tight-knit communities organized around them and target those communities. Broad, catch-all nets may catch a few gullible people, but if enough people have caught on then perhaps a more targeted approach is actually more lucrative?

Really, it's a shame to see this happen even if no one here fell for it, because now we're all a little less likely to be receptive to weird requests/offers. I suspect it's useful for EAs to be able to make random requests of specific people. For example, I can imagine needing a couple hours or days of consulting work from a domain expert. In that situation, I'd be tempted to PM someone knowledgeable in that area, and offer to pay them for some consulting work on the side.

I can actually think of 2 instances where this community has done things like this out in the open (not PM), so it wouldn't surprise me if there are occasional private transactions. (I'd link to examples, but I'd rather not help a potential scammer improve on their methods.) Perhaps a solution would be to route anything that looks suspicious through Bitcoin, so that the transaction can't be cancelled? I wouldn't want to add trivial inconveniences to non-suspicious things, though.

Although compressing a complex concept down to a short term obviously isn't lossless compression, I hadn't considered how confusing the illusion of transparency might be. I would have strongly preferred that "Thinking Fast and Slow" continue to use the words "fast" and "slow". As such, these were quite novel points:

  • they don't immediately and easily seem like something you already understand if you haven't been exposed to that particular source

  • they don't mislead people who do know them into assuming that the names contain the most important features

The notion of using various examples to "triangulate" a precise meaning was also new to me. It calls to mind the image of a Venn diagram with 3 circles, each representing an example. I don't think I have mental models for several aspects of learning. Gwern's write-up on spaced repetition gave me an understanding of how memorization works, but it hadn't occurred to me that I had a similar gap in my model (or lack thereof) of how understanding works.

(I'm not sure the triangulation metaphor lends much additional predictive power. However, an explicit model is a step up from a vague notion that it's useful to have more examples with more variety.)

I've always hated jargon, and this piece did a good job of convincing me of its necessity. I plan to add a lot of jargon to an Anki deck, to avoid hand-waving at big concepts quite so much.

However, there are still some pretty big drawbacks in certain circumstances. A recent Slate Star Codex comment expressed it better than I ever have:

One cautionary note about “Use strong concept handles”: This leans very close to coining new terms, and that can cause problems.

Dr. K. Eric Drexler coined quite a few of them while arguing for the feasibility of atomically precise fabrication (aka nanotechnology): “exoergic”, “eutactic”, “machine phase”, and I think that contributed to his difficulties.

If a newly coined term spreads widely, great! Yes, it will be an aid to clarity of discussion. If it spreads throughout one group, but not widely, then it becomes an in-group marker. To the extent that it marks group boundaries, it then becomes yet another bone of contention. If it is only noticed and used within a very small group, then it becomes something like project-specific jargon – cryptic to anyone outside a very narrow group (even to the equivalent of adjacent departments), and can wind up impeding communications.

Meta note before actual content: I've been noticing of late how many comments on LW, including my own, are nitpicks or small criticisms. Contrarianism is probably the root of why our kind can't cooperate, and maybe even the reason so many people lurk and don't post. So, let me preface this by thanking you for the post, and saying that I'm sharing this just as an FYI and not as a critique. This certainly isn't a knock-down argument against anything you've said. Just something I thought was interesting, and might be helpful to keep in mind. :)

Clearly it was a moral error to assume that blacks had less moral weight than whites. The animal rights movement is basically just trying to make sure we don't repeat this mistake with non-human animals. (Hence the use of terms like "speciesism".) You use a couple of reductio ad absurdum arguments with bacteria and video game characters, but it’s not entirely clear that we aren’t just socially biased there too. If the absurd turns out to be true, then the reductio ad absurdum fails. These arguments are valid ways of concluding "if A then B", but keep in mind that A isn't 100% certain.

There are actually some surprisingly intelligent arguments that insects, bacteria, some types of video game characters, and even fundamental particles might have non-zero moral weight. The question is what probability one gives to those propositions turning out to be true. IF one has reviewed the relevant arguments, and assigns them infinitesimally small credence, THEN one can safely apply the reductio ad absurdum. IF certain simple algorithms have no moral weight and the algorithms behind human brains have high moral weight, THEN algorithms almost as simple are unlikely to have whatever property gives humans value, while complex algorithms (like those running in dolphin brains) might still have intrinsic value.

As I understand it, Eliezer Yudkowsky doesn't do much coding; he mostly works on purely theoretical stuff. I think most of Superintelligence could have been written on a typewriter based on printed research. I also suspect that there are plenty of academic papers which could be written by hand.

However, as you point out, there are also clearly some cases where it would take much, much longer to do the same work by hand. I'd disagree that it would take infinite time, and that it can't be done by hand, but that's just me being pedantic and doesn't get to the substance of the thing.

The interesting questions to answer would be how much work falls into the first category and how much falls into the second. We might think of this as a continuum, ranging from zero productivity gain from computers to trillions of times more efficient. What sub-fields would and wouldn't be possible without today's computers? What types of AI research are enabled simply by faster computers, and which types are enabled by using existing AI?

Maybe I can type at 50 words a minute, but I sure as hell can't code at 50 WPM. Including debugging time, I can write a line of code every couple minutes, if I'm lucky. Looking back on the past 2 things I wrote, one was ~50 lines of code and took me at least an hour or two if I recall, and the other was ~200 lines and took probably a day or two of solid work. I'm just starting to learn a new language, so I'm slower than in more familiar languages, but the point stands. This hints that, for me at least, the computer isn't the limiting factor. It might take a little longer if I was using punch cards, and at worst maybe twice as long if I was drafting everything by hand, but the computer isn't a huge productivity booster.

Maybe there's an AI researcher out there who spends most of his or her day trying out different machine learning algorithms to improve them. Even if not, it'd still take forever to crunch that type of algorithm by hand. It'd be a safe bet that anyone who spends a lot of time waiting for code to compile, or who rents time on a supercomputer, is doing work where the computer is the limiting factor. It seems valuable to know which areas might grow exponentially alongside Moore's law, and which might grow based on AI improvements, as OP pointed out.

I like this idea. I'd guess that a real economist would phrase this problem as trying to measure productivity. That framing isn't particularly useful by itself, though. Productivity is the value of output (AI research) over input (time), so it just raises the question of how to measure the output half. (I mention it mainly in case it's a useful search term.)

I'm no economist, but I do have an idea for measuring the output. It's very much a hacky KISS approach, but it might suffice. I'd try to poll various researchers and ask them to estimate how much longer it would take them to do their work by slide rule. You could ask older generations of researchers the same thing about past work. You could also ask how much faster their work would have been if they could have done it on modern computers.
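As a rough illustration of how those poll answers might be aggregated: the response figures below are invented, and the geometric mean is just one reasonable choice for averaging order-of-magnitude guesses (the arithmetic mean gets dominated by outliers).

```python
import math

# Hypothetical answers to "how many times longer would your work take by
# slide rule?" -- these numbers are made up purely for illustration.
speedup_estimates = [3, 10, 50, 200, 5]

# Order-of-magnitude estimates are better averaged in log space, so take the
# geometric mean rather than the arithmetic mean.
log_avg = sum(math.log(x) for x in speedup_estimates) / len(speedup_estimates)
multiplier = math.exp(log_avg)
print(f"estimated productivity multiplier: ~{multiplier:.0f}x")
```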

It would also probably be useful to know what fraction of researchers' time is spent using a computer. Ideally you would know how much time was spent running AI-specific programs versus things like typing notes and reports into Microsoft Word (which could clearly be done on a typewriter or by hand). Programs like RescueTime could monitor this going forward, but you'd have to rely on anecdotal data to get a historical trend line. However, anecdote is probably good enough for an order-of-magnitude estimate.

You'd definitely want a control, though. People's memories can blur together, especially over decades. Maybe find a related field for which data actually does exist? (From renting time on old computers? There must be at least some records.) If there are old computer logs specific to AI researchers, it would be fantastic to correlate something like citations per paper, or papers per researcher per year, with computer purchases. (Did such-and-such university's new punch-card machine actually increase productivity?) Publication rates in general are skyrocketing and academic trends shift, so I suspect that publication count is a hopelessly confounded metric on a timescale of decades, but it might still show changes from one year to the next.

Another reason for a good control group, if I remember correctly, is that the productivity of industry as a whole wasn't actually improved much by computers; people just think it was. It might also be worth digging around in the Industrial-Organizational Psychology literature to see if you can find studies involving productivity that are specific to AI research, or even something more generic like computer science. (I did a quick search on Google Scholar and determined that all my search terms were far too common to narrow things down to the oddly specific target.)
