
Comment author: snewmark 25 May 2016 12:53:46PM 0 points [-]

...so the total value of a lingering doubt should go to zero if investigated forever.

Very well written. I just wanted to confirm something: I was under the impression that since nothing has 100% certainty, nothing can have 0% uncertainty; you can get closer and closer, but you can never actually reach it. If I'm wrong or misunderstanding this, I'd appreciate it if someone would correct me. Thanks.

Comment author: MarsColony_in10years 26 August 2016 07:29:40PM 0 points [-]

nothing has 100% certainty, nothing can have a 0% uncertainty

That's my understanding as well. I was trying to say that, if you were to formalize all this mathematically and took the limit as the number of Bayesian updates n went to infinity, uncertainty would go to zero.

Since we don't have infinite time to do an infinite number of updates, in practice there is always some level of uncertainty > 0%.
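To make the limit concrete, here's a minimal sketch (Python, with made-up likelihoods; not anything from the original post) of how repeated updates drive the remaining doubt toward zero without ever reaching it:

    # Minimal sketch (made-up likelihoods): repeated Bayesian updates on a
    # binary hypothesis H, where every observation favors H by the same
    # likelihood ratio. The leftover doubt P(not-H) shrinks toward 0 as the
    # number of updates n grows, but never actually reaches it.

    def update(prior_h, p_e_given_h=0.8, p_e_given_not_h=0.2):
        """One Bayesian update on hypothesis H after seeing evidence E."""
        numerator = p_e_given_h * prior_h
        return numerator / (numerator + p_e_given_not_h * (1 - prior_h))

    for n in (1, 10, 100):
        p = 0.5  # start maximally uncertain
        for _ in range(n):
            p = update(p)
        print(f"after {n:3d} updates, remaining doubt P(not-H) = {1 - p:.3e}")

With a fixed likelihood ratio the doubt shrinks geometrically in n, which is the formal version of "the total value of a lingering doubt should go to zero if investigated forever".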

Comment author: James_Miller 06 March 2016 11:10:49PM 4 points [-]

Then why are some people so much smarter than others?

Comment author: MarsColony_in10years 07 March 2016 12:19:38AM *  2 points [-]

I believe CronoDAS is referring to Algernon's Law. Gwern describes the issues pretty well here, including several classes of "loopholes" we might employ to escape the general rule.

The classification of the different types of loopholes is still pretty high-level, and I'd love to see some more concrete and actionable proposals. So, don't take this as saying "this is old hat", but only as a jumping-off point for further discussion.

Comment author: MarsColony_in10years 06 March 2016 02:04:56AM 1 point [-]

This may not be a generalized solution, but it looks like you have rigorously defined a class of extremely common problems. I suspect deriving a solution from game theory would be the formalized version of John Stuart Mill trying to derive various principles of Liberty from Utilitarianism.

Meta: 4.5 hours to write, 30mins to take feedback and edit.

I always find this sort of info interesting. Same for epistemic status. It's nice to know whether someone is spit-balling a weird idea they aren't at all sure of, versus trying to defend a rigorous thesis. That context is often missing in online discussions, and I'd love for it to become the norm here. I suppose knowing how much time you spent writing something only gives me a lower bound on how much thought has gone into the idea in total; some ideas really can be fleshed out completely in a couple of hours, while others may take generations.

Comment author: MarsColony_in10years 02 March 2016 03:37:29AM 5 points [-]

I was surprised to see mention of MIRI and Existential Risk. That means that they did a little research. Without that, I'd be >99% sure it was a scam.

I wonder if this hints at their methodology. Assuming it is a scam, I'd guess they find small but successful charities, then find small tight-knit communities organized around them and target those communities. Broad, catch-all nets may catch a few gullible people, but if enough people have caught on then perhaps a more targeted approach is actually more lucrative?

Really, it's a shame to see this happen even if no one here fell for it, because now we're all a little less likely to be receptive to weird requests/offers. I suspect it's useful for EAs to be able to make random requests of specific people. For example, I can imagine needing a couple hours or days of consulting work from a domain expert. In that situation, I'd be tempted to PM someone knowledgeable in that area, and offer to pay them for some consulting work on the side.

I can actually think of 2 instances where this community has done things like this out in the open (not PM), so it wouldn't surprise me if there are occasional private transactions. (I'd link to examples, but I'd rather not help a potential scammer improve on their methods.) Perhaps a solution would be to route anything that looks suspicious through Bitcoin, so that the transaction can't be cancelled? I wouldn't want to add trivial inconveniences to non-suspicious things, though.

Comment author: malcolmocean 21 February 2016 10:02:26AM 3 points [-]

"I've always hated jargon, and this piece did a good job of convincing me of its necessity."

:)

Feels good to change a mind. I'm curious if there were any parts of the post in particular that connected for you.

Comment author: MarsColony_in10years 23 February 2016 04:06:54AM *  1 point [-]

Although compressing a complex concept down to a short term obviously isn't lossless compression, I hadn't considered how confusing the illusion of transparency might be. I would have strongly preferred that "Thinking, Fast and Slow" continue to use the words "fast" and "slow". As such, these were quite novel points:

  • they don't immediately and easily seem like you already understand them if you haven't been exposed to that particular source

  • they don't overshadow people who do know them into assuming that the names contain the most important features

The notion of using various examples to "triangulate" a precise meaning was also a new concept to me. It calls to mind the image of a Venn diagram with 3 circles, each representing an example. I don't think I have mental models for several aspects of learning. Gwern's write-up on spaced repetition gave me an understanding of how memorization works, but it hadn't occurred to me that I had a similar gap in my model (or lack thereof) of how understanding works.

(I'm not sure the triangulation metaphor lends much additional predictive power. However, an explicit model is a step up from a vague notion that it's useful to have more examples with more variety.)

Comment author: MarsColony_in10years 21 February 2016 06:33:06AM 4 points [-]

I've always hated jargon, and this piece did a good job of convincing me of its necessity. I plan to add a lot of jargon to an Anki deck, to avoid hand-waving at big concepts quite so much.

However, there are still some pretty big drawbacks in certain circumstances. A recent Slate Star Codex comment expressed it better than I ever have:

One cautionary note about “Use strong concept handles”: This leans very close to coining new terms, and that can cause problems.

Dr. K. Eric Drexler coined quite a few of them while arguing for the feasibility of atomically precise fabrication (aka nanotechnology): “exoergic”, “eutactic”, “machine phase”, and I think that contributed to his difficulties.

If a newly coined term spreads widely, great! Yes, it will be an aid to clarity of discussion. If it spreads throughout one group, but not widely, then it becomes an in-group marker. To the extent that it marks group boundaries, it then becomes yet another bone of contention. If it is only noticed and used within a very small group, then it becomes something like project-specific jargon – cryptic to anyone outside a very narrow group (even to the equivalent of adjacent departments), and can wind up impeding communications.

Comment author: MarsColony_in10years 18 February 2016 09:09:56PM 7 points [-]

Meta note before actual content: I've been noticing of late how many comments on LW, including my own, are nitpicks or small criticisms. Contrarianism is probably the root of why our kind can't cooperate, and maybe even the reason so many people lurk and don't post. So, let me preface this by thanking you for the post, and saying that I'm sharing this just as an FYI and not as a critique. This certainly isn't a knock-down argument against anything you've said. Just something I thought was interesting, and might be helpful to keep in mind. :)

Clearly it was a moral error to assume that blacks had less moral weight than whites. The animal rights movement is basically just trying to make sure we don't repeat this mistake with non-human animals. (Hence the use of terms like "speciesism".) You use a couple of reductio ad absurdum arguments with bacteria and video game characters, but it's not entirely clear that we aren't just socially biased there too. If the absurd turns out to be true, then the reductio ad absurdum fails. These arguments are valid ways of concluding "if A then B", but keep in mind that A isn't 100% certain.

There are actually some surprisingly intelligent arguments that insects, bacteria, some types of video game characters, and even fundamental particles might have non-zero moral weight. The question is what probability one gives to those propositions turning out to be true. IF one has reviewed the relevant arguments, and assigns them infinitesimally small credence, THEN one can safely apply the reductio ad absurdum. IF certain simple algorithms have no moral weight and the algorithms behind human brains have high moral weight, THEN algorithms almost as simple are unlikely to have whatever property gives humans value, while complex algorithms (like those running in dolphin brains) might still have intrinsic value.
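One way to make that concrete (my sketch of the standard expected-value move, with stand-in symbols, not anything from the original post): if p is the credence that, say, insects have morally relevant experience, and w is the moral weight they would have if they do, then the weight to act on is roughly

    E[moral weight] = p · w + (1 − p) · 0 = p · w

which is why the reductio only goes through when p is genuinely negligible rather than merely small.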

Comment author: Lumifer 06 February 2016 06:05:22PM 0 points [-]

I'd try and poll various researchers, and ask them to estimate how much longer it would take them to do their work by slide-rule.

The answer would be "infinity" -- you can't do AI work by slide-rule. What next?

Comment author: MarsColony_in10years 06 February 2016 08:35:09PM *  1 point [-]

As I understand it, Eliezer Yudkowsky doesn't do much coding; he mostly works on purely theoretical stuff. I think most of Superintelligence could have been written on a typewriter based on printed research. I also suspect that there are plenty of academic papers which could be written by hand.

However, as you point out, there are also clearly some cases where it would take much, much longer to do the same work by hand. I'd disagree that it would take infinite time, and that it can't be done by hand, but that's just me being pedantic and doesn't get to the substance of the thing.

The interesting questions to answer would be how much work falls into the first category and how much falls into the second. We might think of this as a continuum, ranging from zero productivity gain from computers to trillions of times more efficient. What sub-fields would and wouldn't be possible without today's computers? What types of AI research are enabled simply by faster computers, and which are enabled by using existing AI?

Maybe I can type at 50 words a minute, but I sure as hell can't code at 50 WPM. Including debugging time, I can write a line of code every couple of minutes, if I'm lucky. Looking back on the past two things I wrote, one was ~50 lines of code and took me at least an hour or two if I recall, and the other was ~200 lines and took probably a day or two of solid work. I'm just starting to learn a new language, so I'm slower than in more familiar languages, but the point stands. This hints that, for me at least, the computer isn't the limiting factor. It might take a little longer if I were using punch cards, and at worst maybe twice as long if I were drafting everything by hand, but the computer isn't a huge productivity booster.
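To put rough numbers on that (ballpark figures only, and the 8 words per line is just a guess): ~50 lines in ~90 minutes is about half a finished line per minute, whereas typing at 50 WPM with maybe 8 words per line would be on the order of 6 lines per minute. The mechanical act of entering the code is something like a tenfold smaller cost than the thinking and debugging around it.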

Maybe there's an AI researcher out there who spends most of his or her day tweaking different machine learning algorithms to try to improve them. Even if not, it'd still take forever to crunch that type of algorithm by hand. It'd be a safe bet that anyone who spends a lot of time waiting for code to compile, or who rents time on a supercomputer, is doing work where the computer is the limiting factor. It seems valuable to know which areas might grow exponentially alongside Moore's law, and which might grow based on AI improvements, as OP pointed out.

Comment author: MarsColony_in10years 06 February 2016 06:30:10AM 2 points [-]

I like this idea. I'd guess that a real economist would phrase this problem as trying to measure productivity. This isn't particularly useful though. Productivity is output (AI research) value over input (time), so this just raises the question of how to measure the output half. (I mention it mainly in case it's a useful search term.)

I'm no economist, but I do have an idea for measuring the output. It's very much a hacky KISS approach, but might suffice. I'd try and poll various researchers, and ask them to estimate how much longer it would take them to do their work by slide-rule. You could ask older generations of researchers the same thing about past work. You could also ask how much faster their work would have been if they could have done it on modern computers.
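As a sketch of how crude the aggregation could be (Python, with entirely made-up poll responses, just to show the shape of the estimate):

    # Hypothetical illustration: combine researchers' "how many times longer
    # by slide-rule?" estimates into a single speedup factor. All numbers
    # below are invented placeholders, not real survey data.
    from statistics import geometric_mean

    # Made-up poll responses: estimated slowdown factor if forced to work
    # without a computer (e.g. 3 means "it would take me ~3x as long").
    slowdown_estimates = [3, 10, 40, 5, 100, 8]

    typical_speedup = geometric_mean(slowdown_estimates)
    print(f"Typical estimated speedup from computers: ~{typical_speedup:.0f}x")

Since the responses are ratios ("N times as long"), a geometric mean seems like a more natural way to average them than an arithmetic one.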

It would also probably be useful to know what fraction of researchers' time is spent using a computer. Ideally you would know how much time was spent running AI-specific programs, versus things like typing notes/reports into Microsoft Word (which could clearly be done on a typewriter or by hand). Programs like RescueTime could monitor this going forward, but you'd have to rely on anecdotal data to get a historical trend line. However, anecdote is probably good enough for an order-of-magnitude estimate.

You'd definitely want a control, though. People's memories can blur together, especially over decades. Maybe find a related field for which data actually does exist? (From renting time on old computers? There must be at least some records.) If there are old computer logs specific to AI researchers, it would be fantastic to be able to correlate something like citations per research paper, or the number of papers per researcher per year, with computer purchases. (Did such-and-such university's new punch-card machine actually increase productivity?) Publication rates in general are skyrocketing, and academic trends shift, so I suspect that publication count is a hopelessly confounded metric on a timescale of decades, but it might be able to show changes from one year to the next.

Another reason for a good control group, if I remember correctly, is that the productivity of industry as a whole wasn't actually improved much by computers; people just think it was. It might also be worth digging around in the Industrial-Organizational Psychology literature to see if you can find studies involving productivity that are specific to AI research, or even something more generic like computer science. (I did a quick search on Google Scholar, and determined that all my search terms were far too common to narrow things down to that oddly-specific target.)
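If logs like that did turn up, even a very crude before/after comparison would be a start. Here's a sketch (Python with numpy; every number is an invented placeholder, not real data) of the simplest version:

    # Hypothetical illustration: did papers-per-researcher jump in the years
    # right after a department got its computer? All numbers are invented.
    import numpy as np

    years = np.arange(1960, 1970)
    papers_per_person = np.array([1.1, 1.2, 1.1, 1.3, 1.8, 1.9, 2.0, 2.1, 2.3, 2.4])
    got_computer = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])  # acquired in 1964

    # Crude before/after comparison, plus a correlation as a sanity check.
    before = papers_per_person[got_computer == 0].mean()
    after = papers_per_person[got_computer == 1].mean()
    r = np.corrcoef(papers_per_person, got_computer)[0, 1]
    print(f"mean before: {before:.2f}, mean after: {after:.2f}, correlation: {r:.2f}")

With publication rates rising for unrelated reasons, you'd want the control field's series alongside this before reading anything into the jump.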

Comment author: MarsColony_in10years 30 January 2016 06:05:08AM *  8 points [-]

I could get behind most of the ideas discussed here, but I'm wary of the entire "Standards of Discourse and Policy on Mindkillers" section. It's refreshing to have a section of the internet not concerned with politics. Besides, I don't think the world is anywhere near Pareto optimal, so I don't think political discussions are even that useful: acquiring better political views incurs opportunity costs. Why fight the other side to gain an inch of ground when we could do something less controversial but highly efficient at improving things? I'm all for discussing weird, counterintuitive, and neglected topics, but politics is only interesting for the same reason soap operas and dramas are interesting. The most viral arguments aren't necessarily the most worthwhile.

As for mandatory Crocker's rules, the wiki article has this to say:

Crocker emphasized, repeatedly, in Wikipedia discourse and elsewhere, that one could only adopt Crocker's rules to apply to oneself, and could not impose them on a debate or forum with participants who had not opted-in explicitly to these rules, nor use them to exclude any participant.

I suspect that if Crocker's rules were mandatory for participation in something, there would be a large number of people who would be pushed into accepting them. I don't think this would actually improve anything. Lots of people invoking Crocker's rules is a symptom of a healthy discourse, not a cause of it. Personally, I know that when I invoke Crocker's rules I have a much smaller knee-jerk reaction to criticism. LessWrong can already be a blunt place at times, probably more than is optimal.

I probably have 50 pages of random notes and thoughts that would be appropriate to post here, but haven't. Some are things I started to write specifically for LW, but never finished polishing. I suspect both the community and I would benefit from the discussion, but honestly it takes an order of magnitude more time for me to get something into a state where I'd be willing to post it here; probably twice as much time as it takes to get something to where I'd post it on Facebook. I get diminishing returns from rereading the same thing over and over again, but it's much more difficult to achieve activation energy here. I suspect that difference is mostly due to the subjective feel of the group.
