Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Manfred 05 January 2017 01:39:13AM 4 points [-]

"Proper scoring rule" just means that you attain the best score by giving the most accurate probabilities you can. In that sense, any concave proper scoring rule will give you a good feedback mechanism. The reason people like log scoring rule is because it corresponds to information (the kind you can measure in bits and bytes), and so a given amount of score increase has some meaning in terms of you using your information better.

The information measured by your log score is identical to Shannon's idea of information carried by digital signals. When a binary event is completely unknown to you, you can gain 1 bit of information by learning about it. For events that you can predict to high accuracy, the entropy of the event (according to your distribution) is lower, and you gain less information by learning the result. In fact, if you look at the expected score, it goes to zero as the event becomes more and more predictable (though you're still incentivized to answer correctly).
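A minimal sketch of this correspondence in Python (the function names are mine, not from any particular library):

```python
import math

def log_score_bits(p_given_to_outcome):
    """Log score of the realized outcome, as surprisal in bits: -log2(p)."""
    return -math.log2(p_given_to_outcome)

def expected_surprisal(p):
    """Expected log score (Shannon entropy, in bits) of a binary event
    with probability p, forecast with that same calibrated p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(log_score_bits(0.5))       # 1.0 bit: a coin flip you called at 50%
print(expected_surprisal(0.5))   # 1.0: maximal entropy for a binary event
print(expected_surprisal(0.99))  # ~0.08: nearly predictable, little to learn
```

As p approaches 0 or 1 the expected surprisal goes to zero, matching the claim that predictable events carry almost no information.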

But I think this leaves out something interesting that I don't have a good answer for, which is that this straightforward interpretation only works when you, the human, don't screw up. When you do screw up, I'm not sure there's a clear interpretation of the score.

Comment author: wubbles 07 January 2017 03:21:00PM 0 points [-]

The logarithmic scoring rule measures the information carried by the event given your predictions. Reducing its expectation corresponds to reducing the information carried by the event when it arrives.

Comment author: michaelkeenan 29 December 2016 05:02:29AM *  2 points [-]

[I misinterpreted wubbles above; I retract this comment.]

I think we should reserve the "epistemic status" thing for authors to describe their own works. Using it to insult a work seems pointlessly snarky. The useful part could be communicated with just "Probably BS" or "I think this is probably BS". Leaving it at that would avoid the useless connotation about the author's thought process, which is unknowable by others.

Comment author: wubbles 29 December 2016 01:28:00PM 5 points [-]

I was using it to describe my own comment. I'll try to think of a way to make that clearer in the future.

Comment author: wubbles 28 December 2016 02:41:00AM 1 point [-]

Epistemic status: probably BS. This could be a causal explanation for why engineers are seen as having poor social skills, and for the usefulness of ASD traits in engineering. If you aren't sensitive to how socially awkward the productive conversations are, and so don't disrupt them, you will learn more.

As for salons: the fact that a hostess led the conversation and selected the guests meant that the conversation had to be interesting to her. Those who didn't have anything interesting to say, or who disrupted interesting conversations, wouldn't be invited back. Sadly, Wikipedia doesn't say much about how they were run. The salons seem to also have selected books to read, which would steer the conversation towards those books.

Comment author: Alexei 10 December 2016 08:13:54AM 8 points [-]

It's likely you'll address this in future posts, but I'm curious now. To me it seems like CFAR played a very important role in attracting people to the Bay. "Come for the rationality, stay for the x-risk." I have a feeling that with this pivot it'll be harder to attract people to the community. What are your thoughts on that?

Comment author: wubbles 14 December 2016 03:46:18PM 0 points [-]

I'm not sure how much of this was CFAR and x-risk vs. programming and autism. Certainly a lot of the people at the SF meetup were not CFARniks, based on my completely unscientific examination of my memory. The community's survival and growth are secondary to solving x-risk now, even if before the goal was to make a community devoted to these arts.

Comment author: Algernoq 30 November 2015 07:04:41AM *  3 points [-]

Computing power per dollar has doubled every ~2 years for the past 40 years, per Moore's Law.

Can biology keep up? The next generation of humans would reach adulthood in 20 years, at which time computers would have ~1024x today's processing power. That's a pretty high bar for selective breeding or genetic modification to keep up.
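The arithmetic behind the ~1024x figure, as a quick sketch:

```python
years = 20
doubling_period_years = 2           # Moore's-law doubling assumed above
doublings = years // doubling_period_years
factor = 2 ** doublings
print(factor)  # 1024: processing power multiplier after 20 years
```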

Comment author: wubbles 30 November 2015 12:34:48PM 0 points [-]

Humans share our values, and should they not, they can generally be overwhelmed with sheer numbers. Making them is unlikely to be dangerous. The same cannot be said for unfriendly AIs. We still have no idea how to make a friendly AI, and it could easily be a century before we begin to have an idea. Even if biology cannot keep up, improving intelligence will have positive effects on human productivity in the short run, which brings the goal of making a friendly AI closer.

I'm not saying that biological modification will ultimately bring about singularity or even extremely dramatic changes in human capabilities. Rather I think it will address many talent shortages simultaneously, for not that much money. I'm proposing it as an idea we can implement now.

Comment author: ChristianKl 30 November 2015 09:33:27AM 0 points [-]

However, the impact is likely to be limited due to the cost of these methods which will prevent them from having population-wide influence

Why do you think so?

Comment author: wubbles 30 November 2015 12:19:45PM *  1 point [-]

Look at the cost of IVF: according to http://www.momjunction.com/articles/much-ivf-treatment-cost-india_0074672/ it is Rs 450,000, which is about $6,000. IVF is a prerequisite for the sort of genetic tampering we are talking about, unless you want to use rabies as a carrier. IVF is widely practiced and has few barriers to entry, making me think it won't get much cheaper. That is a lot of money for many parts of the world. To think this cost will come down significantly in the next 10-20 years requires believing that significant advances in automating the process are possible: that might be true.

Financial incentives to have smarter children are likely to work better in those regions where $6,000 is a lot of money. It's possible that combining both strategies works even better.

Smarter humans, not artificial intelligence

-3 wubbles 30 November 2015 03:48AM

I'm writing this article to explain some of the facts that have convinced me that increasing average human intelligence through traditional breeding and genetic manipulation is likelier to reduce existential risks in the short and medium term than studying AI risks, while providing all kinds of side benefits.

Intelligence is useful for achieving goals, including avoiding existential risks. Higher intelligence is associated with improvements in many diverse life outcomes, from health to wealth. Intelligence may have synergistic effects on economic growth, where average levels of intelligence matter more for wealth than individual levels. Intelligence is a polygenic trait with strong heritability. Sexual selection in the Netherlands has resulted in extreme increases in average height over the past century: sexual selection for intelligence might do the same. People already select partners for intelligence, and egg donors are advertised by SAT score.

AI safety research seems to be intelligence constrained. Very few of those capable of making a contribution are aware of the problem, or find it interesting. The Berkeley-MIRI seminar has increased the pool of those aware of the problem, but the total number of AI safety researchers remains small. So far, very foundational problems remain to be solved. This is likely to take a very long time: it is not unusual for mathematical fields to take centuries to develop. Furthermore, we can work on both strategies at once and observe spillover from one into the other, as a higher intelligence baseline translates into an increase on the right tail of the distribution.

How could we accomplish this? One idea, invented as far as I know by Robert Heinlein, is to subsidize marriages between people of higher-than-usual intelligence, and their having children. This idea has the benefit of being entirely non-coercive. It is, however, unclear how large these subsidies would need to be to influence behavior, and given the strong returns to intelligence in life outcomes, it is unclear that they can influence behavior further.

Another idea would be to conduct genetic studies to find genes which influence intelligence, and then conduct genome modification. This plan suffers from illegality, from our lack of knowledge of the genetic factors of intelligence, and from the absence of effective means of genome editing (CRISPR has been tried on human embryos: more work is needed). However, the results of this work can be sold for money, opening the possibility of using VC money to develop it. Illegality can be dealt with by influencing jurisdictions. However, the impact is likely to be limited, because the cost of these methods will prevent them from having population-wide influence; instead they will become yet another advantage the affluent attempt to purchase. These techniques are likely to have vastly wider applications, and so will be commercially developed anyway.

In conclusion, genetic modification of humans to increase intelligence is practical in the near term, and it may be worth diverting some effort to investigating it further.

Comment author: wubbles 21 September 2015 11:08:05PM 1 point [-]

I don't agree with this line of argument. Suppose there are five employees, and they all press the button. Each receives 100 utils and loses 4, leading to a net gain of 96 each. Why is this not the ethically correct outcome, even for a deontologist?
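The arithmetic, under one reading of the scenario (I'm assuming each press gives the presser 100 utils and costs every other employee 1 util; only the totals appear above, so the per-press cost is my guess):

```python
n_employees = 5
gain_own_press = 100       # utils the presser receives (from the scenario)
cost_per_other_press = 1   # assumed: utils lost per other employee's press

# Everyone presses: each gains 100 from their own press and loses 1
# for each of the other four presses.
net_each = gain_own_press - cost_per_other_press * (n_employees - 1)
print(net_each)  # 96
```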

Comment author: ChristianKl 14 April 2014 10:22:16AM *  1 point [-]

I think roughly a month ago I had a discussion on LW about using Anki to learn biology data. The person complained about Anki's perceived limitation to text only. He would rather learn using things like Venn diagrams, because they are better at displaying information than pure text.

The problem is that it's not straightforward to simply create a Venn diagram while creating an Anki card or while discussing on LessWrong. It takes extra time. With a bit of smart UI design we might have a UI that makes it easy to make points via diagrams. Of course, that means we need to think about how to create good diagrams for a bunch of other semantic constructs.

Especially if your default medium of data entry isn't a keyboard but a multitouch device, having a bunch of diagrams might be better than text. Text developed in an environment where space was expensive. Today, keyboards are simply amazing technology that makes text entry very easy.

I could imagine that the necessary technology won't be developed in customer applications like facebook but in a field like biology where it's very important to express complex ideas in an easy to understand manner. A series of big diagrams might just perform better than a bunch of long and convoluted sentences.

It's easier to upload and store. Text takes less space. Uploading it to a network or sending it to a friend takes less bandwidth.

Today that might be a concern. I don't think it will be in 20 years. I think a large part of why Google Wave failed was that it was just too slow.

Text is easier to search (this refers both to searching within a given piece of text and to locating a text based on some part of it or some attributes of it).

Speech-to-text technology should make this easier in the future.

You can't play background music while consuming audio-based content, but you can do it while consuming text.

I think you can play low-volume music in the background of a podcast.

Comment author: wubbles 31 August 2015 01:34:54AM 0 points [-]

I can consume text at a rate sometimes as high as 26 words a second. I cannot do that with audio. If we had good speech-to-text, I would use it for turning audio into text, and consume the text. Or the author could use it to produce text, which they could then edit. When talking we frequently do all sorts of things we don't do when writing: repeat ourselves, use funny turns of phrase, search for words, etc. The large bandwidth advantage to the consumer, bought with a small amount of work by the producer, keeps text valuable.
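For comparison, a back-of-the-envelope ratio (the ~150 wpm speaking rate is my assumption, not from the comment):

```python
reading_wps = 26            # peak reading rate claimed above, words/second
speaking_wpm = 150          # assumed typical podcast speaking rate
speaking_wps = speaking_wpm / 60   # 2.5 words/second
print(reading_wps / speaking_wps)  # 10.4x advantage for reading
```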

As far as diagrams go in technical areas, there are some famous pictures in mathematics. These pictures inevitably mean nothing without text. Transmitting abstract ideas, and in particular transmitting subtle variations in how solid something is, doesn't seem compatible with diagrams. Diagrams are good for some concepts, but it's still an art to get good ones. Creating them is expensive, and sometimes they don't work. On the other hand it's hard to beat a good graph for communicating numerical data easily and letting the viewer draw appropriate inferences.

Comment author: wubbles 22 August 2015 06:03:50PM 0 points [-]

What if some policies correlate with kinds of arguments people find appealing? An argument from natural law against the legality of homosexuality is unlikely to convince anyone who doesn't love St. Thomas Aquinas, while the liberty principle won't convince a single Dominican. Then again this is more a problem of ethical foundations than factual arguments, so perhaps by separating values from beliefs you can finesse this difficulty.
