
Comment author: I_D_Sparse 09 March 2017 05:46:56AM 4 points [-]

I must admit to some amount of silliness – the first thought I had upon stumbling onto LessWrong, some time ago, was: “wait, if probability does not exist in the territory, and we want to optimize the map to fit the territory, then shouldn’t we construct non-probabilistic maps?” Indeed, if we actually wanted our map to fit the territory, then we would not allow it to contain uncertainty – better some small chance of having the right map than no chance, right? Of course, in actuality, we don’t believe that (p with probability x) with probability 1. We do not distribute our probability-mass over actual states of reality, but rather over models of reality; over maps, if you will! I find it helpful to visualize two levels of belief: on the first level, we have an infinite number of non-probabilistic maps, one of which is entirely correct and approximates the territory as well as a map possibly can. On the second level, we have a meta-map, which is the one we update; it consists of probability distributions over the level-one maps. What are we actually optimizing the level-two map for, though? I find it misleading to talk of “fitting the territory”; after all, our goal is to keep a meta-map that best reflects the state of the data we have access to. We alter our beliefs based (hopefully!) on evidence, knowing full well that this will not lead us to a perfect picture of reality, and that a probabilistic map can never reflect the territory.
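A minimal sketch of that two-level picture, with made-up numbers: each level-one map is a definite, non-probabilistic claim about the territory (here, what kind of coin is in front of you), and the level-two meta-map is the probability distribution over those maps that gets updated on evidence without ever reaching certainty.

```python
# Toy illustration of the two-level picture above (all numbers made up).
# Level one: candidate "maps", each a definite claim about the territory.
# Level two: a probability distribution over those maps -- the thing we
# actually update when evidence comes in.

prior = {"fair coin": 0.9, "two-headed coin": 0.1}

# Probability of observing heads under each candidate map.
p_heads = {"fair coin": 0.5, "two-headed coin": 1.0}

def update_on_heads(belief):
    """One Bayesian update of the level-two meta-map after seeing heads."""
    unnormalized = {m: belief[m] * p_heads[m] for m in belief}
    total = sum(unnormalized.values())
    return {m: p / total for m, p in unnormalized.items()}

belief = prior
for _ in range(5):                      # observe five heads in a row
    belief = update_on_heads(belief)

print(belief)   # mass shifts toward "two-headed coin" but never reaches 1.0
```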

Comment author: Houshalter 09 March 2017 09:09:18PM 1 point [-]

I think a concrete example is good for explaining this concept. Imagine you flip a coin and then put your hand over it before looking. The state of the coin is already fixed at one value. There is no probability or randomness involved in the real world now. The uncertainty about its value is entirely in your head.

Comment author: Houshalter 09 March 2017 01:46:56PM 1 point [-]

From Surely You're Joking, Mr. Feynman!:

Topology was not at all obvious to the mathematicians. There were all kinds of weird possibilities that were “counterintuitive.” Then I got an idea. I challenged them: "I bet there isn't a single theorem that you can tell me - what the assumptions are and what the theorem is in terms I can understand - where I can't tell you right away whether it's true or false."

It often went like this: They would explain to me, "You've got an orange, OK? Now you cut the orange into a finite number of pieces, put it back together, and it's as big as the sun. True or false?"

"No holes."

"Impossible!

"Ha! Everybody gather around! It's So-and-so's theorem of immeasurable measure!"

Just when they think they've got me, I remind them, "But you said an orange! You can't cut the orange peel any thinner than the atoms."

"But we have the condition of continuity: We can keep on cutting!"

"No, you said an orange, so I assumed that you meant a real orange."

So I always won. If I guessed it right, great. If I guessed it wrong, there was always something I could find in their simplification that they left out.

Actually, there was a certain amount of genuine quality to my guesses. I had a scheme, which I still use today when somebody is explaining something that I’m trying to understand: I keep making up examples. For instance, the mathematicians would come in with a terrific theorem, and they’re all excited. As they’re telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)—disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn’t true for my hairy green ball thing, so I say, “False!”

If it’s true, they get all excited, and I let them go on for a while. Then I point out my counterexample.

“Oh. We forgot to tell you that it’s Class 2 Hausdorff homomorphic.”

“Well, then,” I say, “It’s trivial! It’s trivial!” By that time I know which way it goes, even though I don’t know what Hausdorff homomorphic means.

I guessed right most of the time because although the mathematicians thought their topology theorems were counterintuitive, they weren’t really as difficult as they looked. You can get used to the funny properties of this ultra-fine cutting business and do a pretty good job of guessing how it will come out.

Comment author: Pablo_Stafforini 25 February 2017 10:17:05PM 0 points [-]

Eliezer has not published a detailed explanation of his estimates, although he has published many of the arguments behind them.

Eliezer wrote this in 1999:

My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015.

Comment author: Houshalter 26 February 2017 02:04:57AM 2 points [-]

Yudkowsky has changed his views a lot over the last 18 years though. A lot of his earlier writing is extremely optimistic about AI and its timeline.

Comment author: Houshalter 24 February 2017 10:39:16PM *  1 point [-]

This is by far my favorite form of government. It's a great response whenever the discussion of "democracy is the best form of government we have" comes up. Some random notes in no particular order:

Sadly, getting support for this in the current day is unlikely because of the huge negative associations with IQ tests. Even literacy tests for voters are illegal because of a terrible history of fake tests being used by poll workers to exclude minorities. (Yes, the tests were fake, like this one, where all the answers are ambiguous and can be judged as correct or incorrect depending on how the test grader feels about you.)


This doesn't actually require the IQ testing portion, though. I believe the greatest problem with democracy is that voters are mostly uninformed, and they have no incentive to get informed. A congress randomly sampled from the population, though, would be able to hear issues and debates in detail. Even if its members were of average IQ, I think it would be much better than the current system. And you could use this congress of "average" representatives to vote for other leaders, like judges and presidents, who would be more selected for intelligence.

In fact you could just use this system to randomly select voters from the population. Get them together so they can discuss and debate in detail, and know their votes really matter. And then have them vote on the actual leaders and representatives like a normal election. I believe something like this is mentioned at the end of the article.

Of course I still like and approve of the IQ filtering idea. But I think these two ideas are independent, and the IQ portion is always going to be the most controversial.


I think the sortition should be entirely opt-in, just like normal voting is. This selects for people who actually care about politics and want to be representatives, which might select for IQ a bit on its own. And it prevents you from getting uninterested people who are bored out of their minds by politics.


One could argue such a system would under-represent minority groups, if they have lower IQs or are less likely to opt in. However, the current system isn't representative at all. Look at the makeup of congress now. Different demographics are more or less likely to vote in elections as it is. And things like gerrymandering and just regular geography-based voting distort representation a lot. And yet somehow it still mostly works, and I don't think this system could be any worse in that dimension.

But if it is a concern, you could just resample groups to match the general population. So if women are half as likely to opt in, the women who do opt in should be made twice as likely to be selected. I'm not sure if this is a good or desirable thing to do, just that it would quell these objections.
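A rough sketch of that resampling idea, with hypothetical group labels and numbers: weight each volunteer by (population share of their group) divided by (opt-in share of their group), so a group that opts in at half the rate is drawn at twice the rate.

```python
import random

# Hypothetical example: group "B" opts in half as often as group "A",
# but both make up 50% of the general population.
population_share = {"A": 0.5, "B": 0.5}
volunteers = ["A"] * 200 + ["B"] * 100

# Weight each volunteer by population share / opt-in share of their group.
optin_share = {g: volunteers.count(g) / len(volunteers) for g in population_share}
weights = [population_share[g] / optin_share[g] for g in volunteers]

congress = random.choices(volunteers, weights=weights, k=500)
print({g: congress.count(g) / len(congress) for g in population_share})
# Both groups land near 0.5 despite the skewed opt-in rates.
```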


Selecting for the top 1% of IQ is too much filtering. You really don't want to create an incentive to game IQ tests, at least not too much. And remember IQ tests are not perfect; they can be practiced to improve your score. You also don't want a bunch of representatives who are freaks of nature, with brains really good at Raven's Matrices and nothing else. There are multiple dimensions to intelligence, and while they correlate, the correlation isn't 100%. I'd arbitrarily go with the top 5%, i.e. the best scorer out of 20. Even that seems high.


All the discussion about how the system could be corrupted is ridiculous. People had the same objections to regular democracy. How do we trust that the poll workers and vote counters are reliable? What's to stop a vast conspiracy of voting fraud?

Somehow we've mostly solved these problems and votes are trusted. When issues arise, we have a court system that seems to be relatively fair about resolving them. And it's still not perfect. We have stuff like gerrymandering that wouldn't be an issue with sortition based systems.


I hope the mods don't remove this for violating the politics rule. While it is technically about political systems, it's only in a meta sense. Talking about the political system itself, not specific policies or ideologies. There is nothing particularly left or right wing about these ideas. I don't think anyone is likely to be mindkilled by it.

Comment author: Lumifer 13 February 2017 10:33:32PM *  7 points [-]

An interesting metaphor, given how the balrog basically went back to sleep after eating the local (and only the local) dwarves. And after some clumsy hobbitses managed to wake him up again, he was safely disposed of by a professional. In no case did the balrog threaten the entire existence of Middle-earth.

Comment author: Houshalter 16 February 2017 07:00:19AM 3 points [-]

In the first draft of The Lord of the Rings, the Balrog ate the hobbits and destroyed Middle-earth. Tolkien considered this ending unsatisfactory, if realistic, and wisely decided to revise it.

Comment author: username2 13 February 2017 04:22:46PM 2 points [-]

Are there interesting YouTubers LessWrong is subscribed to? I never really used YouTube, and after watching "history of japan" I get the feeling I'm missing out on some stuff.

Comment author: Houshalter 15 February 2017 03:22:15AM *  2 points [-]

It's really going to depend on your interests. I guess I'll just dump my favorite channels here.

I enjoy some math channels like Numberphile, Computerphile, standupmaths, 3blue1brown, Vi Hart, Mathologer, singingbanana, and some of Vsauce.

For "general interesting random facts" there's Tom Scott, Wendover Productions, CGP Grey, Lindybeige, Shadiversity. and Today I Found Out.

Science/Tech/etc: engineerguy, Kurzgesagt, and C0nc0rdance.

Miscellaneous: kaptainkristian, CaptainDisillusion, and the more recent videos of suckerpinch.

Politics: I unsubscribed from most political content a long time ago. But Last Week Tonight and Vox are pretty good.

Humor: That's pretty subjective, but I think everyone should know about The Onion. Also Fitzthislewitz.

Comment author: Lumifer 11 February 2017 01:21:45AM 3 points [-]

A fair point, but I still expect gene-level interventions to work better and be developed noticeably earlier than any "cures" for low IQ in adults or even kids. Notably, after the low-hanging fruit has been picked (malnutrition, lead, etc.), there are no clear avenues for advancement. At the moment we don't have a clue as to where even to start looking.

Comment author: Houshalter 11 February 2017 07:40:36PM 0 points [-]

Well there is a lot of research into treatments for dementia, like the neurogenesis drug I mentioned above. I think it's quite plausible they will stumble upon general cognitive enhancers that improve healthy people.

Comment author: Lumifer 10 February 2017 05:56:02PM 9 points [-]

Let's define "stupidity" as "low IQ" where IQ is measured by some standard tests.

IQ is largely hereditary (~70%, IIRC) and polygenic. This means that attempting to "cure" it by anything short of major genetic engineering will have quite limited upside.

There are cases where IQ is depressed from its "natural" level (e.g. by exposure to lead) and these are fixable or preventable. However if you're genetically stupid, drugs or behavioral changes won't help.

we could, for instance, sequence a lot of people's DNA, give them all IQ tests, and do a genome-wide association study, as a start.

We could and people do that. If you're interested in IQ research, look at Greg Cochran or James Thompson or Razib Khan, etc. etc.

We could see affirmative action for stupid people. Harvard would boast about how many stupid people it admitted.

That, ahem, is exactly what's happening already :-/

Comment author: Houshalter 10 February 2017 10:15:52PM 5 points [-]

Just because it's genetic doesn't mean it's incurable. Some genetic diseases have been cured. I've read of drugs that increase neurogenesis, which could plausibly increase IQ. Scientists have increased the intelligence of mice by replacing their glial cells with better human ones.

Comment author: khafra 08 February 2017 12:21:50PM 3 points [-]

Point 8, about the opacity of decision-making, reminded me of something I'm surprised I haven't seen on LW before:

LIME, Local Interpretable Model-agnostic Explanations, can show a human-readable explanation for the reason any classification algorithm makes a particular decision. It would be harder to apply the method to an optimizer than to a classifier, but I see no principled reason why an approach like this wouldn't help understand any algorithm that has a locally smooth-ish mapping of inputs to outputs.
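For readers who haven't used it, here is a rough from-scratch sketch of the core idea behind LIME (not the actual lime library, and all parameters are arbitrary): perturb the input around one instance, query the black-box classifier, weight the perturbations by closeness to the instance, and fit a weighted linear model whose coefficients serve as the local explanation.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# A black-box model we want to explain locally.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x0 = X[0]                                   # the instance to explain
rng = np.random.default_rng(0)

# 1. Perturb the instance with Gaussian noise.
samples = x0 + rng.normal(scale=X.std(axis=0), size=(1000, X.shape[1]))

# 2. Ask the black box for its predicted probability of class 0.
preds = black_box.predict_proba(samples)[:, 0]

# 3. Weight samples by proximity to x0 (exponential kernel on scaled distance).
dists = np.linalg.norm((samples - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dists ** 2) / 2.0)

# 4. Fit a weighted linear surrogate; its coefficients are the local explanation.
surrogate = Ridge(alpha=1.0).fit(samples, preds, sample_weight=weights)
for name, coef in zip(load_iris().feature_names, surrogate.coef_):
    print(f"{name}: {coef:+.3f}")
```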

Comment author: Houshalter 08 February 2017 05:44:09PM 0 points [-]

I wasn't aware that method had a name, but I've seen that idea suggested before when this topic comes up. For neural networks in particular, you can just look at the gradients of the inputs to see how the output changes as you change each input.
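A minimal sketch of the input-gradient idea with a toy PyTorch model (the architecture and data here are placeholders, not anything trained): backpropagate the output to the input and read off how sensitive the output is to each feature.

```python
import torch
import torch.nn as nn

# Arbitrary untrained toy network standing in for a real model.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 4, requires_grad=True)   # one input example
output = model(x).sum()                     # reduce to a scalar so backward() works
output.backward()                           # d(output) / d(input)

print(x.grad)   # per-feature sensitivity of the output to the input
```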

I think the problem people have is that this just tells you what the machine is doing, not why. Machine learning can never really offer understanding.

For example, there was a program created specifically for the purpose of training human-understandable models. It worked by fitting the simplest possible mathematical expression to the data, and the hope was that simple mathematical expressions would be easy for humans to interpret.

One biologist found an expression that perfectly fit his data. It was simple, and he was really excited by it. But he couldn't understand what it meant at all. And he couldn't publish it, because how can you publish an equation without any explanation?

Comment author: Stuart_Armstrong 07 February 2017 10:06:44AM 0 points [-]

Instead of using the somewhat complicated GAN thing, you can just have it try to predict the next letter a human would type.

How do you trade that off against giving an actually useful answer?

Comment author: Houshalter 07 February 2017 02:04:36PM 1 point [-]

Same as with the GAN thing. You condition it on producing a correct answer (or whatever the goal is). So if you are building a question-answering AI, you have it model a probability distribution something like P(human types this character | human correctly answers question). This could be done simply by only feeding it examples of correctly answered questions as its training set. Or you could have it predict what a human might respond if they had n days to think about it.

Though even that may not be necessary. What I had in mind was just having the AI read MIRI papers and produce new ones just like them. Like a superintelligent version of what people do today with Markov chains or RNNs to produce writing in the style of an author.
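A crude sketch of both ideas at Markov-chain scale (the corpus here is a placeholder): keep only the "correct" examples, then fit a character-level Markov model to them and sample new text in the same style.

```python
import random
from collections import defaultdict

# Placeholder corpus of (text, was_the_answer_correct) pairs.
corpus = [
    ("the agent maximizes expected utility over world-models.", True),
    ("utility functions over world-models can avoid wireheading.", True),
    ("a completely wrong answer that we do not want to imitate.", False),
]

# Condition on correctness simply by filtering the training set.
training_text = " ".join(text for text, correct in corpus if correct)

# Fit a character-level Markov chain: P(next char | previous 3 chars).
ORDER = 3
transitions = defaultdict(list)
for i in range(len(training_text) - ORDER):
    transitions[training_text[i:i + ORDER]].append(training_text[i + ORDER])

# Sample new text in the same style as the filtered corpus.
state = training_text[:ORDER]
out = state
for _ in range(200):
    nxt = random.choice(transitions.get(state, [" "]))
    out += nxt
    state = out[-ORDER:]
print(out)
```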

Yes these methods do limit the AI's ability a lot. It can't do anything a human couldn't do, in principle. But it can automate the work of humans and potentially do our job much faster. And if human ability isn't enough to build an FAI, well you could always set it to do intelligence augmentation research instead.
