Comment author: Houshalter 28 September 2016 05:33:31PM 4 points [-]

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.

Many refugees may never be able to return home, and that reality needs to be recognized and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.

From City of Thorns:

The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.

From reddit:

Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos Jóvenes.

Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state supplies power and water.)

Zaatari refugee camp in Jordan, opened four years ago, seems well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.

In response to Linkposts now live!
Comment author: Houshalter 28 September 2016 04:24:57PM 7 points [-]

This is really awesome and could change the fate of lesswrong. I really think this will bring people back (at least more than any other easy-to-implement change). I personally expect to spend more time here now, at least.

One thing to take note of is that lesswrong, by default, sorts by /new. As the volume of posts increases, it may be necessary to change the default sort to /hot or /top/?t=week. Especially if you want it to be presentable to newcomers or even old timers coming back to the site, you want them to see the best links first.

Comment author: Sable 26 September 2016 10:08:43AM 3 points [-]

I was at the vet a while back; one of my dogs wasn't well (she's better now). The vet took her back, and after waiting for a few minutes, the vet came back with her.

Apparently there were two possible diagnoses: let's call them x and y, as the specifics aren't important for this anecdote.

The vet specifies that, based on the tests she's run, she cannot tell which diagnosis is accurate.

So I ask the vet: which diagnosis has the higher base rate among dogs of my dog's age and breed?

The vet gives me a funny look.

I rephrase: about how many dogs of my dog's breed and age get diagnosis x versus diagnosis y, without running the tests you did?

The vet gives me another funny look, and eventually replies: that doesn't matter.

My question for Lesswrong: Is there a better way to put this? Because I was kind of speechless after that.

Comment author: Houshalter 26 September 2016 05:08:24PM 5 points [-]

"Base rate" is statistics jargon. I would ask something like "which disease is more common?" And then if they still don't understand, you can explain that it's probably the disease that is most common, without explaining Bayes' rule.
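To make the reasoning concrete (with made-up numbers, purely for illustration): if the tests can't distinguish the two diagnoses at all, Bayes' rule says the posterior odds are just the ratio of the base rates.

```python
# Hypothetical base rates, invented for illustration: suppose 9% of dogs
# of this breed and age get diagnosis x, and 1% get diagnosis y.
base_rate_x = 0.09
base_rate_y = 0.01

# If the vet's tests are equally consistent with both diagnoses (equal
# likelihoods), Bayes' rule reduces to comparing the priors:
#   P(x | tests) / P(y | tests) = P(x) / P(y)
posterior_odds = base_rate_x / base_rate_y
print(posterior_odds)  # roughly 9: x is about 9 times more likely than y
```

So even with completely uninformative tests, the base rates alone can make one diagnosis far more probable than the other, which is exactly why the question was worth asking.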

Comment author: Good_Burning_Plastic 24 September 2016 10:15:06AM *  0 points [-]

When people say "a dollar was worth more 50 years ago", you don't reply "nuh uh, a dollar has always been worth exactly one dollar."

Yes, but "a dollar is now worth $x" where x is different from 1 is still meaningless unless you specify you're talking about today's dollar vs some other year's dollar specifically.

Comment author: Houshalter 24 September 2016 02:20:42PM 1 point [-]

That's correct, but usually I don't see that mistake made about IQ. On a handful of occasions I've seen someone say "we could raise the average IQ by 10 points" or something like that, and some pedant responds that "the average IQ must always be 100". Which is technically correct, but misses the point. It makes it difficult to have discussions about IQ over time.

Comment author: Val 19 September 2016 11:05:57PM 1 point [-]

First of all, IQ tests aren't designed for high IQ, so there's a lot of noise there and this is probably mainly noise.

Indeed. If an IQ test claims to provide accurate scores outside of the 70 to 130 range, you should be suspicious.

There are so many misunderstandings about IQ in the general population, ranging from claims like "the average IQ is now x" (where x is different from 100), to claims of a famous scientist having had an IQ score over 200, and claims of "some scientists estimating" the IQ of a computer, an animal, or a fictional alien species. Or things as simple as claiming to calculate an IQ score based on a low number (usually less than 10) of trivia questions about basic geography and names of celebrities.

Comment author: Houshalter 23 September 2016 06:17:45PM 2 points [-]

"the average IQ is now x" (where x is different from 100)

I think you are just being pedantic. When people say something like "the Flynn effect has raised the average IQ by 10 points over the last 50 years", they mean that the average person today would score 10 points higher on a 1950s IQ test. See also the value of money, which also changes over time due to inflation. When people say "a dollar was worth more 50 years ago", you don't reply "nuh uh, a dollar has always been worth exactly one dollar."

claims of "some scientists estimating" the IQ of a computer, an animal, or a fictional alien species.

I mean it's impossible to do any kind of serious estimate. But I don't think the idea of a linear scale of intelligence is inherently meaningless. So you could give a very rough estimate where nonhuman intelligences would fall on it, and where that would put them relative to humans with such and such IQ.

Comment author: ChristianKl 23 September 2016 02:14:32PM 0 points [-]

Google car also uses machine learning. That still doesn't mean that it tries to emulate a human driver. The article doesn't say that the car predicts what a human driver would do.

How do you enforce that the AI should "try to drive with as little risk as possible"?

There's the example of the Google car waiting for the woman in the wheelchair who chased ducks. That's behavior you get from the way Google's algorithm cares about safety, and that you wouldn't get from emulating human drivers.

Comment author: Houshalter 23 September 2016 02:41:17PM 0 points [-]

Google's car uses machine learning, but it's not based on it. There is a difference between a special "stop sign detector" function and an "end-to-end" approach where a single algorithm learns everything.

Comma.ai's business model is to pay people to upload their dashcam footage, and to train neural networks on it. As far as I know, what I described is their approach.

Comment author: Houshalter 23 September 2016 01:05:53PM 2 points [-]

Replace "give human heroin" with "replace the human with another being whose utility function is easier to satisfy, like a rock", and this conclusion seems sort of trivial. It has nothing to do with whether or not humans are rational. Heroin is an example of a thing that modifies our utility functions. Heroin might as well replace the human with a different entity, that has a slightly different utility function.

In fact I don't see how the human in this situation is being irrational at all. Not doing heroin unless you are already addicted seems like a reasonable behavior.

Comment author: ChristianKl 23 September 2016 11:28:06AM 1 point [-]

It's not my impression that self driving cars simply try to copy what a human does in any case. The AI don't violate speed limits and generally try to drive with as little risk as possible. Humans drive very differently.

Comment author: Houshalter 23 September 2016 12:47:29PM *  0 points [-]

You might be thinking of Google's self-driving car, which seems to have been designed from the ground up with traditional programming. I am thinking of systems like Comma.ai's, which use machine learning to train self-driving cars by predicting what a human driver would do.

Of course you can put a regulator on the gas pedal and prevent the AI from speeding. But other issues are more difficult to control. How do you enforce that the AI should "try to drive with as little risk as possible"? We have very few training examples of accidents, and we can't let the car experiment under real conditions.

My guess on how to solve this issue is to develop a way to "speak" with the AI. So we can see what it is thinking, and tell it what we would prefer it to do. But this is difficult and there is little research on methods to do this, yet.

In response to Against Amazement
Comment author: Houshalter 23 September 2016 11:23:22AM 3 points [-]

Juergen Schmidhuber has a theory of artificial curiosity. His theory proposes that seeking confusion is actually a good thing. Agents that seek out situations where surprising things happen, put their internal models to the test and learn the most. And that's all curiosity is.
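A toy way to express that idea (a loose sketch, not Schmidhuber's actual formulation): make the intrinsic reward the *improvement* in the agent's world model, so surprising-but-learnable situations pay off, while familiar situations and unlearnable noise do not.

```python
def curiosity_reward(error_before, error_after):
    """Intrinsic reward = how much the predictive model improved."""
    return error_before - error_after

# A surprising but learnable observation improves the model a lot:
big = curiosity_reward(0.9, 0.2)     # large reward
# A familiar, well-predicted observation teaches almost nothing:
small = curiosity_reward(0.1, 0.09)  # tiny reward
# Pure noise can't be predicted any better, so it earns nothing:
none = curiosity_reward(0.5, 0.5)    # zero reward
print(big, small, none)
```

An agent maximizing this kind of reward is driven toward exactly the confusing-but-compressible experiences the theory describes.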

Amazement is just a form of curiosity. People who are interested in AlphaGo have had their internal models of AI progress challenged, and are updating them.

Comment author: Houshalter 23 September 2016 10:40:57AM *  1 point [-]

Here is a real world control problem: Self driving cars. Companies are currently taking dash cam footage of people driving, and using it to train AIs to drive cars.

There is a serious problem with this. The AIs can learn to predict exactly what a human would do. But humans aren't actually optimal drivers. They make tons of mistakes. They have slow reaction times. They fail to notice things. They don't apply the optimal braking or acceleration, they speed, they don't make optimal turns, etc.

AIs trained on human data end up mimicking all of these imperfections. Then combined with the AIs own imperfections, you get a subpar driver. At best, if the AI is perfect, you get a driver that is equally as good as a human, but not necessarily any better.
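A toy illustration of that ceiling (a behavioral-cloning sketch with invented numbers, not any company's actual pipeline): fit a simple steering policy to noisy human demonstrations by least squares. The clone can recover the *average* human response, but its error can never fall below the variance of the humans' own mistakes, because nothing in the data indicates what optimal driving looks like.

```python
import random

random.seed(0)

true_gain = 2.0    # the "ideal" steering response to road curvature
human_noise = 0.3  # humans steer imperfectly around that ideal

# Demonstrations: (curvature, human_steering_angle) pairs
data = [(c / 50, (c / 50) * true_gain + random.gauss(0, human_noise))
        for c in range(-50, 51)]

# Behavioral cloning here = one-parameter least-squares regression
num = sum(x * y for x, y in data)
den = sum(x * x for x, y in data)
learned_gain = num / den  # recovers roughly the average human policy

# Training loss floor: the variance of human error remains as residual,
# no matter how well the model fits.
residual = sum((y - learned_gain * x) ** 2 for x, y in data) / len(data)
print(round(learned_gain, 2), round(residual, 3))
```

The learned gain lands near the human average, but the residual error stays around the humans' noise level; a pure imitator inherits that floor.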

Self driving cars are a perfect test case for AI control methods, and a perfect way to encourage mainstream researchers to consider the control problem. There will be many similar cases in the future as AIs start being applied to real world problems in open ended domains. Or wherever there is a hard to define goal function to measure the AI by.
