
You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: Houshalter 28 September 2016 05:33:31PM 4 points [-]

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.

Many refugees may never be able to return home, and that reality needs to be recognized and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.

From City of Thorns:

The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.

From reddit:

Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos Jóvenes.

Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state runs in power and water.)

Zaatari refugee camp in Jordan, opened 4 years ago, seems to be well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.

Comment author: MrMind 28 October 2016 07:46:08AM 3 points [-]

I won't be able to create a new Open Thread on Monday (I will be at our national version of Comic-Con). Can someone east of the US create it?
Community service is good karma. Literally.

Comment author: TheOtherDave 27 October 2016 12:53:30AM 3 points [-]

This comment taken out of context kind of delighted me.

Comment author: NancyLebovitz 26 October 2016 02:01:23AM 3 points [-]

Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least permanently stunting our continued development, should they become aware of us?

Not being bored. Living systems (and presumably even more so living systems that include intelligence) show more complex behavior than dead systems.

Comment author: Viliam 25 October 2016 10:26:00PM *  3 points [-]

The part about healthcare is USA-specific, but the relationship between total hours and total pay is nonlinear at other places, too.

In Slovakia, healthcare is set up so that everyone pays a fixed fraction of their income, and then everyone receives exactly the same healthcare regardless of how much they paid. So it shouldn't have any impact on the hourly rate.

Yet it is difficult to find part-time work on the market. When I tried it, I had to work for 50% of my previous salary just to reduce the work to 4 days a week, and the employer still believed they were doing me a favor. (After a few weeks I decided that getting 50% of the money for 80% of the time was not a smart deal, so I quit.)

I believe the problem is signalling. Almost everyone is okay with working full-time; especially men. (Women can use having small kids as an excuse for a part-time job, but that also dramatically reduces their hourly rate, which is an important part of the pay gap.) If you are a man unwilling to work full-time, it makes you weird.

So it's not like the employer literally needs you there 5 days a week. It's simply a decision not to hire a weirdo when there are non-weird candidates available. If you differ from the majority by not being willing to work 5 days a week, 8 hours a day, who knows what else is weird about you? Why take the unnecessary risk? Also, well-paid employees are supposed to pretend they love their job; and by asking for a part-time job you show too clearly that you actually care about something else more.

Thus, I sometimes had jobs where I was able to spend up to 50% of my working time just browsing websites from the company computer. But I found no comparably well-paid option where I could officially work 4 days a week, or 6 hours a day, and then simply go home.

(I was also trying to get a home-office arrangement, so that instead of browsing the web I could do something useful. But companies where the employees spend much time online are usually on some level aware of what is happening, so they don't allow working from home. As long as everyone must stay in the building the whole day, the management can keep pretending that people are actually working.)

I believe that if, for example, 50% of the people working in some profession demanded part-time work, this problem would mostly disappear. Wanting to work part-time would then simply be normal. But that's a coordination problem, and I don't even know how many people would actually be interested in working part-time if it were a legitimate option (with the same hourly rate).

Comment author: Lumifer 25 October 2016 03:06:41PM 2 points [-]

The issue is the standard "The AI neither loves you nor hates you, but you're made out of atoms...". The Europeans did not desire to wipe out Native Americans, they just wanted land and no annoying people who kept on shooting arrows at them.

Comment author: Lumifer 25 October 2016 02:39:08PM 2 points [-]

Because it would not fit into our values to consider exterminating them as the primary choice.

Did you ask the Native Americans whether they hold a similar opinion?

Comment author: Val 25 October 2016 02:10:22PM 3 points [-]

If we developed practical interstellar travel, and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because it would not fit into our values to consider exterminating them as the primary choice. And how did we develop our values like this? I guess at least in some part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction, and even those which got extinct, wiping them out was not our goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was never in our interest to wipe out other species (unless in rare circumstances, when they were a pest or a dangerous disease vector).

Unless the extraterrestrial species are the only macroscopic life-form on their planet, it's likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.

Comment author: siIver 25 October 2016 08:22:00AM 3 points [-]

Should probably have been posted in the open thread (not meant as a reproach)

Comment author: Houshalter 25 October 2016 07:20:41AM *  3 points [-]

The premise this article starts with is wrong. The argument goes that AIs can't take over the world, because they can't predict things much better than humans can. Or, conversely, that they will be able to take over because they can predict much better than humans.

Well so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope of AI is that it will be able to design technology much better than humans. Humans didn't evolve to be engineers or computer programmers. It's really just an accident we are capable of it. Humans have such a hard time designing complex systems, keeping track of so many different things in our head, etc. Already these jobs are restricted to unusually intelligent people.

I think there are many possible optimizations to the mind to improve at these kinds of tasks. There are rare humans who are very good at them, showing that human brains aren't anywhere near the peak. An AI that is optimized for them will be able to design technologies we can't even dream of. We could theoretically make nanotechnology today, but there are so many interacting parts and complexities that humans are just unable to manage it. The internet runs on so much buggy software that it could probably be pwned in a weekend by a sufficiently powerful programming AI.

And the same is perhaps true of designing better AI algorithms: an AI optimized for AI research would be much better at it than humans.

Comment author: gwern 24 October 2016 11:10:41PM 3 points [-]

Along the lines of my earlier GCTA, I've written a Wikipedia article on genetic correlations.

Comment author: Mac 24 October 2016 06:49:44PM *  3 points [-]

what's the most annoying part of your life/job?

Pain. Moderate but constant pain from old sports injuries makes me: spend money on pain meds and counter irritants, work longer hours because the pain is distracting and reduces my productivity, limit physical activity and travel, deviate from an optimal exercise routine, fall into a black hole of grumpiness occasionally.

how much would you pay for a solution?

If by "solution" you mean an easy, one-time, guaranteed fix: $10,000

Comment author: Lumifer 21 October 2016 02:54:55PM 2 points [-]

You are literally asking me to solve the FAI problem right here and now.

No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.

You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want.

Learning what humans want is pretty easy. However, it's an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.

From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.

Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative. Since I don't expect to find myself in a privileged position, I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.

Later you say that CEV will average values. I don't have average values.

If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.

I see no evidence to believe this is true and lots of evidence to believe this is false.

You are essentially saying that religious people are idiots and if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.

Comment author: gwern 21 October 2016 01:50:46AM *  3 points [-]

I don't find it convincing. Even though it's long, I don't recognize any of the examples as instances of 'Ra', and I can't think of any examples of 'Ra' in my own experience. The name 'Ra' is also not that great: unlike some of the other reifications going around, like Yvain's 'Moloch', which at least have some intuitive connection with their concept, 'Ra' seems pretty much arbitrary.

EDIT: Obormot and saturn2 on IRC note that 'Ra' seems in her telling to slightly overlap with the whole complacent-elite meritocracy going on in the Ivy League & Wall Street, of the Twilight of the Elites type.

Comment author: Houshalter 21 October 2016 12:40:33AM 3 points [-]

most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?

Because that statement is simply false. Researchers do deal with real-world problems and datasets. There is a huge overlap between research and practice. There is little or no overlap between AI risk/safety research and current machine-learning research. The only connection I can think of is that people familiar with reinforcement learning might have a better understanding of AI motivation.

Really? There's a lot of frequent posters here that don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.

I didn't say there wasn't dissent. I said it wasn't an outlier view, and that it seems to be the majority opinion.

But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.

Look, I'm sorry if I came across as overly hostile. I certainly welcome any debate and discussion on this issue. If you have anything to say, feel free to say it. But your above comment didn't really add anything. There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site. At the very least, read some of the previous discussions first. You don't need to read everything, but there is a list of posts here.

Comment author: James_Miller 20 October 2016 04:00:23PM 3 points [-]

Megyn Kelly walked by me once. If she had handed me a knife and asked me to remove my own heart and give it to her, part of my brain would have felt obligated to comply.

Comment author: siIver 20 October 2016 01:41:10AM *  3 points [-]

This may be a naive and over-simplified stance, so educate me if I'm being ignorant--

but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.

Feel free to just provide a link if the argument has been discussed before.

Comment author: Manfred 20 October 2016 01:04:15AM *  3 points [-]

Depends on information. If people retain memories, so that each person-moment follows from a previous one, then knowing only that I suddenly find myself in a room means I'm probably in room A. If people are memory-wiped at some interval, that increases the probability I should assign to being in room B: the probability of being in a specific room, given that your state of information is that you suddenly find yourself in a room, is proportional to the number of times "I have suddenly found myself in a room" is somebody's state of information.

The above is in fact true. So here's a fun puzzler for you: why is the following false?

"If you tell me the exact time, then my room must more likely be B, because there are 1000 times more people in room B at that time. Since this holds for all times you could tell me, it is always true that my room is probably B, so I'm probably in room B."

Hint: Assuming that room B residents "live" 1,000,000 times longer than room A residents, how does their probability of being in room B look throughout their life, assuming they retain their memories?
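(A minimal sketch of the two reference classes, assuming the setup the hint implies: room B holds 1000 times as many occupants at any moment, but each B resident stays 1,000,000 times longer, so by Little's law - occupancy = arrival rate × duration - B's arrival rate must be a thousandth of A's. The numbers are illustrative only.)

    dur_A, dur_B = 1.0, 1e6            # B residents "live" 10^6 times longer
    rate_A = 1.0                       # arrivals per unit time into room A
    rate_B = rate_A * 1000 / dur_B     # forces occupancy_B = 1000 * occupancy_A

    occ_A, occ_B = rate_A * dur_A, rate_B * dur_B

    # Memories retained: "I suddenly find myself in a room" happens once per
    # person, so weight by arrival rate.
    p_B_by_person = rate_B / (rate_A + rate_B)   # ~0.001: probably room A

    # Memory-wiped: every person-moment counts, so weight by occupancy.
    p_B_by_moment = occ_B / (occ_A + occ_B)      # ~0.999: probably room B

    print(p_B_by_person, p_B_by_moment)

On these assumptions, the occupants of room B at any given time are almost all long-time residents, while "I suddenly find myself in a room" picks out new arrivals - and new arrivals are overwhelmingly in A, which is why being told the time doesn't shift a memory-retaining resident toward B.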

Comment author: turchin 19 October 2016 11:02:27AM 3 points [-]

The page http://lesswrong.com/r/discussion/new/ has been returning an error for me for 12 hours, but other pages are fine. Is it a glitch only on my end?

error text: "You have encountered an error in the code that runs Less Wrong. The site maintainers have been informed and will get to it as soon as they can. In the unlikely event that you've bumped into this error before and think that no-one is paying attention, please report the error and how to reproduce it on http://code.google.com/p/lesswrong/issues/list"

If the error is localised you might still find awesome Less Wrong content in the Main article area or in the Discussion area.

Comment author: ChristianKl 18 October 2016 08:56:25PM 3 points [-]

What empirical evidence have you observed to back your belief that this technique is valuable?

Comment author: Lumifer 18 October 2016 07:57:18PM 3 points [-]

I don't have a most important value.

Comment author: username2 17 October 2016 05:02:08PM *  3 points [-]

This is deserving of a much longer answer which I have not had the time to write and probably won't any time soon, I'm sorry to say. But in short summary: human drives and morals are more behaviorist than utilitarian. The utility function approximation is just that, an approximation.

Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side effect of the materials used. Now in some cases things are intended to fail early for safety reasons, e.g. to have the shovel break before your bones do. But even then this isn't some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is more a secondary consideration to prevent undesirable side effects in some failure modes.

Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose / utility function of the shovel? Pedantically, a little bit, if you accept the breaking point as a designed-in safety consideration. But it doesn't enlighten us about the hole-digging nature at all.

Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is I'd rather have dust in no one's eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn't tell us much about our underlying desires, because there isn't some consistent mathematical utility function underlying our responses. At best it reveals how we've been wired by genetics, upbringing, and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.

Comment author: James_Miller 17 October 2016 01:00:05PM *  3 points [-]

Yes, I agree. It shows children are trying to guess the teacher's password and are not doing math. Interestingly, when I asked my son this question he said you couldn't find the answer. When I asked how he knew that, he said he had seen other math problems where you don't have enough information to solve them.

Comment author: SithLord13 17 October 2016 12:51:29PM 3 points [-]

I think the issue here might be slightly different than posed. I think the real issue is that children instinctively assume they're running on corrupted hardware. All their prior experience in math has been with solvable problems. They've had problems they couldn't solve, and then been shown it was a mistake on their part. Without good cause, why would they suddenly assume their priors are wrong, rather than that they're failing to grasp something? Given their priors and information, it's rational to expect that they missed something.

Comment author: gworley 16 October 2016 12:41:31AM 3 points [-]

Medium makes it a little hard to find the RSS feeds, but it's at:

https://medium.com/feed/map-and-territory

Comment author: CronoDAS 15 October 2016 09:49:23PM 3 points [-]

Is there an RSS feed for new posts?

Comment author: WhySpace 15 October 2016 06:42:05PM 3 points [-]

If the majority of minds with moral weight are the result of an intelligent mind's decision, then the link between complexity and frequency may be weak. Pain is a strong motivator for some things, even if it's bad at motivating creativity, so perhaps there would still be an incentive to create more pain. This is extremely speculative though.

The bigger worry would be that forces like Moloch and evolution may favor pain. Wild animals appear to have much more pain in their lives than pleasure. Even if the carrot were a more effective motivator than the stick for something, if pain were simpler and more robust, evolution would still favor it.

This would be especially important for things like Boltzmann brains. It seems unlikely to me that things like trees or insects can suffer, but if they can, we'd have a very hard time relating to minds so different from our own. With so little evidence, the choice of a good prior is crucial, so it would be useful to have a prior for the predominance of suffering over happiness.

Comment author: scarcegreengrass 13 October 2016 11:57:11AM 2 points [-]

Oh, this is much more complete, thanks.

Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x risk.

Comment author: DanArmak 12 October 2016 02:55:20PM 3 points [-]

Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.

  1. No evidence is given for the central claim: that humans can converge, and are converging, towards a true morality we would all agree about if only we understood more true facts.
  2. We're told that people in the past disagreed with us about some moral questions, but we know more and so we changed our minds and we are right while they were wrong. But no direct evidence is given for us being more right. The only way to judge who's right in a disagreement seems to be "the one who knows more relevant facts is more right" or "the one who more honestly and deeply considered the question". This does not appear to be an objectively measurable criterion (to say the least).
  3. The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn't understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants - some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.
Comment author: WalterL 12 October 2016 02:46:05PM -1 points [-]

Yes, those with my values will live here, in Gondor. Your folks can live other there, in Mordor. Our citizens will no longer come into contact and conflict with one another, and peace will reign forever.

What, these segregated regions THEMSELVES come into conflict? Absurd. What would you even call a conflict that was between large groups of people? That could never happen. Everyone who shares my value system knows that lots of people would die, and we all agree that nothing could be worth that.

Comment author: Lumifer 11 October 2016 06:58:47PM *  3 points [-]

It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but it's hard to pull off nowadays unless you're old or a full-blown hipster.

Comment author: Lumifer 11 October 2016 06:38:33PM 2 points [-]

We don't know what an AI which maximizes human values is because we don't know what human values are at the necessary level of precision. Not to mention the assumption that the AI will be a maximizer and that values can be maximized.

Comment author: James_Miller 11 October 2016 02:33:02PM 3 points [-]

She could read "The Basic AI Drives" to him at night.

Comment author: ChristianKl 10 October 2016 09:51:24PM 2 points [-]

It's certainly not a blog that tries to pander to the reader.

Comment author: Lumifer 10 October 2016 06:42:04PM 3 points [-]

Brain drain has been a concern of some for a long time.

Comment author: 9eB1 10 October 2016 05:01:32PM 3 points [-]

I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with best posts for a new reader, and a thread on people's favorite things from TLP.

Comment author: pcm 10 October 2016 04:44:24PM 3 points [-]

I suspect attempted telekinesis is relevant.

Comment author: DanArmak 10 October 2016 04:18:24PM 3 points [-]

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment author: turchin 10 October 2016 02:28:19PM 3 points [-]

Good point, but my question was about what we can do to raise the chances that it will be a friendly AI.

Comment author: ChristianKl 10 October 2016 01:05:28PM 3 points [-]

The standard way to learn massage is through taking a course.

I would also recommend Betty Martin's 3-Minute Game as a secular massage-like practice: https://www.youtube.com/watch?v=auokDp_EA80

Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

In response to Quantum Bayesianism
Comment author: chron 09 October 2016 06:53:59PM 3 points [-]

Did anyone else find the banner at the top of the article (about preferring secondary and tertiary sources to primary ones) more interesting - for what it says about the problems with Wikipedia - than the article itself?

Comment author: jimmy 08 October 2016 07:11:22PM 3 points [-]

An excellent post, but not Scott :)

http://mindingourway.com/confidence-all-the-way-up/

Comment author: CellBioGuy 07 October 2016 11:09:11PM *  3 points [-]

The short version before I get a chance to write more posts:

Primates appear to be an interestingly potentiated lineage, prone to getting smart when they get large, due to differences in brain development established 50 megayears ago that make their brains much more impressive per unit volume than those of most larger mammals. The great apes other than humans actually seem to run into energetic limits to feeding their brains, and have smaller brains than you'd expect for a primate of their size, while humans are right on the generic primate trendline. Birds are another potentiated lineage - their brains are about 6x as compact as a comparable primate brain.

Eukaryotes are really weird. The one thing that is incontrovertible these days is that the classic 3-domains-of-life idea, with eukaryotes and archaea as sister clades, is turning out to be wrong. Eukaryotes are turning out to have come from a fusion/symbiosis of a bacterium and something that fits entirely within the archaeal domain. Various people who are studying their origin and evolution have their pet models and hold to them too tightly and fight each other bitterly, though some things are finally coming out for sure. A lot of their weird features may come from particular population-genetic selective pressures arising from competition between copies of the mitochondrial genome, and a lot of others may come from the fact that they invented sex and have low population sizes, both of which allow types of evolution and genetic drift that you are much less likely to see in the eubacteria or archaebacteria, the two 'primary' domains (whose separation represents the deepest branch in the tree of life). But the fact that ALL eukaryotes have a huge constellation of weird traits with no intermediate forms means their origin was a weird event, and opinions vary on whether that means it was a singular, extremely unlikely event or whether all those weird properties follow logically from how they formed, and on whether there was a strong first-mover advantage.

And, of course, our data on the breadth of life that exists on Earth is incomplete... I love the specialized journals on electron microscopy; now and then you see a paper where someone just sees something really bizarre and reports on what they saw without a clue as to what it is. If you want a really perplexing paper, take a look at this: http://jmicro.oxfordjournals.org/content/61/6/423.full.pdf+html (alternately https://dl.dropboxusercontent.com/u/67168735/a%20unique%20organism%20from%20the%20deep%20sea.pdf ) "A Unique Microorganism from the Deep Sea"

Comment author: Unnamed 07 October 2016 06:15:59AM 3 points [-]
Comment author: Houshalter 07 October 2016 06:11:12AM 3 points [-]

In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments. But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

The wand isn't generally intelligent. Maybe by some stretch of the definition we could sorta say it's "intelligent" at the task of turning things to gold. But it can't do any tasks other than turning things into gold. The whole point of AGI is general intelligence. That's what the G stands for.

Humans are generally intelligent. We can apply our brains to widely different tasks, including many that we weren't evolved to be good at at all. From playing Go to designing rockets. Evolution is generally intelligent. It can find remarkably good designs for totally arbitrary objective functions.

I think general optimization ability is a perfectly fine definition of intelligence. It includes things like humans and evolution, and some kinds of simple but general AI, but excludes things like animals and domain-specific AI. It defines intelligence only by results: if you can optimize an arbitrary goal, you are intelligent. It doesn't try to specify what the internal mechanisms should be, just whether or not they work. And it's continuous - you can have very stupid optimizers like evolution, all the way up to very good/intelligent ones like humans.

Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

This definition is really vague. You are just shoving the hard problem of defining intelligence into the hard problem of defining "abstract thought". I guess the second sentence kind of clarifies what you mean. But it's not clear at all that humans even meet that definition. Do humans recognize patterns in patterns? I don't think so. I don't think we are consciously aware of the vast majority of our pattern recognition ability.

The most intelligent AI we have, is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data.

Not really. Deep neural networks are extraordinarily general. The same networks that win at Go could be applied to language translation, driving cars, playing Pac-Man, or recognizing objects in an image.

One example of that was the case of AlphaGo, where Eliezer called it "superintelligent with bugs,"

The exact quote is "superhuman with bugs". In context, he was describing the fact that the AI plays far above human level but still makes some mistakes a human might not make. And it's not even clear when it makes mistakes: it is so far above human players, and may see things we don't see, that its "mistakes" may not be mistakes at all.

The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

A paperclip maximizer can recognize the concept of a goal. It's not stupid, it just only cares about paperclips. In the same way humans are programmed by evolution to maximize sex, social status, and similarly arbitrary goals, there is no reason an AI couldn't be programmed to maximize paperclips. Again, perhaps humans are not intelligent by your definition.

Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines.

Yeah, that seems quite obviously true. Just look at the chimpanzees. By some accounts the main difference in human brains is that they are just scaled-up primate brains - 3 times as large, with somewhat more sophisticated language ability. And suddenly you go from creatures that can barely master simple tools and can't communicate ideas, to creatures capable of technological civilization. 500 million years of evolution refined the mammal brain to get chimps, but only about a million more years were needed to go from stupid animals to generally intelligent humans.

I don't see any reason to believe AI progress should be linear. In practice it clearly is not. Areas of AI often have sudden discontinuities or increasing rates of progress. I don't see any reason why there can't be a single breakthrough that causes enormous progress, or why even incremental progress must be slow. If evolution can make brains by a bunch of stupid random mutations, surely thousands of intelligent engineers can do much better on a much shorter time scale.

as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.

This isn't a valid analogy at all. Baby humans still have human brains running the same algorithms as adult humans. Their brains are just slightly smaller and have had less time to learn and train. Individual AIs may increase in ability linearly as they grow and learn. But the AI algorithms themselves have no such constraint, someone could theoretically figure out the perfect AI algorithm tomorrow and code it up. There is certainly no law of nature that says AI progress must be slow.

Comment author: gjm 06 October 2016 11:58:35PM -1 points [-]

You are misreading Jacobian

Plausible guess, but actually my error was different: I hadn't noticed the bit of Jacobian's comment you quote there; I read what you wrote and made the mistake of assuming it was correct.

Those words "once you've decided on a course of action" were your words. I just quoted them. It does indeed appear that they don't quite correspond to what Jacobian wrote, and I should have spotted that, but the original misrepresentation of Jacobian's position was yours rather than mine.

(But I should make clear that you misrepresented Jacobian's position by making it look less unreasonable and less easy for you to attack, so there's something highly creditable about that.)

Comment author: Lumifer 06 October 2016 03:11:50PM 3 points [-]

So, if the emotional empathy should be discarded, why should I help all those strangers? The only answer that the link suggests is "social propriety".

But social propriety is a fickle thing. Sometimes it asks you to forgive the debts of the destitute, and sometimes it asks you to burn the witches. Without empathy, why shouldn't you cheer at the flames licking the evil witch's body? Without empathy, if there are some kulaks or Juden standing in the way of the perfect society, why shouldn't you kill them in the most efficient manner at your disposal?

Comment author: Lumifer 06 October 2016 02:52:56PM 3 points [-]

As with any complex phenomenon in a complex system, there is going to be a laundry list of contributing factors, none of which is the cause (in the sense that fixing just that cause will fix the entire problem). We can start with

  • Genetic factors (such as lower IQ)
  • Historical factors, which in turn flow into
  • Cultural factors (such as distrust of the government / law enforcement) and
  • Economic factors (from being poor to having a major presence in the drug trade)

The opinions about the relative weights of these factors are going to differ and in the current political climate I don't think a reasonable open discussion is possible.

Comment author: siIver 06 October 2016 02:35:05PM *  3 points [-]

I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.

I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I'd say it's harmful because it's overvalued/misunderstood. The solution would be to recognize that it's an egoistical thing - as I'm writing this I can confirm that I think this now. Cognitive empathy, on the other hand, is the selfless thing.

Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regards that I wasn't able to consciously criticize it.

I think this article is something that people outside of this community really ought to read.

Comment author: moridinamael 06 October 2016 02:20:37PM 3 points [-]
Comment author: SodaPopinski 06 October 2016 11:52:27AM 3 points [-]

CellBioGuy, all your astrobiology posts are great; I'd be happy to read all of those. This may be off the astrobiology topic, but I would love to see a post with your opinion on the foom question. For example, do you agree with Gwern's post about there not being complexity limitations preventing runaway self-improving agents?

Comment author: philh 05 October 2016 10:40:48AM 2 points [-]

I'd find all of these interesting, particularly the first three and the last.

I'm glad you're back.

Comment author: Manfred 04 October 2016 07:06:49PM *  3 points [-]

The AI has to do what humans mean (rather than e.g. not following your orders and just calculating more digits of pi) before you start talking at it, because you are relying on it interpreting that sentence how you meant it.

The hard part is not figuring out good-sounding words to say to an AI. The hard part is figuring out how to make an actual, genuine computer program that will do what you mean.

Comment author: Gunnar_Zarncke 04 October 2016 04:29:17PM 3 points [-]

Do you have a mathematical formulation for it? (That will be the first question by the physics consultant mentioned above)

Comment author: Fluttershy 04 October 2016 01:43:49AM *  3 points [-]

I'm sorry! Um, it probably doesn't help that much of the relevant info hasn't been published yet; this patent is the best description that will be publicly available until the inventors get more funding. From the patent:

By replacing the volume of the vasculature (from 5 to 10 percent of the volume of tissues, organs, or whole organisms) with a gas, the vasculature itself becomes a “crush space” that allows stresses to be relieved by plastic deformation at a very small scale. This reduces the domain size of fracturing...

So, pumping the organ full of cool gas (not necessarily oxygen) is done to cool the entire tissue at the same time, as well as to prevent fracturing, rather than for biological reasons.

ETA: To answer your last question, persufflation would be done on both cooling and rewarming.

Comment author: Lumifer 03 October 2016 09:26:06PM 3 points [-]

Is it falsifiable? Which empirical observations/experiments can falsify it?

Comment author: Florian_Dietz 03 October 2016 08:22:13PM *  3 points [-]

Is there an effective way for a layman to get serious feedback on scientific theories?

I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).

Comment author: Luke_A_Somers 03 October 2016 12:01:11AM 3 points [-]

It might help, though - if you suddenly stop applying the magnetic fields, then it might freeze more abruptly than if you simply lower the temperature. That could reduce the extent of crystallization and thus damage.

Comment author: Viliam 02 October 2016 07:26:27PM *  3 points [-]

I saw the 2015 remake of Death Note, and I was so disappointed. :(

The goal of the remake was probably to make the conflict of two highly intelligent opponents more accessible to an audience of normies. Not a bad idea per se; I actually liked some of the changes.

The problem is that while making the changes, they introduced a few obvious logical errors, probably as a side effect of trying to make some scenes more dramatic. Which matters a lot in a story based on the premise that two highly intelligent opponents are fighting by exploiting each other's smallest mistakes; and then something completely stupid happens and no one notices, most likely because the author of the remake didn't notice it.

I'll try to avoid being unnecessarily specific; but here is the general pattern: In the story universe, it is possible to cast magical spells on other people. If certain preconditions are met, the magic makes people follow a script specified by the caster. If the preconditions are not met, nothing happens. (There is no such thing as partially meeting the preconditions; it's either yes or no.)

However, at least twice in the series the following happens: The mage casts the spell, with its script of unlikely actions, on someone. The victim does the unlikely action A, then does the unlikely action B, and then... as a big surprise... at the last moment it turns out they don't do the remaining unlikely action C! How is that possible? Turns out someone else outsmarted the mage and made some of the preconditions fail, so the magic spell didn't work.

I guess at this moment the audience is supposed to cheer for the smart opponent, but I am left scratching my head: if the preconditions of the spell were not met, how was it possible in the first place that the victim did the unlikely actions A and B? The magic spell was cast in privacy; the victim had no chance to know the values of A, B, and C. The victim didn't expect the spell to be cast; in one case the victim's unusual behavior was new information for the opponent. It wasn't a coincidence; in one case the victim went to a specified abandoned place and pretended to be dead.

I generally don't mind something slightly illogical here and there, if the plot requires it. But in this specific case, it ruined the essence of the story. After this, "intelligent opponents cleverly gaining information by exploiting each other's small mistakes" became merely an applause light without substance.

Comment author: turchin 02 October 2016 12:04:33AM 3 points [-]

Trent's article even mentions a possible species of dinosaurs that might have been capable of an intelligence explosion. http://www.strangehorizons.com/2009/20090713/trent-a.shtml

Does that mean we could find really interesting (and dangerous) things during excavations in Antarctica?

Comment author: Gram_Stone 01 October 2016 01:31:11AM 3 points [-]

I know it was slightly tangential, but the organ matchmaking software was really interesting to me. I doubt this is how the idea was conceived, but I think when you look at it in a particular way, it seems like a really elegant solution to an important coordination problem.

(Content note: organ trade.)

Currently, a big stumbling block with organ trade is that suppliers can only supply organs in an altruistic context because of moral intuitions about the respective sanctities of life and money; buying and selling organs is impure. This is really bad because it limits most donations to those from family members and those from people who donate their organs upon death. Family members aren't always compatible, and organs from cadavers don't last as long and are more likely to contain cancers. If you're incompatible with a family member or you want to splurge on the extra expected lifespan afforded by a non-cadaveric organ, you can't sell your organ and use the money to buy a compatible/non-cadaveric one from someone else. There seem to be a great many trades that don't take place because of solvable spatial and temporal constraints that would be avoided by the use of a medium of exchange like money. So, the legal organ trade is relatively inefficient in most countries.

The really cynical version of the problem that you're trying to solve, before you ever write the extremely important matchmaking algorithms that are easy to overlook from this point of view, is "How do I efficiently allocate organs without buying or selling them?" The current system of organ donation is practically a barter system; there must be a coincidence of wants between donor and recipient. So, one interpretation of what matchmaking software does is make pledges to donate organs into a medium of exchange when you can't use the normal medium of exchange for political reasons. When you make pledges to donate into a credible signal that you will in fact donate given the satisfaction of a certain set of easily verifiable conditions, you can use that signal in place of money to make more complex trades that you couldn't otherwise make with 'pure barter'.
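(To make the matchmaking step concrete, here is a toy sketch of cycle-finding in a paired-exchange pool. It's my own illustration under simplified assumptions, not any registry's actual algorithm, and the compatibility graph is hypothetical.)

    from itertools import permutations

    # Each pair is (willing donor, recipient who needs an organ).
    # An edge i -> j means pair i's donor is compatible with pair j's recipient.
    compat = {0: [1], 1: [2], 2: [0], 3: [0]}  # hypothetical example

    def find_cycles(compat, max_len=3):
        """Find donation cycles up to max_len; real programs cap cycle length
        because all surgeries in a cycle are performed simultaneously."""
        cycles = []
        for length in range(2, max_len + 1):
            for cand in permutations(compat, length):
                if cand[0] != min(cand):
                    continue  # keep one representative per rotation of a cycle
                if all(cand[(k + 1) % length] in compat[cand[k]]
                       for k in range(length)):
                    cycles.append(cand)
        return cycles

    print(find_cycles(compat))  # -> [(0, 1, 2)]: three pairs trade in a loop

Every pair in a found cycle receives an organ even though no two pairs swap directly and no money changes hands, which is exactly the coincidence-of-wants problem being routed around.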

Kind of useless armchair scholarship I guess, but I thought it was elegant.

(After cursory research I lean pro-legal organ trade (something more market-like than what exists today); willing to expand on this if anyone's interested in collecting a new contrarian opinion.)

Comment author: gjm 01 October 2016 01:02:35AM -1 points [-]

But why should the probability for lower-complexity hypotheses be any lower?

It shouldn't, it should be higher.

If you just meant "... be any higher?" then the answer is that if the probabilities of the higher-complexity hypotheses tend to zero, then for any particular low-complexity hypothesis H all but finitely many of the higher-complexity hypotheses have lower probability. (That's just part of what "tending to zero" means.)
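(A toy example, assuming hypotheses can be enumerated in order of non-decreasing complexity; the particular prior is illustrative only:)

    P(h_n) = 2^(-n)  for n = 1, 2, 3, ...,  so the probabilities sum to 1 and tend to zero.

    For any fixed m, every n > m gives P(h_n) = 2^(-n) < 2^(-m) = P(h_m):
    all but the finitely many hypotheses at most as complex as h_m are less
    probable than it.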

Comment author: ChristianKl 30 September 2016 10:42:42PM 3 points [-]

For all those reasons Nassim Taleb wrote about, it's a bad idea to treat risk like it can be that precisely measured.

Comment author: Elo 30 September 2016 12:48:06AM -2 points [-]

I am no expert, but I wonder if you could run a Monte Carlo simulation on your expected responses. Do the questions you ask give you enough information to yield results?

I'm just not sure whether your questions are homing in correctly. Chances are there are people who know better than me.

In response to Seven Apocalypses
Comment author: wallowinmaya 29 September 2016 04:30:29PM *  3 points [-]

I don't understand why you exclude risks of astronomical suffering ("hell apocalypses").

Below you claim that those risks are "Pascalian" but this seems wrong.

Comment author: Vaniver 29 September 2016 01:36:18PM 3 points [-]

My impression is that activity begets more activity--if there were 0 posts today, having your self-post be the post for the day is more bothersome than if there were 10 posts today. But we can look at this in a month and see how it turned out.

Comment author: DanArmak 28 September 2016 06:34:06PM -1 points [-]

Thanks for linking this! It was well written and interesting, and I'm glad to have discovered a new blog to read.

Suggestion for a summary of the link:

Some arguments are really signals targeting some in-group. Outsiders frequently miss the intended meaning. These arguments are simply nonsensical if taken literally ("Bashar Assad was a Mossad agent sent to kill Syrian children"). This may be because clearly stating the intended meaning is politically incorrect outside the ingroup, or it may just be a rhetorical device.

It's important to recognize such arguments, even if we are unsure what they are really intended to mean. Consider such alternative explanations when people say what seems to be nonsense or clearly wrong.

Comment author: helldalgo 28 September 2016 06:01:45PM 3 points [-]

A surprising movie that met many of these guidelines: Oculus. It's a horror movie, though, not a happy movie. The characters are smart and empathetic, and it has Katee Sackhoff in it.

In response to Linkposts now live!
Comment author: ike 28 September 2016 03:54:41PM 3 points [-]

In feedly, I need to click once to get to the post and a second time to get to the link. Can you include a link within the body of the RSS so I can click to it directly?

Comment author: Pfft 28 October 2016 03:08:40AM *  2 points [-]

It sounds pretty spectacular!

I found one paper about comets crashing into the sun, but unfortunately they don't consider comets as big as you do - the largest one is "Hale-Bopp sized", which they take to be 10^15 kg (which already seems a little low; Wikipedia suggests 10^16 kg).

I guess the biggest uncertainty is how common so big comets are (so, how often should we expect to see one crash into the sun). In particular, I think the known sun-grazing comets are much smaller than the big comet you consider.
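(For scale, a back-of-the-envelope estimate of my own, assuming a 10^16 kg comet arriving at roughly the Sun's surface escape velocity of about 618 km/s:)

    E = (1/2) m v^2 ≈ 0.5 × (10^16 kg) × (6.2 × 10^5 m/s)^2 ≈ 2 × 10^27 J

That's a couple of hundred times the 10^25 J upper end you quote for solar flares, so the total energy budget at least seems big enough to matter; the open question is what fraction reaches Earth and over what timescale.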

Also, I wonder a bit about your 1 second. The paper says,

The primary response, which we consider here, will be fast formation of a localized hot airburst as solar atmospheric gas passes through the bow shock. Energy from this airburst will propagate outward as prompt electromagnetic radiation (unless or until bottled up by a large increase in optical depth of the surrounding atmosphere as it ionizes), then in a slower secondary phase also involving thermal conduction and mass motion as the expanding hot plume rises.

If a lot of the energy reaching the Earth comes from the prompt radiation, then it should arrive in one big pulse. On the other hand, if the comet plunges deep into the sun, and most of the energy is absorbed and then transmitted via thermal conduction and mass motion, then that must be a much slower process. By comparison, a solar flare involves between 10^20 and 10^25 J, and it takes several minutes to develop.

Comment author: Lumifer 27 October 2016 03:02:48PM 2 points [-]

This should be right up LW's alley. Reconstruct dead people as... chatbots? Quote:

And one day it will do things for you, including keeping you alive. You talk to it, and it becomes you.

Comment author: MrMind 27 October 2016 07:13:48AM 2 points [-]

Define "rationality things"...

they're full of guys who'll hit on her no doubt!

So what? Do you think that is different in any street or store? On the contrary, I would say that rationality meetups tend to be more respectful of boundaries than the average group of human beings.

Comment author: MrMind 27 October 2016 07:08:30AM 2 points [-]

The world is certainly not going to change

We'll see about that...

Comment author: chron 27 October 2016 03:20:39AM 2 points [-]

The study is more about how telling lies primes (for lack of a better word) you to tell more lies, not just lies to cover up the original lie.

Comment author: entirelyuseless 27 October 2016 02:02:19AM 2 points [-]

It is on the bets registry. I am Unknown with a new username.

Comment author: CronoDAS 26 October 2016 01:46:37PM 2 points [-]

Nice plus-size women's clothing is obnoxiously expensive, compared to standard sizes.

Comment author: Viliam 26 October 2016 09:43:16AM 2 points [-]

Slashdot comment:

why do people slap "AI" label on unnecessarily? ... this is mere data analysis.

Comment author: Viliam 26 October 2016 09:37:33AM *  2 points [-]

I would appreciate a service that would provide long-term guidance to programmers. More like a guild than like a job agency.

I imagine something like this: You would pay a small yearly membership fee. In return you would get access to a guild forum (where members provide information to each other), and a subscription to a digital newspaper (where the guild provides the most important information to the members). For a higher fee, you could get some personal counseling or training. The guild would provide information about the job market; e.g. which technologies are currently in demand, and how much the people who know them are paid. It would also provide information about new technologies; e.g. a short description and a link to more resources, with an expert opinion about why this is an important thing, and how it is connected with other technologies.

Essentially, the problem I am trying to solve here is that as long as I work in a company, it is easy to lose sight of the larger picture. My company may use a technology X, while the rest of the world is moving towards Y, but I don't notice it because I spend most of my time reading about X and solving problems related to X; the nature of my work creates a huge selection bias. And the people I talk with most often, i.e. my colleagues, have a selection bias in exactly the same direction.

Of course, once in a while I will hear about Y; but I don't know whether it is really a trend, or just another hype. Even looking at the job market to see which technologies are most in demand provides a distorted picture: sometimes companies are looking for X because it's the new trend, and sometimes companies are looking for X because it's a crappy technology no one wants to use, so those job positions remain open forever.

I would also love to be able to get better background info about my possible future employers; to be blunt, how much "what they tell you at the job interview" differs from how things are actually done once you are there. (I realize this would be difficult, probably impossible. First, there is a risk of bad employers suing you for libel. Second, people may have various incentives to provide false information; e.g. to astroturf for their own company, or to badmouth the competition.) Or some insider info; for example, if the company is working on several large projects, you may want to get onto project A, but you should really avoid project B.

Some of these things can be (at least partially) solved in other ways, such as specialized websites (e.g. Stack Exchange, including their Workplace forum) or tech conferences or knowing the right people and keeping in touch with them. But it would be convenient to have everything in one package, also with some coaching. Someone you could ask to help you with your career, to identify blind spots you may have, to give you an honest estimate of how much money you could make with your skills if you apply for the right job or learn the additional technology that is currently missing from your portfolio. In the best case, to also give you some advice about passive investment and early retirement, work-life balance, etc.

Comment author: Pimgd 26 October 2016 08:18:04AM 2 points [-]

Is this a validation of "What tangled webs we weave when we seek to lie and deceive"?
