
Comment author: username2 25 May 2017 10:26:27AM *  0 points [-]

That's like saying a paranoid schizophrenic can solve his problems by performing psychoanalysis on a copy of himself. I doubt one paranoid schizophrenic would be able to provide very good or effective therapy to another.

In short, you are assuming a working AGI exists to do the debugging, but the setup is that the AGI itself is flawed! Nearly every single engineering project ever demonstrates that things don't work on the first try, and when an engineered thing fails, it fails spectacularly. Biology is somewhat unique in its ability to recover from errors, but only from specialized categories of errors that it was selected to overcome in its evolutionary environment.

As an engineering professional I find it extremely unlikely that an AI could successfully achieve hard take-off on the first try. So unlikely that it is not even worth thinking about -- LHC-creating-black-holes level of unlikely. When developing AI it would be prudent to seed the simulated environments it is developed and tested in with honeypots, and see if it attempts any of the kinds of failure modes x-risk people are worried about. Then and there, with an actual engineering prototype, would be an appropriate time to consider engineering proactive safeguards. But until then it seems a bit like worrying about aviation safety in the 17th century and designing a bunch of safety equipment for massive passenger hot air balloons that ends up being of zero use in the fixed-wing aeroplane days of the 20th century.

Comment author: username2 25 May 2017 10:14:18AM 0 points [-]

I'm not a utilitarian. Sorry to be so succinct in reply to what was obviously a well-written and thoughtful comment, but I don't have much to say with respect to utilitarian arguments over AI x-risk because I never think about such things.

Regarding your final points, I think the argument can be convincingly made -- and has been made by Steven Pinker and others -- that technology has overwhelmingly been beneficial to the people of this planet Earth in reducing per-capita disease & violence. Technology has for the most part cured disease, not "brought it", and nuclear weapons have kept conflicts localized in scale since 1945. There have been some horrors since WW2, to be sure, but nothing on the scale of either the 1st or 2nd world war, at least not in global conflict among countries allied with adversarial nuclear powers. Nuclear weapons have probably saved far more lives in the generations that followed than the combined populations of Hiroshima and Nagasaki (to say nothing of the lives spared by an early end to that war). Even where technology has been failing us -- climate change, for example -- it is future technology that holds the potential to save us, and the sooner we develop it the better.

All things being equal, it is my own personal opinion that the most noble thing a person can do is to push forward the wheels of progress and help us through the grind of leveling up our society as quickly as possible, to relieve pain and suffering and bring greater prosperity to the world's population. And before you say "we don't want to slow progress, we just want some people to focus on x-risk as well" keep in mind that the global pool of talent is limited. This is a zero-sum game where every person working on x-risk is a technical person explicitly not working on advancing technologies (like AI) that will increase standards of living and help solve our global problems. If someone chooses to work on AI x-risk, they are probably qualified to work directly on the hard problems of AI itself. By not working on AI they are incrementally slowing down AI efforts, and therefore delaying access to technology that could save the world.

So here's a utilitarian calculation for you: assume that AGI will allow us to conquer disease and natural death, by virtue of the fact that true AGI removes scarcity of intellectual resources to work on these problems. It's a bit of a naïve view, but I'm asking you to assume it only for the sake of argument. Then every moment someone is working on x-risk problems instead, they are potentially delaying the advent of true AGI by some number of minutes, hours, or days. Multiply that by the number of people who die unnecessary deaths every day -- hundreds of thousands -- and that is the amount of blood on the hands of someone who is capable but chooses not to work on making the technology widely available as quickly as possible. Existential risk can only be justified as a more pressing concern if it can be reasonably demonstrated to have a higher probability of causing more deaths than inaction.
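
To make that concrete, a back-of-the-envelope version (a sketch only; both numbers are assumptions for illustration, not sourced statistics):

    # Back-of-the-envelope version of the calculation above. Both inputs are
    # assumptions for illustration, not sourced statistics.
    deaths_per_day = 150_000                # assumed worldwide deaths per day
    delay_in_days = 30                      # hypothetical delay to AGI from diverted talent
    print(deaths_per_day * delay_in_days)   # 4,500,000 deaths attributed under these assumptions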

The key word there is reasonable. I have too much experience in this world building real things to accept arguments based on guesswork or convoluted philosophy. Show me the code. Demonstrate for me (in a toy but realistic environment) an AI/proto-AGI that turns evil, and give me reasonable technical justification for why we should expect the same properties in larger, more complex environments. Without actual proof I will forever remain unconvinced, because in my experience there are just too many bullshit justifications one can create which pass internal review, and even convince a panel of experts, but fall apart as soon as they are tested by reality.

Which brings me to the point I made above: you think you know how AI will go evil/non-friendly and destroy the world? Well, go build one in a box and write a paper about it. But until you actually do that, and show me a replicable experiment, I'm really not interested. I'll go back to setting an ignore bit on all this AI x-risk nonsense and start pushing the wheel of progress forward before that body count rises too far.

Comment author: Mitchell_Porter 25 May 2017 09:29:48AM 0 points [-]

Suppose there's some idea, X, which you think might help to solve a problem, Y. And there's also a dumb version of X, X', which you know doesn't work, but which still has enthusiasts.

And then one day there's a headline: CAN IDEA X SOLVE PROBLEM Y? Only you find out that it's actually X', the dumb version of X, that is being presented to the world as X... and nothing is done to convey the difference between X' and the version of X that actually warrants attention.

That is, more or less, the situation I find myself in, with respect to this article. I wish there were some snappier way to convey the situation, without talking about X and X' and so on, but I haven't found a way to do it.

Problem Y is: explain why quantum mechanics works, without saying that things don't have properties until they are measured, and so on.

Idea X is, these days, usually called Bohmian mechanics. To the Schrodinger equation, which describes the time evolution of the wavefunction of quantum mechanics, it adds a classical equation of motion for the particles, fields, etc. The particles, fields, etc., evolve on a trajectory in state space which follows the probability current in state space, as defined by the Schrodinger equation.
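
For concreteness, the usual guidance equation (a standard textbook formula, stated here as background rather than taken from the comment) is, for particle k at position Q_k:

    dQ_k/dt = (hbar / m_k) * Im( ∇_k ψ / ψ ), evaluated at (Q_1, ..., Q_N)

so each particle's velocity is fixed by the wavefunction evaluated at the actual positions of all the particles.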

The original version of this idea is due to de Broglie, who proposed that particles are guided by waves. This was called pilot-wave theory, because the wave "pilots" the particle.

Pilot-wave theory was proposed in the very early days of quantum theory, before the significance of entanglement was properly appreciated. The significance of entanglement is that you don't have one wavefunction per particle, you just have one big wavefunction which provides probabilities for joint configurations of particles.

A pilot-wave theory for many particles, in the form that de Broglie originally proposed - one wave per particle - contains no entanglement, and can't reproduce the multi-particle predictions of quantum mechanics, as Bell's theorem and many other theorems show. Bohmian mechanics can reproduce those predictions, because in Bohmian mechanics, the wavefunction that does the piloting is the single, entangled, multi-particle wave used in actual quantum mechanics.
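
In symbols (a standard illustration, added for clarity): one wave per particle means the joint wavefunction factorizes, whereas quantum mechanics generally needs non-factorizable, entangled states:

    ψ(x_1, x_2) = ψ_1(x_1) ψ_2(x_2)                                (de Broglie's original picture: no entanglement)
    ψ(x_1, x_2) = ( ψ_a(x_1) ψ_b(x_2) + ψ_b(x_1) ψ_a(x_2) ) / √2   (entangled: no product form exists)

Bell-type correlations live in states of the second kind, which is why the Bohmian pilot wave has to be the entangled wave on configuration space rather than a set of waves in ordinary space.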

All this is utterly basic knowledge for the people who work on Bohmian mechanics today. But meanwhile, a group of people who work on fluid dynamics have apparently rediscovered de Broglie's original idea - "wave guiding a particle" - and are now promoting it as a possible explanation of quantum mechanics. They don't seem to care about the theorems proving that you can't get Bell-type correlations without using entangled waves.

So basically, this article describes the second-rate researchers in this field - in this case, people who are doing the equivalent of trying to force a square peg into a round hole - as if they were the intellectual leaders who define it!

Comment author: Miller 25 May 2017 06:20:58AM *  0 points [-]

Prediction is intelligence. Why is there not more discussion about stock picks here? Is it low status? Does everyone believe in strong forms of the efficient-market hypothesis?

(edited -- curious where it goes without leading the witness)

Comment author: Manfred 25 May 2017 06:14:39AM 0 points [-]

My knowledge of it is pretty superficial, but I'm pretty confused about how it represents states with a superposition of particle numbers. For a fixed number of (non-relativistic) particles you can always just put the interesting mechanics (including spin, electromagnetic charge, etc.!) in the wavefunction and then add an epiphenomenal ontologically-fundamental particle like a cherry on top. Well, epiphenomenal in the von Neumann measurement paradigm; presumably advocates think it plays some role in measurement, but I'm still a bit vague on that.

Anyhow, for mixtures of particle numbers, I genuinely don't know how a Bohmian is supposed to get anything intuitive or pseudo-classical.

Comment author: lmn 25 May 2017 05:10:01AM 0 points [-]

Also stuff like this.

Comment author: btrettel 25 May 2017 04:34:34AM 0 points [-]

This reminds me of something my father, a retired patent examiner, told me once. For a certain legal procedure the US Patent Office has a form letter a lawyer can use that contains all of the relevant information in a convenient format. My father was amazed by lawyers who refused to use it and instead wrote their own version of it. This seems like a waste of time for both the lawyer and the examiner. When my father asked why, at least one lawyer told him that they believed the standard form had legal implications they didn't like, though my father insisted that case law made it clear this concern was unfounded.

Another (cynical) hypothesis is that these lawyers are paid by the hour and that they actively wanted to waste time.

Comment author: hg00 25 May 2017 03:16:22AM *  0 points [-]

I don't like the precautionary principle either, but reversed stupidity is not intelligence.

"Do you think there's a reason why we should privilege your position" was probably a bad question to ask because people can argue forever about which side "should" have the burden of proof without actually making progress resolving a disagreement. A statement like

The burden of proof therefore belongs to those who propose restrictive measures.

...is not one that we can demonstrate to be true or false through some experiment or deductive argument. When a bunch of transhumanists get together to talk about the precautionary principle, it's unsurprising that they'll come up with something that embeds the opposite set of values.

BTW, what specific restrictive measures do you see the AI safety folks proposing? From Scott Alexander's AI Researchers on AI Risk:

The “skeptic” position seems to be that, although we should probably get a couple of bright people to start working on preliminary aspects of the problem, we shouldn’t panic or start trying to ban AI research.

The “believers”, meanwhile, insist that although we shouldn’t panic or start trying to ban AI research, we should probably get a couple of bright people to start working on preliminary aspects of the problem.

(Control-f 'controversy' in the essay to get more thoughts along the same lines)

Like Max More, I'm a transhumanist. But I'm also a utilitarian. If you are too, maybe we can have a productive discussion where we work from utilitarianism as a shared premise.

As a utilitarian, I find Nick Bostrom's argument for existential risk minimization pretty compelling. Do you have thoughts?

Note Bostrom doesn't necessarily think we should be biased towards slow tech progress:

...instead of thinking about sustainability as is commonly known, as this static concept that has a stable state that we should try to approximate, where we use up no more resources than are regenerated by the natural environment, we need, I think, to think about sustainability in dynamical terms, where instead of reaching a state, we try to enter and stay on a trajectory that is indefinitely sustainable in the sense that we can continue to travel on that trajectory indefinitely and it leads in a good direction.

http://www.stafforini.com/blog/bostrom/

So speaking from a utilitarian perspective, I don't see good reasons to have a strong pro-tech prior or a strong anti-tech prior. Tech has brought us both disease reduction and nuclear weapons.

Predicting the future is unsolved in the general case. Nevertheless, I agree with Max More that we should do the best we can, and in fact one of the most serious attempts I know of to forecast AI has come out of the AI safety community: http://aiimpacts.org/ Do you know of any comparable effort being made by people unconcerned with AI safety?

Comment author: Lumifer 25 May 2017 12:03:40AM 1 point [-]

You probably mean Eliza and the story of Weizenbaum's secretary.

Comment author: Bound_up 24 May 2017 11:36:45PM 0 points [-]

I'm trying to find the story of a computer therapist that was rated more positively than a real therapist.

If I remember correctly, the computer would basically rephrase and repeat back whatever you said to it, and people actually found this quite helpful.

Anybody know where I can find that? Maybe a link?

Comment author: madhatter 24 May 2017 11:02:28PM 0 points [-]

I really recommend the book Superforecasting by Philip Tetlock and Dan Gardner. It's an interesting look at the art and science of forecasting, and those who repeatedly do it better than others.

Comment author: ChristianKl 24 May 2017 06:16:08PM 0 points [-]

Once an AI reaches human-level intelligence and can run multiple instances in parallel, it doesn't require a human debugger but can be debugged by another AGI instance.

That's what human-level AGI is, by definition.

Comment author: username2 24 May 2017 06:09:46PM *  0 points [-]

It was not a point about timelines, but rather about the viability of a successful runaway process (vs. one that gets stuck in a silly loop or crashes and burns in a complex environment). It becomes harder to imagine a hard takeoff of an evil AI when every time it goes off the rails it requires the intervention of a human debugger to get back on track.

Comment author: CronoDAS 24 May 2017 05:11:48PM 1 point [-]

Link's broken - it leads back to this page.

Comment author: ChristianKl 24 May 2017 04:47:53PM *  0 points [-]

They demonstrate a lack of rigor and a naïve underappreciation of the difficulty of making anything work in production at all, much less outsmart the human race.

This sounds like you think you disagree about timelines. When do you think AGI that's smarter than the human race will be created? What's the probability that it will be created before: 2050, 2070, 2100, 2150, 2200 and 2300?

Comment author: username2 24 May 2017 04:25:51PM *  0 points [-]

Yes and yes and yes (those are all examples mentioned in the article). If you have a specific example of a quantum phenomenon that pilot wave theory doesn't exhibit, I'd like to know. Pilot wave advocates claim that pilot wave theory results in the same predictions, although I haven't had time to chase down sources or work this out for myself.

Comment author: username2 24 May 2017 04:21:27PM 0 points [-]

Yes, the proactionary principle:

http://www.maxmore.com/proactionary.html

Comment author: btrettel 24 May 2017 03:07:41PM 0 points [-]

None of these are at the Library of Congress, unfortunately. Frequently their catalog includes books not listed on WorldCat. I'm away from my university for the summer, so there's no way I can do an interlibrary loan right now.

Comment author: Lumifer 24 May 2017 02:44:04PM 1 point [-]

An interesting question.

My first thought was Nomic, but I don't know how viable it is with only two players. Hmm...

Comment author: Raemon 24 May 2017 02:16:13PM 6 points [-]

I don't think it would have made sense to condense the links (AFAICT they aren't very thematically connected) but I would say:

a) posting 5 things in a row feels spammy; I'd personally have waited at least a day in between them. (I realize you're cross-posting from EA and they're already written, but it's still good form to wait.)

b) when posting a link post, a good practice is to include a comment that explains some context for the link.

Comment author: g_pepper 24 May 2017 12:45:34PM 1 point [-]

Per the article:

Droplets can also seem to “tunnel” through barriers, orbit each other in stable “bound states,” and exhibit properties analogous to quantum spin and electromagnetic attraction. When confined to circular areas called corrals, they form concentric rings analogous to the standing waves generated by electrons in quantum corrals.

and

Like an electron occupying fixed energy levels around a nucleus, the bouncing droplet adopted a discrete set of stable orbits around the magnet, each characterized by a set energy level and angular momentum.

In response to Political ideology
Comment author: Viliam 24 May 2017 10:54:01AM *  7 points [-]

Five links without any summary? Please don't do this.

Comment author: Viliam 24 May 2017 10:52:21AM 1 point [-]

So, does the bouncing oil droplet also tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels?

Because to me this seems like merely an analogy that works in some aspects, but fails in other aspects.

Comment author: Viliam 24 May 2017 09:58:14AM 1 point [-]

There is already a book on this topic: Creating a Life Together: Practical Tools to Grow Ecovillages and Intentional Communities. Yes, it focuses on ecological communities, but most of the lessons seem to be universal.

Some things I remember:

  • Don't rely on people merely saying "I will totally join the community", no matter how convincing they sound. When it comes time to actually buy some land or building, expect that less than half of them will actually join. (Worst case: you spend your money or take a loan to buy the land/building, only to find out that actually no one joins you. Yes, there will always be an excuse: the timing is wrong; we wanted to live in a forest, but not that part of a forest, etc.) Take it as a serious project, make written agreements.

  • Make sure all of you can agree on the same vision. Put that vision in writing, because people have selective memory, and a few months later one will remember that "we agreed on X", while another will remember that "it was always supposed to be Y". Make sure you agree on the near-mode details, not just the far-mode applause lights. (There was an example of a group of people who moved to a forest to get away from civilization. It turned out that half of them opposed civilization in principle, while the other half just wanted to live in a more green and less stressful environment away from the town. They started okay, but an unsolvable conflict emerged when the latter half wanted to bring an internet connection to the village, which the former half opposed in principle.)

  • Think about all details of the life style: What kind of sexual behavior do you expect in your communities? What is your position on drugs? Are people going to have kids, and does that require increased safety or quiet at night? What kinds of religion are accepted? Is it okay if community members participate in politics? In other words, communicate explicitly and in detail what behavior will be okay, and what behavior will not be okay.

  • Make a formal decision-making process. Saying "oh, we will just solve everything by consensus" is pretty much a guaranteed disaster. (Consensus is easy while people generally agree with each other. You need a method to make decisions when they don't. Without clear rules, some people will try to win by increasing pressure, and soon everyone will go: "unless you do it my way, I quit".)

  • Avoid insane people, or generally people who generate tons of drama around them. One such person can be enough to destroy the whole community; there will already be enough problems happening naturally. Have formal rules for accepting new members of the community (e.g. some trial period, and approval by a majority of existing members).

  • Make sure your community has someone with technical skills, and someone with people skills.

Comment author: Thomas 24 May 2017 08:17:03AM 3 points [-]

At least, condense all those links of yours in one, please.

Comment author: unonessunocentomila 24 May 2017 07:36:10AM 0 points [-]

While acknowledging the wiki definitions, I deem it useful, in order to properly distinguish between the two, to analyze the etymology of the terms. Nothing new, after all. Just modern. Let us not forget our roots...

Interestingly, the two concepts originate from opposites.

"INTELLIGENCE" comes from the latin word INTELLIGENTIA - INTELLIGENTIAE, out of the latin verb INTELLIGERE, wich derives out of INTUS (within) or INTRA (inside) and LEGERE (choose).

"RATIONALITY" comes from the latin word RATIO - RATIONIS (calculation, reason, advantage, part), same root as in RATION, RATIO (relationship or quotient), RATIONALE or RATE...

In both cases, the aim of a higher self-awareness obviously "lurks upon the waters"...

But the term intelligence appears to subtend to the idea of SELECTING, whereas the term rationality appears to imply the concept of COMPARING.

In the former case, of course, one must first separate objects that she already has knowledge of. In the latter, one must first put together objects that she has no knowledge of.

XO

Comment author: Oscar_Cunningham 24 May 2017 06:59:35AM 4 points [-]

Arimaa was designed for this purpose and computers are now better at it.

Comment author: hg00 24 May 2017 05:47:49AM *  1 point [-]

You describe the arguments of AI safety advocates as being handwavey and lacking rigor. Do you believe you have arguments for why AI safety should not be a concern that are more rigorous? If not, do you think there's a reason why we should privilege your position?

Most of the arguments I've heard from you are arguments that AI is going to progress slowly. I haven't heard arguments from AI safety advocates that AI will progress quickly, so I'm not sure there is a disagreement. I've heard arguments that AI may progress quickly, but a few anecdotes about instances of slow progress strike me as a pretty handwavey/non-rigorous response. I could just as easily provide anecdotes of unexpectedly quick progress (e.g. AIs able to beat humans at Go arrived ~10 years ahead of schedule). Note that the claim you are going for is a substantially stronger one than the one I hear from AI safety folks: you're saying that we can be confident that things will play out in one particular way, and AI safety people say that we should be prepared for the possibility that things play out in a variety of different ways.

FWIW, I'm pretty sure Bostrom's thinking on AI predates Less Wrong by quite a bit.

Comment author: knb 24 May 2017 04:49:19AM 2 points [-]

I think it's mainly about combining two click-friendly buzzwords in a novel way.

Comment author: btrettel 24 May 2017 04:11:55AM 0 points [-]

R. D. Monson, "Experimental studies of cylindrical and sheet jets with and without forced nozzle vibrations" M.S. thesis, Dept. of Mech. Engr., Univ. of Calif., Davis (December 1980).

UC Davis refuses to loan this for unknown reasons. What I find odd is that it has already been digitized. UC Davis students might be able to download it here. Let me know if you can download it.

Comment author: jsalvatier 24 May 2017 02:42:28AM 1 point [-]

There aren't that many that I know of. I do think it's much more intuitive and lets you build more nuanced models that are useful for the social sciences. You can fit the exact model that you want instead of needing to fit your case into a preexisting box. However, I don't know of too many examples where this is hugely practically important.

The lack of obviously valuable use cases is part of why I stopped being that interested in MCMC, even though I invested a lot in it.

There is one important industrial application of MCMC: hyperparameter sampling in Bayesian optimization (Gaussian processes + priors for hyperparameters). And the hyperparameter sampling does substantially improve things.
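
To make the GP case concrete, here is a minimal random-walk Metropolis sketch over a single RBF length-scale, using only numpy (the kernel, the Exp(1) prior, the fixed noise level, and the toy data are all illustrative assumptions, not the setup of any particular library):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=12)             # toy 1-D inputs
    y = np.sin(X) + 0.1 * rng.normal(size=12)   # toy noisy observations

    def log_marginal_likelihood(ell):
        # GP log marginal likelihood with an RBF kernel and fixed noise
        # (additive constants dropped, which is fine for MCMC).
        K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2)
        K += 0.1 ** 2 * np.eye(len(X))
        sign, logdet = np.linalg.slogdet(K)
        return -0.5 * y @ np.linalg.solve(K, y) - 0.5 * logdet

    def log_posterior(ell):
        if ell <= 0:
            return -np.inf
        return log_marginal_likelihood(ell) - ell   # Exp(1) prior on the length-scale

    samples, ell = [], 1.0
    for _ in range(2000):
        proposal = ell + 0.2 * rng.normal()         # random-walk proposal
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(ell):
            ell = proposal
        samples.append(ell)

    print(np.mean(samples[500:]))                   # posterior mean length-scale, burn-in discarded

In Bayesian optimization one would then average the acquisition function over these samples rather than plugging in a single point estimate of the length-scale.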

Comment author: gworley 24 May 2017 02:22:35AM 0 points [-]

If we are looking for intentional communities that do work, we need look no further than modern organizations like corporations. We may not like the communities they create, but we can't deny that corporations that survive for long tend to have some reason they are able to do so, and it must involve coordinating the actions of thousands of people. WalMart is perhaps the most successful intentional community of all time.

Comment author: James_Miller 24 May 2017 01:08:45AM *  1 point [-]

In what kind of two "person" game would a human have the greatest advantage over a computer? Let's make the game entirely self-contained with objective scoring, no manipulation of real-world objects, and no advantage for knowing real world facts. How far away is "go" from such a game?

Comment author: satt 23 May 2017 10:29:17PM *  0 points [-]

I agree with the normative statement that pensioners who pay in are "entitled to get something out", but it's a new claim. My comment, like the bit of entirelyuseless's comment to which it responded, was about an empirical claim.

Pensioners have paid into the system, though.

The fact remains that there is a big group of people in Europe who can, in fact, claim government cash even if they declare that they have worked, and could work, but just don't want to work. Insofar as entirelyuseless's general point was that someone has to work to keep an economy going, that point is well taken, but the empirical claim about Europe is materially false.

(And, regarding the more general argument, if there were a basic income, the vast majority of people claiming it would likewise have paid into the system, through general taxation. So the fact of paying in doesn't do a very good job of distinguishing a BI from a pension scheme.)

Yes, it's a Ponzi scheme

How so? To my mind, a defining property of a Ponzi scheme is that it's fraudulent, deceptive (or at least opaque) about where it finds the money that it disburses. But — in my European country, anyway — the government publishes annual accounts for its pension fund, which are, as far as I know, uncooked books. Check 'em out.

Comment author: Raemon 23 May 2017 09:12:11PM 0 points [-]

Oh, it actually talks about this at the end, shame on me for not reading to the end before commenting.

(In my defense, it was really long. :P)

Comment author: korin43 23 May 2017 08:07:10PM 0 points [-]

Does it use anything non-local? The experiments in the article use macroscopic fluids, which presumably don't have non-local effects.

Comment author: hg00 23 May 2017 08:03:13PM 0 points [-]

A lot more folks will be using it soon...

?

Comment author: fubarobfusco 23 May 2017 07:56:46PM 0 points [-]

I'm curious if there's much record of intentional communities that aren't farming communes.

Oneida comes to mind. They had some farming (it was upstate New York in the 1850s, after all) but also a lot of manufacturing — most famously silverware. The community is long gone, but the silverware company is still around.

Comment author: whpearson 23 May 2017 07:55:47PM 0 points [-]

I think we need an association for AGI safety.

I'm poking around using some prototyping tools to try and get something to explain my thoughts about what it might do.

https://www.weld.io/agi-safety-association

Feedback welcome on the content (warning: the presentation is bad; do not let operations people near the frontend!)

Comment author: Lumifer 23 May 2017 07:50:15PM 2 points [-]

As a good heuristic, any time a headline ends with a question mark, in 90+% of cases the answer is "No".

Comment author: WalterL 23 May 2017 07:31:24PM 2 points [-]

Ke Jie is playing AlphaGo. He lost the first game last night, and is generally expected to lose the next two. According to my buddy at Google, they aren't planning to do any more big 'vs the strong player' style exhibitions, so tonight is your last chance to see this John Henry story in real time.

Comment author: WalterL 23 May 2017 07:29:31PM 5 points [-]

Is it wrong that I'm hoping that I click on this link and it just goes to a web site with the word 'No' in huge letters?

Comment author: lukeprog 23 May 2017 07:28:54PM 0 points [-]

Thanks for briefly describing those Doctor Who episodes.

Comment author: Lumifer 23 May 2017 07:19:10PM 0 points [-]

A point for LW 2.0: don't be vulnerable to a spam-vomit script attack (e.g. by using posting-rate caps for new accounts).

Comment author: whpearson 23 May 2017 05:56:00PM 0 points [-]

I would never argue for inaction. I think this line of thinking would argue for efforts being made to make sure any AGI researchers were educated, but (in the most extreme case) no efforts being made to educate anyone else.

But yep we may as well carry on as we are for the moment.

Comment author: ChristianKl 23 May 2017 05:09:18PM 1 point [-]

The fact that journalists at a mainstream publication use the metaphor of machine learning to explain the actions of the president is noteworthy. Five years ago you would have been hard-pressed to find a journalist who thought his audience would understand machine learning well enough to get the metaphor.

Comment author: CronoDAS 23 May 2017 05:06:05PM 0 points [-]

Ah, pilot wave theory. It gets around the "no local realism" theorem by using non-local hidden variables...

Comment author: Oscar_Cunningham 23 May 2017 05:00:08PM 0 points [-]

Cool (heh). Good thinking!

Comment author: ChristianKl 23 May 2017 04:51:42PM *  1 point [-]

For Feldenkrais there's a supportive meta-review that concludes:
Further research is required; however, in the meantime, clinicians and professionals may promote the use of FM (Feldenkrais Method) in populations interested in efficient physical performance and self-efficacy.

Video games can treat PTSD

The link points to an article that doesn't provide evidence that the treatment works.

service dogs

From the linked paper:

The overarching theme in the literature that cut across those addressed in this review was the need for further empirical research. It is evident given the extent of anecdotal evidence that PSD are effective in the management of PTSD. There are challenges and difficulties with the use of PSD as a treatment as indicated in the review. And the evidence, whether scientific or interpretative, about the exact nature of the challenges and the effectiveness, including the conditions that influence effectiveness, is still lacking.

Basically, the tenor of your article seems to be to support some treatments that are only supported by anecdotes, if they are nearer to the mainstream, while rejecting other forms of therapy that are less mainstream.

Comment author: korin43 23 May 2017 04:44:46PM 0 points [-]

Note that the theory seems to have been around since the 1930s, but these experiments are new (2016).

Comment author: korin43 23 May 2017 04:42:51PM 1 point [-]

"The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features."

Comment author: Daniel_Burfoot 23 May 2017 04:39:56PM 1 point [-]

Can someone give me an example problem where this particular approach to AI and reasoning hits the ball out of the park? In my mind, it's difficult to justify a big investment in learning a new subfield without a clear use case where the approach is dramatically superior to other methods.

To be clear, I'm not looking for an example of where the Bayesian approach in general works, I'm looking for an example that justifies the particular strategy of scaling up Bayesian computation, past the point where most analysts would give up, by using MCMC-style inference.

(As an example, deep learning advocates can point to the success of DL on the ImageNet challenge to motivate interest in their approach).

Comment author: siIver 23 May 2017 04:32:14PM 0 points [-]

That sounds dangerously like justifying inaction.

Literally speaking, I don't disagree. It's possible that spreading awareness has a net negative outcome. It's just not likely. I don't discourage looking into the question, and if facts start pointing the other way I can be convinced. But while we're still vaguely uncertain, we should act on what seems more likely right now.

Comment author: Thomas 23 May 2017 03:02:59PM 0 points [-]

Of course.

And this explains a lot. The so-called Faint Sun Paradox is then not a problem at all.

Early Earth was much warmer despite a fainter Sun, mostly thanks to its faster rotation, and partly also because of its smaller distance to the Sun back then.

It's quite elementary if you think about it.

Comment author: lifelonglearner 23 May 2017 03:01:04PM 0 points [-]

Yep! Romeo Stevens also has some very well-explained articles here on LW. This one and this one.

Comment author: username2 23 May 2017 02:52:37PM *  1 point [-]

It was a long time from the abacus to the electronic pocket calculator. Even for programmable machines, Babbage and Lovelace predate implementation by the better part of a century. You can prove a point in a toy environment long before the complexity of supported environments reaches that of the real world.

Yes, I read and walked away unconvinced from the same old tired, hand-wavey arguments of Superintelligence. All my criticisms above apply as much to Bostrom as to the LW AI x-risk community that gave birth, or at least a base platform, to him.

Comment author: cousin_it 23 May 2017 12:50:41PM *  2 points [-]

I think the rate of cooling depends on temperature much more than the rate of warming up, because T_sun - T_planet >> T_planet - T_space. So a faster rotating planet should be warmer.
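
One way to make that concrete (a toy Stefan-Boltzmann comparison; the temperatures and the day-night contrast are made-up numbers, not a climate model):

    # Toy illustration: because emitted flux goes as T^4 (convex), a planet with
    # a large day-night contrast radiates more than a uniform planet with the
    # same mean temperature, so it must settle at a lower mean temperature to
    # balance the same absorbed sunlight.
    sigma = 5.67e-8                  # Stefan-Boltzmann constant, W m^-2 K^-4
    T_mean = 255.0                   # same mean temperature in both cases, K

    emitted_uniform = sigma * T_mean**4            # fast rotator: uniform temperature

    contrast = 60.0                                # slow rotator: assumed day-night contrast, K
    T_day, T_night = T_mean + contrast / 2, T_mean - contrast / 2
    emitted_uneven = sigma * (T_day**4 + T_night**4) / 2

    print(emitted_uniform, emitted_uneven)         # ~240 vs ~260 W m^-2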

Comment author: ahzamvr 23 May 2017 11:33:36AM 0 points [-]

It is a nice article, impressive. Keep writing.

Comment author: Davidmanheim 23 May 2017 11:27:14AM 2 points [-]

I really like the idea here, but think it's important to be more careful about recommendations. There are community members (Gwern, Scott of SSC) who have done significant research on many areas discussed here, and have fantastic guides to some parts. Instead of compiling a lot of advice, perhaps you could find which things aren't covered well already, link to those that are, and try to investigate others more thoroughly.

Comment author: knb 23 May 2017 07:25:42AM 8 points [-]

This is a good example of the type of comment I would like to be able to downvote. Utterly braindead political clickbait.

Comment author: hg00 23 May 2017 06:35:23AM *  0 points [-]

I'm sure the first pocket calculator was quite difficult to make work "in production", but nonetheless once created, it vastly outperformed humans in arithmetic tasks. Are you willing to bet our future on the idea that AI development won't have similar discontinuities?

Also, did you read Superintelligence?

Comment author: SoerenE 23 May 2017 05:49:02AM 0 points [-]

Also, it looks like the last time slot is 2200 UTC. I can participate from 1900 onward.

I will promote this in the AI Safety reading group tomorrow evening.

Comment author: jsalvatier 23 May 2017 03:45:27AM 7 points [-]

Funny enough, as a direct result of reading the sequences, I got super obsessed with Bayesian stats and that eventually resulted in writing PyMC3 (which is the software used in the book).

Comment author: buybuydandavis 23 May 2017 01:41:40AM 0 points [-]

Did they even have "Saddam is faking it" as a possibility?

Comment author: morganism 22 May 2017 11:49:15PM 0 points [-]

"We have the equivalent of a dynamic neural network running our government. It’s ethics free and fed by biased alt-right ideology. And, like most opaque AI, it’s largely unaccountable and creates feedback loops and horrendous externalities. The only way to intervene would be to disrupt the training data itself, which seems unlikely, or hope that his strategy is simply ineffective.

https://www.bloomberg.com/view/articles/2017-02-06/donald-trump-is-the-singularity

"In a prior column, I discussed the notion that Trump behaves like a machine learning algorithm. Well, his path-independent theory of mind fits perfectly into that metaphor. I’d argue that Trump's path independence operates on multiple levels. It's evident at a meta-political level when he takes a stab at sweeping campaign promises that he never intends to fulfill. It's also visible at the micro level, even within a given sentence: In his very strange recent interview with The Economist, for example, he kept attempting to adjust his message to obtain approval from his interviewers. He keeps things vague, and then pokes his way into a given explanation, but leaves himself room to change direction in case he senses disapproval.

https://www.bloomberg.com/view/articles/2017-05-21/donald-trump-s-path-independent-theory-of-mind

Comment author: morganism 22 May 2017 11:47:39PM 0 points [-]

Producing fertilizer from air, using a plasma from a Jacob's ladder.

https://phys.org/news/2017-05-fertilizer-air-efficient.html

"In his experiments the GA reactor in particular appeared to be the most suited to producing nitrogen oxides. In this reactor, under atmospheric pressure, a plasma-front (a kind of mini lightning bolt) glides between two diverging metal surfaces, starting with a small opening (2 mm) to a width of 5 centimeters. This expansion causes the plasma to cool to room temperature. During the trajectory of the 'lightning', the nitrogen (N2) and oxygen (O2) molecules react in the immediate vicinity of the lightning front to nitrogen oxides (NO and NO2)."

"Patil optimized this reactor and at a volume of 6 liters per minute managed to achieve an energy consumption level of 2.8 MJ/mole, quite an improvement on the commercially developed methods that use approximately 0.5 MJ/mole. With the theoretical minimum of Patil's reactor, however, being that much lower (0.1 MJ/mole), in the long term this plasma technique could be an energy-efficient alternative to the current energy-devouring ammonia and nitrate production. An added benefit is that Patil's method requires no extra raw materials and production can be generated on a small scale using renewable energy, making his technique ideally suited for application in remote areas that have no access to power grids, such as parts of Africa, for instance."

Comment author: morganism 22 May 2017 11:44:34PM 0 points [-]

"New role in cells suggested for ATP Known as an energy carrier, molecule can also solubilize proteins"

"investigated the effects of ATP on the aggregation of several proteins. They found that ATP could prevent the aggregation of two proteins known to form amyloid clumps. For a third protein, ATP was further able to dissolve fibers of already aggregated protein. And ATP kept proteins in boiled egg white from aggregating.

“Most healthy cell functions require that proteins remain soluble at enormous intracellular concentrations, without aggregating into pathogenic deposits,” write Allyson M. Rice and Michael K. Rosen of the University of Texas Southwestern Medical Center in a perspective accompanying the paper. “The cell may exploit a natural hydrotrope to keep itself in a functioning, dynamic state.”

http://cen.acs.org/articles/95/i21/New-role-cells-suggested-ATP.html

Comment author: RedMan 22 May 2017 11:29:05PM 0 points [-]

Thank you for clarifying: in the long run there was stability, and we do not fully understand it... I believe my assertion still stands, though, that the transition was messy and involved the collapse of Bronze Age civilizations rather than their persistence.

My point is that new developments upended the old social order, and cleared the way for the eventual rise of alternatives. Today, similar levels of destruction will be challenging to recover from, because infrastructure, once trashed, leads to things like the birth defect rate in Fallujah, not just empty space where new things can be built and battlefields which yield bumper crops.

Comment author: RedMan 22 May 2017 11:23:41PM *  0 points [-]

They certainly swung. I'm not certain that they successfully imposed their will on the activities of the nation states they attacked. Neither of them is comparable to Alaric; one is comparable to https://en.m.wikipedia.org/wiki/Bernard_Délicieux who, despite making a big scene, had no immediate or meaningful impact on the institution he rebelled against.

Do you have a better, easier example of what I've described, or do you disagree with the broad statement in addition to the specific example of Flint?

Comment author: morganism 22 May 2017 11:17:19PM 0 points [-]

The Vanishing Middle Class: Prejudice and Power in a Dual Economy,

"the United States is shifting toward an economic and political makeup more similar to developing nations than the wealthy, economically stable nation it has long been."

https://theintellectualist.co/study-mit-economist-u-s-regressed-third-world-nation-citizens/

The parallels are unsettling. As noted by the Institute for New Economic Thinking:

"In the Lewis model of a dual economy, much of the low-wage sector has little influence over public policy. Check. The high-income sector will keep wages down in the other sector to provide cheap labor for its businesses. Check. Social control is used to keep the low-wage sector from challenging the policies favored by the high-income sector. Mass incarceration – check. The primary goal of the richest members of the high-income sector is to lower taxes. Check. Social and economic mobility is low. Check."

https://mitpress.mit.edu/vanishing

Comment author: gworley 22 May 2017 10:57:23PM 0 points [-]

This feels like it matches what I've seen. I wonder if there are other studies replicating similar effects?

Comment author: gworley 22 May 2017 10:56:17PM 1 point [-]

Abstract:

Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1–3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not of a desire for greater control over the forecasting outcome, as participants’ preference for modifiable algorithms was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control—even a slight amount—over an imperfect algorithm’s forecast.

Comment author: philh 22 May 2017 10:53:40PM 2 points [-]

Thanks! They were even reasonably painless, I didn't need to make up a password.

Comment author: gilch 22 May 2017 10:29:30PM 0 points [-]

I'm not sure what you're implying. Most people close to me are not even aware that I advocate cryonics. I expect this will change once I get my finances sorted out enough to actually sign up for cryonics myself, but for most people, cryonics alone already flunks the Absurdity heuristic. Likewise with many of the perfectly rational ideas here on LW, including the logical implications of quantum mechanics and cosmology, like Subjective Immortality. Linking more "absurdities" seems unlikely to help my case in most instances. One step at a time.

Comment author: whpearson 22 May 2017 09:49:49PM 0 points [-]

(Un)luckily we don't have many examples of potentially world-destroying arms races. We might have to adopt the inside view. We'd have to look at how much mutual trust and co-operation there is currently for various things. Beyond my current knowledge.

On the research aspect, I think research can be done without the public having a good understanding of the problems, e.g. CERN/CRISPR. I can also think of other bad outcomes of the public having an understanding of AI risk. It might be used as another stick to take away freedoms; see the war on terrorism and drugs for examples of the public's fears.

Convincing the general public of AI risk seems like shouting fire in a crowded movie theatre; it is bound to have a large and chaotic impact on society.

This is the best steelman of this argument that I can think of at the moment. I'm not sure I'm convinced. But I do think we should put more brain power into this question.

Comment author: lifelonglearner 22 May 2017 09:16:30PM 1 point [-]

The Bayesian method is the natural approach to inference, yet it is hidden from readers behind chapters of slow, mathematical analysis. The typical text on Bayesian inference involves two to three chapters on probability theory, then enters what Bayesian inference is. Unfortunately, due to mathematical intractability of most Bayesian models, the reader is only shown simple, artificial examples. This can leave the user with a so-what feeling about Bayesian inference. In fact, this was the author's own prior opinion.

Bayesian Methods for Hackers is designed as an introduction to Bayesian inference from a computational/understanding-first, and mathematics-second, point of view. Of course as an introductory book, we can only leave it at that: an introductory book. For the mathematically trained, they may cure the curiosity this text generates with other texts designed with mathematical analysis in mind. For the enthusiast with less mathematical-background, or one who is not interested in the mathematics but simply the practice of Bayesian methods, this text should be sufficient and entertaining.

Just started reading this text, and I currently find it very instructive for someone trying to get a handle on Bayesianism from a CS perspective.
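
For anyone who wants a taste before diving in, a minimal coin-flip model in PyMC3, the library the book is built on (a sketch of the PyMC3 3.x style API, not an excerpt from the book; details vary between versions):

    import numpy as np
    import pymc3 as pm

    data = np.array([1, 0, 1, 1, 0, 1, 1, 1])    # toy coin flips

    with pm.Model():
        p = pm.Beta("p", alpha=1, beta=1)         # uniform prior on the coin's bias
        pm.Bernoulli("obs", p=p, observed=data)   # likelihood of the observed flips
        trace = pm.sample(2000)                   # draw MCMC samples from the posterior

    print(trace["p"].mean())                      # posterior mean of the bias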

Comment author: Variable 22 May 2017 08:49:33PM 0 points [-]

You cannot avoid an AI race unless all developed countries come to an agreement to stop all AI development. The chances of that happening are too low. Most probably some military projects are far ahead of known ones. However, that does not mean we cannot help/affect the situation.

Comment author: tristanm 22 May 2017 08:26:42PM *  2 points [-]

Sometimes I feel that the deep-learning tutorials on the web (and I've encountered a great many) usually don't mention how little time will be spent actually designing and running a deep-learning model.

The problem is that, in "the wild", you almost never encounter any situations that resemble the scenarios presented in these tutorials. For example, for our company, a typical project encountered might look like this:

  • Client gives you access to an enormous database with terabytes of unstructured text data like intracompany emails, computer network usage, key-card swipes, transaction data if it's a financial firm, etc., distributed over a wide array of servers that may or may not have the correct software you need in order to run your stuff.
  • They will ask you to find some vaguely-defined thing like "fraud" or "network intrusion".
  • Nothing is labeled, so you won't be able to make use of supervised learning techniques, which are what these deep-learning models are most successful at.
  • The attributes defining the data are very opaque, and there is a separate expert for maintaining some subset of the attributes that has to be found, consulted with, and the knowledge from them then incorporated into your often complex understanding of how the data is generated and represented.
  • Most of the work is spent doing the above, and figuring out how to do the complex joins and transformations necessary to input the data into a machine-learning algorithm (with deep nets, all fields have to be numeric and within a specific range; a sketch of this step appears after this list).
  • Clients often demand deep-learning-level accuracy while also demanding complete model transparency. They almost never tolerate black-box models.
  • Clients would usually rather continue to use their archaic, hand-designed if-statements than adopt modern AI techniques unless the latter meet all of their requirements perfectly - the burden of proof lies on you to prove your method is guaranteed to be a return on their investment with zero risk for them.
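
As a sketch of the join-and-scale step mentioned above (table and column names are invented for the example; the scaler is scikit-learn's standard one):

    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    # Per-user aggregates that would come from different source systems.
    emails = pd.DataFrame({"user_id": [1, 2], "emails_sent": [40, 5]})
    swipes = pd.DataFrame({"user_id": [1, 2], "badge_swipes": [12, 30]})

    # Join them into one feature table.
    features = emails.merge(swipes, on="user_id")

    # Deep nets want numeric inputs in a bounded range, e.g. [0, 1].
    X = MinMaxScaler().fit_transform(features[["emails_sent", "badge_swipes"]])
    print(X)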

I will add a caveat that:

  • Occasionally we do encounter projects that not only have labeled data but also enough straightforward, numeric features that neural networks can be used and are easily shown to be more successful. But this is usually the exception rather than the rule.

I sometimes wonder if we're getting an unusual subset of major corporations with these characteristics, but these are large, mainstream firms that seem to share many of the same business practices with each other, so I would somewhat doubt that.

But in general, there seems to be a far larger share of articles covering how to do basic things with Keras and TensorFlow, and too few on the "hard problems" of data science work.

Comment author: ChristianKl 22 May 2017 07:49:04PM 0 points [-]

Why is it cruel to have to write job applications?
