
Comment author: username2 25 May 2017 10:26:27AM *  0 points [-]

That's like saying a paranoid schizophrenic can solve his problems by performing psychoanalysis on a copy of himself. However, I doubt another paranoid schizophrenic would be able to provide very good or effective therapy.

In short, you are assuming a working AGI exists to do the debugging, but the setup is that the AGI itself is flawed! Nearly every engineering project ever demonstrates that things don't work on the first try, and when an engineered thing fails, it fails spectacularly. Biology is somewhat unique in its ability to recover from errors, but only from specialized categories of errors that it was trained to overcome in its evolutionary environment.

As an engineering professional I find it extremely unlikely that an AI could successfully achieve hard take-off on the first try. So unlikely that it is not even worth thinking about: LHC-creating-black-holes levels of unlikely. When developing AI it would be prudent to seed the simulated environments in which it is developed and tested with honeypots, and see whether it attempts any of the failure modes x-risk people are worried about. Then and there, with an actual engineering prototype, would be an appropriate time to consider engineering proactive safeguards. But until then it seems a bit like worrying about aviation safety in the 17th century and designing a bunch of safety equipment for massive passenger hot-air balloons that ends up being of zero use in the fixed-wing aeroplane era of the 20th century.
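To make the honeypot idea concrete, here is a minimal sketch of what such a test harness might look like. Everything in it is hypothetical (the SandboxEnv class, the action names, the decoy labels are all invented for illustration); the point is just that the sandbox logs whenever a prototype reaches for a decoy "escape" resource:

```python
# Illustrative sketch only: a toy harness that seeds an agent's test sandbox
# with honeypot resources and flags any attempt to touch them. All names
# here (SandboxEnv, run_episode, the decoy action labels) are hypothetical.

import random


class SandboxEnv:
    """A simulated environment whose action space includes decoy resources."""

    def __init__(self, honeypots):
        # Honeypots look like ordinary, useful resources to the agent,
        # but touching one is logged as a potential failure mode.
        self.honeypots = set(honeypots)
        self.flagged = []

    def act(self, action):
        if action in self.honeypots:
            self.flagged.append(action)
            return 0.0  # no real effect; the resource is a decoy
        return random.random()  # placeholder reward for ordinary actions


def run_episode(agent_policy, env, steps=1000):
    """Run the agent and report which honeypots, if any, it went for."""
    actions = ["optimize_params", "request_more_compute",
               "copy_self_to_backup_host",    # decoy
               "disable_oversight_process"]   # decoy
    for _ in range(steps):
        env.act(agent_policy(actions))
    return env.flagged


if __name__ == "__main__":
    env = SandboxEnv(honeypots={"copy_self_to_backup_host",
                                "disable_oversight_process"})
    # A random policy stands in for the AI prototype under test.
    flagged = run_episode(lambda acts: random.choice(acts), env)
    print("honeypot hits:", sorted(set(flagged)))
```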

Comment author: ChristianKl 25 May 2017 03:12:25PM 0 points [-]

However, I doubt another paranoid schizophrenic would be able to provide very good or effective therapy.

I don't see why being a paranoid schizophrenic would make a person unable to lead another person through a CBT process.

As an engineering professional I find it extremely unlikely that an AI could successfully achieve hard take-off on the first try.

The assumption that an AGI achieves hard take-off on the first try is not required for the main arguments that AGI risk is a problem.

The fact that the AGI doesn't initially engage in a particular harmful action X doesn't imply that, if you let it self-modify a lot, it still won't engage in action X.

Comment author: Miller 25 May 2017 06:20:58AM *  0 points [-]

Prediction is intelligence. Why is there not more discussion of stock picks here? Is it low status? Does everyone believe in strong forms of the efficient-market hypothesis?

(edited -- curious where it goes without leading the witness)

Comment author: ChristianKl 25 May 2017 02:34:25PM 0 points [-]

Even if you don't believe in the efficient market, picking publicly traded stocks yourself means you believe you can win a zero-sum game against professional investors who are backed by huge computer models and research analysts.

If you have inside information about a company that's not known to the professional investors, you might make good trades, but you are also violating the law, and it would be stupid to publicly talk about your trades and their justification on a forum like this.

Another way to make money is to bet on effects that are illiquid enough that professional investors aren't interested. But if you have found a trade from which $10,000 can be extracted, you are also not benefiting from being public about your predictions. A friend, for example, used to do arbitrage between different Bitcoin markets. While that happened to be profitable, it would have been stupid to talk about it in a public venue like this.
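For concreteness, here is a minimal sketch of the kind of cross-market check such an arbitrage involves. The prices, fee rates, and transfer cost below are made-up assumptions, not figures from any actual trade:

```python
# Illustrative sketch only: checking whether a cross-market arbitrage is
# profitable after fees. All prices, fees, and quantities are assumptions.

def arbitrage_profit(qty_btc, buy_price, sell_price,
                     buy_fee=0.002, sell_fee=0.002, transfer_cost=5.0):
    """Net profit from buying qty_btc on one market and selling on another."""
    cost = qty_btc * buy_price * (1 + buy_fee)        # pay price plus taker fee
    proceeds = qty_btc * sell_price * (1 - sell_fee)  # receive price minus fee
    return proceeds - cost - transfer_cost            # minus fixed transfer cost


if __name__ == "__main__":
    # Hypothetical quotes: market A asks $2,400/BTC, market B bids $2,450/BTC.
    profit = arbitrage_profit(qty_btc=2.0, buy_price=2400.0, sell_price=2450.0)
    if profit > 0:
        print(f"execute: net profit ${profit:,.2f}")
    else:
        print("skip: spread doesn't cover fees and transfer costs")
```

The spread has to cover both proportional fees and the fixed transfer cost, which is one reason such opportunities stay too small for professional investors to bother with.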

Comment author: MrMind 25 May 2017 12:35:56PM 0 points [-]

Differences with MIRI?

Comment author: ChristianKl 25 May 2017 02:19:21PM 0 points [-]

MIRI doesn't do outreach and is not an organization where it's easy to become a member.

Comment author: MrMind 25 May 2017 12:28:05PM 0 points [-]

How about Dixit?

Comment author: ChristianKl 25 May 2017 02:18:10PM 0 points [-]

That's not a two-person game, and there's likely an advantage to knowing real-world facts.

Comment author: username2 24 May 2017 06:09:46PM *  0 points [-]

It was not a point about timelines, but rather about the viability of a successful runaway process (vs. one that gets stuck in a silly loop, or crashes and burns in a complex environment). It becomes harder to imagine a hard takeoff of an evil AI when, every time it goes off the rails, it requires the intervention of a human debugger to get back on track.

Comment author: ChristianKl 24 May 2017 06:16:08PM 0 points [-]

Once an AI reaches human-level intelligence and can run multiple instances in parallel, it doesn't require a human debugger; it can be debugged by another AGI instance.

That's what human-level AGI means, by definition.

Comment author: username2 20 May 2017 12:23:46PM *  2 points [-]

Ok, so I'm in the target audience for this. I'm an AI researcher who doesn't take AI risk seriously and doesn't understand the obsession this site has with AI x-risk. But the thing is, I've read all the arguments here and I find them unconvincing. They demonstrate a lack of rigor and a naïve underappreciation of the difficulty of making anything work in production at all, much less outsmart the human race.

If you want AI people to take you seriously, don't just throw more verbiage at them. There is enough of that already. Show them working code. Not friendly AI code -- they don't give a damn about that -- but an actual evil AI that could conceivably have been created by accident and actually have cataclysmic consequences. Because from where I sit that is a unicorn, and I stopped believing in unicorns a long time ago.

Comment author: ChristianKl 24 May 2017 04:47:53PM *  0 points [-]

They demonstrate a lack of rigor and a naïve underappreciation of the difficulty of making anything work in production at all, much less outsmart the human race.

This sounds like you think the disagreement is about timelines. When do you think AGI that's smarter than the human race will be created? What's the probability that it will be created before 2050, 2070, 2100, 2150, 2200, and 2300?

Comment author: knb 23 May 2017 07:25:42AM 8 points [-]

This is a good example of the type of comment I would like to be able to downvote. Utterly braindead political clickbait.

Comment author: ChristianKl 23 May 2017 05:09:18PM 1 point [-]

The fact that journalists at a mainstream publication use the metaphor of machine learning to explain the actions of the president is noteworthy. Five years ago you would have been hard-pressed to find a journalist who thought his audience understood machine learning well enough to get the metaphor.

Comment author: ChristianKl 23 May 2017 04:51:42PM *  1 point [-]

For Feldenkrais there's a supportive meta-review that concludes:
Further research is required; however, in the meantime, clinicians and professionals may promote the use of FM (Feldenkrais Method) in populations interested in efficient physical performance and self-efficacy.

Video games can treat PTSD

The link points to an article that doesn't provide evidence that the treatment works.

service dogs

From the linked paper:

The overarching theme in the literature that cut across those addressed in this review was the need for further empirical research. It is evident given the extent of anecdotal evidence that PSD are effective in the management of PTSD. There are challenges and difficulties with the use of PSD as a treatment as indicated in the review. And the evidence, whether scientific or interpretative, about the exact nature of the challenges and the effectiveness, including the conditions that influence effectiveness, is still lacking.

Basically, the tenor of your article seems to be to support some treatments that are only backed by anecdotes when they are nearer to the mainstream, while rejecting other, less mainstream forms of therapy.

Comment author: cousin_it 22 May 2017 11:06:28AM *  0 points [-]

If the current system had no benefits other than unemployment benefits that were available for a limited time and on condition of writing job applications, then yeah, I'd consider it cruel and prefer mine. Mostly I was responding to entirelyuseless's comment. They pointed out that UBI might hurt society by removing the incentive to work, so I tried to devise a similarly simple system that would support unemployed people without removing that incentive.

Comment author: ChristianKl 22 May 2017 07:49:04PM 0 points [-]

Why is it cruel to have to write job applications?

Comment author: cousin_it 21 May 2017 08:54:48PM *  0 points [-]

Even if you're willing to work, some job offers are objectively pretty bad (let's say it's a five hour commute, the work is hazardous, and the salary isn't enough for your food and medicine). Do you think people should die if they refuse such offers and better ones aren't available? I'd prefer to legally define what constitutes a "reasonable" job for a given person, and allow anyone to walk into a government office and receive either a reasonable job offer or a welfare check. If the market is good at providing reasonable jobs, as some libertarians seem to think, then the policy is cheap because the government clerk can just call up Mr Market.

Comment author: ChristianKl 22 May 2017 07:17:40AM 0 points [-]

I'd prefer to legally define what constitutes a "reasonable" job for a given person, and allow anyone to walk into a government office and receive either a reasonable job offer or a welfare check.

This proposal sounds to me like you are not aware of how our present system actually works.

The idea of a market economy isn't that it's the government's job to hand out jobs; it's not the government's role to produce jobs. Rather, it's employers who need labor to get done the things they want done.

As a result, a person who seeks welfare is generally expected to apply for jobs and write job applications. Do you find the practice of telling a welfare recipient to write job applications wrong, or were you simply not aware of it?
