Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: turchin 21 January 2017 11:56:04AM *  4 points

I am writing an article about fighting aging as a cause for effective altruism - early draft, suggestions welcome.

And also an article and a map about "global vs local solutions of the AI safety problem" - early draft, suggestions welcome.

Comment author: pico 21 January 2017 07:01:35PM 2 points

Please PM me a draft of your fighting aging article if you want to - I can read it and offer feedback

Comment author: username2 24 October 2016 07:44:12AM 0 points

There are pretty much no use cases that benefit from high-latency clusters of computers. We're talking hundreds or thousands of times less efficient. Nice idea in theory, but it doesn't hold up in practice.

Comment author: pico 24 October 2016 09:36:17AM 0 points

Neural networks seem like they would benefit from high-latency clusters. If you divide the nodes up into 100 clusters during training, and you have ten layers, it might take each cluster 0.001s to process a single sample. So the processing time per cluster is maybe 100-1000 times less than the total latency, which is acceptable if you have 10,000,000 samples and can allow some weight updates to be a bit out of order. Also, if you just want the forward pass of the network, that's the ideal case, since there are no state updates.

In general, long computations tend to be either stateless or have slowly changing state relative to the latency, so parallelism can work.
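A rough back-of-envelope sketch of this trade-off, assuming we only synchronize weights periodically and tolerate slightly stale updates (all numbers here are illustrative assumptions, not measurements):

```python
# Toy estimate of when high-latency clusters are acceptable for NN training.
# Every value below is a made-up assumption for illustration.
compute_per_sample = 0.001    # seconds of compute per cluster per sample
round_trip_latency = 0.5      # seconds to sync weight updates across clusters
samples = 10_000_000
sync_every = 10_000           # samples between syncs (stale updates tolerated)

updates = samples // sync_every                  # 1000 synchronizations
compute_time = samples * compute_per_sample      # 10,000 s of useful work
sync_time = updates * round_trip_latency         # 500 s spent on latency
overhead = sync_time / compute_time

print(f"compute: {compute_time:.0f}s, sync: {sync_time:.0f}s, overhead: {overhead:.0%}")
```

Under these assumptions latency costs only about 5% of total time; syncing after every sample instead would make latency dominate by orders of magnitude, which is the stateless-vs-stateful distinction in a nutshell.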

Comment author: Liron 23 October 2016 08:25:01PM 2 points

Haha, the problem is that even if you have a pretty souped-up gaming desktop, its computing power is probably worth less than the power costs, so you'd basically just be selling your room's power.

Maybe you live in a dorm and you don't have to pay for that power, but even then, we're talking about pennies.

The problem of "college students are annoyingly poor" is a big niche. What do you know about converting your time to money through your computer?

Comment author: pico 24 October 2016 04:31:30AM *  0 points

Good point, though there should be value on the other end at least. For example, if 100 people on a network each need more than their laptop's computing power 1% of the time, then in the ideal case the average person would get a 100-times speedup for that 1% of the time without providing a credit card. So they could train an image classifier in 6 minutes instead of 10 hours.
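The arithmetic behind that estimate, as a minimal sketch (the pool size and demand fraction are just the hypothetical numbers from the example):

```python
# Ideal-case speedup for a compute-sharing pool (illustrative assumptions).
peers = 100
fraction_needed = 0.01    # each peer needs extra compute 1% of the time

# If demand rarely overlaps (expected simultaneous requesters ~= 1),
# a requester can borrow nearly every idle machine at once.
expected_requesters = peers * fraction_needed    # ~1
ideal_speedup = peers                            # 100x

baseline_hours = 10.0
sped_up_minutes = baseline_hours * 60 / ideal_speedup

print(sped_up_minutes)  # 6.0
```

In practice, demand is correlated (everyone trains at night before a deadline), so the real speedup would be well below the ideal case.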

Also I should admit that I'm only poor in the relative sense - I need rice, beans, and a few dozen square feet, and I have those things covered.

Hmm it probably is more lucrative to convert my time to money, though I think it's better to invest my time in increasing my future earnings, which would probably be way better than what I could make as a part-time-working college student.

Actually, my biggest gripe about my life right now is that college is inefficient in so many ways (500 person lectures, required classes that are mostly wastes of time, absurd tuition), yet I don't know how I could get the things I like about it (flexible schedule, great peers, some extremely good teachers, excuse to be a student) somewhere else.

Comment author: pico 23 October 2016 07:46:00PM 0 points

Like most college students, I am annoyed that I am poor. I would like a way to sell the spare computing power of my laptop over the Internet to people who would pay for it, like deep learning folks. I would be willing to share 50% of the profits with anyone who can figure out how to do this.

Comment author: turchin 12 December 2015 09:48:10AM 1 point

"Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower. Altman: "I expect that [OpenAI] will [create superintelligent AI], but it will just be open source and useable by everyone <...> Anything the group develops will be available to everyone", "this is probably a multi-decade project <...> there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term"

It's like giving everybody a nuclear reactor and open-source knowledge about how to make a bomb. That looks likely to result in disaster.

I would like to call this type of thinking the "billionaire arrogance" bias. A billionaire thinks that the fact that he is rich is evidence that he is the most clever person in the world. But in fact it is evidence that he has been lucky.

Comment author: pico 12 December 2015 11:33:04PM 5 points

Being a billionaire is evidence more of determination than of luck. I also don't think billionaires believe they are the smartest people in the world. But like everyone else, they have too much faith in their own opinions when it comes to areas in which they're not experts. They just get listened to more.

Comment author: pico 12 December 2015 05:05:19AM *  0 points

You can tell pretty easily how good research in math or physics is. But in AI safety research, you can fund people working on the wrong things for years and never know, which is exactly the problem MIRI is currently crippled by. I think OpenAI plans to get around this problem by avoiding AI safety research altogether and just building AIs instead. That initial approach seems like the best option. Even if they contribute nothing to AI safety in the near-term, they can produce enough solid, measurable results to keep the organization alive and attract the best researchers, which is half the battle.

What troubles me is that OpenAI could set a precedent for AI safety as a political issue, like global warming. You just have to read the comments on the HN article to find that people don't think they need any expertise in AI safety to have strong opinions about it. In particular, if Sam Altman and Elon Musk have some false belief about AI safety, who is going to prove it to them? You can't just do an experiment like you can in physics. That may explain why they have gotten this far without being able to give well-thought-out answers to some important questions. What MIRI got right is that AI safety is a research problem, so only the opinions of the experts matter. While OpenAI is still working on ML/AI and producing measurable results, it might work to have the people who happened to be wealthy and influential in charge. But if they hope to contribute to AI safety, they will have to hand over control to the people with the correct opinions, and they can't tell who those people are.

Comment author: ChristianKl 22 October 2015 10:43:17PM 2 points

Do you think fact-checking is an inherently more difficult problem than what Watson can do?

Comment author: pico 22 October 2015 11:42:56PM 5 points

It depends on what level of fact-checking is needed. Watson is well-suited for answering questions like "What year was Obama born?", because the answer is unambiguous and also fairly likely to be found in a database. I would be very surprised if Watson could fact-check a statement like "Putin has absolutely no respect for President Obama", because the context needed to evaluate such a statement is not so easy to search for and interpret.

Comment author: pico 22 October 2015 10:41:48PM 4 points

I'm still fairly skeptical that algorithmically fact-checking anything complex is tractable today. The Google article states that "this is 100 percent theoretical: It’s a research paper, not a product announcement or anything equally exciting." Also, no real insights into NLP are presented; the article only suggests that an algorithm could fact-check relatively simple statements that have clear truth values by checking a large database of information. So if the database has nothing to say about the statement, the algorithm is useless. In particular, such an approach would be unable to fact-check the Fiorina quote you used as an example.
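A toy sketch of the database-lookup approach makes the limitation concrete: anything outside the database simply can't be checked. The knowledge base and relation names below are made up for illustration:

```python
# Toy database-lookup fact checker, illustrating the approach and its limits.
# The knowledge base and relation names are invented for this example.
knowledge_base = {
    ("Obama", "born_year"): "1961",
}

def check(subject, relation, claimed_value):
    """Return True/False for claims the database covers, None otherwise."""
    actual = knowledge_base.get((subject, relation))
    if actual is None:
        return None  # the database has nothing to say; the algorithm is useless here
    return actual == claimed_value

print(check("Obama", "born_year", "1961"))     # True
print(check("Obama", "born_year", "1962"))     # False
print(check("Putin", "respects_Obama", "no"))  # None: no clear truth value to look up
```

The hard part is everything this sketch skips: parsing free-form claims into (subject, relation, value) triples, and handling statements with no crisp truth value at all.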

Comment author: pico 17 October 2015 06:59:53AM 3 points

Proposition: how much you should prioritize using currently available life extension methods depends heavily on how highly you value arbitrary life extension. The exponential progress of technology means that on the small chance a healthier lifestyle* nontrivially increases your lifespan, there is a fairly good chance you get arbitrary life extension as a result. So the outcome is pretty binary: live forever, or get an extra few months. If you're content with current lifespans, as most people seem to be, the chance at immortality is probably still small enough to ignore.

*healthier than the obvious (exercise, don't smoke, etc.)

Comment author: skeptical_lurker 06 October 2015 10:54:30PM 10 points

Yes, but I don't know how helpful it would be. It could, however, unmask sockpuppets, which would be useful.

Comment author: pico 08 October 2015 02:35:24AM 3 points

In general, there should be a way to outsource forum moderation tasks like these, rather than everyone in charge of a community having to do it themselves.
