Comment author: turchin 12 December 2015 09:48:10AM 1 point

"Musk: I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower. Altman: "I expect that [OpenAI] will [create superintelligent AI], but it will just be open source and useable by everyone <...> Anything the group develops will be available to everyone", "this is probably a multi-decade project <...> there’s all the science fiction stuff, which I think is years off, like The Terminator or something like that. I’m not worried about that any time in the short term"

It's like giving everybody a nuclear reactor and open-source knowledge about how to make a bomb. That looks likely to result in disaster.

I would like to call this type of thinking the "billionaire arrogance" bias. A billionaire takes the fact that he is rich as evidence that he is the cleverest person in the world. But in fact it is evidence that he has been lucky in the past.

Comment author: pico 12 December 2015 11:33:04PM 5 points

Being a billionaire is evidence more of determination than of luck. I also don't think billionaires believe they are the smartest people in the world. But like everyone else, they have too much faith in their own opinions when it comes to areas in which they're not experts. They just get listened to more.

Comment author: pico 12 December 2015 05:05:19AM 0 points

You can tell pretty easily how good research in math or physics is. But in AI safety research, you can fund people working on the wrong things for years and never know, which is exactly the problem MIRI is currently crippled by. I think OpenAI plans to get around this problem by avoiding AI safety research altogether and just building AIs instead. That initial approach seems like the best option. Even if they contribute nothing to AI safety in the near term, they can produce enough solid, measurable results to keep the organization alive and attract the best researchers, which is half the battle.

What troubles me is that OpenAI could set a precedent for AI safety becoming a political issue, like global warming. You just have to read the comments on the HN article to find that people don't think they need any expertise in AI safety to have strong opinions about it. In particular, if Sam Altman and Elon Musk hold some false belief about AI safety, who is going to prove it to them? You can't just do an experiment like you can in physics. That may explain why they have gotten this far without being able to give well-thought-out answers on some important questions. What MIRI got right is that AI safety is a research problem, so only the opinions of the experts matter. While OpenAI is still working on ML/AI and producing measurable results, it might work to have the people who happened to be wealthy and influential in charge. But if they hope to contribute to AI safety, they will have to hand over control to the people with the correct opinions, and they can't tell who those people are.

Comment author: ChristianKl 22 October 2015 10:43:17PM 2 points

Do you think fact checking is an inherently more difficult problem than what Watson can do?

Comment author: pico 22 October 2015 11:42:56PM 5 points

It depends on what level of fact checking is needed. Watson is well-suited to answering questions like "What year was Obama born?", because the answer is unambiguous and also fairly likely to be found in a database. I would be very surprised if Watson could fact check a statement like "Putin has absolutely no respect for President Obama", because the context needed to evaluate such a statement is not so easy to search for and interpret.

Comment author: pico 22 October 2015 10:41:48PM 4 points

I'm still fairly skeptical that algorithmically fact-checking anything complex is tractable today. The Google article states that "this is 100 percent theoretical: It's a research paper, not a product announcement or anything equally exciting." Also, no real insights into NLP are presented; the article only suggests that an algorithm could fact-check relatively simple statements that have clear truth values by checking them against a large database of information. So if the database has nothing to say about a statement, the algorithm is useless. In particular, such an approach would be unable to fact-check the Fiorina quote you used as an example.
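To make the limitation concrete, here is a minimal sketch of that database-lookup approach in Python. The triple format, the toy knowledge base, and the function name are my own hypothetical choices for illustration, not anything described in the article:

```python
# Toy sketch of database-lookup fact checking (hypothetical data).
# A claim can only be verified if it reduces to a triple the
# knowledge base already stores; otherwise the checker is silent.

KB = {
    ("Obama", "born_in_year", "1961"),
    ("Paris", "capital_of", "France"),
}

def check(subject, relation, obj):
    """Return a verdict for a claim already parsed into triple form."""
    if (subject, relation, obj) in KB:
        return "supported"
    # A different stored object for the same subject/relation
    # means the claim contradicts the database.
    if any(s == subject and r == relation for (s, r, o) in KB):
        return "contradicted"
    # The database has nothing to say, so the algorithm is useless here.
    return "unknown"

print(check("Obama", "born_in_year", "1961"))  # supported
print(check("Obama", "born_in_year", "1959"))  # contradicted
print(check("Putin", "respects", "Obama"))     # unknown: no triple covers it
```

Note that all the hard NLP work, turning a sentence like the Fiorina quote into such a triple at all, happens before this lookup and is exactly the part the paper doesn't solve.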

Comment author: pico 17 October 2015 06:59:53AM 3 points

Proposition: how much you should prioritize using currently available life extension methods depends heavily on how highly you value arbitrary life extension. The exponential progress of technology means that on the small chance a healthier lifestyle* nontrivially increases your lifespan, there is a fairly good chance you get arbitrary life extension as a result. So the outcome is pretty binary: live forever or gain an extra few months. If you're content with current lifespans, as most people seem to be, the chance at immortality is probably still small enough to ignore.

*healthier than the obvious (exercise, don't smoke, etc.)

Comment author: skeptical_lurker 06 October 2015 10:54:30PM 10 points

Yes, but I don't know how helpful it would be. It could, however, unmask sockpuppets, which would be useful.

Comment author: pico 08 October 2015 02:35:24AM 3 points

In general there should be a way to outsource forum moderation tasks like these, rather than everyone in charge of a community having to handle them themselves.

Comment author: Thomas 07 October 2015 08:44:23AM 1 point

WA is quite impressive in some sub-fields, but not nearly enough. What I want is all possible known relations your nick "Ruzeil" has with anything else: a picture (all known pictures) of you and of anybody else who may use it as a nick or a (sur)name, etc. Then all your posts here and everyone who has discussed with you...

If there is a known relation anywhere in this world, that relation should be in this GLT. Then you filter out (and aggregate) as you want. Well, the interface lets you do it easily, and an API exists as well.

Perhaps 10^20 records are in the table for you to play with. The number grows and grows, and you can access and view all of them.

Every relation in this table has its own probability, some quite high, some not, and these probabilities are constantly updated. Even the set of possible attributes a relation can have keeps developing.

Needless to say, you can use the table to see networks of relations between elements of any list you choose to provide to this GLT.
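For concreteness, a minimal sketch of what one record in such a table, and a filtering query over it, might look like in Python. Every field name and data value here is a hypothetical assumption, not part of Thomas's description:

```python
from dataclasses import dataclass, field

@dataclass
class Relation:
    """One probabilistic record in the hypothetical GLT."""
    subject: str
    relation: str
    obj: str
    probability: float                              # constantly re-estimated
    attributes: dict = field(default_factory=dict)  # open-ended attribute set

# Tiny stand-in for the ~10^20-row table.
GLT = [
    Relation("Ruzeil", "posted_on", "LessWrong", 0.99),
    Relation("Ruzeil", "same_person_as", "J. Doe", 0.40,
             {"evidence": "shared profile picture"}),
]

def relations_of(entity, min_p=0.0):
    """Filter and aggregate: all stored relations touching `entity`."""
    return [r for r in GLT
            if entity in (r.subject, r.obj) and r.probability >= min_p]

for r in relations_of("Ruzeil", min_p=0.5):
    print(r)
```

The sketch only shows the record shape; the hard part of the proposal is populating and updating 10^20 such rows, which is what the reply below questions.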

Comment author: pico 07 October 2015 07:52:13PM 3 points

Setting aside whether or not this is useful, I'm not convinced that the implementation you described is practical. Google based its search on hyperlinks specifically because that was easy to implement. Is there a smaller search space than the entirety of human knowledge on which this would still be useful?

Comment author: MarsColony_in10years 06 October 2015 10:55:16PM 0 points

Perhaps this did generate some traffic, but LessWrong doesn't have ads. And any publicity this generated was bad publicity, since Roko's argument was far too weird to be taken seriously by almost anyone.

It doesn't look like anyone benefited. Eliezer made an ass of himself. I would guess that he was rather rushed at the time.

Comment author: pico 07 October 2015 05:07:25AM 3 points

At worst, it's a demonstration of how much influence LessWrong has relative to the size of its community. Many people who don't know this site exists know about Roko's basilisk now.

Comment author: RichardKennaway 06 October 2015 10:00:22AM 4 points

"I think genuinely dangerous ideas are hard to come by though."

Daniel Dennett wrote a book called "Darwin's Dangerous Idea", and when people aren't trying to play down the basilisk (i.e. almost everywhere), they often pride themselves on thinking dangerous thoughts. It's a staple theme of the NRxers and the manosphere. Claiming that one's ideas are dangerous provides a comfortable universal argument against opponents.

I think there are, in fact, a good many dangerous ideas, not merely ideas claimed to be so by posturers. Off the top of my head:

  • Islamic fundamentalism (see IS/ISIS/ISIL).
  • The mental is physical.
  • God.
  • There is no supernatural.
  • Utilitarianism.
  • Superintelligent AI.
  • How to make nuclear weapons.
  • Atoms.

"Ideas like that usually don't pop into the heads of random, uninformed strangers."

They do, all the time, by contagion from the few who come up with them, especially in the Internet age.

Comment author: pico 07 October 2015 03:17:28AM 3 points

Sorry, I should have defined dangerous ideas better: I only meant information that would cause a rational person to drastically alter their behavior, and which would make things much worse for society as a whole if everyone were told about it at once.

Comment author: Bryan-san 06 October 2015 02:57:41AM 3 points

At the end of the day, I hope this will have been a cowpox situation that leads people to be better informed about avoiding actual information hazards in the future.

I seem to remember reading a FAQ for "what to do if you think you have an idea that may be dangerous" in the past. If you know what I'm talking about, maybe link it at the end of the article?

Comment author: pico 06 October 2015 07:52:33AM 2 points

I think genuinely dangerous ideas are hard to come by though. They have to be original enough that few people have considered them before, and at the same time have powerful consequences. Ideas like that usually don't pop into the heads of random, uninformed strangers.
