Could I summarize this as: deontological and virtue-ethical moral principles are often useful approximations to consequentialist reasoning that are faster to apply to a situation, and therefore often preferable as a guide to desired behavior?

I expect B8 is the major factor. Before social media, if you had a bad idea and two of your five close friends told you they didn’t think it was a good idea, you’d drop it. Now five random ‘friends’ will tell you how insightful you are and how blind everyone else is. You’ve publicly stated your belief in the idea and got social proof. That makes it that much harder to drop.

People individually don’t have more bad ideas than before, but there is much more selection pressure in favor of them.

We want to show that given any daemon, there is a smaller circuit that solves the problem.

Given any random circuit, you cannot, in general, show whether it is the smallest circuit that produces the output it does. That's just Rice's theorem, right? So why would it be possible for a daemon?
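For what it's worth, a circuit (unlike a Turing machine) is a finite object, so minimality is decidable in principle by exhaustive search; the catch is that the search blows up. The sketch below is my own illustration, not anything from the original post: it represents circuits as straight-line programs over an assumed AND/OR/NOT gate set and certifies minimality by enumerating every strictly smaller circuit.

```python
from itertools import product

# A brute-force "is this circuit minimal?" check, purely illustrative.
# Circuits are straight-line programs over an assumed AND/OR/NOT gate set;
# this representation is my own choice, not something from the post.

N_INPUTS = 2                 # assumption: tiny circuits over 2 input bits
OPS = ("AND", "OR", "NOT")   # assumption: the available gate set


def eval_circuit(gates, inputs):
    """Evaluate a straight-line circuit. Each gate is (op, a, b) where a and b
    index earlier wires; the last gate's output is the circuit's output."""
    wires = list(inputs)
    for op, a, b in gates:
        if op == "AND":
            wires.append(wires[a] & wires[b])
        elif op == "OR":
            wires.append(wires[a] | wires[b])
        else:                          # NOT ignores its second operand
            wires.append(1 - wires[a])
    return wires[-1]


def truth_table(gates):
    """The circuit's output on every input assignment."""
    return tuple(eval_circuit(gates, bits)
                 for bits in product((0, 1), repeat=N_INPUTS))


def all_circuits(n_gates):
    """Yield every straight-line circuit with exactly n_gates gates."""
    def extend(prefix):
        if len(prefix) == n_gates:
            yield prefix
            return
        n_wires = N_INPUTS + len(prefix)
        for op in OPS:
            for a in range(n_wires):
                for b in range(n_wires):
                    yield from extend(prefix + [(op, a, b)])
    yield from extend([])


def is_minimal(gates):
    """True iff no strictly smaller circuit computes the same truth table.
    Decidable, but the enumeration grows exponentially in circuit size."""
    target = truth_table(gates)
    return not any(truth_table(candidate) == target
                   for size in range(len(gates))
                   for candidate in all_circuits(size))


# XOR(x, y) built from 4 gates: (x OR y) AND NOT(x AND y)
xor = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3, 3), ("AND", 2, 4)]
print(truth_table(xor))   # (0, 1, 1, 0)
print(is_minimal(xor))    # enumerates every circuit with at most 3 gates
```

Even with two inputs and a handful of gates the enumeration is already in the tens of thousands of candidates, and it grows exponentially with gate count, which is why "just show no smaller circuit exists" is not a practical move even though it is well-defined.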

Don't worry about not being able to convince Lubos Motl. His prior for being correct is way too high and impedes his ability to consider dissenting views seriously.

Given that there are friendly human intelligences, what would have to be true about the universe in order for friendly AGIs to be impossible?

A list that is probably vastly incomplete. It seems very likely that there have been vehicle attacks for as long as vehicles have existed. What would be the odds of no one in the past 100 years (no angry spouse, disgruntled ex-employee, or lunatic) having thought of taking revenge on the cruel world by ramming a vehicle into people? Wouldn't a prior on the order of at least one such event per 1 million vehicles per year be more likely to yield correct predictions than 0, for events before, say, the year 2005?
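To make concrete what that prior implies, here is a back-of-the-envelope sketch of my own (not from the thread): it takes the suggested rate of one event per million vehicles per year, treats events as independent Poisson arrivals, and plugs in a fleet size and time span that are purely illustrative assumptions.

```python
import math

# Back-of-the-envelope check of what the suggested base rate implies.
# Only the rate comes from the comment above; the fleet size and the
# observation period are illustrative assumptions.
rate = 1 / 1_000_000   # suggested prior: attacks per vehicle per year
vehicles = 1_000_000   # assumption: a fleet roughly the size of one large city's
years = 20             # assumption: observation period

lam = rate * vehicles * years                 # expected number of attacks
p_zero = math.exp(-lam)                       # Poisson probability of observing none
print(f"expected attacks: {lam:.1f}")         # 20.0
print(f"P(zero attacks):  {p_zero:.1e}")      # ~2.1e-09
```

Under any remotely similar assumptions, the suggested rate makes "no such attacks at all" astronomically unlikely, which is why a prior of zero should predict much worse than a small nonzero one.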

In that triad the meta-contrarian is broadening the scope of the discussion. They address what actually matters, but that doesn't change the fact that the contrarian is correct (well, a better contrarian would point out that the number of deaths due to Ebola is far lower than in any of those examples, and that Ebola doesn't seem a likely candidate to evolve into something capable of causing an epidemic) and that the meta-contrarian has basically changed the subject.

Suppose the Manhattan Project were currently in progress, meaning we somehow had the internet, mobile phones, etc., but not nuclear bombs. You are a smart physicist who keeps up with progress in many areas of physics, and at some point you realize the possibility of a nuclear bomb. You also foresee the existential risk this poses.

You manage to convince a small group of people of this, but many people are skeptical and point out the technical hurdles that would need to be overcome, and political decisions that would need to be taken, for the existential risk to become reality. They think it will all blow over and work itself out. And most people fail to grasp enough of the details to have a rational opinion about the topic.

How would you (need to) go about convincing a sufficient number of the right people that this development poses an existential risk?

Would you subsequently try to convince them that we should preemptively push for treaties, and for aggressive enforcement of those treaties, to prevent the annihilation of the human species? How would you get them to cooperate? Would you try to convince them to put as much effort as possible into a Manhattan-style project to develop an FAI that can subsequently prevent any other AI from becoming powerful enough to be a threat? Another approach?

I'm probably treading well-trodden ground, but it seems to me that knowledge about AI safety is not what matters. What matters is convincing enough sufficiently powerful people that we need such knowledge before AGI becomes reality. That should result in regulating AI development, or in an urgent push to obtain knowledge about AI safety, or ...

Without such people involved, the net effect of the whole FAI community is a best-effort skunkworks project: attempting to uncover FAI knowledge, disseminating it as widely as possible, and praying to god that whoever first achieves AGI will actually make use of that knowledge. Or perhaps attempting to beat Google, the NSA, or China to it. That seems like a hell of a gamble to me, and although it is much more within the comfort zone of the community, it is vastly less likely to succeed than convincing Important People.

But I admit that I am clueless as to how that should be done. It's just that it makes "set aside three years of your life to invest in AI safety research" sound pretty desperate and suboptimal to me.