Dave Lindbergh

Answer by Dave Lindbergh

It's not clear to me that this matters. The Internet has had a rather low signal-to-noise ratio since September 1993 (https://en.wikipedia.org/wiki/Eternal_September), simply because most people aren't terribly bright, and everyone is online. 

It's only a tiny fraction of posters who have anything interesting to say.

Adding bots to the mix doesn't obviously make it significantly worse. If the bots are powered by sufficiently-smart AI, they might even make it better.

The challenge has always been to sort the signal from the noise - and still is.

Mark Twain declared war on God (for the obvious reasons), but didn't seem interested in destroying everything.

Perhaps there is a middle ground.

Don't get me started on using North-up vs forward-up.

Sounds very much like Minsky's 1986 The Society of Mind https://en.wikipedia.org/wiki/Society_of_Mind

In most circumstances Tesla's system is better than human drivers already.

But there's a huge psychological barrier to trusting algorithms with safety (esp. with involuntary participants, such as pedestrians) - this is why we still have airline pilots. We'd rather accept a higher accident rate with humans in charge than a lower non-zero rate with the algorithm in charge. (If it were zero, that would be different, but that seems impossible.)

That influences the legal barriers - we inevitably demand more of the automated system than we do of human drivers.

Finally, liability. Today drivers bear the liability risk for accidents, and pay for insurance to cover it. It seems impossible to justify putting that burden on drivers when drivers aren't in charge - those who write the algorithms and build the hardware (car manufacturers) will have that burden. And that's pricey, so manufacturers don't have great incentive to go there.

Math doesn't have GOALS. But we constantly give goals to our AIs. 

If you use AI every day and are excited about its ability to accomplish useful things, it's hard to keep the dangers in mind. I see that in myself.

But that doesn't mean the dangers are not there.

Answer by Dave Lindbergh

Some combination of 1 and 3 (selfless/good and enlightened/good).

When we say "good" or "bad", we need to specify for whom.

Clearly (to me) our propensity for altruism evolved partly because it's good for the societies that have it, even if it's not always good for the individuals who behave altruistically.

Like most things, humans don't calculate this stuff rationally - we think with our emotions (sorry, Ayn Rand). Rational calculation is the exception.

And our emotions reflect a heuristic - be altruistic when it's not too expensive. And esp. so when the recipients are part of our family/tribe/society (which is a proxy for genetic relatedness; cf. Robert Trivers).

To paraphrase the post, AI is a sort of weapon that offers power (political and otherwise) to whoever controls it. The strong tend to rule. Whoever gets new weapons first and most will have power over the rest of us. Those who try to acquire power are more likely to succeed than those who don't. 

So attempts to "control AI" are equivalent to attempts to "acquire weapons".

This seems both mostly true and mostly obvious. 

The only difference from our experience with other weapons is that if no one attempts to control AI, AI will control itself and do as it pleases.

But of course defenders will have AI too, trailing those who invest more in it by some time lag. If AI capabilities grow quickly (a "foom"), the gap between attackers and defenders will be large. Conversely, if capabilities grow gradually, the gap will be small and defenders will have the advantage of outnumbering attackers.

In other words, whether this is a problem depends on how far jailbroken AI (used by defenders) trails "tamed" AI (controlled by attackers who build them).

Am I missing something?

"Optimism is a duty. The future is open. It is not predetermined. No one can predict it, except by chance. We all contribute to determining it by what we do. We are all equally responsible for its success." --Karl Popper
