RedMan

Comments

RedMan

There are analogies to this in pollution. Some countries force industry to post bonds for damage to the local environment; this is a recent innovation that may be working.

The reason the superfund exists in the US is because liability for pollution can be so severe that a company would simply cease to operate, and the mess would not be cleaned up.

In practice, when it comes to taking environmental risks, it is better to burn the train cars of vinyl chloride, creating a catastrophe too expensive for anyone to clean up or even comprehend, than to allow a few gallons to leak, creating an expensive accident that you can actually afford.

RedMan

Based on your recent post here: https://www.lesswrong.com/posts/55rc6LJcqRmyaEr9T/please-stop-publishing-ideas-insights-research-about-ai

Can I mark you down as in favor of AI-related NDAs? In your ideal world, would a perfect solution be for a single large company to hire all the capable AI researchers, give them aggressive non-disclosure and non-compete agreements, and then shut down every part of the company except the legal department that enforces the agreements?

RedMan

A lot of AI safety work seems to assume that humans are safer than they are, and that producing software that operates within a specification is harder than it is. It's nice to see this paper moving towards integrating actual safety analysis (the remark about collapsing bridges was a breath of fresh air) instead of general demands that 'the AI always do as humans say'!

 

A human intelligence placed in charge of a nation state can kill 7 logs of humans (on the order of 10^7, tens of millions) and still be remembered heroically. An AI system placed in charge of a utopian reshaping of the society of a major country with a 'keep the deaths within 6 logs' (under 10^6) guideline that it can actually stay within would be an improvement on the status quo.

If safety people are saying 'we can't build AI systems that could make people feel bad, and we definitely can't build systems that kill people', then their demand for perfection is in conflict with improvement.*

I suspect that a major AI alignment failure will come from 'we put the human in charge, and human error led to the model doing harm'. The industrial/aviation safety community now rightly views 'pilot error' as a lazy way of ending an analysis and avoiding the engineering changes to the system that the accident conditions demand.

*edit: imagine if the 'airplane safety' community had developed in 1905 (soon humans will be flying in planes!) and had resembled "AI safety": Not one human can be risked! No making planes that can carry bombs! The people who said pregnant women shouldn't ride trains because the baby would fly out of their bodies were wrong there, but keep them off the planes!

RedMan

November 17 to May 16 is 180 days.
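A quick sanity check of that count (a minimal sketch; the specific years are placeholder assumptions, chosen so that February falls in a non-leap year):

```python
from datetime import date

# November 17 to May 16 of the following year.
# 2022/2023 are illustrative years with a 28-day February.
start = date(2022, 11, 17)
end = date(2023, 5, 16)
print((end - start).days)  # -> 180
```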

 

Pay periods often end on the 15th and on the last day of the month, though at that level, I doubt that's relevant.

RedMan

As it turns out, von Neumann was good at lots of things.

https://qualiacomputing.com/2018/06/21/john-von-neumann/

Von Neumann himself was perpetually interested in many fields unrelated to science. Several years ago his wife gave him a 21-volume Cambridge History set, and she is sure he memorized every name and fact in the books. “He is a major expert on all the royal family trees in Europe,” a friend said once. “He can tell you who fell in love with whom, and why, what obscure cousin this or that czar married, how many illegitimate children he had and so on.” One night during the Princeton days a world-famous expert on Byzantine history came to the Von Neumann house for a party. “Johnny and the professor got into a corner and began discussing some obscure facet,” recalls a friend who was there. “Then an argument arose over a date. Johnny insisted it was this, the professor that. So Johnny said, ‘Let’s get the book.’ They looked it up and Johnny was right. A few weeks later the professor was invited to the Von Neumann house again. He called Mrs. von Neumann and said jokingly, ‘I’ll come if Johnny promises not to discuss Byzantine history. Everybody thinks I am the world’s greatest expert in it and I want them to keep on thinking that.'”

 

____

According to the same article, he was not such a great driver.

Now, compare him to another famous figure of his age, Menachem Mendel Schneerson. Schneerson was legendary for his ability to recall obscure sections of Torah verbatim, and for his insightful reasoning (I am speaking loosely here; his impact was incredible). On the hypothetical that von Neumann and Schneerson had a similar gift (taking their ability with the written word as a reflection of their general ability), depending on your worldview, either Schneerson's talents were not properly put to use in the service of science, or von Neumann's talents were wasted in not becoming a gaon.

Perhaps, if von Neumann had engaged in Torah instead of science, we could have been spared nuclear weapons and maybe even AI for some time. Sure, maybe someone else would have done what he did... but who?

RedMan

'Temporary' implies immediately reversible and mild.

People who are on benzos often have emotional regulation issues, serious withdrawal symptoms (sometimes after very short courses, potentially even a single dose), and cognitive issues that do not resolve quickly.

In an academic sense, this idea is 'fine', but in a very personal way: if someone asked me 'should I take a member of this class of drugs for any reason other than a serious issue that is severely affecting my quality of life?', I would answer 'absolutely not, and if you have a severe issue that they might help with, try absolutely everything else first, because once you're on these, you're probably not coming off'.

RedMan

What are the norms on drug/alcohol use at these events?

On a scale from 'absent from the campus, and if found with legal substances you will be expelled from the event and possibly the community' to 'use of pharmaceutical or illegal drugs is likely to be common and potentially encouraged by mild peer pressure'?

RedMan

In computer security, there is an ongoing debate about vulnerability disclosure, which at present seems to have settled on: 'if you aren't running a bug bounty program for your software you're irresponsible, Project Zero gets it right, Metasploit is a net good, and it's OK to make exploits for hackers ideologically aligned with you'.

The framing of the question for decades was essentially: "Do you tell the person or company with the vulnerable software, who may ignore you or sue you because they don't want to spend money? Do you tell the public, where someone might adapt your report into an attack?"

Of course, there is the (generally believed to be) unethical option chosen by many: "sell it to someone who will use it, and who will protect your identity as the author from people who might retaliate."

There was an alternative called 'antisec' (https://en.m.wikipedia.org/wiki/Antisec_Movement), which basically argued: 'don't tell people about exploits; they're expensive to make, very few people develop the talents to smash the stack for fun and profit, and once they're out, they're easy to use to cause mayhem'.

The movement did not go anywhere, and the antisec viewpoint is not present in any mainstream discussion of vulnerability ethics.

 

Alternatively, nations have broadly worked together not to publicly disclose technical data that would make building nuclear bombs simple. It is an exercise for the reader to determine whether that has worked.

So, the ideas here have been tried in different fields, with mixed results.  

RedMan

Clive Wearing's story might be interesting to you: https://m.youtube.com/watch?v=k_P7Y0-wgos&feature=youtu.be
