Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Alexei 13 April 2017 08:56:48PM 0 points

Hmm, I'm skeptical a barter system would work. I don't think I've seen a successful implementation of it anywhere, though I do hear about people trying.

Yes, we've considered paying people, but that's not scalable. (A good 3-5 page explanation might take 10 hours to write.)

Comment author: Riothamus 14 April 2017 07:02:18PM 2 points

I had not imagined a strict barter system or scaling of paid content; the objective in both cases is only to make up the difference between the value content producers want versus the value they expect for the first wave.

The point of diminishing returns would be hard to judge for paid content, but perhaps the two strategies could work together: survey prospective content producers for the content they want to see, and then pay for the most popular subjects to draw the rest. Once you have enough content established to draw the first wave of voluntary content producers, everything else can build off of that for no/minimal further investment.

That being said, it would probably be a good idea to keep surveying and perhaps paying for content on a case-by-case basis, say to alleviate a dry spell of contributions, or if there is some particular thing which is in high demand but no one is volunteering to produce.

What about a contest with a cash award of some kind? This could drive a lot of content for a fixed upfront investment, and then you would also have the ability to select among the entries for the appropriate style and nuance, which reduces the risk of getting unsatisfactory work.

Comment author: Riothamus 12 April 2017 06:28:46PM 2 points

I see finding high-quality content producers was a problem; you reference math explanations specifically.

I notice that people are usually good at providing thorough and comprehensible explanations in only their chosen domains. That being said, people are interested in subjects beyond those they have mastered.

I wonder if it is possible to approach quality content producers with the question of what content they would like to passively consume, and then try and approach networks of content producers at once. For example: find a game theory explainer who wants to read about complex analysis; a complex analysis explainer who wants to read about music theory; a music theory explainer who wants to read about game theory.

Then you can approach all three at once with the premise that if they explain the thing they are good at, they will also be able to read the thing they want to be explained well to them, on the same platform. There's a similar trick being explored for networks of organ donations.
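The matching scheme described above can be sketched in code. This is a hypothetical illustration, not an existing system: each producer offers one topic and wants one topic, and a viable network is a closed cycle where every person's wanted topic is the next person's offered topic, much like a kidney-exchange chain. All names and topics here are made up for the example.

```python
def find_explainer_cycles(producers):
    """producers: dict mapping name -> (offers_topic, wants_topic).

    Returns a list of cycles; each cycle is a list of names arranged so
    that every person's wanted topic is supplied by the next person in
    the cycle (wrapping around at the end).
    """
    # Index producers by the topic they offer (assumes one offerer per topic).
    by_offer = {offers: name for name, (offers, _) in producers.items()}
    cycles, seen = [], set()
    for start in producers:
        if start in seen:
            continue
        path, node = [], start
        # Walk the "who supplies what I want?" chain until it dead-ends,
        # hits an already-processed person, or loops back on itself.
        while node is not None and node not in seen and node not in path:
            path.append(node)
            wants = producers[node][1]
            node = by_offer.get(wants)
        if node in path:  # the walk closed a loop
            cycle = path[path.index(node):]
            cycles.append(cycle)
            seen.update(cycle)
        seen.update(path)
    return cycles

producers = {
    "alice": ("game theory", "complex analysis"),
    "bob": ("complex analysis", "music theory"),
    "carol": ("music theory", "game theory"),
    "dave": ("topology", "category theory"),  # no one offers category theory
}
print(find_explainer_cycles(producers))  # -> [['alice', 'bob', 'carol']]
```

Here alice, bob, and carol form a viable three-way exchange, while dave is left unmatched because nobody offers what he wants; a real recruiting effort would then survey for a category-theory explainer to extend the chain.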

Also, was there any consideration given to the simple mechanism of paying people for quality explanations? I expect a reasonable core of value could be had for low cost.

Comment author: roystgnr 06 April 2017 09:28:21PM 1 point

The simple illustration is geometry; defending a territory requires 360 degrees * 90 degrees of coverage, whereas the attacker gets to choose their vector.

But attacking a territory requires long supply lines, whereas defenders are on their home turf.

But defending a territory requires constant readiness, whereas attackers can make a single focused effort on a surprise attack.

But attacking a territory requires mobility for every single weapons system, whereas defenders can plug their weapons straight into huge power plants or incorporate mountains into their armor.

But defending against violence requires you to keep targets in good repair, whereas attackers have entropy on their side.

But attackers have to break a Schelling point, thereby risking retribution from otherwise neutral third parties, whereas defenders are less likely to face a coalition.

But defenders have to make enough of their military capacity public for the public knowledge to serve as a deterrent, whereas attackers can keep much of their capabilities a secret until the attack begins.

But attackers have to leave their targets in an economically useful state and/or in an immediately-militarily-crippled state for a first strike to be profitable, whereas defenders can credibly precommit to purely destructive retaliation.

I could probably go on for a long time in this vein.

Overall I'd still say you're more likely to be right than wrong, but I have no confidence in the accuracy of that.

Comment author: Riothamus 07 April 2017 08:40:46PM 0 points

None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.

You can read any of the popular military authorities, but if you'd like a specialist in defense, try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; Von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.

Comment author: bogus 04 April 2017 08:07:39PM 0 points

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

"Powerful AI" is really a defense-favoring technique, in any "belligerent" context. Think about it: one of the things "AIs" are expected to be really good at is prediction and spotting suspicious circumstances (this is quite true even of current ML systems). So predicting and defending against future attacks becomes much easier, while the attacking side is not really improved in any immediately useful way. (You can try and tell stories about how AI might make offense easier, but the broader point is, each of these attacks plausibly has countermeasures, even if these are not obvious to you!)

The closest historical analogy here is probably the first stages of WWI, where the superiority of trench warfare also heavily favored defense. The modern 'military-industrial complexes' found in most developed countries today are also a 'defensive' response to subsequent developments in military history. In both cases, you're basically tying up a whole lot of resources and manpower, but that's little more than an annoyance economically. Especially compared to the huge benefits of (broadly 'friendly') AI in any other context!

Comment author: Riothamus 05 April 2017 10:01:30PM 0 points

I disagree, for two reasons.

  1. AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.

  2. Defense is a fundamentally harder problem than offense.

The simple illustration is geometry; defending a territory requires 360 degrees * 90 degrees of coverage, whereas the attacker gets to choose their vector.

This drives a scenario where the security trap prohibits non-deployment of military AI, and the fundamental problem of defense means the AIs will privilege offensive solutions to security problems. The customary response is to develop resilient offensive ability, like second-strike...which leaves us with a huge surplus of distributed offensive power.

My confidence is low that catastrophic conflict can be averted in such a case.

Comment author: whpearson 04 April 2017 02:10:54PM 4 points

Arguments for openness:

  • Everyone can see the bugs/ logical problems with your design.
  • Decreases the chance of an arms race, depending upon the psychology of the participants, and also of black ops against your team. If I think people are secretly developing an intelligence breakthrough, I wouldn't trust them and would develop my own in secret, and/or attempt to sabotage their efforts and steal their technology (and win). If it is out there, there is little benefit to neutralizing your team of safety researchers.
  • If something is open, you are more likely to end up in a multi-polar world. And if the intelligence that emerges only has a chance of being human-aligned, you may want to reduce variance by increasing the number of poles.
  • If an arms race is likely despite your best efforts, it is better that all the competitors have your control technology; this might require them to have your tech stack.

If someone is developing in the open, it is good proof that they are not unilaterally trying to impose their values on the future.

The future is hard; I'm torn on the question of openness.

Comment author: Riothamus 04 April 2017 07:36:35PM 1 point

I am curious about the frequency with which the second and fourth points get brought up as advantages. In the historical case, multipolar conflicts are the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.

As a result, every country that has a computer science department in their universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

Comment author: morganism 03 April 2017 09:16:19PM 3 points

The F35 fighter program is shaping up much worse than even the detractors thought. Many, many problems here, not least:

"Whenever a squadron deploys, it must establish an ALIS hub wherever the F-35 is deployed. Crews set up an ALIS Standard Operating Unit (SOU), which consists of several cases of computer equipment. Technicians will use these to set up a small mainframe which must then be plugged into the world-wide ALIS network. It took several days for the crews to get ALIS working on the local base network. After extensive troubleshooting, IT personnel figured out they had to change several settings on Internet Explorer so ALIS users could log into the system. This included lowering security settings, which DOT&E noted with commendable understatement was “an action that may not be compatible with required cybersecurity and network protection standards.”

The ALIS data must go wherever a squadron goes. Crews must transfer the data from the squadron’s main ALIS computers at the home station to the deployed ALIS SOU before the aircraft are permitted to fly missions. This process took three days during the Mountain Home deployment. This was faster than in earlier demonstrations, but Lockheed Martin provided eight extra ALIS administrators for the exercise. It is unclear if the contractor or the Air Force will include this level of support in future deployments. When the squadron redeployed back to Edwards at the end of the exercise, it took administrators four days to transfer all the data back to the main ALIS computer. Delays of this kind will limit the F-35’s ability to rapidly deploy in times of crisis. Even if the jets can be positioned in enough time to respond to a crisis, problems like lengthy uploading times could keep them on the ground when they are needed in the sky. An aircraft immobilized on the ground is a target, not an asset.

Another time-consuming process involves adding new aircraft to each ALIS standard operating unit. Every time an F-35 is moved from one base to another where ALIS is already up, it must be inducted into that system. It takes 24 hours. Thus, when an F-35 deploys to a new base, an entire day is lost as the data is processed. And only one plane at a time can upload. If an entire squadron, typically 12 aircraft, needed to be inducted, the entire process would take nearly two weeks, forcing a commander to slowly roll out his F-35 aircraft into combat."


Comment author: Riothamus 03 April 2017 10:35:07PM 0 points

I have high hopes that the ongoing catastrophe of this system will discredit the entire design philosophy of the project, and the structure of priorities that governed it. I want it to be a meta-catastrophe, in other words.

The site looks very good. How do you find the rest of it?

In response to Jocko Podcast
Comment author: RainbowSpacedancer 06 September 2016 09:54:09PM 2 points

...it is valuable to have an example of somebody who reliably executes on his philosophy of "Decide to do it, then do it." If you find that you didn't do it, then you didn't truly decide to do it. In any case, your own choice or lack thereof is the only factor. "Discipline is freedom." If you adopt this habit as your reality, it becomes true.

It's possible I'm getting too confused by the language here, but I've struggled to apply this advice in my own life. I'll decide that I'm not going to snack at work anymore and then find myself snacking anyway once the time comes. It seems to reflect a naivete with regard to how willpower and habits work.

It sounds good, and I've listened to 4 episodes now, but Jocko doesn't seem to elaborate on how exactly this process is supposed to work. What is the difference between deciding and truly deciding? What is the habit of 'discipline is freedom,' and how does one adopt it as their reality?

I come away from the podcast inspired for a few hours but with no lasting change.

Comment author: Riothamus 15 September 2016 03:31:45PM 0 points

Here is a method I use to good effect:

1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away by itself.

2) Find a substitution for those pros.

Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over. Personal example: I recently had a low-grade freakout over deciding to do a particular paperwork process that is famously slow and awful, and brings up many deeply negative feelings for me. Then I was cleaning my dutch oven, and reflected that getting a warranty replacement for it had actually taken about three months and several phone calls, which was frustrating but perfectly manageable. This gives me confidence that monitoring a slow administrative process is achievable, and I am more likely to complete it now.

Comment author: Val 09 September 2016 09:23:13PM 0 points

Isn't the "Do I live in a simulation?" question practically indistinguishable from the question "does God exist?", for a sufficiently flexible definition for "God"?

For the latter, there are plenty of ethical frameworks, as well as incentives for altruism, developed during the history of mankind.

Comment author: Riothamus 12 September 2016 06:25:27PM 0 points

On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we look at a different question where we redefine it to be the first question all over again?

Comment author: Riothamus 12 September 2016 06:17:58PM 6 points

I think the basic income is an interesting proposal for a difficult problem, but I downvoted this post.

  1. This is naked political advocacy. Moreover, the comment is hyperbole and speculation. A better way to address this subject would be to try and tackle it from an EA perspective - how efficient is giving cash compared to giving services? How close could we come if we wanted to try it as charity?

  2. The article is garbage. Techcrunch is not a good source for anything, even entertainment in my opinion. The article is also hyperbolic and speculative, while being littered with Straw Man, Ad Hominem, and The Worst Argument In the World. If you are interested in the topic, a much better place to go look would be the sidebar of the subreddit dedicated to basic income.

Bad arguments for a bad purpose with no data don't make for quality discussion.

Comment author: TheAncientGeek 30 August 2016 08:46:27AM 1 point

We cannot qualitatively distinguish between ontologies, except through the other qualities we were already examining.

If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others ... then I don't think the situation is quite that bad: it's partly true, but there are also criteria that span ontologies, like parsimony.

We don't have a way of searching for new ontologies.

The point is that we don't have a mechanical, algorithmic way of searching for new ontologies. (It's a very lesswrongish piece of thinking to suppose that means there is no way at all.) Clearly, we come up with new ontologies from time to time. In the absence of an algorithm for constructing ontologies, doing so is more of a creative process, and in the absence of algorithmic criteria for evaluating them, doing so is more like an aesthetic process.

My overall points are that

1) Philosophy is genuinely difficult; its failure to churn out results rapidly isn't due to a boneheaded refusal to adopt some one-size-fits-all algorithm such as Bayes...

2) ... because there is currently no algorithm that covers everything you would want to do.

So it looks like all we have done is go from best possible explanation to best available explanation where some superior explanation occupies a space of almost-zero in our probability distribution.

It's a one-word difference, but it's a very significant difference in terms of implications. For instance, we can't quantify how far the best available explanation is from the best possible explanation. That can mean that the use of probabilistic reasoning doesn't go far enough.

Comment author: Riothamus 30 August 2016 03:08:41PM 0 points

If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others

I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.

In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on things happening that disagreed with Aristotle, which weren't motivated by testing Aristotle. Architecture and siege engines did for falling objects, for example.

I agree with your points. I am now experiencing some disquiet about how slippery the notion of 'best' is. I wonder how one would distinguish whether it was undefinable or not.
