
I had not imagined a strict barter system or a scaling of paid content; in both cases the objective is only to close the gap between the value content producers want and the value they expect to receive during the first wave.

The point of diminishing returns would be hard to judge for paid content, but perhaps the two strategies could work together: survey prospective content producers about the content they want to see, and then pay for the most popular subjects in order to draw the rest. Once you have enough content established to attract the first wave of voluntary content producers, everything else can build on that with minimal or no further investment.

That being said, it would probably be a good idea to keep surveying, and perhaps to keep paying for content on a case-by-case basis, say to alleviate a dry spell of contributions or to cover some particular thing that is in high demand but that no one is volunteering to produce.

What about a contest with a cash award of some kind? That could drive a lot of content for a fixed upfront investment, and you would also be able to select among the entries for the appropriate style and nuance, which reduces the risk of paying for unsatisfactory work.

I see that finding high-quality content producers was a problem; you mention math explanations specifically.

I notice that people are usually good at providing thorough and comprehensible explanations only in their chosen domains. That being said, people are interested in subjects beyond those they have mastered.

I wonder if it is possible to ask quality content producers what content they would like to passively consume, and then recruit whole networks of them at once. For example: find a game theory explainer who wants to read about complex analysis; a complex analysis explainer who wants to read about music theory; and a music theory explainer who wants to read about game theory.

Then you can approach all three at once with the premise that if each explains the thing they are good at, they will also get to read a good explanation of the thing they want, on the same platform. A similar trick is being explored for chains of organ donations.
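For concreteness, here is a minimal sketch of how such closed chains could be found, in the spirit of the kidney-exchange matching idea. Everything in it is hypothetical: the producer names, the one-topic-per-producer simplification, and the `find_chains` helper are mine, and real exchange-matching algorithms are considerably more sophisticated.

```python
# Hypothetical sketch: find closed chains of content producers where each
# person's desired topic is covered by another member's expertise.

def find_chains(producers):
    """producers maps name -> (topic they can explain, topic they want to read).
    Assumes one topic per producer and at most one explainer per topic."""
    # Index producers by the topic they can explain.
    by_expertise = {topic: name for name, (topic, _) in producers.items()}
    chains, visited = [], set()
    for start in producers:
        if start in visited:
            continue
        chain, current = [], start
        while current not in visited:
            visited.add(current)
            chain.append(current)
            wanted = producers[current][1]
            successor = by_expertise.get(wanted)
            if successor is None:
                break  # nobody explains the wanted topic; this chain fails
            if successor == start:
                chains.append(chain)  # chain closes: everyone is satisfied
                break
            current = successor
    return chains

# The three-producer example from the comment above:
producers = {
    "game theory explainer": ("game theory", "complex analysis"),
    "complex analysis explainer": ("complex analysis", "music theory"),
    "music theory explainer": ("music theory", "game theory"),
}
print(find_chains(producers))
# [['game theory explainer', 'complex analysis explainer', 'music theory explainer']]
```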

Also, was there any consideration given to the simple mechanism of paying people for quality explanations? I expect a reasonable core of value could be had for low cost.

None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.

You can read any of the popular military authorities, but if you'd like a specialist in defense, try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note is Herman Kahn.

I disagree, for two reasons.

  1. AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.

  2. Defense is a fundamentally harder problem than offense.

The simple illustration is geometric: defending a territory requires coverage across 360 degrees of azimuth by 90 degrees of elevation, the full hemisphere, whereas the attacker gets to choose a single vector.
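To put back-of-the-envelope numbers on that asymmetry (my own formalization, not anything from the original exchange): treating the defended territory as a point under a hemispherical dome, the defender's required coverage is

$$\Omega_{\text{defender}} = \int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} \sin\theta \,d\theta\, d\phi = 2\pi \ \text{steradians},$$

while the attacker commits to a single approach vector, an arbitrarily small fraction of that solid angle. Whatever the exact model, the defender's burden scales with the whole space of approaches and the attacker's with one.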

This drives a scenario where the security dilemma rules out non-deployment of military AI, and the fundamental difficulty of defense means the AIs will privilege offensive solutions to security problems. The customary response is to develop resilient offensive capability, like second-strike, which leaves us with a huge surplus of distributed offensive power.

My confidence is low that catastrophic conflict can be averted in such a case.

I am curious about the frequency with which the second and fourth points get brought up as advantages. Historically, multipolar conflicts are the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.

As a result, every country with a computer science department in its universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.

I have high hopes that the ongoing catastrophe of this system will discredit the entire design philosophy of the project, and the structure of priorities that governed it. I want it to be a meta-catastrophe, in other words.

The site looks very good. How do you find the rest of it?

Here is a method I use to good effect:

1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away by itself.

2) Find a substitution for those pros.

Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over. A personal example: I recently had a low-grade freakout over deciding to start a particular paperwork process that is famously slow and awful, and that brings up many deeply negative feelings for me. Then, while cleaning my Dutch oven, I reflected on the fact that getting its warranty replacement had actually taken about three months and several phone calls, which was frustrating but perfectly manageable. This gives me confidence that monitoring a slow administrative process is achievable, and I am more likely to complete it now.

On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we take up a different question only to redefine it into the first question all over again?

I think the basic income is an interesting proposal for a difficult problem, but I downvoted this post.

  1. This is naked political advocacy. Moreover, it is hyperbole and speculation. A better way to address this subject would be to tackle it from an EA perspective: how efficient is giving cash compared to giving services? How close could we come if we wanted to try it as charity?

  2. The article is garbage. TechCrunch is not a good source for anything, even entertainment, in my opinion. The article is also hyperbolic and speculative, and it is littered with Straw Man, Ad Hominem, and The Worst Argument in the World. If you are interested in the topic, a much better place to look would be the sidebar of the subreddit dedicated to basic income.

Bad arguments for a bad purpose with no data don't make for quality discussion.

> If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others

I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.

In the scholastic case, my sense is that moving beyond Aristotle relied on things happening that disagreed with Aristotle but were not motivated by testing Aristotle. Architecture and siege engines settled the question of falling objects, for example.

I agree with your points. I am now experiencing some disquiet about how slippery the notion of 'best' is; I wonder how one would tell whether it is undefinable or not.
