I see that finding high-quality content producers was a problem; you mention math explanations specifically.
I notice that people are usually good at providing thorough and comprehensible explanations only in their chosen domains. That being said, people are interested in subjects beyond those they have mastered.
I wonder if it is possible to approach quality content producers with the question of what content they would like to passively consume, and then try to approach networks of content producers at once. For example: find a game theory explainer who wants...
None of these are hypotheticals, you realize. The prior has been established through a long and brutal process of trial and error.
Any given popular military authority can be read, but if you'd like a specialist in defense, try Vauban. Since we are talking about AI, the most relevant (and quantitative) information is found in the work done on nuclear conflict; von Neumann did quite a bit of work aside from the bomb, including coining the phrase Mutually Assured Destruction. Also of note would be Herman Kahn.
I disagree, for two reasons.
AI in conflict is still only an optimization process; it remains constrained by the physical realities of the problem.
Defense is a fundamentally harder problem than offense.
The simple illustration is geometry: defending a territory requires covering the full hemisphere above it (360 degrees of azimuth times 90 degrees of elevation), whereas the attacker gets to choose their vector.
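To put rough numbers on that intuition, here is a toy model of my own (treating the defended territory as a point under a hemispherical sky, which is obviously a simplification):

```latex
% Solid angle the defender must cover: the full hemisphere over the territory.
\Omega_{\mathrm{def}} \;=\; \int_0^{2\pi}\!\!\int_0^{\pi/2} \sin\theta \,\mathrm{d}\theta\,\mathrm{d}\varphi \;=\; 2\pi \ \mathrm{sr}
% An attacker committing to a single approach cone of half-angle \alpha covers only
\Omega_{\mathrm{atk}} \;=\; 2\pi\,(1-\cos\alpha), \qquad \alpha = 5^\circ \;\Rightarrow\; \Omega_{\mathrm{atk}} \approx 0.024 \ \mathrm{sr}
```

On those assumptions the defender is spread over roughly 2π / 0.024 ≈ 260 cones' worth of sky while the attacker concentrates on exactly one, and the asymmetry only grows as the attack cone narrows.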
This drives a scenario where the security trap prohibits non-deployment of military AI, and the fundamental problem of defense means the AIs will privilege offensive solutions to security problems. T...
I am curious about the frequency with which the second and fourth points get brought up as advantages. In the historical case, multipolar conflicts are the most destructive. Forestalling an arms race by giving away technology also sets that technology as the mandatory minimum.
As a result, every country with a computer science department in its universities is now a potential belligerent, and violent conflict without powerful AI has been effectively ruled out.
I have high hopes that the ongoing catastrophe of this system will discredit the entire design philosophy of the project, and the structure of priorities that governed it. I want it to be a meta-catastrophe, in other words.
The site looks very good. How do you find the rest of it?
Here is a method I use to good effect:
1) Take a detailed look at the pros and cons of what you want to change. This is sometimes sufficient by itself - more than once I have realized I simply get nothing out of what I'm doing, and the desire goes away on its own.
2) Find a substitution for those pros.
Alternatively, think about an example of when you decided to do something and then actually did it, and try to port the methods over. Personal example: I recently had a low-grade freakout over deciding to do a particular paperwork process that is famously slow and ...
On the grounds that those ethical frameworks rested on highly inflexible definitions of God, I am skeptical of their applicability. Moreover, why would we take up a different question only to redefine it into the first question all over again?
I think the basic income is an interesting proposal for a difficult problem, but I downvoted this post.
This is naked political advocacy. Moreover, the comment is hyperbole and speculation. A better way to address this subject would be to tackle it from an EA perspective - how efficient is giving cash compared to giving services? How close could we come if we wanted to try it as charity?
The article is garbage. TechCrunch is not a good source for anything, even entertainment, in my opinion. The article is also hyperbolic and speculative, while bei...
If that is supposed to mean that every ontology comes with its own isolated, tailor-made criteria, and that there are no others...
I mean to say we are not ontologically motivated. The examples OP gave aren't ontological questions, only questions with ontological implications, which makes the ontology descriptive rather than prescriptive. That the implications carry forward only makes the description consistent.
In the scholastic case, my sense of the process of moving beyond Aristotle is that it relied on observations that disagreed with Aristotle, whic...
This sounds strongly like we have no grounds for considering ontology at all when determining what the best possible explanation is.
So it looks like all we have done is go from the best possible explanation to the best available explanation, where some superior explanation occupies a region of almost-zero probability in our distribution.
Echo chamber implies getting the same information back.
It would be more accurate to say we will inevitably reach a local maximum. Awareness of the ontological implications should be a useful tool in helping us recognize when we are there and which way to go next.
Without pursuing the analysis to its maximal conclusions, how can we distinguish the merits of different ontologies?
If artificial intelligence via emulation is accomplished by tweaking an emulation and/or piling on computational resources, why couldn't it be accomplished before we start emulating humans?
Other primates, for example. Particularly in the case of the destructive-read and ethics-of-algorithmic-tweaks, animal testing will surely precede human testing. To the extent a human brain is just a primate brain with more computing power, another primate with better memory and clock speed should serve almost as effectively.
What about other mammals with culture and communication, like whales or dolphins?
Or something that is not a mammal at all, like Great Tits?
Is anyone in a position to offer some criticism (or endorsement) of the work produced at Gerwin Schalk's lab?
I attended a talk given by Dr. Schalk in April 2015, where he described a new method of imaging the brain that looked like a higher-resolution fMRI (the image in the talk was a more precise map of motor control of the arm, showing the path of neural activity over time). I was reminded of it because Dr. Schalk spent quite a bit of time emphasizing getting the probability calculations right and optimizing the code, which seemed relevant when the recent criticism of fMRI software was published.
This is enough of a problem for small medical practices in the US that it outweighs a good bedside manner and confidence in the doctor's medical ability.
I am confident that this has a large effect on the success of an individual practice; it may fall under the general heading of business advice for the individual practitioner. Even for a single-doctor office, a good secretary and record system will be key to success.
This information comes chiefly from experience of and interviews with specialists (dermatology and gynaecology) in the US.
I know this is banal, but ensure excellent administration.
Medical expertise is only relevant once you see the patient. Your ability to judge the evidence requires getting access to it; this means you need to be able to correctly send requests, get the data back, and keep all this attached to the correct patient.
Scheduling, filing and communication. Lacking these, medical expertise is meaningless. So get the best damn admin and IT you can possibly afford.
Let me try to restate, to be sure I have understood correctly:
We cannot stop once we have exhausted the evidence because explanations of equal predictive power have different ontological implications, and these implications must be accounted for in determining the best explanation. Further, we don't have a way to exclude other ontological implications we have not considered.
Question: why don't the ontological implications of our method of analysis constrain us to observing explanations with similar ontological implications?
So am I correct in inferring that this program looks for any mathematical correlations in the data, and returns the simplest and most consistent ones?
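To make my inference concrete, here is a minimal sketch of the loop I am imagining; the polynomial model family, the BIC-style penalty, and every name below are my assumptions, not a description of the actual program:

```python
# Sketch: search candidate models of increasing complexity and keep the
# simplest one consistent with the data. Toy version with polynomials.
import numpy as np

def best_explanation(x, y, max_degree=5):
    """Fit polynomials of increasing degree; score each by least-squares
    error plus a complexity penalty (an MDL/BIC-flavored trade-off)."""
    n = len(x)
    best = None
    for degree in range(max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        mse = np.mean(residuals ** 2)
        # Penalize extra parameters so simpler models win near-ties.
        score = n * np.log(mse + 1e-12) + (degree + 1) * np.log(n)
        if best is None or score < best[0]:
            best = (score, degree, coeffs)
    return best

x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * np.random.randn(50)   # noisy line
print(best_explanation(x, y)[1])          # typically prints 1
```

The penalty term is what would encode "simplest": near-equal fits resolve in favor of the model with fewer parameters.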
This is a useful bit of clarification, and timely.
Would that change if there was a mechanism for describing the criteria for the best explanation?
For example, could we show from a body of evidence the minimum achievable entropy, so that even if there are other explanations, they are at best equivalent?
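One way to cash that out (my framing, via Shannon, not anything from the thread): treat an explanation as a code for the evidence; the source coding bound then says no explanation compresses the evidence below its entropy, so one that achieves the bound can be matched but never beaten.

```latex
% Shannon's source coding bound: model the evidence as a random variable X.
% Any uniquely decodable code (read: explanation) has expected length
L \;\geq\; H(X) \;=\; -\sum_{x} p(x)\,\log_2 p(x)
% so an explanation achieving L = H(X) is minimal: rivals are at best equivalent.
```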
There is a LessWrong wiki entry for just this problem: https://wiki.lesswrong.com/wiki/Simulation_argument
The rabbit hole problem is solved by recognizing when we have made the best determination we can with current information. Once that is done, we stop.
Understanding that beliefs are our knowledge of reality rather than reality itself has some very interesting effects. The first is that our beliefs do not have to take the form of singular conclusions, such as that we are or are not in a simulation; instead our belief can take the form of a system of conclusio...
I had not imagined a strict barter system or scaling of paid content; the objective in both cases is only to make up the difference between the value content producers want and the value they expect, for the first wave.
The point of diminishing returns would be hard to judge for paid content, but perhaps the two strategies could work together: survey prospective content producers for the content they want to see, and then pay for the most popular subjects to draw the rest. Once you have enough content established to draw the first wave of voluntary conten...