How do you evaluate whether any given model is useful or not?
One way is to simulate a perfect computational agent, assume perfect information, and see what kind of models it would construct.
If you reject the notion of an external reality that is accessible to us in at least some way, then you cannot really measure the performance of your models against any kind of common standard.
Solomonoff induction provides a universal standard for "perfect" inductive inference, that is, learning from observations. It is not entirely parameter-free, so...
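A rough sketch of what that standard looks like (my gloss, using the usual notation): the Solomonoff prior assigns to a finite binary string $x$ the weight

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{begins with}\ x} 2^{-|p|},$$

where $U$ is a fixed universal (monotone) machine, the sum runs over programs $p$ whose output begins with $x$, and $|p|$ is the length of $p$. The dependence on the choice of $U$ is exactly the sense in which the method is not parameter-free.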
I've got a feeling that the implicit LessWrong'ish rationalist theory of truth is, in fact, some kind of epistemic (Bayesian) pragmatism, i.e. "what is true is what is knowable using probability theory". One might also throw in "...for a perfect computational agent".
My speculation is that LW's declared sympathy towards the correspondence theory of truth stems from political / social reasons. We don't want to be confused with the uncritically thinking masses - the apologists of homoeopathy or astrology justifying their views by "yeah, I don...
What do you mean by "direct access to the world"?
Are you familiar with Kant? http://en.wikipedia.org/wiki/Noumenon
This description fits philosophy much better than science.
As for your options, have you considered the possibility that 99% of people have never formulated a coherent philosophical view on the theory of truth?
I'd love to hear a more qualified academic philosopher discuss this, but I'll try. It's not that the other theories are intuitively appealing, it's that the correspondence theory of truth has a number of problems, such as the problem of induction.
Let's say that one day we create a complete simulation of a universe whose physics almost completely matches ours, except for some minor details, such as that certain types of elementary particles, e.g. neutrinos, are never allowed to appear. Suppose that there are scientists in the simulation, and they work out...
I meant that as a comment to this:
the information less useful than what you'd get by just asking a few questions.
It's easy to lie when answering questions about your personality on e.g. a dating site. It's harder, more expensive, and sometimes impossible to lie via signaling, such as via appearance. So, even though information obtained by asking questions is likely to be much richer than information obtained from appearances, it is also less likely to be truthful.
..assuming the replies are truthful.
I think universalism is an obvious Schelling point. Not just moral philosophers find it appealing; ordinary people do too (at least when thinking about it in the abstract). Consider Rawls' "veil of ignorance".
Mountaineering or similar extreme activities is one option.
Are there any moral implications of accepting the Many Worlds interpretation, and if so what could they be?
For example, if the divergent copies of people (including myself) in other branches of the Multiverse should be given non-negligible moral status, then it's one more argument against the Epicurean principle that "as long as we exist, death is not here". My many-worlds self can die partially - that is, just in some of the worlds. So I should try to reduce the number of worlds in which I'm dead. On the other hand, does it really change anything compared to "I should reduce the probability that I'm dead in this world"?
Is there some reason to think that physiognomy really works? Reverse causation is probably the main reason, e.g. tall people are more likely to be seen as leaders by others, so they are more likely to become leaders. Nevertheless, is there something beyond that?
Is there some reason to think that physiognomy really works?
It is the case that appearances encode lots of information, because lots of things are correlated. For example, height correlates with intelligence, probably because of general health factors (like nutrition). Nearsightedness and intelligence are correlated, but whether this is due to different use of the eyes in childhood or to engineering constraints involving the brain and the skull is not yet clear. The aspect ratio of the face correlates with uterine testosterone levels, which correlate...
Funny, I thought escaping into their own private worlds was not something exclusive to nerds. In fact most people do that. Schoolgirls escape into fantasies about romance. Boys into fantasies about porn. Gamers into virtual reality. Athletes into fantasies about becoming famous in sport. Mathletes - about being famous and successful scientists. Goths - musicians or artists. And so on.
True, not everyone likes to escape into sci-fi or fantasy, but that's because different minds are attracted to different kinds of things. D&D is a relatively harmless fantasy. I'm not...
the solution will involve fixing things that made one a "tempting" bullying target
So a nerd, according to the OP, is someone who:
But even if we take for granted that this is a correct description of a nerd, these are very different issues and require very different solutions.
The last problem is simple to fix at the level of society and ought to be fixed there. Hatred of specific social groups should not be acceptable, not ...
If UGC is true, then one should doubt recursive self-improvement will happen in general
This is interesting, can you expand on this? I feel there clearly are some arguments in complexity theory against AI as an existential risk, and that these arguments would deserve more attention.
To sidetrack a bit, as I've argued in a comment, if it turns out that many important problems are practically unsolvable in realistic timescales, any superintelligence would be unlikely to gain a strategic advantage. The support for this idea is much more concrete than the specul...
Why do you think that the fundamental attribution error is a good starting point for introducing someone to rational thinking? There seems to be a clear case of the Valley of bad rationality here. The fundamental attribution error is a powerful psychological tool. It allows us to take personal responsibility for our successes while blaming the environment for our failures. Now assume that this tool is taken away from a person, leaving all his/her other beliefs intact. How exactly would this improve his/her life?
I also don't get why thinking that "the rude ...
I don't think that overfitting is a good metaphor for your problem. Overfitting involves building a model that is more complicated than an optimal model would be. What exactly is the model here, and why do you think that learning just a subset of the course's material leads to building a more complicated model?
Instead, your example looks like a case of sampling bias. Think of the material of the whole course as the whole distribution, and of the exam topics as a subset of that distribution. "Training" your brain with samples just from that subset is ...
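To make the distinction concrete, here's a toy sketch (my own example, not from the thread): the model class is fixed at a straight line, so nothing gets overfit; the failure comes entirely from training on a biased subset of the distribution.

```python
# Sampling bias, not overfitting: the same simple model is fit on a biased
# subset of the data and then evaluated on the whole distribution.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sin(x)                        # the "whole course"

x_all = rng.uniform(0, 3, 500)              # the full distribution of topics
x_sub = x_all[x_all < 1.0]                  # only the "exam topics"

# Fit a fixed-complexity model (a straight line) on the biased subset
coeffs = np.polyfit(x_sub, f(x_sub), deg=1)

err_sub = np.mean((np.polyval(coeffs, x_sub) - f(x_sub)) ** 2)
err_all = np.mean((np.polyval(coeffs, x_all) - f(x_all)) ** 2)
print("error on the subset it was trained on:", err_sub)   # small
print("error on the whole distribution:     ", err_all)    # much larger
```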
There is a semi-official EA position on immigration
Could you describe what this position is? (or give a link) Thanks!
We usually don't worry about personality changes because they're typically quite limited. Completely replacing brain biochemistry would be a change on a completely different scale.
And people occasionally do worry about these changes even now, especially if they're permanent, and if/when they occur in others. Some divorces happen because one partner "does not see the same man/woman she/he fell in love with".
Taxing the upper middle class is a generally good idea; they are the ones most capable and willing to pay taxes. Many European countries apply progressive tax rates. Calling it a millionaire tax is a misnomer, of course, but otherwise I would support that. (I'm from Latvia FYI)
Michael O. Church is certainly an interesting writer, but you should take into account that he basically is a programmer with no academic qualifications. Most of his posts appear to be wild generalizations of experiences personally familiar to him. (Not exclusively his own experiences, of course.) I suspect that he suffers heavily from the narrative fallacy.
A good writeup. But you downplay the role of individual attention. No textbook is going to have all the answers to questions someone might formulate after reading the material. Textbooks also won't provide help to students who get stuck doing exercises. In books, it's either nothing or everything (the complete solution).
The current system does not do a lot of personalized teaching because the average university has a tightly limited amount of resources per student. The very rich universities (such as Oxford) can afford to provide much more personalized teaching, via tutors.
What are some good examples of rationality as "systematized winning"? E.g. a personal example of someone who has practiced rationality systematically for a long time, where there are good reasons to think doing so has substantially improved their life.
It's easy to name a lot of famous examples where irrationality has caused harm. I'm looking for the opposite. Ideally, some stories that could interest intelligent, but practically minded people who have no previous exposure to the LW memeplex.
The easiest examples are typically business examples, but there's always the risk that the thing people attribute their success to is not the actual cause of their success. ("I owe it all to believing in myself" vs. "I owe it all to sleeping with the casting director.")
I think the cleanest example is Buffett and Munger, whose stated approach to investing is "we're not going to be ashamed of only picking obviously good investments." They predated LW by a long while, but they're aware of the Heuristics and Biases literature (consider this talk Munger gave on it in 1995).
The answer to the specific question about technetium is "it's complicated, and we may not know yet", according to physics Stack Exchange.
For the general question "why are some elements/isotopes more or less stable": generally, an isotope is more stable if it has a balanced number of protons and neutrons.
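To put "balanced number of protons and neutrons" on slightly firmer footing (my addition, just the standard textbook formula): the semi-empirical mass formula approximates a nucleus's binding energy as

$$B(A, Z) \;\approx\; a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A - 2Z)^2}{A} + \delta(A, Z),$$

where the asymmetry term $(A - 2Z)^2 / A$ reduces the binding energy as the split between protons and neutrons moves away from equal numbers; together with the Coulomb term it sets the preferred balance for each mass number. This is only the crude liquid-drop picture, so it doesn't by itself settle the technetium question.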
I know what SI is. I'm not even pushing the point that SI is not always the best thing to do - I'm not sure if it is, as it's certainly not free of assumptions (such as the choice of the programming language / Turing machine), but let's not go into that discussion.
The point I'm making is different. Imagine a world / universe where nobody has any idea what SI is. Would you be prepared to speak to them, all their scientists, empiricists and thinkers and say that "all your knowledge is purely accidental, you unfortunately have absolutely no methods for dete...
Let me clarify my question. Why do you and iarwain1 think there are absolutely no other methods that can be used to arrive at the truth, even if they are sub-optimal ones?
Why can't there be other criteria to prefer some theories over other theories, besides simplicity?
Perhaps you can comment on this opinion that "simpler models are always more likely" is false: http://www2.denizyuret.com/ref/domingos/www.cs.washington.edu/homes/pedrod/papers/dmkd99.pdf
Perhaps one could say that an agent in the sense that matters for this discussion is something with a personal identity, a notion of self (in a very loose sense).
Intuitively, it seems that tool AIs are safer because they are much more transparent. When I run a modern general purpose constraint-solver tool, I'm pretty sure that no AI agent will emerge during the search process. When I pause the tool somewhere in the middle of the search and examine its state, I can predict exactly what the next steps are going to be - even though I can hardly predict the ul...
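A minimal sketch of what I mean by a transparent search process (a toy example of mine, not any particular solver's internals): the entire state is an explicit partial assignment that you can pause and print at any point, and the fixed variable and value orderings fully determine what happens next.

```python
# Minimal backtracking search for a toy constraint problem.
# The solver's whole "mental state" is the explicit partial assignment below:
# pausing at any step and printing it tells you exactly what will be tried next.
def backtrack(variables, domains, consistent, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = variables[len(assignment)]          # deterministic variable order
    for value in domains[var]:                # deterministic value order
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy problem: three variables that must all take different values.
variables = ["x", "y", "z"]
domains = {v: [1, 2, 3] for v in variables}

def all_different(a):
    return len(set(a.values())) == len(a)

print(backtrack(variables, domains, all_different))   # {'x': 1, 'y': 2, 'z': 3}
```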
See this discussion in The Best Textbooks on Every Subject
I agree that the first few chapters of Jaynes are illuminating, haven't tried to read further. Bayesian Data Analysis by Gelman feels much more practical at least for what I personally need (a reference book for statistical techniques).
The general pre-requisites are actually spelled out in the introduction of Jaynes's Probability Theory. Emphasis mine.
...The following material is addressed to readers who are already familiar with applied mathematics at the advanced undergraduate level or preferably hi
Scott Aaronson has formulated it in a similar way (quoted from here):
...whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exac
In human society, at the largest scale, we solve the principal-agent problem by separation of powers - the legislative, executive, and judicial powers of the state are typically divided into independent branches. This naturally leads to a categorization of AI capabilities:
AI with legislative power (the power to make new rules)
AI with high-level executive power (the power to make decisions)
AI with low-level executive power (the power to carry out orders)
AI with a rule-enforcing power
AI with the power to create new knowledge / make suggestions for decisions
I don't know about the power needed to simulate the neurons, but my guess is that most of the resources are spent not on the calculations, but on interprocess communication. Running 302 processes on a Raspberry Pi and keeping hundreds of UDP sockets open probably takes a lot of its resources.
The technical solution is neither innovative nor fast. The benefits are in its distributed nature (every neuron could be simulated on a different computer) and in the simplicity of implementation. At least while 100% faithfulness to the underlying mathematical model is no...
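Roughly what I imagine one such per-neuron process looks like (a hypothetical sketch; the ports, message format and update rule here are mine, not taken from the actual project):

```python
# One "neuron" as a standalone process: it listens for weighted inputs on its
# own UDP port, integrates them, and forwards a spike to its downstream peers.
# With hundreds of these running at once, most of the work is serialization and
# socket I/O rather than the trivial arithmetic in the update step.
import socket

LISTEN_PORT = 9001                       # hypothetical port for this neuron
DOWNSTREAM = [("127.0.0.1", 9002)]       # hypothetical downstream neurons
THRESHOLD = 1.0

def run_neuron():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", LISTEN_PORT))
    potential = 0.0
    while True:
        data, _ = sock.recvfrom(64)      # blocking wait for an input spike
        potential += float(data.decode())
        if potential >= THRESHOLD:       # fire and reset
            for addr in DOWNSTREAM:
                sock.sendto(b"0.5", addr)
            potential = 0.0

if __name__ == "__main__":
    run_neuron()
```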
It would be interesting to see more examples of modern-day non-superintelligent domain-specific analogues of genies, sovereigns and oracles, and to look at their risks and failure modes. Admittedly, this is only inductive evidence that does not take into account the qualitative leap between them and superintelligence, but it may be better than nothing. Here are some quick ideas (do you agree with the classification?):
Oracles - pocket calculators (Bostrom's example); Google search engine; decision support systems.
Genies - industrial robots; GPS drivi
It's ok, as long as the talking is done in a sufficiently rigorous manner. By analogy, a lot of discoveries in theoretical physics have been made long before they could be experimentally supported. Theoretical CS also has a good track record here; for example, the first notable quantum algorithms were discovered long before the first notable quantum computers were built. Furthermore, the theory of computability mostly talks about the uncomputable (computations that cannot be realized and devices that cannot be built in this universe), so it has next to no prac...
To be honest, I initially had trouble understanding your use of "oversight" and had to look up the word in a dictionary. Talking about the different levels of executive power given to AI agents would make more sense to me.
I agree. For example, this page says that: "in order to provide a convincing case for epigenetic inheritance, an epigenetic change must be observed in the 4th generation. "
So I wonder why they only tested three generations. Since F1 females are already born with the reproductive cells from which the F2 will grow, the organism of an F0 exposes both of these future generations to itself and its environment. That some information exchange takes place there is not that surprising, but the effect may be completely lost in the F3 generation.
I've always thought of the MU hypothesis as a derivative of Plato's theory of forms, expressed in a modern way.
This is actually one of the standard counterarguments against the need for friendly AI, at least against the notion that it should be an agent / be capable of acting as an agent.
I'll try to quickly summarize the counter-counterarguments Nick Bostrom gives in Superintelligence. (In the book, AI that is not an agent at all is called tool AI. AI that is an agent but cannot act as one (has no executive power in the real world) is called oracle AI.)
Some arguments have already been mentioned:
Making your mental contents look innocuous while maintaining their semantic content sounds potentially very hard
Even humans are capable of producing content (e.g. program code) where the real meaning is obfuscated. For some entertainment, look at this Python script on Stack Exchange Programming Puzzles and try to guess what it really does. (The answer is here.)
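A much tamer toy example of the same idea (mine, not the referenced puzzle): the function advertises itself as a checksum, but the expression inside actually implements ROT13 decoding.

```python
# Looks like a "checksum" helper at a glance, but the generator expression
# actually applies ROT13 to lowercase letters and leaves everything else alone.
def checksum(s):
    return "".join(
        chr((ord(c) - 97 + 13) % 26 + 97) if c.islower() else c
        for c in s
    )

print(checksum("uryyb"))  # prints "hello" - not a checksum at all
```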
I couldn't even have a slice of pizza or an ice cream cone
Slippery slope, plain and simple. http://xkcd.com/1332/
Reducing mean consumption does not imply never eating ice cream.
You should have started by describing your interpretation of what the word "flourish" means. I don't think it's a standard one (any links to prove the opposite?). For now this thread is going nowhere because of disagreements on definitions.
Two objections to these calculations: first, they do not take into account the inherent inefficiency of meat production (farm animals only convert a few percent of the energy in their food to consumable products), or its contribution to global carbon emissions and pollution. Second, they do not take into account the animals displaced and harmed by the indirect effects of meat production. It requires larger areas of farmland than vegetarian or seafood-based diets would.
chickens flourish
Not many vegetarians would agree. Is a farm chicken's life worth living? Does the large number of farm chickens really have a net positive effect on animal wellbeing?
Animals that aren't useful
What about the recreational value of wild animals?
The practically relevant philosophical question is not "can science understand consciousness?", but "what can we infer from observing the correlates of consciousness, or from observing their absence?". This is the question that, for example, anesthesiologists have to deal with on a daily basis.
When formulated like this, the problem is really not that different from other scientific problems where causality must be detected. Detecting causal relations is famously hard - but it's not impossible. (We're reasonably certain, for example, that smoki...
Your article feels too abstract to really engage the reader. I would start with a surprise element (ok, you do this to some extent); have at least one practical anecdote; include concrete and practical conclusions (what life lessons follow from what the reader has learned?).
Worse, I feel that your article might in fact lead to several misconceptions about dual process theory. (At least some of the stuff does not match with my own beliefs. Should I update?)
First, you make a link between System 1 and emotions. But System 1 is still a cognitive system. It's h...
Example of a mathematical fact: a formula for calculating the correlation coefficient. Example of a statistical intuition: knowing when to conclude that close-to-zero correlation implies independence. (To see the problem, see this picture for some datasets in which the variables are uncorrelated, but not independent.)
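A minimal sketch of the same point in code (my own toy example, not the datasets from the linked picture): y is a deterministic function of x, so the variables are as dependent as they can be, yet the Pearson correlation is close to zero.

```python
# Uncorrelated but not independent: y = x**2 with x symmetric around zero.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x ** 2                      # y depends on x completely

r = np.corrcoef(x, y)[0, 1]     # Pearson r = cov(x, y) / (sd(x) * sd(y))
print(f"Pearson r = {r:.3f}")   # approximately 0
```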
Be careful here. Statistical intuition does not come naturally to humans - Kahneman and others have written extensively about this. Learning some mathematical facts (relatively simple to do) without learning the correct statistical intuitions (hard to do) may well have negative utility. Unjustified self-confidence is an obvious outcome.
I agree with pragmatist (the OP) that this is a problem for the correspondence theory of truth.
Usefulness? Just don't say "experimental evidence". Don't oversimplify epistemic justification. There are many aspects - how well knowledge fits with existing models, with observations, what is its predictive power, what is its instrumental value (does it help to achieve one's goals), etc. F...