This post is a decent first approximation. But it is important to remember that even successful communication is almost always occurring on more than just one of these levels at once.
Personally I find it useful to think of communication as having spontaneous layers of information which may include things like asserting social context, acquiring knowledge, reinforcing beliefs, practicing skills, indicating and detecting levels of sexual interest, and even play. And by spontaneous layers, I mean that we each contribute to the scope of a conversation, and th...
In retrospect, spelling words out loud, something I do tend to do with a moderate frequency, is something I've gotten much better at over the past ten years. I suspect that I've hijacked my typing skill to the task, as I tend to error correct my verbal spelling in exactly the same way. I devote little or no conscious thought or sense mode to the spelling process, except in terms of feedback.
As for my language skills, they are at least adequate. However, I have devoted special attention to improving them, so I can't say that I don't share some bias away from being especially capable.
When you're trying to communicate facts, opinions, and concepts - most especially concepts - it is a useful investment of effort to try to categorize both your audience's crystallography and your own.
This is something of an oversimplification. Categories are one possible first step, but eventually you will need more nuance than that. I suggest forming estimates based on the communication also serving as a sequence of experiments. And being very strict about not ruling things out, especially if you have not managed to beat down your typical mind fa...
Arguably, as seminal as the sequences are treated, why are the "newbies" the only ones who should be (re)reading them?
The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge.
Especially given these are likely significantly lower bounds, and don't account for the problems of running on spotty evolutionary hardware, I suspect that the discrepancy is even greater than it first appears.
What I find intriguing about this result is that essentially it is one of the few I've seen that has a limit description of consciousness: you have on one hand a rating of complexity of your "conscious" cognitive system an...
This should not be underestimated as an issue. Status as we use it here and at Overcoming Bias tends to be simplified into something not unlike a monetary model.
It is possible to try to treat things like status reductively, but in the current discussion it will hopefully suffice to characterize it with more nuance than "social wealth".
If you only expect to find one empirically correct cluster of contrarian beliefs, then you will most likely find only one, regardless of what exists.
Treating this as a clustering problem, we can extract common clusters of beliefs from the general contrarian collection and determine their degrees of empirical correctness. Presupposing a particular structure will introduce biases on the discoveries you can make.
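As a minimal sketch of the kind of clustering I have in mind (the belief encoding, the data, and the range of cluster counts are all hypothetical placeholders, not anything specified above):

```python
# Minimal sketch: cluster contrarian belief profiles rather than assuming one cluster.
# Each row is a person's stance on a set of contrarian claims, encoded in [-1, 1].
# The encoding, the data, and the choice of k are hypothetical illustrations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical stance vectors for 200 "contrarians" over 10 claims.
beliefs = rng.uniform(-1, 1, size=(200, 10))

# Fit several cluster counts instead of presupposing a single cluster.
for k in range(1, 5):
    model = KMeans(n_clusters=k, n_init=10, random_state=0).fit(beliefs)
    print(k, model.inertia_)  # inertia falls as k grows; look for an elbow
```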
there's really no reason those numbers should be too much higher than they are for a random inhabitant of the city
Actually, simply being in the local social network of the victim should increase the probability of involvement by a significant amount. This would of course be based on population, murder rates, and so on, and would likely also depend on estimates from criminological models for the crime in question.
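A toy version of that update, where every number is a hypothetical placeholder rather than an actual criminological estimate, might look like:

```python
# Toy Bayes update: P(involved | in victim's social network).
# All numbers below are hypothetical placeholders, not real crime statistics.
city_population = 1_000_000
prior = 1 / city_population            # random inhabitant is the culprit
p_network_given_involved = 0.5         # hypothetical: half of such crimes are by acquaintances
network_size = 200                     # hypothetical size of the victim's social network
p_network_given_not = network_size / city_population

posterior = (p_network_given_involved * prior) / (
    p_network_given_involved * prior + p_network_given_not * (1 - prior)
)
print(posterior)  # roughly 0.0025, orders of magnitude above the 1-in-a-million base rate
```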
Proof of how dangerous this sort of list can be.
I entirely forget about:
After all, how can you advance even pure epistemic rationality without constructing your own experiments on the world?
Or more succinctly and broadly, learn to:
pay attention
correct bias
anticipate bias
estimate well
With a single specific enumeration of means to accomplish these competencies, you risk ignoring other possible curricula. And you encourage the same blind spots for the entire community of aspiring rationalists so educated.
This parallels some of the work I'm doing with fun-theoretic utility, at least in terms of using information theory. One big concern is what measure of complexity to use, as you certainly don't want to use a classical information measure - otherwise Kolmogorov random outcomes will be preferred to all others.
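A quick illustration of the worry, using compressed length as a crude stand-in for description length (the strings below are arbitrary examples):

```python
# Crude illustration: raw description length favors random strings.
# Compressed size is used here as a rough stand-in for Kolmogorov complexity.
import os
import zlib

structured = b"abcdefgh" * 1000          # highly regular
random_bytes = os.urandom(8000)          # incompressible noise

print(len(zlib.compress(structured)))    # small: lots of exploitable structure
print(len(zlib.compress(random_bytes)))  # close to 8000: "maximally complex" by this measure
```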
Lies, truth, and radical honesty all get in the way of understanding what is going on here.
When you are communicating with someone, several of the many constantly changing layers of that communication (in addition to status signaling, empathy broadcasting, and performatives) are the transfer of information from you to that someone. The effectiveness of the communication of this information, and its accuracy when received, is something we can talk about fairly easily in terms of both instrumental (effectiveness) and epistemic (accuracy) rationality.
To cl...
My post does describe a distinct model based on a Many Worlds interpretation where the probabilities are computed differently based on whether entanglement occurs or not - i.e. whether the universes influence each other. It is distinct from the typical model of decoherence.
As for photosynthesis, it ought to behave in much the same way, as a network of states propagating through entangled universes, with the interactions of the states in those branches causing the highest probabilities to be assigned to the branches which have the lowest energy barriers.
Of...
It's as though no one here has ever heard of the bystander effect. The deadline is January 15th. Setting up a wiki page and saying "Anyone's free to edit." is equivalent to killing this thing.
Also, this is a philosophy, psychology, and technology journal, which means that despite the list of references for Singularity research, you will also need to link this with the philosophical and/or public policy issues that the journal wants you to address (take a look at the two guest editors).
Another worry for me is that in all the back issues of this journal I looked over, the papers were almost always monographs (and barring that, had two authors). I suspect that having many authors might kill the chances for this paper.
First of all, consider that a computer is incomplete without a program, so let's just think of a programmed computer - whether in hardware or software doesn't matter for our purposes.
This gives us a system that goes from some known start state to some outcome state through a series of intermediate steps. If each of these steps is deterministic, then the entire system reaches the same outcome in all universes where it had the same starting point.
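A tiny sketch of that deterministic case (the transition rule is an arbitrary stand-in for whatever program the computer runs):

```python
# Deterministic programmed computer: same start state, same outcome, every run.
# The transition rule is an arbitrary placeholder for the actual program.
def step(state: int) -> int:
    return (3 * state + 1) % 1000

def run(start: int, steps: int = 50) -> int:
    state = start
    for _ in range(steps):
        state = step(state)
    return state

# Every "universe" that begins at 42 ends in the identical outcome state.
print(run(42) == run(42))  # True
```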
If those steps were stochastic, perhaps because there is a chance of memory corruption in our computer or because of a r...
I meant that setting the limit to no preference for a given C doesn't equate to a globally continuous function. But when you adjust your preference function to approximate the discontinuous function by a continuous one, the result will contain (at least one) no preference point between any two A < B.
Now perhaps there is a result which says that if you take the limit as you set all discontinuous C to no preference, the resulting function is complete, consistent, transitive, and continuous, but I wouldn't take that to be automatic.
Consider, f...
We are talking about the same thing here just at different levels of generality. The function you describe is the same as the one I'm describing, except on a much narrower domain (only a single binary lottery between A and B). Then you project the range to just a question about C.
In the specific function you are talking about, you must hold that this is true for all A, B, and C to get continuity. In the function I describe, the A, B, and C are generalized out, so the continuity property is equivalent to the continuity of the function.
I was talking about utility functions, but I can see your point about generalizing the result to the mapping from arbitrary dilemmas to preferences. Realize though, that preference space isn't discrete.
You can describe it as the function from a mixed dilemma to the joint relation space for < and =. Which you can treat as a somewhat more complex version of the ordinals (certainly you can construct a map to a dense version of the ordinals if you have at least 2 dilemmas and a dense probability space). That gives you a notion of the preference space where a...
That is my reading of it too. I know Stuart is putting forward analytic results here, I was concerned that this one was not correctly represented.
Note: Independence II does not imply Independence without using at least the consistency axiom.
If we're using Independence II as an axiom, you should be a little more precise: when you introduced it above, you referred to the base four axioms, including continuity.
Now, I only noticed that consistency is needed to convert between the two Independence formulations, which would make your statement correct. But on the face of it, it looks like you are trying to show a money pump theorem under discontinuous preferences by calling upon the continuity axiom.
Correct. By definition, if you have a dense set (which we treat the probability space as by default) and map it into another space, then either that space is also dense, in which case the converging sequences will have limits, or it is not dense (in which case continuity fails). In the former case, continuity reduces to point-wise continuity.
Note: setting the limit to "no preference" does not resolve the discontinuity. But by the intermediate value theorem, there will exist at least one such point in any continuous approximation of the discontinuous function.
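To spell out the intermediate value step, in my own notation (v is a hypothetical preference-strength representation, not anything from the original axioms):

```latex
% Intermediate value step, in my own (hypothetical) notation.
Let $v(p)$ denote the signed strength of preference for the mixture $pA + (1-p)B$
over $C$, with $v$ continuous on $[0,1]$. If $v(0) < 0 < v(1)$, i.e.\ $C$ is
preferred at $p = 0$ and dispreferred at $p = 1$, then by the intermediate value
theorem there exists $p^{*} \in (0,1)$ with
\[
  v(p^{*}) = 0,
\]
that is, at least one point of no preference in any continuous approximation of
the original discontinuous preferences.
```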
Nice to see Europe catching up with, say, India in this regard.
Does that answer your question?
This has been helpful. I'm much more familiar with the mathematics than the economics. Presently, I'm more worried about the mathematical chicanery involved in approximating a consistent continuous utility function out of things.
But doesn't the money pump result for non-independence rely on continuity? Perhaps I missed something there.
(Of note, this is what happens when I try to pull out a few details which are easy to relate and don't send entirely the wrong intuition - can't vouch for accuracy, but at least it seems we can talk about it.)
Sorry, I left this out. It's a huge simplification, but treat the set of p as a discrete subset in the standard topology.
I'm very busy at the moment, but the short version is that one of my good candidates for a utility component function, c, has c(A) < c(B) < c(pA + (1-p)B) for a subset of possible outcomes A and B, and choices of p.
This is only a piece of the puzzle, but if continuity in the von Neumann-Morgenstern sense falls out of it, I'll be surprised. Some other bounds are possible I suspect.
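As one hypothetical toy instance of such a component (an entropy-style novelty bonus, which is my own illustration and not the actual candidate function), the stated pattern already appears:

```python
# Toy illustration of a component c with c(A) < c(B) < c(pA + (1-p)B).
# The entropy-bonus form is a hypothetical stand-in, not the actual candidate.
import math

def c(p_a: float, u_a: float = 0.0, u_b: float = 1.0, novelty: float = 1.5) -> float:
    expected = p_a * u_a + (1 - p_a) * u_b
    if p_a in (0.0, 1.0):
        entropy = 0.0
    else:
        entropy = -(p_a * math.log2(p_a) + (1 - p_a) * math.log2(1 - p_a))
    return expected + novelty * entropy

print(c(1.0))   # c(A)           = 0.0
print(c(0.0))   # c(B)           = 1.0
print(c(0.5))   # c(0.5A + 0.5B) = 2.0 -- the mixture beats both pure outcomes
```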
Of note, you don't explain why discontinuous preferences necessarily cause vulnerability to money pumping.
I'm concerned about this largely because the von Neumann-Morgenstern continuity axiom is problematic for constructing a functional utility theory from "fun theory".
Glad to see something like this.
Fair enough. Although in considering the implications of more than two options for the other conditions, I noticed something else worrisome.
The solution you present weakens the social welfare function: after all, if I have two voters and they vote (10,0,5) and (0,10,5), the result is an ambiguous ordering, not a strict ordering as required by Arrow's theorem (which is really a property of very particular endomorphisms on permutation groups).
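A quick check of that two-voter example, assuming the scores are simply summed (my guess at the aggregation, not necessarily yours):

```python
# Two voters score three options; summing scores (my assumed aggregation)
# yields a tie, i.e. an ambiguous rather than strict social ordering.
votes = [(10, 0, 5), (0, 10, 5)]
totals = [sum(option) for option in zip(*votes)]
print(totals)                 # [10, 10, 10]
print(len(set(totals)) == 1)  # True: no strict ranking of the three options
```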
It seems like a classic algorithmic sacrifice of completeness for power. Was that your intent?
Note, according to the Wikipedia article listed, Arrow's theorem is valid "if the decision-making body has at least two members and at least three options to decide among". This makes me suspect the Pareto-efficiency counter-example, as it assumes we have only 2 options.
What worries me about this tack is that I'm sufficiently clever to realize that, in conducting a vast and complex research program to empirically test humanity to determine a global reflectively consistent utility function, I will be changing the utility trade-offs of humanity.
So I might as well make sure that I conduct my mass studies in such a way as to ensure that the outcome is both correct and easier for me to perform my second, much longer (essentially infinitely longer) phase of my functioning.
So said AI would determine and then forever follow exactly what humanity's hidden utility function is. But there is no guarantee that this is a particularly friendly scenario.
I have a similar result, except that, since I've never experienced stimulant effects from anything other than blood sugar, I'm not certain I can discount sleepiness. Also, I suffer from a migraine condition which has a much more severe effect on my mental faculties on a day-to-day basis.
And since improper sleeping is one of my triggers - "Happiness is getting enough sleep." Not too much, not too little.
This seems like a conflict between two deep-seated heuristics, hence it would be difficult at best to argue for the right one.
Instead, I suggest a synthetic approach. Stop treating the two intuitions as a false dichotomy, and consider the continuum between them (or even beyond them).
This is essentially an instance of availability bias. Of course, in the most interesting cases, rather than just being a declarative hypothesis elevated above the other inhabitants of the hypothesis space for that particular question, models have other effects that go far beyond mere availability.
This is because our initial model won't just form the first thing we think of when we examine the question, but some of the very structures we use when we formulate the question. Indeed, how we handle our models is easily responsible for the majority of the biases that h...
I expect that one source of the problem is seen in equating these two situations. On one hand you have 100 copies of the same movie. On the other hand, you have 100 distinct humans you could pay to save. To draw a direct comparison you would need to treat these as 100 copies of some idealized stranger. In which case the scope insensitivity might (depending on how you aggregate the utility of a copy's life) make more sense as a heuristic.
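As a toy illustration of how the choice of aggregation over copies changes the picture (both rules below are arbitrary examples, not anything from the post):

```python
# Toy aggregation rules over n identical copies; both are arbitrary examples.
import math

def linear_value(n_copies: int, value_per_copy: float = 1.0) -> float:
    return n_copies * value_per_copy              # scope-sensitive: 100 copies = 100x

def diminishing_value(n_copies: int, value_per_copy: float = 1.0) -> float:
    return value_per_copy * math.log1p(n_copies)  # nearly scope-insensitive for large n

for n in (1, 10, 100):
    print(n, linear_value(n), round(diminishing_value(n), 2))
```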
And this sort of simplification is likely one part of what is happening when we naively consider the questions:
...How muc
It's just that, with two distinctly different conclusions drawn from the results by two different sources, the article authors (in the abstract) and Gerald Weissmann, M.D., Editor-in-Chief (in the news article), I place a much lower confidence in the latter being a reasonable reading of the research paper.
But of course we could quite safely argue about readings and interpretations indefinitely. I'd point you to Derrida and Hermeneutics if you want to go that route.
In any case, I'll update my estimates on the likelihood of the research paper having an err...
So, perhaps the news article was based on a press release that was based on the journal article. My point was that it was not produced solely from the abstract.
I don't see why this is your point. At the very least, it doesn't present counter-evidence to my claim that the abstract contains information not present in the news article which mitigates or negates the concerns of the original comment.
But the abstract does not make any "just right" claims, unlike the summary on ScienceDaily. Which is what you were complaining about.
The abstract reads: we did an incremental test, and even at the lowest dosage we found an effect. This suggests that low dosages could be effective. I don't see anything wrong with that reasoning.
The ScienceDaily summary is simply misrepresenting it. So the original commenter isn't missing something in the science news; it is ScienceDaily that made the error.
The following sounds like a control measurement was taken:
"Blood and urine samples were collected before and after each dose of DHA and at 8 wk after arrest of supplementation."
Also note that the abstract doesn't say that 200 mg is ideal, as the ScienceDaily description does; it says:
"It is concluded that low consumption of DHA could be an effective and nonpharmacological way to protect healthy men from platelet-related cardiovascular events."
Well, the article abstract isn't consistent with the description you linked to. One of the dangers of paraphrasing science.
I'm interested, especially since this will likely be the closest such meet-up to State College, PA. I'm not the only one here, so I can ask around. Although, obviously, our transportation logistics will be more complicated.
No. The Medawar zone is more about scientific discoveries as marketable products to the scientific community, not the cultural and cognitive pressures of those communities which affect how those products are used as they become adopted.
Different phenomena, although there are almost certainly common causes.
Oh yes, but it's not just a predilection for simple models in the first place, but also a tendency to culturally and cognitively simplify the model we access and use - even if the original model had extensions to handle this case, and even at the cost of orders of magnitude of error.
Of course, sometimes it may be worth computing, in a very short amount of time, an estimate that is (unknown to you) orders of magnitude off. Certainly, if the impact of the estimate is delayed and subtle, less conscious trade-offs may factor in between the cognitive effort to access and use a more detailed model and the consequences of error. Yet another form of akrasia.
Generally (and therefore somewhat inaccurately) speaking, one way that our brains seem to handle the sheer complexity of computing in the real world is a tendency to simplify the information we gather.
In many cases these sorts of extremely simple models didn't start that way. They may have started with more parameters and complexity. But as they are repeated, explained, and applied, the model becomes, in effect, simpler. The example begins to represent the entire model, rather than serving to show only a piece of it.
Technically the exponential radioactive...
Don't have many mantras, although I stress the importance of understanding before trying to solve.
One that does stand out is more of a question:
"What am I not thinking here?" or "What are we forgetting here?" - Followed by estimations based on meta-biases and human error tendencies to make some hypotheses where cognitive, social, or cultural blind spots might be. And then comes the testing, followed by more hypotheses. And so on.
After all, every field of thought is developed by humans. It's a common point of failure.
Procrastination and laziness may be kinds of akrasia, but simply because they are the types most talked about here does not mean that they are an exhaustive description of "weaknesses of will". One example I find easy to bring up is trying to move while we are in pain. There are definite moments where a crisis of will occurs, and if you have a sharp shooting pain in your leg while walking, you will either change your movement against your intended direction or overcome that moment and escape the akrasia for a time.
I do, however, suspect that this community would do a better job at fighting akrasia if we did not confound it solely with procrastination and "laziness".
You're right that I owe a top-level post on this, among other topics.
Although one worry I have with trying to lay out inferential steps is that some of these ideas (this one included) seem to encounter a sort of Zeno's paradox for full comprehension. It stops being enough to be willing to take the next step; it becomes necessary to take the inferential limit to get to the other side.
Which means that until I find a way to map people around that phenomenon, I'm hesitant to give a large-scale treatment. Just because it was the route I took doesn't mean it's a good way to explain things generally, a la the Typical Mind Fallacy borne out by evidence.
But in any case I will return to it when I have the time.
Building on some of the more non-trivial theories of fun - specifically, cognitive science research focusing on the human response to learning - there is a direct relationship between human perception of subjectively unpleasant qualia and the complexity impact of those qualia on the human.
Admittedly extending this concept of suffering beyond humanity is a bit questionable. But it's better than a tautological or innately subjective definition, because with this model it is possible to estimate and compare with more intuitive expectations.
One nice effect of ha...
What I keep coming back to here is: doesn't the entire point of this post come down to the situations where the parameters in question, the biases of the coins, are not independent? And doesn't this contradict?
Which leads me to read the latter half of this post as: we can (in principle, perhaps not computably) estimate 1 complex parameter with 100 data sets better than 100 independent unknown parameters from individual data sets. This shouldn't be surprising. I certainly don't find it so.
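A minimal sketch of the contrast I mean, with arbitrary data generation and sample sizes:

```python
# Sketch: estimating one shared coin bias from 100 data sets vs. estimating
# 100 independent biases from one data set each. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
true_bias = 0.3
n_sets, flips_per_set = 100, 20
data = rng.binomial(1, true_bias, size=(n_sets, flips_per_set))

# One shared parameter, all 2000 flips pooled: low-variance estimate.
pooled_estimate = data.mean()

# 100 independent parameters, 20 flips each: each estimate is much noisier.
independent_estimates = data.mean(axis=1)

print(abs(pooled_estimate - true_bias))
print(np.abs(independent_estimates - true_bias).mean())  # typically much larger
```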
The first half just points ou...