All of minusdash's Comments + Replies

I'm not talking about back and forth between true and false, but between two explanations. You can have a multimodal probability distribution and two distant modes are about equally probable, and when you update, sometimes one is larger and sometimes the other. Of course one doesn't need to choose a point estimate (maximum a posteriori), the distribution itself should ideally be believed in its entirety. But just as you can't see the rabbit-duck as simultaneously 50% rabbit and 50% duck, one sometimes switches between different explanations, similarly to a... (read more)

I don't really understand what you mean about math academia. Those references would be appreciated.

JonahS140

The top 3 answers to the MathOverflow question Which mathematicians have influenced you the most? are Alexander Grothendieck, Mikhail Gromov, and Bill Thurston. Each of these has expressed serious concerns about the community.

  • Grothendieck was actually effectively excommunicated by the mathematical community and then was pathologized as having gone crazy. See pages 37-40 of David Ruelle's book A Mathematician's Brain.

  • Gromov expresses strong sympathy for Grigory Perelman having left the mathematical community starting on page 110 of Perfect Rigor. (You

... (read more)

Those are indeed impressive things you did. I agree very much with your post from 2010. But the fact that many people have this initial impression shows that something is wrong. What makes it look like a "twilight zone"? Why don't I feel the same symptoms for example on Scott Alexander's Slate Star Codex blog?

Another thing I could pinpoint is that I don't want to identify as a "rationalist", I don't want to be any -ist. It seems like a tactic to make people identify with a group and swallow "the whole package". (I also don't think people should identify as atheist either.)

2ChristianKl
Nobody forces you to do so. Plenty of people in this community don't self identify that way.
3JonahS
I'm sympathetic to everything you say. In my experience there's an issue of Less Wrongers being unusually emotionally damaged (e.g. relative to academics) and this gives rise to a lot of problems in the community. But I don't think that the emotional damage primarily comes from the weird stuff that you see on Less Wrong. What one sees is them having borne the brunt of the phenomenon that I described here disproportionately relative to other smart people, often because they're unusually creative and have been marginalized by conformist norms. Quite frankly, I find the norms in academia very creepy: I've seen a lot of people develop serious mental health problems in connection with their experiences in academia. It's hard to see it from the inside: I was disturbed by what I saw, but I didn't realize that math academia is actually functioning as a cult, based on retrospective impressions, and in fact by implicit consensus of the best mathematicians of the world (I can give references if you'd like).
2[anonymous]
I've always thought that calling yourself a "rationalist" or "aspiring rationalist" is rather useless. You're either winning or not winning. Calling yourself by some funny term can give you the nice feeling of belonging to a community, but it doesn't actually make you win more, in itself.
minusdash220

I prefer public discussions. First, I'm a computer science student who took courses in machine learning, AI, wrote theses in these areas (nothing exceptional), I enjoy books like Thinking Fast and Slow, Black Swan, Pinker, Dawkins, Dennett, Ramachandran etc. So the topics discussed here are also interesting to me. But the atmosphere seems quite closed and turning inwards.

I feel similarities to reddit's Red Pill community. Previously "ignorant" people feel the community has opened a new world to them, they lived in darkness before, but now they fo... (read more)

8Risto_Saarelma
There's also the whole Lesswrong-is-dying thing that might contribute to the vibe you're getting. I've been reading the forum for years and it hasn't felt very healthy for a while now. A lot of the impressive people from earlier have moved on, we don't seem to be getting that many new impressive people coming in, and hanging out a lot on the forum turns out not to make you that much more impressive. What's left is turning increasingly into a weird sort of cargo cult of a forum for impressive people.
Vaniver100

Thanks for the detailed response! I'll respond to a handful of points:

Previously "ignorant" people feel the community has opened a new world to them, they lived in darkness before, but now they found the "Way" ("Bayescraft") and all this stuff is becoming an identity for them.

I certainly agree that there are people here who match that description, but it's also worth pointing out that there are actual experts too.

the general public, who are just irrational automata still living in the dark.

One of the things I find most... (read more)

3[anonymous]
The applicable word is metaphysics. Acausal trade is dabbling in metaphysics to "solve" a question in decision theory, which is itself mere philosophizing, and thus one has to wonder: what does Nature care for philosophies? By the way, for the rest of your post I was going, "OH MY GOD I KNOW YOUR FEELS, MAN!" So it's not as though nobody ever thinks these things. Those of us who do just tend to, in perfect evaporative cooling fashion, go get on with our lives outside this website, being relatively ordinary science nerds.

PCA doesn't tell much about causality though. It just gives you a "natural" coordinate system where the variables are not linearly correlated.

2VoiceOfRa
Right, one needs to use additional information to determine causality.

What do you mean by getting surprised by PCAs? Say you have some data, you compute the principal components (eigenvectors of the covariance matrix) and the corresponding eigenvalues. Were you surprised that a few principal components were enough to explain a large percentage of the variance of the data? Or were you surprised about what those vectors were?

I think this is not really PCA or even dimensionality reduction specific. It's simply the idea of latent variables. You could gain the same intuition from studying probabilistic graphical models, for example generative models.
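For concreteness, here is a minimal PCA sketch in Python (my own illustration, assuming NumPy; the toy one-latent-factor data and variable names are made up for the example). It computes the principal components as eigenvectors of the covariance matrix and reports how much of the variance each component explains:

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy data: 3 observed variables driven by 1 latent factor plus noise,
    # so a single principal component should explain most of the variance.
    latent = rng.normal(size=(500, 1))
    X = latent @ np.array([[2.0, -1.5, 0.5]]) + 0.3 * rng.normal(size=(500, 3))

    Xc = X - X.mean(axis=0)                 # center the data
    cov = np.cov(Xc, rowvar=False)          # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigen-decomposition
    order = np.argsort(eigvals)[::-1]       # sort by decreasing eigenvalue
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    print("fraction of variance explained:", eigvals / eigvals.sum())

Being "surprised" here would mean either that the first eigenvalue dominates more than expected, or that the leading eigenvector (the latent direction) is not the one you would have guessed.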

1RomeoStevens
Surprised by either. Just finding a structure of causality that was very unexpected. I agree the intuition could be built from other sources.
minusdash110

You asked about emotional stuff so here is my perspective. I have extremely weird feelings about this whole forum that may affect my writing style. My view is constantly popping back and forth between different views, like in the rabbit-duck gestalt image. On one hand I often see interesting and very good arguments, but on the other hand I see tons of red flags popping up. I feel that I need to maintain extreme mental efforts to stay "sane" here. Maybe I should refrain from commenting. It's a pity because I'm generally very interested in the topi... (read more)

4ChristianKl
That sounds like you engage in binary thinking and don't value shades of grey of uncertainty enough. You feel the need to judge arguments for whether they are true or aren't, and don't have a mental category for "might be true, or might not be true". Jonah makes strong claims for which he doesn't provide evidence. He's clear about the fact that he hasn't provided the necessary evidence. Given that, you pattern-match to "crackpot" instead of putting Jonah in the mental category where you don't know whether what he says is right or wrong. If you start to put a lot of claims into the "I don't know" pile, you don't constantly pop between belief and non-belief. Popping back and forth means that the size of your updates when presented with new evidence is too large. Being able to say "I don't know" is part of genuine skepticism.
7[anonymous]
Seconded, actually, and it's particular to LessWrong. I know I often joke that posting here gets treated as submitting academic material and skewered accordingly, but that is very much what it feels like from the inside. It feels like confronting a hostile crowd of, as Jonah put it, radical agnostics, every single time one posts, and they're waiting for you to say something so they can jump down your throat about it. Oh, and then you run into the issue of having radically different priors and beliefs, so that you find yourself on a "rationality" site where someone is suddenly using the term "global warming believer" as though the IPCC never issued multiple reports full of statistical evidence. I mean, sure, I can put some probability on "it's all a conspiracy and the official scientists are lying", but for me that's in the "nonsense zone" -- I actually take offense to being asked to justify my belief in mainstream science. As much as "good Bayesians" are never supposed to agree to disagree, I would very much like it if people were up-front about their priors and beliefs, so that we can both decide whether it's worth the energy spent on long threads of trying to convince people of things.
6JonahS
Thanks so much for sharing. I'm astonished by how much more fruitful my relationships have become since I've started asking. I think that a lot of what you're seeing is a cultural clash: different communities have different blindspots and norms for communication, and a lot of times the combination of (i) blindspots of the communities that one is familiar with and (ii) respects in which a new community actually is unsound can give one the impression "these people are beyond the pale!" when the actual situation is that they're no less rational than members of one's own communities. I had a very similar experience to your own coming from academia, and wrote a post titled The Importance of Self-Doubt in which I raised the concern that Less Wrong was functioning as a cult. But since then I've realized that a lot of the apparently weird beliefs of LWers are in fact also believed by very credible people: for example, Bill Gates recently expressed serious concern about AI risk. If you're new to the community, you're probably unfamiliar with my own credentials, which should reassure you somewhat:
  • I did a PhD in pure math under the direction of Nathan Dunfield, who coauthored papers with Bill Thurston, who formulated the geometrization conjecture which Perelman proved, in doing so solving one of the Clay Millennium Problems.
  • I've been deeply involved with math education for highly gifted children for many years. I worked with the person who won the American Math Society prize for best undergraduate research when he was 12.
  • I worked at GiveWell, which partners with Good Ventures, Dustin Moskovitz's foundation.
  • I've done fullstack web development, making an asynchronous clone of StackOverflow (link).
  • I've done machine learning, rediscovering logistic regression, collaborative filtering, hierarchical modeling, the use of principal component analysis to deal with multicollinearity, and cross validation. (I found the expositions so poor that it was faster
3Vaniver
I would be very interested in hearing elaboration on this topic, either publicly or privately.

Qualitative day-to-day dimensionality reduction sounds like woo to me. Not a bit more convincing than quantum woo (Deepak Chopra et al.). Whatever you're doing, it's surely not like doing SVD on a data matrix or eigen-decomposition on the covariance matrix of your observations.

Of course, you can often identify motivations behind people's actions. A lot of psychology is basically trying to uncover these motivations. Basically an intentional interpretation and a theory of mind are examples of dimensionality reduction in some sense. Instead of explaining beha... (read more)

5JonahS
See Rationality is about pattern recognition, not reasoning. Your tone is condescending, far outside of politeness norms. In the past I would have uncharitably written this off as you being depraved, but I've realized that I should be making a stronger effort to understand other people's perspectives. So can you help me understand where you're coming from on an emotional level?
minusdash190

"impression that more advanced statistics is technical elaboration that doesn't offer major additional insights"

Why did you have this impression?

Sorry for the off-topic comment, but I see this a lot on LessWrong (as a casual reader). People seem to focus on textual, deep-sounding, wow-inducing expositions, but often dislike the technicalities: getting their hands dirty with actually understanding calculations, equations, formulas, details of algorithms etc. (calculations that don't tickle those wow-receptors that we all have). As if these were merely some minor... (read more)

5VoiceOfRa
Probably because of the human tendency to overestimate the importance of any knowledge one happens to have and underestimate the importance of any knowledge one doesn't. (Is there a name for this bias?)
1[anonymous]
Don't say the p-word, please ;-). I do agree that more real-life understanding is gained from just obtaining a broad scientific education than from going wow-hunting. But of course, I would say that, since I'm a fanatical textbook purchaser.
4RomeoStevens
I think having the concept of PCA prevents some mistakes on an intuitive, day-to-day level of reasoning. It nudges me towards fox thinking instead of hedgehog thinking. Normal folk intuition grasps at the most cognitively available and obvious variable to explain causes, and then our System 1 acts as if that variable explains most if not all of the variance. Looking at PCAs many times (and being surprised by them) makes me less likely to jump to conclusions about the causal structure of clusters of related events. So maybe I could characterize it as giving a System 1 intuition for not making the post hoc ergo propter hoc fallacy. Maybe part of the problem Jonah is running into explaining it is that having done many, many example problems with System 2 loaded it into his System 1, and the System 1 knowledge is what he really wants to communicate?
6JonahS
Groupthink, I guess: other people who I knew didn't think that it's so important (despite being people who are very well educated by conventional standards, top ~1% of elite colleges). Disclaimer: I know that I'm not giving enough evidence to convince you: I've thought about this for thousands of hours (including working through many quantitative examples) and it's taking me a long time to figure out how to organize what I've learned. I have already been using dimensionality reduction (qualitatively) in my day-to-day life, and I've found that it's greatly improved my interpersonal relationships because it's made it much easier to guess where people are coming from (before, people's social behavior had seemed like a complicated blur because I saw so many variables without having started to correctly identify the latent ones). You seem to be making overly strong assumptions with insufficient evidence: how would you know whether this was the case, never having met me? ;-)

It can still be evidence-based, just on a larger budget. I mean, you can get higher-quality examinations, like MRI and CT, even if public insurance couldn't afford them. Just because they wouldn't do it by default and only do it for your money doesn't mean it's not evidence-based. Evidence-based medicine doesn't say that this person needs/doesn't need this treatment/examination; it gives a risk/benefit/cost analysis. The final decision also depends on the budget.

This may be a case of ignoring people who are bad at both intellectual and physical things. Those people are just not salient, the same way people think smart people are ugly and beautiful people are dumb. It may simply be that the ugly and dumb people go unnoticed. This is Berkson's paradox: even if A and B are independent, they are dependent conditioned on (A or B).
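A small simulation makes the paradox concrete (my own sketch, assuming NumPy; the traits and the 50% base rates are arbitrary). A and B are generated independently, yet once we condition on noticing someone for at least one of the traits, they become negatively correlated:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random(100_000) < 0.5   # e.g. "smart"
    B = rng.random(100_000) < 0.5   # e.g. "beautiful"

    print("corr overall:       ", np.corrcoef(A, B)[0, 1])  # approximately 0
    sel = A | B                     # only people who stand out on at least one trait
    print("corr given (A or B):", np.corrcoef(A[sel], B[sel])[0, 1])  # clearly negative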

-1[anonymous]
Absolutely. The stereotype of the smart geek/nerd comes from the fact that when people are ugly/socially awkward/weird, other people get positively surprised that they are smart and really notice that. It is like they would have pretty much "written them off" as low-status, unimportant people to be ignored, and thus they get surprised that these people actually have useful virtues and should not be so easily ignored, because while how they say things is not popular, what they say is often true and insightful. Meanwhile, the dumb nerd/geek just gets ignored forever.

I'd say an expert in any field has better intuitions (hidden, unverbalized knowledge) than what they can express in words or numbers. Therefore, I'd assume that the decision that it's not worth doing the examination should take priority over the numerical estimate that he made up after you asked.

It may be better to ask for the odds in such cases, like 1 to 10,000 or 1 to a million. Anyway, it's really hard to express our intuitive expert knowledge in such numbers. They all just look like "big numbers".

Another problem is that nobody is willing to put... (read more)

0ChristianKl
It's quite easy to get more expensive healthcare. On the other hand, that doesn't mean the healthcare is automatically better. If you are willing to pay for any treatment out of your own pocket, then a doctor can treat you in a way that's not being paid for by an insurance company because it's not evidence-based medicine.
0Adam Zerner
It seemed to me that the proposition was made under false assumptions. Specifically, I value my life way more than most people do, and I value the costs of time/money/pain less than most people do. He seemed to have been assuming that I value these things in a similar way to most people. Yeah, I understand this now. Previously I hadn't thought enough about it. So given that I am willing to spend money for my health, and that I can't count on doctors to presume that, it seems like I should make that clear to them so they can give me more personalized advice.

Saying 99.9999% seems a mouthful. Would you have preferred an answer like this instead: https://www.youtube.com/watch?v=7sWpSvQ_hwo :)

0Adam Zerner
If brevity was the issue, I wouldn't have expected him to say 5 instead of 9. And I would have expected him to use stronger language than he did. My honest impression is that he thinks that the chances that it's something are really small, but nothing approaching infinitesimally small.

Yes, I'm familiar with his most famous paper and what he says about medical research findings. Has he ever endorsed MetaMed in particular? If peer-reviewed research findings are often false, how can MetaMed tell the difference without trying to replicate them? Different research papers use different assumptions, differently calibrated measurements, different subjects; it seems very hard to aggregate this in practice, although I'm not a medical researcher. Why should I believe that a company started by futurists and entrepreneurs would be up to this task? Where is the evidence for the actual efficacy of their particular methodology, as evaluated by independent third parties?

2Lumifer
Yes, it is. However there is, for example, the Cochrane Collaboration, which is dedicated to exactly that. You should not. I am not arguing that MetaMed is better than everyone else or even that it is very good. I am arguing that it's not evil, not dangerous (relative to the usual baseline), and a useful thing to have around. Its goal is not to provide you with THE TRUTH; its goal is to give you a digestible summary of the current research on topics of particular interest to you. Often this summary functions as a second opinion, or it could provide the context for making medical decisions. It is as fallible as the rest of contemporary medicine.

Upon more reflection, I'm not able to defend my point and my thoughts are confused, and therefore I'm gravitating towards the established and mainstream viewpoint that only licensed and authorized doctors should do doctor stuff. On uncertain territory it's better to stick to well-known landmarks. Since I'm not confident in my capability of a deep enough analysis of the pros and cons, I feel that the way to convince me would be to first convince people who are experts in the medical field and its regulations, towards whom I already have an established chain of trust.

2Lumifer
Yay for more reflection! :-) I would recommend continuing with even more reflection, now about that chain of trust you say you have established. Ioannidis would probably be helpful, and you can google up his actual papers.
4Lumifer
Really? You consider MetaMed unethical and dangerous. Robin Hanson considers it a useful source of second opinions but thinks it may not be all that much better than second opinions from other sources (e.g. doctors). You say "I find this absolutely shocking and reading the endorsement of this company on this website". Robin Hanson says "Even so, I would very much like to see a much stronger habit of getting second opinions, and a much larger industry to support that habit. I thus hope that MetaMed succeeds." MetaMed has been granted immunity from lawsuits...? The legal risks are lesser if you are a licensed MD.
Lumifer100

this company takes medicine's research results and then say they can handle and aggregate it better than the doctors who are licensed to do exactly that

Doctors are not licensed to "handle and aggregate" research results. They are licensed to treat people as best they can, and keeping up with the latest academic research is not a requirement for keeping their license. In fact, most doctors are too busy treating people to allocate enough time to read research.

this post induces distrust in legally-professed medicine

I consider this a good th... (read more)

Life, sin, disease, redness, maleness and indeed dogness "may" also be like electromagnetism. The English language may also be a fundamental part of the universe and maybe you could tell if "irregardless" or "wanna" are real English words by looking into a microscope or turning your telescope to certain parts of the sky, or maybe by looking at chicken intestines, who knows. I know some people think like this. Stuart Hameroff says that morality may be encoded into the universe at the Planck scale. So maybe that's where you shou... (read more)

0johnsonmx
Although life, sin, disease, redness, maleness, and dogness are (I believe) inherently 'leaky' / 'fuzzy' abstractions that don't belong with electromagnetism, this is a good comment. If a hypothesis is scientific, it will make falsifiable predictions. I hope to have something more to share on this soon.
7Lumifer
That's an excellent idea and I endorse it :-P Particularly the "non-supervised" part. I am not quite sure what you mean by "spinoff", though. What is spun off from what? What exactly is a "legally supervised hospital" and who's doing the supervising? Do tell. Is it to make lawyers rich?

I don't like the expression "carve reality at the joints"; I think it's very vague, and it's hard to verify whether a concept carves it there or not. The best way I can imagine this is that you have lots of events or 'things' in some description space and you can notice some clusterings, and you pick those clusters as concepts. But a lot depends on which subspace you choose and on what scale you're working... 'Good' may form a cluster or may not, I just don't even know how you could give evidence either way. It's unclear how you could formalize this in prac... (read more)

0TheAncientGeek
Asking "how do qualia systematically relate to physics" is not a useless question, since answering it would make physicalism knowledge with no element of commitment.
2johnsonmx
I think we're still not seeing eye-to-eye on the possibility that valence, i.e., whatever pattern within conscious systems innately feels good, can be described crisply. If it's clear a priori that it can't, then yes, this whole question is necessarily confused. But I see no argument to that effect, just an assertion. From your perspective, my question takes the form: "what's the thing that all dogs have in common?"- and you're trying to tell me it's misguided to look for some platonic 'essence of dogness'. Concepts don't work like that. I do get that, and I agree that most concepts are like that. But from my perspective, your assertion sounds like, "all concepts pertaining to this topic are necessarily vague, so it's no use trying to even hypothesize that a crisp mathematical relationship could exist." I.e., you're assuming your conclusion. Now, we can point to other contexts where rather crisp mathematical models do exist: electromagnetism, for instance. How do you know the concept of valence is more like 'dogness' than electromagnetism? Ultimately, the details, or mathematics, behind any 'universal' or 'rigorous' theory of valence would depend on having a well-supported, formal theory of consciousness to start from. It's no use talking about patterns within conscious systems when we don't have a clear idea of what constitutes a conscious system. A quantitative approach to valence needs a clear ontology, which we don't have yet (Tononi's IIT is a good start, but hardly a final answer). But let's not mistake the difficulty in answering these questions with them being inherently unanswerable. We can imagine someone making similar critiques a few centuries ago regarding whether electromagnetism was a sharply-defined concept, or whether understanding it matters. It turned out electromagnetism was a relatively sharply-defined concept: there was something to get, and getting it did matter. I suspect a similar relationship holds with valence in conscious systems. I'm

Well, you need to decide if it's worth discussing further. Anonymous internet comment sections are often very low quality (tribalism, astroturfing, trolling etc.). If you think they are just trolling you, then ignore them. If you think they want to have a discussion, then you should defend your point or concede that you just stated a layman's opinion.

Comments don't give you the space to put your own academic paper there defending what you claim.

They are right, I think. People don't have endless time to discuss things with everyone. You put a statement on the table. Now I look at it and superficially see that it's some sort of economics statement. Now should I waste my time examining your standpoint further? Can I expect to get well-founded opinions from you? Will I profit from this exchange? If I can verify that you indeed have spent a long time thinking about economics and other people have considered your economics-related thinking processes good enough to give you a diploma, then I can expect to... (read more)

0[anonymous]
You are right, many people just want to make sure they are not wasting their time, but when they come back with "if you are not an expert then your conclusion is false" I think they are showing that time was not their priority; they just wanted to state that the claim was false. If they came back with "I prefer to talk to an expert" or "how can I believe you?" it would indicate what you say above. Also, I simplified above with a simple claim as my starting statement. Normally a claim is below an article that already explains the issue, and I may affirm that with the addition of my opinion. For example, after this article about the war on science in National Geographic: http://ngm.nationalgeographic.com/2015/03/science-doubters/achenbach-text I may just write a comment: We have a huge bias towards intuitive conclusions rather than taking the time to understand the facts. Then someone might write "are you a psychiatrist?". I respond "no" and they follow with a "then you don't know what you are talking about".

Yes, exactly. Hallucinations and altered consciousness periods don't simply mean that your sane and usual rational mind is still there and it simply receives strange visual inputs as if you were enjoying a movie. Sometimes your very own thought processes are disturbed, it's not like a little rational homunculus can always remain skeptical. So if you then try to think about journals and science, it won't feel like a better alternative hypothesis. You will be genuinely confused and maybe imagine reading something in a journal that you didn't, or imagine that... (read more)

Good is a complex concept, not an irreducible basic constituent of the universe. It's deeply rooted in our human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good) etc. We can generalize from this and say that the general pattern of "good" things is that they tend to reinforce themselves. If you feel good, you'll strive to achieve the same later. If you feel bad, you'll strive to avoid feeling that in the future. So if an experience makes more of itself, it's good; otherwise it's bad.

Note t... (read more)

-1michielper
It seems to me that good and bad are actually easy to define. Minusdash gives a definition: good is a state an entity strives to obtain (again). This is a functional definition, and that should be enough. How states are physically represented in other beings is unknown and, in my opinion, irrelevant.
027chaos
Thanks, that's exactly what I was trying to say!
1johnsonmx
It seems like you're making two very distinct assertions here: first, that valence is not a 'natural kind', that it doesn't 'carve reality at the joints', and is impossible to form a crisp, physical definition of; and second, that valence is highly connected to drives that have been evolutionarily advantageous to have. The second is clearly correct; the first just seems to be an assertion (one that I understand, and I think reasonable people can hold at this point, but that I disagree with).

I don't know how limited plasticity is. Speculation: maybe if we put on some color-filter glasses that swap red with green or somehow mix up the colors, then maybe even after a long time we'd still have the experience of the original red, even when looking at green material outside. Okay, let's say it's not plastic enough and we'd still feel an internal red qualia. But in what sense?

What if the brain would truly rewire to recognize plants and moldy fruit etc. in the presence of "red" perception and the original "green" pattern would f... (read more)

0Richard_Kennaway
That would be an interesting experiment to do. We already know that people can adapt to wearing lenses that invert the picture or shift it laterally. Changing the colours while maintaining differences would be a little more complicated but quite feasible. You would need something similar to a VR headset, with a front-facing camera in front of each eye. The camera sensors would be connected, via some electronics to process the colours in any desired way, to the screen that each eye would see. This would be doable by a hobbyist with the necessary technical know-how. It might be as simple as cannibalising a couple of pocket cameras and switching some of the connections to the screen on the back.

"what are the characteristic mathematics of (i.e., found disproportionally in) self-identified pleasurable brain states?"

Certain areas of the brain get more active and certain hormones get into the bloodstream. How does this help you out?

This all seems to be about the "qualia" problem. Take another example. How would you know if an alien was having the experience of seeing the color red? Well, you could show it red and see what changes. You could infer it from its behavior (for example if you trained it that red means food - if indeed the alien eats food).

Similarly you could tell that it's suffering when it does something to avoid an ongoing situation, and if later on it would very much prefer not to go under the same conditions ever again.

I don't think there is anything special... (read more)

0johnsonmx
I see the argument, but I'll note that your comments seem to run contrary to the literature on this: see, e.g., Berridge on "Dissecting components of reward: ‘liking’, ‘wanting’, and learning", as summed up by Luke in The Neuroscience of Pleasure. In short, behavior, memory, and enjoyment ('seeking', 'learning', and 'liking' in the literature) all seem to be fairly distinct systems in the brain. If we consider a being with a substantially different cognitive architecture, whether through divergent evolution or design, it seems problematic to view behavior as the gold standard of whether it's experiencing pleasure or suffering. At this point it may be the most practical approach, but it's inherently imperfect. My strong belief is that although there is substantial plasticity in how we interpret experiences as positive or negative, this plasticity isn't limitless. Some things will always feel painful; others will always feel pleasurable, given a not-too-highly-modified human brain. But really, I think this line of thinking is a red herring: it's not about the stimulus, it's about what's happening inside the brain, and any crisp/rigorous/universal principles will be found there. Is valence a 'natural kind'? Does it 'carve reality at the joints'? Intuitions on this differ (here's a neat article about the lack of consensus about emotions). I don't think anger, or excitement, or grief carve reality at the joints- I think they're pretty idiosyncratic to the human emotional-cognitive architecture. But if anything about our emotions is fundamental/universal, I think it'd have to be their valence.

There's also a linguistic issue here. The English "and" doesn't simply mean mathematical set theoretical conjunction in everyday speech. Indeed, without using words like "given" or "suppose" or a long phrase such as "if we already know that", we can't easily linguistically differentiate between P(Y | X) and P(Y, X).

"How likely is it that X happens and then Y happens?", "How likely is it that Y happens after X happened?", "How likely is it that event Y would follow event X?". All these are ambiguous in everyday speech. We aren't sure whether X has hypothetically already been observed or it's a free variable, too.

0Brilliand
In my experience, the English "and" can also be interpreted as separating two statements that should be evaluated (and given credit for being right/wrong) separately. Under that interpretation, someone who says "A and B" where A is true and B is false is considered half-right, which is better than just saying "B" and being entirely wrong. Though, looking back at the original question, it doesn't appear to use the word "and", so problems with that word specifically aren't very relevant to this article.

Or maybe it's just outrageous to ask for $40 when it's clearly possible to sell it for $20. So you kind of punish the shop that asks for $40 because you see them as dishonest and morally repulsive. Sometimes you also have to pay attention to what behavior you encourage with your actions, not only to the immediate dollar value.

Why don't Christmas tree sellers sell the last, leftover Christmas trees for much cheaper, right before Christmas? Because then lots of people would just wait until that time and then buy it cheap. If buyers know that the seller will rat... (read more)

Propositional knowledge and introspection may be analogous to running a virtual machine in user-space, in which you can instantiate the redness object. But that's not a redness object in the real (non-virtual) program. The "real" running program only has user-space objects that are required for the execution of the virtual machine (virtual registers, command objects, etc).

Desiring a mysterious explanation is like wanting to be in a room with no people inside. Once you explain it, it's not mysterious any more. The property depends on your actions: emptiness is destroyed by you entering; mysteriousness is destroyed by you explaining it. Just an alternative to the map-territory way of putting it.

minusdash-30

Control theory is an actual, real engineering subject, not just more of this self-help, psychology, wow, mind-blown, feel-good BS.

8Vaniver
Yep! That's one of the reasons I find the application of controls to psychology as interesting as I do. I've taken about two years of controls classes at the graduate level, with a heavy emphasis on their use in aerospace (as my examples might suggest), though I don't yet use it in my professional work (which is more pointed towards numerical optimization and machine learning).
6Richard_Kennaway
Given that the article begins by linking the Wikipedia article on control theory in engineering, and goes on to give a brief overview of what control theory is in engineering, I cannot see what your comment has to do with the article.

I guess you can't want to want stuff. When you genuinely want something (not prestige but an actual goal) you'll easily be in the "flow experience" and lose track of time and actually progress toward the goal without having to force yourself. Actually you have to force yourself to stop in order to sleep and eat because you'd just do this thing all day if you could! Find the thing where you slip into flow easily and do the most efficient thing that's at the same time quite similar to this activity.

Somewhat related: I think we do have a 3D map of the environment even for things that we aren't looking at at the moment. For example I feel as if I had a device in my brain that keeps track of which people are in which parts of the house right now (or where some emotionally-loaded objects are). I don't have to exert conscious effort specifically for this.

Another thing: it's interesting to think about why we can see dots and lines and shapes at all. By this I mean, why do these low-level things reach our conscious awareness? You aren't consciously aware of... (read more)

0CCC
I do not appear to have that - or, at least, I don't get much use out of it if it's in here. While I can keep track of who is where in the house, I do so more in the form of a list of Last Known Locations, not in any sort of map (2D or 3D). Possibly related - I am notorious for getting lost easily while driving, and can get very badly turned around if I am merely a short distance away from where I should be. I tend to navigate by memorising a route from A to B, as a list of directions (turn left at the third corner, then it's the fourth street on the right...) and then I get into trouble if I can't follow that route. (Nowadays, I tend to lean heavily on GPS when going to new places).

I remember not really "getting" these illusions when I was a kid. I just didn't find them interesting, it looked too straightforward.

The idea of a "2D screen inside our head" is not our natural intuition. Before learning about these things, I just felt that I simply perceive the environment around me. I don't see a flat pixel grid in front of me when I walk around; I rather have a model of the environment that I continuously update, and I perceive the objects "from where they are", just like I feel leg pain as if it were "... (read more)

0CCC
I don't see a flat pixel grid when I walk around, either; I see a 3D scene (generally only where I'm currently looking; I mean, I can recall where things are when I'm not looking at them, but they're not in my current visual model, that memory has to be stored elsewhere). And yet, a lot of optical illusions work for me, because (as in the case of the illusion in this article) the drawing is close enough to what the reality looks like to fool the "scene reconstruction" module in my brain, and I reconstruct the relevant 3D scene when I look at it. Some optical illusions (such as this one) work by being able to fool my scene reconstruction module in two different ways...
minusdash130

That's a triad too: naive instinctive signaling / signaling-aware people disliking signaling / signaling is actually a useful and necessary thing.