All of Bobertron's Comments + Replies

I understand your post to be about difficult truths related to politics, but you don't actually give examples (except "what Trump has said is 'emotionally true'") and the same idea applies to simplifications of complex material in science etc. I just happened upon an example from a site teaching drawing in perspective (source):

Now you may have heard of terms such as one point, two point or three point perspective. These are all simplifications. Since you can have an infinite number of different sets of parallel lines, there are technically an i

... (read more)

Suppose X is the case. When you say "X" your interlocutor will come to believe Y, which is wrong. So, even though "X" is the truth, you should not say it.

Your new idea as I understand it: Suppose saying "Z" will lead your interlocutor to believe X. So, even though saying "Z" is, technically, lying, you should say "Z" because the listener will come to have a true belief.

(I'm sorry if I misunderstood you or you think I'm being uncharitable. But even if I misunderstood I think others might misunderstand in a similar way, so... (read more)

0Bound_up
I think you've hit upon one of the side effects of this approach. All the smart people will interpret your words differently and note them to be straightforwardly false. You can always adjust your speaking to the abilities of the intelligent and interested, and they'll applaud you for it, but you do so at the cost of reaching everybody else.

When playing around in the sandbox, Simpleton always beat Copycat (using default values, with a population of only Simpleton and Copycat). I don't understand why.

1LawrenceC
The reason for this is the 5% chance of mistakes. Copycat does worse against both Simpleton and Copycat than Simpleton does against itself.
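This can be checked with a small simulation. A sketch, with assumptions: the payoffs below follow "The Evolution of Trust" defaults (cooperating costs 1 point and gives the other player 3), Copycat is tit-for-tat, and Simpleton is win-stay-lose-shift.

```python
import random

def copycat(my_last, their_last):
    """Tit-for-tat: copy the opponent's previous move."""
    return their_last

def simpleton(my_last, their_last):
    """Win-stay, lose-shift: repeat your move if they cooperated, else switch."""
    return my_last if their_last else not my_last

def average_payoffs(strat_a, strat_b, rounds=20000, mistake=0.05, seed=0):
    """Average per-round payoffs when each move is flipped with prob. `mistake`."""
    rng = random.Random(seed)
    # assumed payoffs: cooperating costs 1 and gives the other player 3
    payoff = lambda me, other: (3 if other else 0) - (1 if me else 0)
    a_prev = b_prev = True  # both start by cooperating
    total_a = total_b = 0
    for _ in range(rounds):
        a = strat_a(a_prev, b_prev)
        b = strat_b(b_prev, a_prev)
        if rng.random() < mistake:  # occasional slip of the hand
            a = not a
        if rng.random() < mistake:
            b = not b
        total_a += payoff(a, b)
        total_b += payoff(b, a)
        a_prev, b_prev = a, b
    return total_a / rounds, total_b / rounds
```

With `mistake=0` both pairings settle into permanent cooperation (2 points per round each). With the 5% error rate, a single slip sends Copycat self-play into long retaliation echoes, while Simpleton self-play recovers within about two rounds, so Simpleton's self-play score stays clearly higher.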

"Just being stupid" and "just doing the wrong thing" are rarely helpful views

I agree. What I meant was something like: if the OP describes a skill, then the first problem (the kid who wants to be a writer) is so easy to solve that I feel I'm not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it's actually solvable using the described skill.

I think this misses the point, and damages your "should" center

Potentially, yes. I'm deliberately proposing something that ... (read more)

Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together and the kid is just being stupid. At least at first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the star... (read more)

1Vaniver
"Just being stupid" and "just doing the wrong thing" are rarely helpful views, because those errors are produced by specific bugs. Those bugs have pointers to how to fix them, whereas "just being stupid" doesn't. I think this misses the point, and damages your "should" center. You want to get into a state where if you think "I should X," then you do X. The set of beliefs that allows this is "Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing." (You can see how updating the first one from "Smoking isn't that bad for my health" to its current state could flip the second belief, but that is determined by a trusted process instead of health getting an undeserved veto.)
0Kaj_Sotala
If you think that successful writers are talented, and that talent means fewer misspellings, then misspelling things is evidence of you not going to be a successful writer. (No, I don't think this is a very plausible model, but it's one that I'd imagine could be plausible to a kid with a fixed mindset and who didn't yet know what really distinguishes good writers from the bad.)

Here are some things that I, as an infrequent reader, find annoying about the LW interface.

  • The split between main and discussion doesn't make any sense to me. I always browse /r/all. I think there shouldn't be such a distinction.
  • My feed is filled with notices about meetups in faraway places that are pretty much guaranteed to be irrelevant to me.
  • I find the most recent open thread pretty difficult to find in the sidebar. For a minute I thought it just wasn't there. I'd like it if the recent open thread and rationality quotes were stickied at the top of r/discussion.
1MrMind
To alleviate this partly, you could search for the open_thread tag; it's quite rare for an open thread not to have it.

I don't get this (and I don't get Benquo's OP either. I don't really know any statistics. Only some basic probability theory.).

"the process has a 95% chance of generating a confidence interval that contains the true mean". I understand this to mean that if I run the process 100 times, 95 times the resulting CI contains the true mean. Therefore, if I look at a random CI among those 100, there is a 95% chance that the CI contains the true mean.
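That frequentist reading can be sanity-checked with a quick simulation. A sketch under assumed conditions: normally distributed data with a known standard deviation, so each interval is the sample mean ± 1.96·sd/√n.

```python
import math
import random

def ci_coverage(true_mean=0.0, sd=1.0, n=30, trials=2000, seed=0):
    """Fraction of simulated 95% confidence intervals that contain the true mean."""
    rng = random.Random(seed)
    half_width = 1.96 * sd / math.sqrt(n)  # known-sd z-interval
    hits = 0
    for _ in range(trials):
        sample_mean = sum(rng.gauss(true_mean, sd) for _ in range(n)) / n
        if sample_mean - half_width <= true_mean <= sample_mean + half_width:
            hits += 1
    return hits / trials
```

The fraction comes out close to 0.95: the 95% is a property of the interval-generating procedure, and a CI picked at random from many runs of that procedure contains the true mean about 95% of the time.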

"Effective self-care" or "effective well-being".

Okay. The "effective" part in "Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality (like in LW or CFAR), is that it's specific enough that it allows a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk and a few others.

What I'd imagine "Effective self-care" would be about is su... (read more)

2DataPacRat
Consider this line to have gotten an extra thumbs-up from me. :) The fact that you have highlighted the differences between these two closely-related concepts, which I hadn't managed to think through on my own, means this thread has been worthwhile whatever the result of the poll might be.

None of this is so much my strongly held belief as my attempt to find flaws with the "nuclear blackmail" argument.

I don't understand. Could you correct the grammar mistakes or rephrase that?

The way I understand the argument, it isn't that the status quo in the level B game is perfect. Nor is it that Trump is a bad choice because his level B strategy takes too much risk. I understand the argument as saying: "Trump doesn't even realize that there is a level B game going on, and even when he finds out he will be unfit to play in that game".

As I understand it you are criticizing Yudkowsky's ideology. But MrMind wants to hear our opinion on whether or not Scott and Yudkowsky's reasoning was sound, given their ideologies.

8WalterL
I'm not trying to criticize Yudkowsky's ideology. It seems to be basically Sailor Moon's. I wish him the best, and will benefit vastly if it works out for him. I'm saying that when he talks about the people who supported Trump ("People who voted for Trump are unrealistic optimists"), he is making a factual error.

I read those two books after LW. Assuming you have read the sequences: it wasn't a total waste, but from my memory I would recommend What Intelligence Tests Miss only if you have an interest specifically in psychology, IQ or the heuristics and biases field. I would not recommend it simply because you have a casual interest in rationality and philosophy ("LW-type stuff") or if you've read other books about heuristics and biases. The Robot's Rebellion is a little more speculative and therefore more interesting. The Robot's Rebellion and What Intelligence Tests Miss also have a significant overlap in covered material.

I haven't read "Good and Real" or "Thinking, Fast and Slow" yet, because I think that I won't learn something new as a long term Less Wrong reader. In the case of "Good and Real", part of it seems to be about physics and I don't think I have the physics background to profit from that (I feel a refresher on high school physics would be more appropriate for me). In the case of "Thinking, Fast and Slow", I have already read books by Keith Stanovich (What Intelligence Tests Miss and The Robot's Rebellion) and some chapters of academic books edited by Kahneman.

Does anyone think those two books are still worth my time?

0Viliam
It also depends on how fast you read. And whether you only want information for yourself, or possibly to educate other people (because telling other people to read something in Kahneman will seem high-status, while telling them to read the Sequences may feel cultish to them). By the way, have you read Stanovich before or after LW? Was that worth your time?
6Viliam
Yep. I'll try to make a short summary of some arguments in the article and comments: Why people want to be mean: * it signals strength (in the ancient environment it shows you are not afraid of being hit in return); * it signals intellectual superiority e.g. in the form of sarcasm; * if you already have a reputation, you can win debates quickly; * it helps you put distance towards people you want to avoid. What are the negative impacts of meanness: * you may be wrong, but you have already proposed a solution ("the other person is stupid"); * if there is a misunderstanding, a hostile reaction lowers the chance of explaining or increases the time needed, compared with a polite request for clarification; * people with different experiences will seem especially wrong to you, so this effect will be even stronger there; * you spread bad mood, which harms curiosity and exploration; * you signal that you are bad at cooperation, bad at managing your emotions, not caring about other people; * people stop listening to you and start avoiding you; * you lose possible allies.

Me, too! I've taken the survey and would like to receive some free internet points.

4buybuydandavis
Much more handy to have one link to rule them all. Thanks.

"Verständnis" seems totally wrong to me. It's from the verb "verstehen" (to understand, to comprehend). It usually means "understanding" ("meinem Verständnis nach" -> "according to my understanding"). Maybe if you use it in a sentence?

I think "Vermutung" (and its synonyms) is pretty much what I was looking for. Maybe it's even better than "belief" in some ways, since "belief" suggests a higher degree of confidence than "Vermutung" does.

"unterstützen" (to... (read more)

0Tem42
I don't usually make a mental distinction between understanding and belief, but that is probably not common.

A different German speaker here.

In English you have a whole cloud of related words: mind, brain, soul, I, self, consciousness, intelligence. I don't think it's much of a problem that German does not have a perfect match for "mind". The "mind-body problem" would be "Leib-Seele-Problem", where "Seele" would usually be translated as "soul". The German wikipedia page for philosophy of mind does use the English word "mind" once to distinguish that meaning of "Geist" from a different concept fr... (read more)

0Tem42
Native English speaker, so I may be way off... but surely 'beliefs' would be 'Verständnis'? And for 'evidence', wouldn't you usually use a verb ('to provide evidence') instead of a noun, something like 'unterstützen'?

I intuitively feel that there really are objective morals (or: objective mathematics, actual free will, tables and chairs, minds).

Therefore, there really are objective morals (etc.).

"Morals" is just a word. But unlike some other words, it's not 100% clear to me what it means. There is no physical entity that "morals" clearly refers to. There is no agreed upon list of axioms that define what "morals" is. That's why, to me, "there are objective morals" doesn't feel entirely like a factual statement.

I might jus... (read more)

Naruto is the opposite of Tsuyoku Naritai. It's the story of "everyone had something to protect and practiced like mad, but none of it made a huge difference and most everyone would have been about as powerful anyway".

But the series clearly wants to be "Tsuyoku Naritai". The good guys all value hard work. Maybe the show is hypocritical, then.

I'm not sure if the message that sticks with the people who watch Naruto is what the characters say (work hard) or how the show actually develops (be born special).

I actually really like that you have to spend a resource to learn new information and that the score is dependent on luck. I.e. you use limited resources to optimize the gamble you are making. That seems like a very good description of how life works, only, it's all transparent and quantified in your game.

Some suggestions:

  • In the tutorial, why do I first get to read a description of a picture and then I'm presented with the picture? Obviously, it should be the other way around.
  • You should be able to progress the text by mouse.
  • It should be easier to disti
... (read more)
0Kaj_Sotala
Thanks! I agree with all of your points and had considered implementing many of them myself: unfortunately, while working on this project I learned that I hate UI programming, and finally got to the point where I just wanted to put out a not-too-totally-horrible prototype and be done with it. :( The source code was written to employ a bit of an MVC architecture, with the intention of making it easier for other people to implement a better UI afterwards... but in retrospect just rewriting the whole thing under a better platform than Java might be the best approach, if anyone wants to do that.

change your mind, get a cookie

admitting you're wrong = winning/learning

conservation of expected evidence (add formula)

The path to truth is a random walk

discussions are random walks

what is true is already so

rationality: outcomes > rituals of thought

what can be destroyed by truth, should be

update beliefs incrementally

beliefs should pay rent

the cat's alive, curiosity got framed

optimize everything

delta knowledge = surprise

minimize future surprise

A diagram like this with some actual data e.g. about P(autism|vaccine) or P(violence|video games).

A matrix repre... (read more)

0Gleb_Tsipursky
Thank you, these are some great ideas. For the broad audience, I think "What Is True Is Already So" and something about yay for changing your mind would work especially well.

I really like "The facts don't know whose side they're on", though the other two might require less Less Wrong knowledge.

0Gleb_Tsipursky
The first set of rationality-themed merchandise is ready! Thanks for your suggestions :-)
0Gleb_Tsipursky
I like that too, but it's a bit long - is there a way to shorten the idea to get it across effectively?

following up to my own post: I was sceptical because the examples AshwinV provided were examples that lend themselves to punishing oneself and using guilt, shame etc. But by flipping the title of the post to "Make good habits the heroes" all that criticism becomes irrelevant and AshwinV's idea remains the same. I think that is very related to the idea of identity, which has been discussed previously here on lesswrong. Use Your Identity Carefully is a good and relevant example.

First, your markup is broken. I can see the link-syntax, instead of the links. Also, the first link is to an article by Phil Goetz, not Eliezer Yudkowsky.

Now about the actual content. I'm all for trying to use one's natural tendencies, instead of just trying to compensate for them. But I'm critical of the concrete examples you gave. What you are trying to do seems to be to motivate yourself through shame and guilt. And no one seems to be in favour of that. Some reasons why I think it's a bad idea:

  1. I believe you train yourself to be judgemental, not just ab
... (read more)
2[anonymous]
I think you're actually imagining this technique differently than I am. In my view, this actually removes pain and guilt. Instead of saying "oh, I was lazy", you say "oh no, the mustachioed villain Mr Lazy Pants is trying to attack again" and don't internalize that guilt. Likewise, you can imagine Mr Lazy Pants attacking other people as well, which would cause you to be less judgemental of them, as they have to deal with the same evil villains that you do.
0AshwinV
Thanks for the input! I'm not able to correct the hyperlink part, but I did change the name to Phil Goetz as was due.
0NancyLebovitz
Nitpick: I think you mean transgress, not digress.

I've heard of the controversy. I think it was mentioned in a link post on slatestarcodex, and obviously on GiveWell's blog.

the community seems to be comprehensively inept, poor at marketing, extremely insular, methodologically unsophisticated but meticulous, transparent and well-intentioned

I find it stylistically strange to have a long list of negative adjectives end with two positive ones (transparent and well-intentioned are good things, right?) without any explanation. Wouldn't one say something like "These things suck:...., but on the good si... (read more)

0[anonymous]
It's not mentioned anywhere on SSC AFAIK. I wrote this post because it's absent in the rationality sphere. GiveWell's treatise is quite pathetic honestly, but I won't be posting a critique of it because (1) it would shoot down a high quality organisation and may do more harm than good and (2) it would be an effortful undertaking that I would prefer to publish under a more reputable pseudonym. That's very obvious though. Deworming constitutes roughly half the suggested charities of most EA orgs, so I think it's fair to say methodological issues reflect on the whole movement.
3Vaniver
I think the "but" was the transition, and that "meticulous" was also intended positively. I was under the impression that specialists worried that mass deworming leads to resistance, by standard evolutionary logic, and so argue that the deworming initiatives are committing a long-term harm for nonexistent short-term gains.

Sounds nice. Making predictions about personal events makes more sense to me than predicting e.g. elections or sport events (because a) I don't know anything about them, and b) I don't care about them). But I don't like the idea of making them (all) public, like on PredictionBook. Though a PredictionBook integration sounds like an obvious fancy feature.

And I liked what I saw the one second I could use the app ;-)

After installing, it crashed pressing "save" on the first prediction. Now it crashes right on startup. I get to see the app for a moment,... (read more)

1Gust
Check the link below, v0.2. Should be working now! https://www.dropbox.com/s/59redws46ncdiax/predict_v0.2.apk?dl=0
0Gust
This is weird. I'll test to see if I can reproduce and report back (hopefully with a fix).

Sounds like it's the same or similar to what some modern practicing stoics do.

No, your real friend is the one you helped. The friend that helps you in a counterfactual situation where you are in trouble is just in your head, not real. Your counterfactual friend helps you, but in return you help your real friend. The benefit you get is that once you really are in trouble, the future version of your friend is similar enough to the counterfactual friend that he really will help you. The better you know your friend, the likelier this is.

I'm not saying that that isn't a bit silly. But I think it's coherent. In fact it might be just a geeky way to describe how people often think in reality.

0Lumifer
That seems like a remarkably convoluted way to describe a trivial situation where you help a member of your in-group.

I just read a book on behavior and that's the kind of thing I would expect to read in that book: Attention is generally a reinforcer. Swearing can be reinforced by attention. When you stop paying attention to swearing, swearing stops (extinction). Of course that will only stop the child from swearing when talking to you, not when they're in school.

For the German speakers, this is the introductory paragraph I already wrote for the blog: [...]

I'm not much of a writer, and this might not be the final version, but I still like giving advice.

I'd really like to see some citations and references here. Are all those opinions based only on you own observations or also from things you have read? Since I don't have children, I'm not interested in the answer to that question, but your readers will be.

Werte, die während der Kindheit anerzogen wurden, werden während der Pubertät auch durch die natürliche Gehi

... (read more)
3Gunnar_Zarncke
Thank you for your feedback. Yes, I have quite a few references to back that up. I didn't give them because they are unordered and incomplete and I just wrote the text as a first draft. I'm unclear about how to include them. Options I consider: * references only via linking the corresponding passages * Inline references (short with links) * references at the end * writing separate posts with a focus on the particular referenced topic. I'd really prefer the last one as it'd also bridge the inferential gap behind it and I started to structure some posts in that way, but it is also the most complex approach.

I was wondering why. It doesn't seem all that useful, unless you are abnormally bad at color perception or you have a job or hobby that somehow needs good color perception (something in art or design?). I suppose it's fun and interesting to see how well that kind of thing can be trained, and how it changes your experience, but I was wondering if there was more to it.

I have written about this on LW in the past.

Here and here.

Can you tell me something about your color perception deck? Are you trying to train yourself to be better at distinguishing (and naming?) colours for some reason?

1ChristianKl
Yes, I train color distinctions. Every card has two colors and shows them plus a color name; the user then has to decide which color Anki displayed. Over time the distance between the colors goes down, and I pick colors that are nearer to each other. I have written about this on LW in the past.

I like the animation and the voice, but I dislike the text. I don't need it and it really distracts from the animations. And if I did need to read along with what you say, I think YT has a subtitle feature that would be much less distracting and could be turned off. I suppose I've seen videos using the style you attempt here, but I'm not sure I like them, either, and they typically use text only, while you also use pictures.

Oh, and I suppose you would be faster in producing those videos if you were to give up on the text.

There is this idea (I think it's a stoic one) that's supposed to show that no one ever has anything to worry about. It goes like this:

Either you can do something about it, in which case you don't have to worry, you just do it. Or there is nothing you can do, in which case you can simply accept the inevitable.

It throws out the possibility that you don't know whether you can do anything (and what precisely) or not. As I see it, worry is precisely the (sometimes maladaptive) attempt to answer that.

Every false dichotomy is another example of this failure mode (if I understood you correctly).

1MaximumLiberty
I think there is a fatalistic prayer to that effect: http://en.wikipedia.org/wiki/Serenity_Prayer. It kind of depends on how you read it, though. Max L.

True, the idea that it's a habit is, in a way, boring.

But when I read that industriousness and creativity can be learned as described in the learned industriousness wikipedia article, I was quite surprised. So the idea isn't boring to me at all.

I know it's just an example, but concerning

I find it hard to do something I consider worthwhile while on a spring break

maybe you have learned to be lazy on spring break? I mean, the theory that it's a habit seems more prosaic to me than being tired or something about "activation energy".

2BT_Uytya
Good call! Yes, your theory is more prosaic, yet it never occurred to me. I wonder whether purposefully looking for boring explanations would help with that. Also, your theory is actually plausible, fits with some of my observations, so I think that I should look into it. Thanks!

Such a person would probably strongly [missing verb?] rationality, rationalists, and the complex of ideas surrounding rationality, for probably understandable reasons

Since I kind of like your comment, I'd like to know how that sentence should have sounded. Strongly dislike, hate, mistrust?

0devas
All three options fit the bill, actually, but I was going for strongly dislike. Man, I must have been more tired than I realized to miss a whole word like that.

The "A=a" stands for the event that the random variable A takes on the value a. It's another notation for the set {ω ∈ Ω | A(ω) = a}, where Ω is your probability space and A is a random variable (a mapping from Ω to something else, often R^n).

Okay, maybe you know that, but I just want to point out that there is nothing vague about the "A=a" notation. It's entirely rigorous.

4IlyaShpitser
I think the grandparent refers to the fact that in the context of causality (not ordinary probability theory) there is a distinction between ordinary mathematical equality and imperative assignment. That is, when I write a structural equation model: Y = f(A, M, epsilon(y)) M = g(A, epsilon(m)) A = h(epsilon(a)) and then I use p(A = a) or p(Y = y | do(A = a)) to talk about this model, one could imagine getting confused because the symbol "=" is used in two different ways. Especially for p(Y = y | do(A = a)). This is read as: "the probability of Y being equal to y given that I performed an imperative assignment on the variable A in the above three line program, and set it to value a." Both senses of "=" are used in the same expression -- it is quite confusing!
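The two senses of "=" can be made concrete in code. A toy sketch: the particular functions standing in for f, g, and h below are made up for illustration, but the shape mirrors the three-line program above, with do(A = a) implemented as an imperative override of A's equation.

```python
import random

def sample(rng, do_a=None):
    """One draw (the value of Y) from a toy structural model.

    The three lines mirror the comment's program:
        A = h(eps_a)      (or an imperative override, for do(A = a))
        M = g(A, eps_m)
        Y = f(A, M, eps_y)
    """
    eps_a, eps_m, eps_y = rng.random(), rng.random(), rng.random()
    a = do_a if do_a is not None else int(eps_a < 0.5)  # A = h(eps_a)
    m = int(eps_m < (0.8 if a else 0.2))                # M = g(A, eps_m)
    y = int(eps_y < (0.9 if a and m else 0.1))          # Y = f(A, M, eps_y)
    return y

def p_y_given_do(a, trials=20000, seed=0):
    """Estimate p(Y = 1 | do(A = a)).

    "=" is imperative assignment inside do(...), but ordinary equality
    in "Y = 1" -- exactly the two readings discussed above.
    """
    rng = random.Random(seed)
    return sum(sample(rng, do_a=a) for _ in range(trials)) / trials
```

In this toy model p(Y = 1 | do(A = 1)) comes out around 0.74 and p(Y = 1 | do(A = 0)) around 0.10, so the interventional distributions differ even though nothing in the notation p(Y = y | do(A = a)) looks different from an ordinary conditional at first glance.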

Societies often punish people that refuse to help. Why not consider people that break the law as defectors?

In fact, that would be an alternative (and my preferred) way to fix your second and third objections to value ethics. Consider everyone who breaks the laws and norms within your community as a defector. Where I live, torture is illegal and most people think it's wrong to push the fat man, so pushing the fat man is (something like) breaking a norm.

Have you read Whose Utilitarianism?? Not sure if it addresses any of your concerns, but it's good and about utilitarianism.

So I'm still a defector and society would do well to defect against me in proportion

Which, of course, they wouldn't do. They wouldn't have much sympathy for the guy sitting on the bear repellant, who chose not to help. In fact, refusing to help can be illegal.

I suppose in your terms, you could say that the guy-sitting-on-the-repellant is a defector, therefore it's okay to defect against him.

0[anonymous]
No. My point is that the guy is not a defector. He merely refuses to cooperate which is an entirely different thing. So I am the defector whether or not society chooses to defect in return. And I really mean that society would do well to defect against me proportionally in return in order to discourage defection. Or to put it differently if I want to help and the guy does not, why should he have to bear (no pun intended) the cost and not me?

Ok, I didn't understand the post. You are saying that the blue lines don't have any direction, and then you go on to paint (directed) arrows over them. Is this a mistake? Did you want to make the green arrows bidirectional or something like that? I suppose that not only does the blue line not have a direction, it also doesn't have an order? E.g. could you have written from top to bottom "Psychology Physiology Chemistry Morality Physics Neuroscience"? It's clearly no accident that you wrote those "sets of things that exist" in tha... (read more)

As I understand it, a bodhisattva also enters nirvana eventually, so I don't see the hypocrisy.

1Rob Bensinger
Sort of. There's usually taken to be an infinite number of beings a bodhisattva needs to save before leaving samsara; bodhisattvas aren't supposed to leave anybody behind, and the Buddhist cosmos is very, very big.

There are some blogs mentioned on the wiki.

Keeping money for yourself can be thought of as a [small] charity

Oh, interesting. I assumed the reason I keep anything beyond the bare minimum to myself is that I'm irrationally keeping my own happiness and the well-being of strangers as two separate, incomparable things. I probably prefer to see myself as irrational compared to seeing myself as selfish.

The concept I was thinking of (but didn't quite remember) when I wrote the comment was Purchase Fuzzies and Utilons Separately.

Most people set aside an amount of money they spend on charity, and an amount they spend on their own enjoyment. It seems to me that whatever reasoning is behind splitting money between charity and yourself can also support splitting money between multiple charities.

6DanielLC
There is a reason that I probably should have made more clear in the article. I'll go back and fix it. The reasoning assumes that your donation is small compared to the size of the charity. For example, donating $1,000 a year to a charity that spends $10,000,000 a year. Keeping money for yourself can be thought of as a charity. Even if you're partially selfish and you value yourself as a thousand strangers, the basic reasoning still works the same. The reason you keep some for yourself is that it's a small charity. The amount you donate makes up 100% of its budget. As a result, it cannot be approximated as a linear function. A log function seems to work better. I should add that there is still something about that that's often overlooked. If you're spending money on yourself because you value your happiness more than others, the proper way to donate is to work out how much money you have to have before the marginal benefit to your happiness is less than the amount of happiness that would be created by donating to others, and donating everything after that. There are other reasons to keep money for yourself. Keeping yourself happy can improve your ability to work and by extension make money. The thought of having more money can be incentive to work. Nonetheless, I don't think you should be donating anywhere near a fixed fraction of your income. I mean, it's not going to hurt much if you decide to only donate 90% no matter how rich you get, but if you don't feel like you can spare more than 10% now, and you become as rich as Bill Gates, you shouldn't be spending 90% of your money on yourself.

Ergo, if you're risk-averse, you aren't a rational agent. Is that correct?

5Squark
Depends how you define "risk averse". When utility is computed in terms of another parameter, diminishing returns result in what appears like "risk averseness". For example, suppose that you assign utility 1u to having 1000$, utility 3u to having 4000$ and utility 4u to having 10000$. Then, if you currently have 4000$ and someone offers you a lottery in which you have a 50% chance of losing 3000$ and a 50% chance of gaining 6000$, you will reject it (in spite of an expected gain of 1500$) since your expected utility for not participating is 3u whereas your expected utility for participating is 2.5u.
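The numbers in this example can be worked through directly (values copied from the comment above):

```python
utility = {1000: 1, 4000: 3, 10000: 4}  # wealth in $ -> utility in u
wealth = 4000
lottery = [(0.5, -3000), (0.5, 6000)]   # (probability, change in wealth)

expected_wealth = sum(p * (wealth + d) for p, d in lottery)
eu_play = sum(p * utility[wealth + d] for p, d in lottery)
eu_pass = utility[wealth]

print(expected_wealth)   # 5500.0 -> an expected *gain* of $1500
print(eu_play, eu_pass)  # 2.5 3  -> declining the lottery maximizes utility
```

So a utility function with diminishing returns rejects a bet with positive expected money, with no separate "risk aversion" term needed.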

Reading that ZFC has countable models spooked me. How can uncountable sets exist and an axiomatization of set-theory have a countable model? For a fraction of a second it made me doubt mathematics was real. For a few seconds after that I was thinking of giving up on understanding maths, or at least logic. Then I realized that there had to be a trick about it that made everything make sense again.

7[anonymous]
There is, yeah. If we look at a countable model of ZFC, and examine one of the sets that the model claims to be uncountable, we'll find that the set is actually countable. We'll also find, however, that the model doesn't contain any surjective function from the natural numbers to that set. So the set will be "uncountable in the model", in a sense. A fact that I find spooky is that there is no computable set of first-order axioms that uniquely defines the natural numbers. You can only define them uniquely in second-order logic. But second-order logic doesn't seem to have a well-defined semantics. (If I'm not mistaken, the continuum hypothesis can be written in SOL without having to use ZFC or anything like that. But the continuum hypothesis doesn't have a well-defined answer.) These two facts, together, suggest that the natural numbers aren't actually well-defined at all. And this would mean that provability in a formal system isn't well-defined, either.

I think that fits what I've read about worry.

From Chapter nine of "Resilience–How to Survive and Thrive in Any Situation A Teach Yourself Guide (Teach Yourself: Relationships & Self-Help) by Donald Robertson":

When we worry, we perceive danger, feel anxious, and naturally try to problem-solve in order to remove the perceived threat and achieve a sense of safety. As long as we believe future problems are threatening and remain unsolved there’s a tendency for our attention to automatically return to them as ‘unfinished business’, which partial

... (read more)