Looking for advice on something it seems LW can help with.
I'm currently part of a program that trains highly intelligent people to be more effective, particularly with regard to scientific research and effecting change within large systems of people. I'm sorry to be vague, but I can't actually say more than that.
As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?
Naturally, I'm of the ...
It also depends on the jeans. Some jeans are, for some reason, more likely to smell after being worn just once. I have no idea why, but several people I know have corroborated this independently.
Map and territory - why is rationality important in the first place?
Alright, that works too. We're allowed to think differently. Now I'm curious, could you define your way of thinking more precisely? I'm not quite sure I grok it.
So, essentially, there isn't actually any way of getting around the hard work. (I think I already knew that and just decided to go on not acting on it for a while longer.) Oh well, the hard work part is also fun.
This appears to be a useful skill that I haven't practiced enough, especially for non-proof-related thinking. I'll get right on that.
reads the first essay and bookmarks the page with the rest
Thanks for that, it made for enjoyable and thought-provoking reading.
I don't really have good definitions at this point, but in my head the distinction between verbal and nonverbal thinking is a matter of order. When I'm thinking nonverbally, my brain addresses the concepts I'm thinking about and the way they relate to each other, then puts them to words. When I'm thinking verbally, my brain comes up with the relevant word first, then pulls up the concept. It's not binary; I tend to put it on a spectrum, but one that has a definite tipping point. Kinda like a number line: it's ordered and continuous, but at some point you cross zero and switch from positive to negative. Does that even make sense?
Right, that makes much more sense now, thanks.
One of my current problems is that I don't understand my brain well enough for nonverbal thinking not to turn into a black box. I think this might be a matter of inexperience, as I only recently managed intuitive, nonverbal understanding of math concepts, so I'm not always entirely sure what my brain is doing. (Anecdotally, my intuitive understanding of a problem produces good results more often than not, but any time my evidence is anecdotal there's this voice in my head that yells "don't update on that, ...
I'd say that my thinking about mathematics is just as verbal as any other thinking.
Just to clarify, because this will help me categorize information: do you not do the nonverbal kind of thinking at all, or is it all just mixed together?
Could you please explain what you mean by "correct" and "accurate" in this case? I have a general idea, but I'm not quite sure I get it.
I only got to a nonverbal level of understanding of advanced math fairly recently, and the first time I experienced it I think it might have permanently changed my life. But if you dream about math...well, that means I still have a long way to go and deeper levels of understanding to discover. Yay!
Follow-up question (just because I'm curious): how do you approach math problems differently when working on them from the angle of engineering, as opposed to pure math?
I have a question for anyone who spends a fair amount of their time thinking about math: how exactly do you do it, and why?
To be specific, I've tried thinking about math in two rather distinct ways. One is verbal and involves stating terms, definitions, and the logical steps of inference I'm making in my head or out loud, as I frequently talk to myself during this process. This type of thinking is slow, but it tends to work better for actually writing proofs and when I don't yet have an intuitive understanding of the concepts involved.
The other is nonverbal ...
You're right, my apologies.
My value judgment about disincentives still stands, though. Religious communities have a framework for applying social and other disincentives (and incentives) in order to achieve their desired result. That framework could be useful if adapted to the purpose of promoting rationality.
Based on admittedly anecdotal evidence I'm inclined to believe this correlation exists, but I think we're interpreting it differently. In my view, by becoming more "religious" and providing more disincentives for deviating from norms, we can increase our cohesiveness and effectiveness, but only up to a point: as far as I can tell, the point where we as a community can no longer tolerate the disincentives. This view rests on my value judgment that disincentives for deviating from norms are not inherently unacceptable; rather, what is unacceptable is too many disincentives, or disincentives that are too extreme.
I agree that this is the case in some religious communities, and that this is not necessarily the direction a rationalist community should go. (On the other hand, I have a hard time agreeing with the proposition that social pressure in favor of rationality is a bad thing, but I have yet to reach a definite conclusion on the subject.) However, I happen to be familiar with several religious communities where direct and violent pressure to conform is not the case, and it is those communities I wish to emulate.
I made no mention of control. Simply being present in all aspects of life is not the same as having control over all aspects of life. For example, if you live in a western society it's extremely probable that marketing and advertising are present in many aspects of your life, but I don't think either of us would say that the simple fact of their presence gives the marketers control over those aspects of your life.
Done, though sadly without the digit ratio due to lack of equipment. I'm a newbie and I just thought that was really cool.
Not necessarily. It's totalitarianism if said institutions do the ensuring through force, and without the consent of the disciples. However, by choosing to belong to a religious community, people choose to have institutions and members of the community remind them of the religious values.
You're right, that was uncalled for and I retract that statement.
I think this sort of thing works differently in my country (Israel) than it does in other places. Because religious and secular societies are more segregated, it's fairly common for people to affiliate themselves with a particular group due to the community's norms, customs or values rather than religious belief.
As a newbie around here: thank you, this is quite helpful.
When explaining/arguing for rationality with the non-rational types, I have to resort to non-rational arguments. This makes me feel vaguely dirty, but it's also the only way I know of to argue with people who don't necessarily value evidence in their decision making. Unsurprisingly, many of the rationalists I know are unenthused by these discussions and frequently avoid them because they're unpleasant. It follows that the first step is to stop avoiding arguments/discussions with people of alternate value systems, which is really just a good idea anyway.
Universities are not a good example of the institutions he was talking about. Durability isn't the only important factor. One of the main strengths of religious institutions is their sheer pervasiveness; by inserting itself into every facet of life, religion ensures that its disciples can't stray too far from the path without being reminded of it. Universities, sadly, are not capable of this level of involvement in the lives of communities or individuals.
In this case, rationality should seek to emulate religion by creating institutions and thus a lifestyle...
Until very recently I believed that I was completely anti-religious and took the opposing view to religion whenever the choice presented itself. I participated in a discussion on the topic and found myself making arguments I didn't actually agree with. This was mostly due to several habits I've been practicing to get better at analyzing my own beliefs, most notably running background checks on any arguments I make to see where exactly in my brain they originate, and constantly looking for loopholes in my arguments.
Because of this experience I've come t...
I think the basic problem here is an undissolved question: what is 'intelligence'? Humans, being human, tend to imagine a superintelligence as a highly augmented human intelligence, so the natural assumption is that regardless of the 'level' of intelligence, skills will cluster roughly the way they do in human minds, i.e. having the ability to take over the world implies a high posterior probability of having the ability to understand human goals.
The problem with this assumption is that mind-design space is large (<--understatement), and the prior pro...
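To make the prior-dependence concrete, here's a toy numerical sketch (the numbers are made up purely for illustration, not taken from any model): under a prior where skills cluster the way they do in humans, conditioning on "can take over the world" yields a high posterior for "understands human goals"; under a prior where capabilities vary more independently across mind-design space, the same evidence tells you much less.

```python
# Toy Bayes sketch with hypothetical numbers: how the posterior
# P(understands human goals | can take over the world) depends
# entirely on the prior over mind-design space.

def posterior(p_both: float, p_takeover: float) -> float:
    """P(B | A) = P(A and B) / P(A)."""
    return p_both / p_takeover

# Prior 1: skills cluster as in human minds, so nearly every mind
# capable of taking over the world also understands human goals.
clustered = posterior(p_both=0.009, p_takeover=0.01)

# Prior 2: capabilities vary (nearly) independently across a vast
# mind-design space, so conditioning on takeover ability adds little.
independent = posterior(p_both=0.0005, p_takeover=0.01)

print(f"clustered prior:   P(goals | takeover) = {clustered:.2f}")   # 0.90
print(f"independent prior: P(goals | takeover) = {independent:.2f}")  # 0.05
```

Same evidence, same likelihood term, wildly different posteriors; the entire disagreement lives in the prior, which is the point of the comment above.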