There are a lot of applause lights.
The simplest and most widely practiced of these is physical exercise, which does little to improve one's rationality; if one's primary goal is simply to become a better rationalist, exercise does little or nothing to advance that goal.
For those of you downvoting, assuming any of you are still seeing this page:
Please use this space as an opportunity to list your reasons why.
I always downvote stream-of-consciousness posts. If you do not respect your audience enough to put an effort into writing something readable (clear, concise, catchy), you deserve a downvote.
To clarify for my understanding: you disliked my writing style (as you describe it, "stream-of-consciousness"), and feel that it was not 'readable' because of this -- yes?
Always, ALWAYS use your opening paragraph to clearly state your main point.
If you feel you cannot adequately do that, chances are you do not know what your main point is. In that case, do not post; work on your draft until you do know.
I have tried to give an example by extracting the main point of your post from the mud that it is, but, unfortunately, came up empty. Well, almost empty -- there was one definition I found:
"I describe acrohumanity as that state of achieving the maximum optimization of the human condition and capabilities by an arbitrary person that are available to that arbitrary person." Naive though it might be, at least it is a core you can form your opening paragraph around.
Well, almost empty -- there was one definition I found:
Well, my goal, quite frankly, was to foster conversation about the concept so as to improve the concept itself. I'll have to think more on how to tailor that to the LW audience a little better, as it is becoming clearer to me over time that my patterns of thinking on various topics do not fall in line with the folks here.
Thank you.
it is becoming clearer to me over time that my patterns of thinking on various topics do not fall in line with the folks here.
This looks like a classic example of "sour grapes", an attempt to resolve your cognitive dissonance.
Flesch Reading Ease score of 10. There are some articles for which that level of effort would be worth it; this did not seem to be one of them.
Flesch Reading Ease score of 10.
Interesting. I wonder if there's a relatively easy way to derive the score of the average LW article.
I have checked a few popular LW posts using the online Readability Calculator and they all came up in the 60-70 range, meaning "easily understandable by 13- to 15-year-old students". This seems like an exaggeration, but still a vast improvement over the score of 23 for your post ("best understood by university graduates").
I wonder if the LW post editor could use an "Estimate Readability" button.
Using a different calculator I found that the ten highest-scoring articles on LessWrong averaged a score of 37, range 27-46. That suggests there's a fair bit of variance between scoring methods, but if we could find a consistent method, an "Estimate Readability" button in the post editor could be interesting.
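For what it's worth, the scoring core is simple enough that such a button wouldn't be hard to prototype. Here's a minimal sketch in Python (my own illustration, not anything from the LW codebase) using the published Flesch Reading Ease formula, 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words); the syllable counter is a crude vowel-group heuristic, and differences in exactly that step are probably where much of the variance between calculators comes from:

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels,
    # subtracting one for a silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    # Flesch Reading Ease:
    # 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    # Higher is easier; roughly 60-70 is "plain English".
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(len(sentences), 1))
            - 84.6 * (syllables / max(len(words), 1)))

print(flesch_reading_ease("The cat sat on the mat. It was happy there."))
```

Note the heavy 84.6 weight on syllables per word: long sentences cost a little, but polysyllabic vocabulary costs a lot, which bears on the multisyllabic-word penalty discussed below.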
I second (third?) the suggestion of a readability estimator; I need it. I have a tendency toward excessively long sentences.
Another comparison: The Simple Truth has a Flesch Reading Ease of 69.51, and supposedly requires only 8.51 years of education to read.
That seems to illustrate a potential shortcoming of the Readability Estimator, though. The Simple Truth doesn't use as much sophisticated vocabulary as many posts on Less Wrong (it seems that posts are penalized heavily for multisyllabic words), but it is a fair bit harder to understand than to read.
I didn't really get it (if by 'get it' you mean 'see why Eliezer wrote it, and what questions it was intended to answer') until I'd read most of the rest of the site.
In short, it seems like a decent measure of writing clarity, but it's not a measure of inferential distance at all.
In short, it seems like a decent measure of writing clarity, but it's not a measure of inferential distance at all.
Very true. The reason I picked The Simple Truth for an example is that I thought it did a good job of explaining a hard idea in simple language. The idea was still hard to get, but the writing made it much easier than it could have been.
(it seems that posts are penalized heavily for multisyllabic words)
Yeah, polysyllabicity gets a bad rap 'round some parts.
Without knowing your point, it's hard for me to answer that. It could be unclear writing, or maybe you didn't have a point in mind at all. Given the downvotes, it's probably not my failure to read correctly.
It could be unclear writing,
"What follows is an as-yet poorly formed notion on my part that I am relating in an attempt to get at the meat of it and perhaps contribute to the higher-order goal of becoming a better rationalist myself."
""acrohumanity". This is a direct analogue to "posthuman" and "transhuman"; 'acro-' being a prefix meaning, essentially, "highest". So a strictly minimal definition of the term could be "the highest of the humane condition", or "the pinnacle of humanity"."
"I believe this is a topic that bears greater investigation and as such am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation -- for others, and for myself."
The first quote is where I stated my purpose. The second quote is the notion referenced by that purpose. The third is my reiteration/conclusion.
With these pointed out directly, is there something about them that is difficult to understand, notice, or retain?
The first quote is where I stated my purpose. The second quote is the notion referenced by that purpose. The third is my reiteration/conclusion.
The words-to-substance ratio is very bad, especially in the first and third quotes. The middle one feels like it needs to interact in some way with the Fun Theory sequence. And after reading it, I have no idea what you think acrohumanity is (your definitions include the magical terms "highest" and "pinnacle").
With these pointed out directly, is there something about them that is difficult to understand, notice, or retain?
It's not clear that there is anything there to be retained. Sorry!
The middle one feels like it needs to interact in some way with the Fun Theory sequence.
Could you elaborate on why you believe this to be the case?
And after reading it, I have no idea what you think acrohumanity is (your definitions include the magical terms "highest" and "pinnacle").
I wrote a great deal more in providing a definition of the term than just those two sentences. About a third of the effort invested in the article was in fleshing out that definition. But one must always start somewhere when introducing a new term. So if it were your goal to introduce the term, how would you start?
Could you elaborate on why you believe this to be the case?
Have you read the Fun Theory sequence? If you have and think it isn't relevant, then I misunderstand your point here to a greater degree than I thought. If you haven't read it, then go read it.
I wrote a great deal more in providing a definition of the term than just those two sentences. About a third of the effort invested in the article was in fleshing out that definition.
From the next paragraph: "I intentionally refrain from defining what form that optimization takes..."
But one must always start somewhere when introducing a new term. So if it were your goal to introduce the term, how would you start?
I still don't understand what you're trying to say, so I can't really answer this.
Have you read the Fun Theory sequence? If you have and think it isn't relevant, then I misunderstand your point here to a greater degree than I thought. If you haven't read it, then go read it.
I haven't read it deeply. I was hoping to get insight into how you feel it should "interact". It is entirely plausible that I may incorporate elements of said sequence into the body of lore of acrohumanism. I will note that, from what I myself have seen, there is a categorical difference between "being free to optimize" and having optimization itself as a higher-order goal. (Part of this possibly results from my placing a low value on hedonism in general, which seems to be a primary focus of the Fun Theory sequence. I would even go so far as to state that my idea of acrohumanism would have anti-hedonistic results: it takes as a given the notion that one should never be satisfied with where he currently is on his personal optimization track; that he should be permanently dissatisfied.)
From the next paragraph: "I intentionally refrain from defining what form that optimization takes..."
Indeed. But I also gave several examples of what I meant by the term, and I associated it with other specific notions -- transhumanism and posthumanism -- from which context my meaning should be obvious enough.
This is a point, however, on which I freely recognize I am currently weak. I do not -- morally cannot -- assert that I am fit to determine what would be universally optimal for all persons. But I do not believe that optimization itself -- augmentation of the self within whatever limits our biological frailties impose -- is an impossible topic.
I still don't understand what you're trying to say, so I can't really answer this.
Fair enough. Are there any specific points you believe I could clarify?
The first three paragraphs seemed to me devoid of useful content, and, after skimming the post, I was left with a feeling of "So what?" and the sense that it wasn't worth rereading more carefully.
I acknowledge that my skimming rather than reading was likely influenced by the number of downvotes already on the post.
The first three paragraphs seemed to me devoid of useful content,
If I had simply begun with a brief sentence asking for an open dialogue and then jumped into the definition of the term, do you believe -- currently -- that this might have altered your opinion of the idea of discussing it?
I think that I would have still downvoted it for leaving me with a 'So what?' feeling, but I feel that reducing the length would have made me happier.
NMDV, but it is long-winded, coins unnecessary neologisms, and doesn't contain much of anything new to Less Wrong. There is something squicky about the tone, too.
(Nothing personal/you asked).
(Nothing personal/you asked).
I did, and have upvoted you for your cooperation.
NMDV
I am unfamiliar with this acronym. Enlighten me?
coins unnecessary neologisms
Point of order: what are you considering a neologism? The only term(s) I coined to my knowledge are acrohuman and its associated variations.
There is something squicky about the tone, too.
Is there any chance you could elaborate on this?
NMDV is "not my down vote". I didn't down vote you, I'm just guessing about those who did.
Point of order: what are you considering a neologism? The only term(s) I coined to my knowledge are acrohuman and its associated variations.
That's the term I'm talking about.
With regard to the squickiness, that's always hard to articulate. I think it has to do with using a really authoritative and academic tone without authoritative and academic content -- it sort of pattern matches to bad philosophy and pseudoscience.
That's the term I'm talking about.
Hrm. One of the things I've struggled with -- and the reason I bothered with it at all -- is that there really isn't, to my knowledge, an existing term that encapsulates the meaning of "a person with an agenda of maximally optimizing his own experience of the human condition to within the limits of what is possible", or the state of being so "optimized". If I might ask -- why do you feel that it was unnecessary? Are you familiar with a term that already carries this meaning?
I think it has to do with using a really authoritative and academic tone without authoritative and academic content
That's strange... I honestly thought I was doing the opposite of this; I was, I thought, careful to elaborate that I was solely relating my own opinion, with the intention of introducing the topics in question for dialectical examination by, well, all of you.
This post could be clearer and more concise. I do agree that if there is such a thing as the best I can be, it would be nice to be it, especially if we're defining "best" in terms of how nice it is.
This post could be clearer and more concise.
Well, yes. In future visitations of this concept I hope to achieve both of those goals. I did say I'm still in the early rough-draft stages of the thought.
I do agree that if there is such a thing as the best I can be, it would be nice to be it, especially if we're defining "best" in terms of how nice it is.
I would advise being very careful of any definition of optimal states that is vulnerable to wire-heading. I am not, in general, an adherent of hedonism. That being said:
My specific reason for avoiding defining 'best' was a moral one: I do not believe myself fit to decide what the maximally optimal state is for anyone other than myself. I did attempt to provide substance towards what that state would be for me. There is very little in the way of literature that describes what, precisely, it means to be possessed of "humanity" -- what specific qualities that word is describing.
The notion of the human condition is a very nebulous one, and the idea of intentionally altering it in any way seems, from my observation, very insular to the transhumanist movement (of which I am, admittedly, a member). This informs my notions pretty heavily. The whole concept I'm espousing here is essentially an answer to a dilemma pretty much all contemporary transhumanists face: we desire to be 'improved', but lack the means to improvement. For example: I currently take modafinil, knowing full well that there are no measurable cognitive improvements associated with it, because it's the closest thing to a genuine nootropic available to me. Being able to remain alert and clear of mind at any hour is of benefit to me (especially as I work overnight shifts; modafinil is actually an on-label medication for me).
I do not believe that I was making any great ground-shaking claims when I stated that people should want to 'be better'. That's trivial. Instead, it is my hope that by introducing this term I might begin a dialogue towards a few ends:
The establishment of a 'movement' or 'agenda' of individual maximal-optimization.
The establishment of a dialogue or body of lore facilitating the implementation of that goal.
The establishment of labels for the idyllic end-state (useful as a symbolic analogue more than anything else), for the agenda/movement itself, and for the practice of implementing said goal.
In other words: if somewhere down the line there were a group or groups of people who spoke of "acrohumanism" (or whatever term comes to supplant that), or described themselves as "acrohumanists", and had a body of techniques in place to that end, I would consider myself to have succeeded well beyond my current expectations of maximal probability. If I can, from this dialogue, pick up a few new ways of reaching that end myself, or at least establish a means of communicating my notions more clearly, I will have achieved my specific agenda in making this post.
You are using too many big words and your writing is too flowery. Read "Politics and the English Language" by George Orwell. Use smaller words and shorter sentences.
also, TL;DR.
Greetings, fellow LessWrongians.
What follows is an as-yet poorly formed notion on my part that I am relating in an attempt to get at the meat of it and perhaps contribute to the higher-order goal of becoming a better rationalist myself. As such, I will restrict my responses to comments to explaining points of fact, or my own opinions if directly requested; otherwise I will not argue any particulars for purposes of persuasion.
For a few years now a general notion -- what originally led me to discover the LessWrong site itself, in fact -- has rattled around in my brain; only today have I derived a sufficiently satisfactory term to label it with: "acrohumanity". This is a direct analogue to "posthuman" and "transhuman"; 'acro-' being a prefix meaning, essentially, "highest". So a strictly minimal definition of the term could be "the highest of the humane condition", or "the pinnacle of humanity".
In brief, I describe acrohumanity as that state of achieving the maximum optimization of the human condition and capabilities *by* an arbitrary person that are available *to* that arbitrary person. I intentionally refrain from defining what form that optimization takes; but my own personal intuitions and opinions on the topic, as a life-long transhumanist and currently aspiring rationalist, tend towards mental conditioning, improvements upon ways of thinking, and optimization of thought, memory, and perception. "Acrohumanism", then, would be the belief in, practice of, and advocacy of achieving or approaching acrohumanity, in much the same way that transhumanism is the belief in or advocacy of achieving transhuman conditions. (In fact, I tend to associate the two terms, at least personally; what interests me *most* about transhumanism is achieving greater capacity for thought, recollection, and awareness than is humanly possible today.)
Instrumental rationality is thus a core component of any approach to the acrohuman condition/state. But while it is a requirement, focusing on one's capabilities as a rationalist is not sufficient in and of itself. There are other avenues of self-optimization that also bear investigation. The simplest and most widely practiced of these is physical exercise, which does little to improve one's rationality; if one's primary goal is simply to become a better rationalist, exercise does little or nothing to advance that goal. But if one's goal is to "in general optimize yourself to the limits available", exercise is just as key as focusing on instrumental rationality.

Additional examples of a more cognitive nature could include techniques for improving recollection. Mnemotechnics has existed long enough that many cultures developed their own variants of it before they even developed a written language. It occurs to me that developing mnemotechnical skill would be convergent with becoming a better rationalist, by making it easier to recall the various biases and heuristics we utilize in a broader array of contexts.

Still another, also cognitive in nature, would be developing skill and practice in meditative reflection. While there is a lot of what Michael Shermer calls "woo" around meditation, the simple truth is that it is an effective tool for metacognition. My own history with meditative practice originated in my early teens with martial arts training, which I then extended into basic biofeedback as a result of coping with chronic pain. I quickly found that the same skills needed to succeed in that arena had a wide array of applications, from coping with various stimuli to handling other physiological symptoms or indulging specific senses.
Taken as an aggregate, an individual with strong skill in biofeedback, a history of rigorous exercise and physical health, skill and knowledge in instrumental rationality, mnemotechnics, and metacognition, and through metacognition strong influence over his own emotional states (note: as I myself am male, I am in the habit of using masculine pronouns as gender-neutrals) represents an individual who is relatively far from what constitutes my personal image of the baseline 'average human'. And yet I am certain that there are other techniques or skillsets one might add to his 'grab-bag' of tools for improving his overall capabilities as a person -- none of which individually exceeds what is humanly possible, but which, taken as a whole, approach those limits impressively.
I believe this is a topic that bears greater investigation and as such am sharing these rambling thoughts with you all. I am hopeful of a greatly productive conversation -- for others, and for myself.