All of JanetK's Comments + Replies

I have a different way to look at this question. (1) introspection is bunk (2) if someone asks us or we ask ourselves why we did something - the answer is a guess, because we have no conscious access to the actual causes of our thoughts and actions (3) we vary in how good we are at guessing and in how honestly we judge ourselves, and so some people appear to be clearly rationalizing and others appear less so (4) most people are not actually aware that introspection is not direct knowledge but guesswork and so they do not recognize their guesses as guesses...

Good, upvoted - your hypothesis is interesting. I tend to think of type 1 as the cognition/pattern recognition/thinking operation and type 2 as a way of sequentially combining type 1 sub-results. The sequential operation involves working memory and therefore passes through consciousness and is slowed down. As soon as a group of type 1 operations fine-tune themselves to the point of not requiring working memory, they no longer generate type 2 operations.

SaidAchmiz asked for an opinion and I gave an honest one. I may be wrong in the view of some other people but that is still my honest opinion. It is not an overgeneralization as I believe that in all cases, in all situations, at all times the descriptive approach is preferable to the prescriptive one.

1fortyeridania
The descriptive approach may well be universally preferable to the prescriptive one, but that does not make it more scientific, productive, and interesting. It need not be preferable in every respect.

In all cases 1-6 - descriptive is scientific, productive, interesting while prescriptive is without evidence, harmful and boring.

-1fortyeridania
Downvoted for overgeneralization.

OK, I overreacted. Several others have said that it is acceptable in Main - so be it. I guess it does not bother others as much as it bothers me and I won't comment on corrections in future.

Doesn't anyone think that it is very rude to comment in someone else's language unless it is not understandable - just plain RUDE? If someone wants help with language they can ask. Language is a tool not a weapon.

1Richard_Kennaway
I might not go as far as "very rude", but I basically agree. I don't find corrections like these useful, and I doubt I would even if I was the one writing in a second language and being corrected, except when my errors were genuinely obscuring my meaning. One serious comment about what I am saying is worth any number of such trifles. I guess that Dmytry's native language is Russian, which does not have a definite article, and so it is unsurprising if he sometimes uses "the" inaccurately. But having sussed that immediately on seeing his name and the first three words, it's of no further importance. I'm not here to give or to receive language lessons.
-2Risto_Saarelma
Comments on the language mistakes can be helpful for the author, but probably best sent as private messages rather than public comments, since they don't contribute much to the discussion otherwise. Though I'm not sure if there's any UI shortcut for sending a private message related to a specific article, so you'll have to go to the user's page and phrase the message specifically to refer to the article, and that's a lot more work than just writing a public comment here...
2Solvent
I read the post, and didn't have much to say about the content. I felt a little bit bad about just correcting the grammar without having anything of substance to say, but it was in Main so I did so anyway. I tried to be polite.

Language is a tool not a weapon.

Correcting someone's grammar and diction = sharpening their tool for them.

3tristanhaze
I don't think this is rude at all. One of the things I like about Less Wrong, and which seems characteristic of it, is that the writing in posts - style and form as well as more basic stuff - is often constructively discussed with a view to improving the author's writing.
7wedrifid
Sometimes. For example I'd probably consider it slightly rude to reply to this with "s/comment in/comment on/". That said, it is a borderline case since 'comment in someone else's language' actually means something (unintended) and so I needed to read your comment twice, then look up the context, before I could guess what you actually meant to say. In the case of top-level posts in Main, corrections are entirely appropriate. A certain standard is expected for Main-level posts. If that standard is not met then the alternative to polite correction is a silent downvote - many people prefer the correction. If someone is particularly sensitive to correction they probably shouldn't make top posts - or, preferably, they can ask someone to proofread for them before they post. This is actually what many people do anyway even if they have no language difficulties whatsoever. In fact there are people who have volunteered to proofread drafts for others as their way to contribute. Typo and grammar corrections don't hurt as much as having your arm hacked off by a claymore either. I certainly don't consider Solvent's comment an attack.

Voting up and waiting for your next installment. (dtz weird text still there)

Why not adopt the convention used in many types of writing? The first time the term is used in a text, it is written in full and its abbreviation or acronym is put after it in brackets. After that the short form is used.

Thank you for the link - very illuminating.

I would like to see some enlargement on the concept of definition. It is usually treated as a simple concept: A means B or C or D; which one depending on Z. But when we try to pin down C for instance, we find that it has a lot of baggage - emotional, framing, stylistic etc. So does B and D. And in no case is the baggage of any of them the same as the baggage of A. None of - defining terms or tabooing words or coining new words - really works all that well in the real world, although they of course help. Do you see a way around this fuzziness?

Another 'morally good' definition for your list is 'that which will not make the doer feel guilty or ashamed in future'. It is no better than the others but quite different.

1fubarobfusco
I don't like this one. It implies that successful suicide is always morally good.

I hope there are soon some comments to this question. What do AI people think of the analysis - Marr's and nhamann's? Is the history accurate? Is there a reason for ignoring it?

I have been pointed at those pieces before. I read them originally and I have re-read them not long ago. Nothing in them changes my conviction (1) that it is dangerous to communication to use the term 'free will' in any sense other than freedom from causality, (2) I do not accept a non-material brain/mind nor a non-causal thought process. Also I believe that (3) using the phrase 'determinism' in any sense other than the ability to predict is dangerous to communication, and (4) we cannot predict in any effective way the processes of our own brain/minds. The...

-1Peterdjones
Does that mean we should stop exonerating people who did bad things under duress? (IOW, your stipulation about FW would change the way the word is used in law). Does that mean we should stop saying that classical chaos is deterministic? (IOW, your stipulation about "deterministic" would change the way the word is used by physicists).
2Tiiba
"that it is dangerous to communication to use the term 'free will' in any sense other than freedom from causality" Why is that? There are many things that can keep your will from being done. Eliminating them makes your will more free. Furthermore, freedom from causality is pretty much THE most dangerous definition for free will, because it makes absolutely, positively no sense. Freedom from causality is RANDOMNESS. "Therefore free will vs determinism is not a productive argument." We don't have this argument here. We believe that free will requires determinism. You aren't free if you have no idea what the hell is about to happen.

Right on. Free will is nonsense but morality is important. I see moral questions as questions that do not have a clear cut answer that can be found by consulting some rules (religious or not). We have to figure out what is the right thing to do. And we will be judged by how well we do it.

3Tiiba
"Free will is nonsense" It's not nonsense. http://wiki.lesswrong.com/wiki/Free_will http://wiki.lesswrong.com/wiki/Free_will_(solution)

Tordmor has commented on my attitude - sorry, I did not mean to sound so put out. The reason for the 'near future' was because the discussion was about 'upload' and so I assumed we were talking about our lifetimes, which in the context seemed the near future (about the next 50 years). Making an approximate emulation of some simple invertebrate brain is certainly on the cards. But an accurate emulation of a particular person's brain is a different ballpark entirely.

I never know exactly what people mean when they say emulation or simulation or model. How much is ...

Do you honestly believe that an artificial brain can be built purely in software in the near future? And if it could, how would it be accurate enough to be some particular person's brain rather than a generic one? And if it was someone's brain, could the world afford to do this for more than one or two persons at a time? I am not at all convinced of 'uploads'.

0Kaj_Sotala
The Whole Brain Emulation Roadmap implies that it may very well be possible. I don't have the expertise to question their judgement in this matter.
2[anonymous]
Kaj never mentioned "near future" or any timeline for uploads, for that matter. The only thing he did was point out a possible flaw in your argument, yet you took it as a personal insult to your belief.

I am a bit surprised if this is surprising - is it not obvious that electric fields will affect neuron activity? Whether a neuron fires depends on the voltage across its membrane (at a point in a particular region at the base of the axon and, it seems, down the axon). The electric field around the neuron will affect this voltage difference as in good old-fashioned electrical theory. This is important for synchrony in firing (as in the brain waves) and that is important for marking synapses between neurons that have fired simultaneously for chemical changes...
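The threshold idea here can be sketched with a toy leaky integrate-and-fire neuron where an external-field term is added to the membrane voltage. All constants are invented for illustration; this is a cartoon of the mechanism, not a model of any real neuron:

```python
# Toy leaky integrate-and-fire neuron with an external-field bias.
# All constants are invented for illustration only.
def simulate(steps, input_current, field_bias, threshold=1.0, leak=0.9):
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = leak * v + input_current   # membrane voltage integrates input, leaks
        if v + field_bias >= threshold:  # the surrounding field shifts the effective voltage
            spikes += 1
            v = 0.0                      # reset after firing
    return spikes

# The same synaptic input fires repeatedly when the surrounding field helps it,
# and never fires when the field is absent:
print(simulate(100, 0.08, 0.0), simulate(100, 0.08, 0.3))  # prints: 0 5
```

The point of the sketch is only that a subthreshold neuron can be pushed over threshold by a field it sits in, which is one way a shared field could synchronize firing.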

I figure when we have built an artificial kidney that works as well as a kidney, and an artificial heart that works as well as a heart, and an artificial pancreas that works as well as a pancreas - then it will be reasonable to know whether an artificial brain is a reasonable goal.

If we have figured out how to compute the weather accurately some weeks into the future - then we might know whether we can compute a much more complex system. If we had the foggiest idea of how the brain actually works - then we might know what level of approximation is good enough.

Don't hold your breath for a personal upload.

6Kaj_Sotala
Building an artificial kidney requires both knowledge about how a kidney works, and the physical engineering skill to build an artificial kidney with the same structure. Unlike a kidney, an artificial brain can be implemented in software, so it's enough to only know how it works. The comparison would be valid if by an "artificial brain" we meant a brain built out of biological neurons, but we don't.

I seem to agree with your original list. I would phrase the free will one differently - both free will and determinism are useless concepts because we have no mechanism for contra-causality other than spirit-magic and we cannot predict our decisions even if they are causally produced.

This is not a surprise. Who wants to be a philosopher and who wants to be a scientist? Who likes to discuss the questions and who likes to discuss the answers? Who values consensus?

I used to read LW but never commented. Then I had a change of heart when there was a post that in effect invited people to take a more active part. I commented often and even posted but the experience was not a happy one and I have gone back to only reading some items and commenting very rarely. I do not feel welcome nor feel completely rejected. There are a lot of reasons for the alienation in my case and the repeated return to PUA is one of them. But there are many other reasons that are more important. In a sense PUA is political. Evolutionary psycholog...

I believe there should be a subject in school (and text books to go with it) that goes through all the things that adult citizens should know. I believe this was part of what was called Civics but that is dead or changed to something else. The idea is somewhat dated but it included things like how to vote, how to read a train schedule, what different types of insurance actually were, simple first aid, how to find a book in a library and all sorts of things like that. Today it would be a slightly different list. Somewhere between 10 and 14 seems the ideal age to be interested in and learn these sorts of things.

I agree. I've also long held a different but complementary view: that all establishments should (hopefully, out of the goodness of their hearts) put up signs that basically say, "this is how it works here".

(For example, at a grocery store in the US, the sign would say something like, "This store sells the items you see inside that have a price label by them. To buy something, take it with you to one of the numbered short aisles [registers] toward the exit and place it on the belt. If you need many items, you may want to use one of the bas...

0jsalvatier
"Home Economics" and similar courses teach life skills like cooking, paying bills and doing your taxes.
5tenshiko
Civics, at least in my area of the United States, is mainly education about government and ethics. I do believe they may discuss how to vote and other information that would be useful to the democratic process, but nothing like going onto trains. (Although in the United States, this could only ever discuss the subway, and only in certain metropolitan areas - culturally, the elegant train is dead here, which is sad, since I've had much more positive travel experiences on trains than planes.)
  1. I do not use the phrase 'free will' except to attack it. And I do not attack 'free will' without at the same time attacking 'determinism' and the outdated fight between them. They are both useless and flawed. When others use the word, I assume they mean a mental process that is non-material and that they therefore still have a dualist ghost in their thinking.
  2. Free will is an illusion only if you believe in it. If you don't believe in free will (and don't believe in determinism) then you just make decisions.
  3. Why lie to others or yourself? The important thi
...

I have used 'reality' but sometimes needed to call it 'undifferentiated reality' or to introduce the map-territory metaphor and then refer to the 'territory'. There is a problem with the right word for this in English. I suggest that you use the Russian word after a paragraph explaining its meaning - this would be interesting to your readers, allow you to define the concept you want to use very carefully and avoid any English language philosophical baggage.

1kononov
Thank you very much for your advice, I think I will try to follow it.

I just think it is a good time for a party and has been for a long time at high latitudes. I don't think there is a problem with this. If religious people want to control the party, just ignore them and enjoy Christmas.

People have been celebrating around the solstice long before Christianity sold the holiday. Most of the Christmassy things: gifts, trees, fires, food, song and so on are left over from pre-Christian holidays. Take back Christmas and have a ball.

2nerzhin
...because Less Wrong and all of us rationalists are descended in a direct line from pre-Christian solstice-celebrating pagans? I am really confused by this line of thought. Is this really a way to be rational about Christmas, or is it just a way to be anti-Christian?

I do not remember believing in Santa or when I stopped. But I do remember the game of everyone pretending there was a Santa and a Tooth Fairy and an Easter Bunny. It was great fun and I had no feeling that I was lied to by my parents or others. When I realized that God was not in this group and I was actually supposed to believe in that being was when my problems with pretense really began. I started to notice how others, by their actions etc., displayed a lack of belief in what they said about God, but they insisted that it was important to believe. End of innocence, now I was being lied to!

I can think of a number of reasons why monarchs may suffer somewhat less from the 'power corrupts' norm. (1) often educated from childhood to use power wisely (2) often feel their power is legit and therefore less fearful of overthrow (3) tend to get better 'press' than other autocrats so that abuse of power less noticeable (4) often have continuity and structure in their advisors inherited from previous monarch.

Despite this, there have been some pretty nasty monarchs through history - even ones that are thought of as great like Good Queen Bess. However, if I had to live in an autocratic state I would prefer an established monarchy, all others things being equal.

I voted up. Post makes good sense to me.

the list of personality axes OkCupid uses.

Ok I get it. I don't really find it convincing but I get it. I can understand the idea of a 'space' made of personality dimensions and I can envisage an idea that someone could link their identity as their area in such a space.

Personality theory seems pretty weak (and boring) to me, a sort of left over from Freud's and other psychoanalytic theories. So I guess I have nothing to add to this discussion.

Maybe 'personality'?

What question is personality the answer to?

0Jonathan_Graehl
To my dismay, I've just checked, and my answer is not to any sentence of yours. But I took your overall question to be: what's meant by 'identityspace'? This was also my first reaction. FAE maybe.

I understand what you mean by space - I think it is a fairly common construct. For example, I think of a biological niche as an area in a multidimensional space. OK. But what on earth is an 'identityspace'? And what does 'angriness' and similar have to do with identity? What would be some actual dimensions of identity and where is any experimental evidence for such dimensions in the context of identity?

0nick012000
Here is the list of personality axes OkCupid uses. There's quite a few of them. There's also that four-letter personality test, though I can't remember its name at the moment.

I am lost. This piece appears to be based on a well developed model that I have never encountered in my 71 years. I am well educated and well read in many areas of science and philosophy but I cannot follow your argument and find it just jargon from some area that I have never been introduced to. I cannot relate something like 'identityspace' to anything I know about brains or human behaviour or myself. Why not start with a couple of paragraphs that give the reader an idea of the structure you are going to talk about? It is no use pointing to previous articles as they seem just as incomprehensible.

0Jonathan_Graehl
Maybe 'personality'?
1nick012000
Imagine a grid. Along one axis is one measure of a configuration of a person's mind. Let's say "angriness". On the other is another one, "sadness". Now let's say you add on a third axis at ninety degrees, up out of the page the other two are drawn on, and say it measures "happiness". Now, take another axis at ninety degrees to all of them, representing another measurement of that mind. And another. And another, and another, until all the axes are assigned. This is the mindspace of that person.
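The grid picture above can be made concrete as a point in an N-dimensional space. A minimal sketch, in which the axis names and numbers are purely illustrative assumptions, not anything from the thread:

```python
import math

# A "mindspace" point: each axis is one hypothetical measurement of a mind.
# Axis names and values here are invented for illustration.
mind_a = {"angriness": 0.2, "sadness": 0.5, "happiness": 0.7}
mind_b = {"angriness": 0.8, "sadness": 0.4, "happiness": 0.3}

def distance(p, q):
    """Euclidean distance between two minds in this toy space."""
    return math.sqrt(sum((p[k] - q[k]) ** 2 for k in p))

print(round(distance(mind_a, mind_b), 3))  # prints: 0.728
```

On this picture, "similar minds" are simply nearby points, and adding more axes (one per measurement) extends the same distance formula unchanged.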

I think you may have misunderstood what I was trying to say. Because the group used Bayesian methods, I had assumed that they would not be anti-scientific. I was surprised when it seemed that they were willing to ignore evidence. I have been reassured that many in the group are rational in the everyday sense and not opposed to empiricism. Indeed it is Science AND Bayes.

Thank you. That seems clear. I will assume that my antennas were giving me the wrong impression. I can relax.

2[anonymous]
Maybe you shouldn't relax. Regardless of official definitions, there is in practice a heavy emphasis on conceptual rigor over evidence. There's still room for people who don't quite fit in.

I don't think that's how most people here understand "rationalism".

Good

Do you have a reason for the sarcasm? I notice a tendency that seems to me disturbing and I am pointing it out to see if others have noticed it and have opinions, but I am not attacking. I am deciding whether I fit this group or not - hopefully I can feel comfortable in LW.

3wedrifid
It felt like irony from my end - a satire of human behaviour. As a general tendency of humanity we seem to be more inclined to be abhorred by beliefs that are similar to what we consider the norm but just slightly different. It is the rebels within the tribe that are the biggest threat, not the tribe that lives 20 kms away. I hope someone can give you an adequate answer to your question. The very short one is that empirical evidence is usually going to be the most heavily weighted 'bayesian' (rational) evidence. However everything else is still evidence, even though it is far weaker.

According to my dictionary: rationalism 1. Philos. the theory that reason is the foundation of certainty in knowledge (opp. empiricism, sensationalism)

This is there as well as: rational 1. of or based on reasoning or reason

So although there are other (more everyday) definitions also listed at later numbers, the opposition to empirical is one of the literal definitions. The Bayesian updating thing is why it took me a long time to notice the other anti-scientific tendency.

4timtyler
I wouldn't say "anti-scientific" - but it certainly would be good if scientists actually studied rationality more - and so were more rational. With lab equipment like the human brain, you have really got to look into its strengths and weaknesses - and read the manual about how to use it properly. Personally, when I see material like Science or Bayes - my brain screams: false dichotomy: Science and Bayes! Don't turn the scientists into a rival camp: teach them.

The penny has just dropped! When I first encountered LessWrong, the word 'Rationality' did not stand out. I interpreted it to mean its everyday meaning of careful, intelligent, sane, informed thought (in keeping with 'avoiding bias'). But I have become more and more uncomfortable with the word because I see it having a more restricted meaning in the LW context. At first, I thought this was an economic definition of the 'rational' behaviour of the selfish and unemotional ideal economic agent. But now I sense an even more disturbing definition: rational as opposed to empirical. As I use scientific evidence as the most important arbiter of what I believe, I would find the anti-empirical idea of 'rational' a big mistake.

0[anonymous]
Now that I come to think of it, I've never seen the LW definition of "rationality" used anywhere outside LW and OB, and I've never even seen it explicitly defined. EDIT: http://lesswrong.com/lw/31/what_do_we_mean_by_rationality/ But if you asked me, I would say it means taking your selfish and unemotional economic agent to his logical extreme: rationally examining one's own thought processes in order to optimise them, rationally examining scientific evidence without interference from one's biases, and rationally accepting the possibility that one has made a mistake.
3thomblake
The philosophical tradition of 'Rationalism' (opposed to 'Empiricism') is not relevant to the meaning here. Though there is some relationship between it and "Traditional Rationality" which is referenced sometimes.
0Sniffnoy
Here is our definition of rationality. See also the "unnamed virtue".
1timtyler
There is at least one post about that - though I don't entirely approve of it. Occam's razor is not exactly empirical. Evidence is involved - but it does let you choose between two theories both of which are compatible with the evidence without doing further observations. It is not empirical - in that sense.
2kodos96
Ummmmmmmm.... no. The word "rational" is used here on LW in essentially its literal definition (which is not quite the same as its colloquial everyday meaning).... if anything it is perhaps used by some to mean "bayesian"... but bayesianism is all about updating on (empirical) evidence.
0FAWS
In a certain sense rationality is using evidence efficiently. Perhaps overemphasis on that type of rationality tempts one to be sparing with evidence - after all if you use less evidence to reach your conclusion you used whatever evidence you did use more efficiently! But not using evidence doesn't mean there is more evidence left afterwards, not using free or very cheap evidence is wasteful, so proper rationality, even in that sense, means using all easily available evidence when practical.
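The "using evidence" being discussed here is just Bayesian updating; a minimal sketch, with probabilities invented for illustration, of how each piece of cheap evidence shifts a belief:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from prior P(H) and the two likelihoods, via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Start at 50/50, then fold in two pieces of evidence in turn
# (the likelihood numbers are made up):
belief = 0.5
for likelihood_h, likelihood_not_h in [(0.8, 0.3), (0.7, 0.4)]:
    belief = bayes_update(belief, likelihood_h, likelihood_not_h)
print(round(belief, 3))  # prints: 0.824
```

Skipping either piece of evidence leaves the belief closer to 0.5, which is the sense in which ignoring free evidence is wasteful.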
2wedrifid
Indeed. It is heretic in the extreme! Burn them!
2Emile
I don't think that's how most people here understand "rationalism".

Personally I have great hopes for Blue Brain. If it figures out how a single cortex unit works (which they seem to be on the way to). If they can then figure out how to convert that into a chip and put oodles of those chips in the right environment of inputs and interactions with other parts of the brain (thalamus and basal ganglia especially) and then.....

A lot of work but it has a good chance as long as it avoids the step-by-step algorithm trap.

Yes we will have to describe the subjective side of consciousness but the physiology has to come first. As an illustration: if you didn't know the function of the heart or much about its physiology, it would be useless to try and understand it by how it felt. Hence we would have ideas like 'loving with all my heart', 'my heart is not in it' etc. which come from the pre-biology world. Once we know how and why the heart works the way it does, those feelings are seen differently.

I am certainly not a behaviorist and I do think that consciousness is an extremel...

Well, maybe not that long, but a long, long time to do the 'lot of little steps'. It does not seem the appropriate tool to me. After all, the much slower component parts of a brain do a sort of unit of perception in about a third of a second. I believe that is because it is not done step-wise but something like this: the enormous number of overlapping feedback loops can only stabilize in a sort of 'best fit scenario' and it takes very little time for the whole network to home in on the final perception. (Vaguely that sort of thing)
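The "settle into a best fit" idea resembles relaxation in a recurrent (Hopfield-style) network, which can itself be run step-by-step in software; a toy sketch under that assumption, not a claim about how the brain actually does it:

```python
import numpy as np

# Tiny Hopfield-style network: store one pattern, then let a corrupted
# version settle toward it via repeated feedback updates.
pattern = np.array([1, -1, 1, -1, 1])
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0)  # no self-feedback

state = np.array([1, -1, -1, -1, 1])  # the pattern with one unit flipped
for _ in range(5):                    # a handful of whole-network update sweeps
    state = np.sign(weights @ state)  # each unit follows the summed feedback it receives

print((state == pattern).all())  # prints: True
```

The network "homes in" on the stored pattern in a few sweeps, which is the flavor of settling described above; whether such step-wise simulation of the loops is tractable at brain scale is exactly the point under dispute in this thread.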

1rwallace
Right, fair enough, then it's a quantitative question on which our intuitions differ, and the answer depends both on a lot of specific facts about the brain, and on what sort of progress Moore's Law ends up making over the next few decades. Let's give Blue Brain another decade or two and see what things look like then.

with a lot of steps

I guess that is the conversation stopper. We agree that it takes a lot of steps. We disagree on whether the number makes it only possible in principle or not.

1rwallace
Ah, I was about to reply with a proof of concept explanation in terms of molecular modeling (which of course would be hopelessly intractable in practice but should illustrate the principle), until I saw you say 'only possible in principle'; are you saying then that your objection is that you think even the most efficient software-based techniques would take, say, a million years of supercomputer time to run a few seconds of consciousness?

Allenwang voted up - I don't understand why there was a negative reaction to this.

A little group of neurons in the brain stem starts sending a train of signals to the base of the thalamus. The thalamus 'wakes up' and then sends signals to the cortex and the cortex 'wakes up'. Consciousness is now 'on'. Later, the brain stem stops sending the train of signals, the thalamus 'goes to sleep' and the cortex slowly winds down and 'goes to sleep'. Consciousness is now 'off'. Neither on nor off was instantaneous or sharply defined. (Dreaming activates the cortex differently at times during sleep but ignore that for now). Descriptions like this (...

0Mitchell_Porter
Science in some broad sense "is OK... for understanding consciousness", but unless you're a behaviorist, you need to be explaining (and first, you need to be describing) the subjective side of consciousness, not just the physiology of it. It's the facts about subjectivity which make consciousness a different sort of topic from anything in the natural sciences.

I may not be expressing myself well here. I am trying to express what I can and cannot imagine - I do not presume to say that because I cannot imagine something, it is impossible. In fact I believe that it would be possible to simulate the nervous system with digital algorithms in principle, just extremely difficult in practice. So difficult I think that I cannot imagine it happening. It is not the 'software' or the 'digital' that is my block, it is the 'algorithm', the stepwise processes that I am having trouble with. How do you imagine the enormous amount and varied nature of feedback in the brain can be simulated by step-by-step logic? I take it that you can imagine how it could be done - so how?

0PaulAlmond
with a lot of steps.

You may be right but my imagination has a problem with it. If there is a way to do analog computing using software in a non step-by-step procedure, then I could imagine a software solution. It is the algorithm that is my problem and not the physical form of the 'ware'.

4rwallace
I may not be understanding your objection in that case. Are you saying that there's no way software, being a digital phenomenon, can simulate continuous analog phenomena? If so, I will point to the many cases where we successfully use software to simulate analog phenomena to sufficient precision. If not, can you perhaps rephrase?

I have no doubt in my mind that some time in the future nervous systems will be simulated with all their functions including consciousness. Perhaps not a particular person's nervous system at a particular time, but a somewhat close approximation, a very similar nervous system with consciousness but no magic. However, I definitely doubt that it will be done on a general purpose computer running algorithms. I doubt that step-by-step calculations will be the way that the simulation will be done. Here is why:

1. The brain is massively parallel and complex feedba...

2rwallace
Hardware might ultimately be more efficient than software for this kind of thing, but software is a lot easier to tune and debug. There are reasons neural network chips never took off. I can plausibly imagine the first upload running in software, orders of magnitude slower than real time, on enough computers to cover a city block and require a dedicated power station, cooperating with a team of engineers and neuroscientists by answering one test question per day; 10 years later, the debugged version implemented in hardware, requiring only a roomful of equipment per upload, and running at a substantial fraction of real-time speed; and another 10 years later, new process technology specifically designed for that hardware, allowing a mass-market version that runs at full real-time speed, fits in desktop form factor and plugs into a standard power socket.

Yes, you are right. Carbohydrates cannot be made from fat. Fatty acids are used to supply energy directly and not by being converted to carbohydrate first. Sorry to have been misleading. Thank you for the correction.

Something I should have said in my previous reply. I agree with the "no current politics" rule. My problem is with what is politics - to some everything is and to some almost nothing is. When a subject is a purely scientific one and the disagreement is about whether there is evidence and how to interpret it, then this is an area for rationality. We should be looking at evidence and evaluating it. That does not involve what I would call politics.
