All of wedrifid's Comments + Replies

We have the instinct to consume sugar because it is the most concentrated form of energy that humans can process, not because it is naturally paired with vitamins.

Sugar is desirable as the most easily accessible form of energy. Being concentrated is more useful for long term storage in a mobile form, hence the use of the more concentrated fat.

UPI Reporter Dan Olmsted went looking for the autistic Amish. In a community where he should have found 50 profound autistics, he found 3.

He went looking for autistics in a community mostly known for rejecting Science and Engineering? It 'should' be expected that the rate of autism is the same as in the general population? That's... not what I would expect. Strong social penalties for technology use for many generations would be a rather effective way to cull autistic tendencies from a population.

-1NatPhilosopher
I don't reject the possibility that there are other explanations for the observation that unvaccinated Amish have very low autism rates. I even offered one: that they also reject Glyphosate. However, when the rare cases of Amish with autism that are found mostly turn out to be vaccinated, or to have some very specific other obvious cause that's not present in the general population (high mercury), the case for vaccination being a cause becomes much, much stronger. And when you realize that other groups of unvaccinated also have low autism rates, the case becomes stronger. And when you realize that injecting the aluminum into animal models causes behavioral deficits, and injecting vaccines into post-natal animals causes brain damage, in every study I've found, the case becomes stronger still. And when you discover that the safety surveys don't cite any empirical measurements whatsoever of the toxicity of injected aluminum in neonates (or even injected aluminum in adults, for that matter), don't generally address the issue of aluminum at all, don't cite or rebut any of the many papers published in mainstream journals observing these things, and don't rebut or cite any of the half dozen or more epidemiological studies showing aluminum is highly correlated with autism, then I think you should conclude there is strong cognitive bias at work, if not worse.

I think this is about the only scenario on LW in which someone can be justifiably downvoted for that statement.

I up-voted it for dissenting against sloppy thinking disguised as being deep or clever. Twisting the word 'god' to include other things that do not fit the original, literal or intended meaning of the term results in useless equivocation.

Hubris isn't something that destroys you, it's something you are punished for. By the gods!

Or by physics. Not all consequences for overconfidence are social.

wedrifid-40

You were willing to engage with me after I said something "inexcusably obnoxious" and sarcastic, but you draw the line at a well reasoned collection of counterarguments? Pull the other one.

For those curious, I stopped engaging after the second offense - the words you wrote after what I quoted may be reasonable but I did not and will not read them. This has been my consistent policy for the last year and my life has been better for it. I recommend it for all those who, like myself, find the temptation to engage in toxic internet argument har... (read more)

Can't imagine who'd have guessed your exact intention just based on your initial response, though.

You are probably right and I am responsible for managing the predictable response to my words. Thank you for the feedback.

-227chaos
You were straightforward in the most mocking and least helpful way possible, maybe. Earlier, you claimed your intention was to lend moral support to the OP against common_law's rudeness. But now, you are claiming sincerity and straightforwardness in your reply to common_law that simply contradicted what he said. Those things don't fit together. People who are being straightforward don't make sincere comments to one person for the purpose of communicating something else to another. Nor do they make assertions without providing explanations for their reasoning process. Being vague and ambiguous about your ideas is the opposite of being straightforward, actually. A straightforward approach would have been to say that you thought his choice of language was inappropriate, or for you to advance right away the arguments against his view that you ended up making later on. Snarkiness is not sincerity; equivocation is not straightforwardness. You were willing to engage with me after I said something "inexcusably obnoxious" and sarcastic, but you draw the line at a well reasoned collection of counterarguments? Pull the other one.
wedrifid120

Wow, thank God you've settled this question for us with your supreme grasp of rationality. I'm completely convinced by the power of your reputation to ignore all the arguments common_law made, you've been very helpful!

Apart from the inexcusably obnoxious presentation, the point hidden behind your sarcasm suggests you misunderstand the context.

Stating arguments in favour of arguing with hostile arguers is one thing. "You should question your unstated but fundamental premise" is far more than that. It uses a condescending normative dominance atte... (read more)

9MarkusRamikin
That, or unskilled use of language by someone who lacks better arguing habits. Either way, yeah, worth discouraging. Can't imagine who'd have guessed your exact intention just based on your initial response, though.
0[anonymous]
I was sarcastic, but you were sarcastic first. At least my sarcasm had ideas within it; yours was a disdainful contradiction that didn't supply anyone with new information. I think you're overreacting to common_law's choice of language. OP will speak for themself if they felt offended or domineered, I'm sure.

I disagree with you about expected information value. Intelligent people are often irrational; I'd even say the majority of intelligent people are irrational. There are plenty of dumb irrational people as well, but it'd be quite uncharitable to assume that arguments with them are what's being defended.

I also disagree that arguments with irrational people are dangerous, psychologically or physically costly, or economically expensive. Why do you think that this is true? I think that even arguments had in person don't typically end in violence, and that arguments online practically never do. I don't see how arguments cost money either, except in the same opportunity-cost sense that anything does - but people aren't optimal utilitarians, so this is a pretty lame criticism. I agree that arguing with irrational people can be psychologically unhealthy, but don't see any reason to think that's the case in the majority of situations.

Nobody here is advocating intentionally getting embroiled in all imaginable possible arguments; that would indeed be as horrible as trying to fight drunks to learn self-defense. It's assumed that discrimination is still applied when deciding whether or not to enter a conversation. Your analogy is very biased.

Trying to use reasoned discussion tactics against people who've made up their minds already isn't going to get you anywhere, and if you're unlucky, it might actually be interpreted as backtalk, especially if the people you're arguing against have higher social status than you do--like, for instance, your parents.

At times, being more reasonable and more 'mature'-sounding in conversation style even seems to be more offensive. It's treating them like you are their social equal and intellectual superior.

I want the free $10. The $1k is hopeless and were I to turn out to lose that side of the bet then I'd still be overwhelmingly happy that I'm still alive against all expectations.

3Unknowns
Great. Please send me an address (PM would be fine.) If anyone else wants to take that side of the bet, please let me know.
wedrifid-10

I consider that social policy proposal harmful and reject it as applied to myself or others. You may of course continue to refrain from speaking out against this kind of behaviour if you wish.

In the unlikely event that the net positive votes (at that time) given to Azathoth123 reflect the actual attitudes of the lesswrong community, the 'public' should be made aware so they can choose whether to continue to associate with the site. At least one prominent user has recently disaffiliated himself (and deleted his account) over a far less harmful social-political concern. On the other hand, other people who embrace alternate lifestyles may be relieved to see that Azathoth's prejudiced rabble-rousing is unambiguously rejected here.

-2pragmatist
Yes, but wouldn't this be more effective if you first confirmed/disconfirmed your hypothesis about the votes through a mod? In the absence of that information, how is a member of the public to know how to act? My objection was more about the speculative nature of the comment rather than the fact that you're "speaking out". I have nothing against speculation per se, but in cases where it can be fairly easily verified I prefer to see that happen instead.

Ignorant is fastest - it only calculates the answer and doesn't care about anything else.

Just don't accidentally give it a problem that is more complex than you expect. Only caring about solving such a problem means tiling the universe with computronium.

3Unknowns
Eliezer has said he would be willing to make one more bet like this (but not more, since he needs to ensure his ability to pay if he loses). I don't think anyone has taken him up on it. Robin Hanson was going to do it but backed out, so as far as I know the offer is still open.

2) Gays aren't monogamous. One obvious way to see this is to note how much gay culture is based around gay bathhouses. Another way is to image search pictures of gay pride parades.

This user seems to be spreading an agenda of ignorant bigotry against homosexuality and polyamory. It doesn't even temper the hostile stereotyping with much pretense of just referring to trends in the evidence.

Are the upvotes this account is receiving here done by actual lesswrong users (who, frankly, ought to be ashamed of themselves) or has Azathoth123 created sockpuppets to vote itself up?

1Ixiel
Hi. I'm a kneejerk moderate who has found Aza's comments a rare view into a world I do not know. I vote him/her up often, since I am benefited by this knowledge. I do not vote people up because I agree with them or, in this case, vice versa. I believe s/he is an asset to the site. Care to explain exactly why I should be ashamed of myself?
-3pragmatist
I assign a very high probability (>90%) to Azathoth123 being Eugine_Nier. Given the latter's history, I wouldn't be surprised if Azathoth were involved in voting shenanigans. But I think it would be better if you take this to a mod (Viliam_Bur, I believe) for confirmation/action, rather than speculating in public. ETA: Just realized that this comment is doing exactly what it was advising against. Slightly embarrassed that I didn't notice while I was writing it.
satt110

Are the upvotes this account is receiving here done by actual lesswrong users (who, frankly, ought to be ashamed of themselves) or has Azathoth123 created sockpuppets to vote itself up?

I've suspected Azathoth123 of upvoting their own comments with sockpuppets since having this argument with them. (If I remember rightly, their comments' scores would sit between -1 & +1 for a while, then abruptly jump up by 2-3 points at about the same time my comments got downvoted.)

Moreover, Azathoth123 is probably Eugine_Nier's reincarnation. They're similar in qui... (read more)


This is the gist of the AI Box experiment, no?

No. Bribes and rational persuasion are fair game too.

wedrifid-20

To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI

I don't know who you are quoting but they are someone who considers AIs that will torture me to be friendly. They are confused in a way that is dangerous.

The AI acausally blackmails people into building it sooner, not into building it at all.

It applies to both - causing itself to exist at a different place in time or causing itself to exist at all. I've explicitly mentioned elsewhere in this thread that merely refusing blackmail is insufficient when there a... (read more)

0Jiro
"How could it" means "how could it always result in", not "how could it in at least one case". Giving examples of how it could do it in at least one case is trivial (consider the case where refusing to be blackmailed results in humanity being killed off for some unlikely reason, and humanity, being killed off, can't build an AI).

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma if you put them in the same room and let them talk to each other. They aren't going to be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
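To make the dominance point concrete, here is a minimal sketch (my own toy model, using the standard prisoner's dilemma payoffs; nothing here is from the thread):

```python
# Minimal sketch (standard prisoner's dilemma payoffs assumed): a CDT
# agent scores its own actions while holding the other player's action
# causally fixed, so talk can only shift its belief -- and defection
# dominates for every belief.
PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_choice(p_other_cooperates):
    """Expected-value maximisation treating the other's move as a fixed,
    causally independent unknown (the CDT step)."""
    ev = {
        a: p_other_cooperates * PAYOFFS[(a, "C")]
           + (1 - p_other_cooperates) * PAYOFFS[(a, "D")]
        for a in ("C", "D")
    }
    return max(ev, key=ev.get)

# However persuasive the conversation in the room was, it can only move
# this probability -- and the answer is "D" for every value of it.
for belief in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(belief, cdt_choice(belief))  # always "D"; mutual defection results
```

Pre-play talk only moves the probability each CDT agent assigns to the other cooperating, and defection is the better causal bet at every such probability, so both defect whether or not they spoke.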

7TheAncientGeek
And that's evidence of what? That persuasion is possible? That persuasion is persuasive (special .lu in marketing...)? That some extra bad kind of persuasion is happening in classrooms?

By "the basilisk", do you mean the infohazard, or do you mean the subject matter of the inforhazard? For the former, whatever causes you to not worry about it protects you from it.

Not quite true. There are more than two relevant agents in the game. The behaviour of the other humans can hurt you (and potentially make it useful for their creation to hurt you).

wedrifid-20

It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

In this case "be blackmailed" means "contribute to creating the damn AI". That's the entire point. If enough people do contribute to creating it then those that did not contribute get punished. The (hypothetical) AI is acausally creating itself by punishing those that don't contribute to creating it. If nobody does then nobody gets punished.

0Jiro
To quote someone else here: "Well, in the original formulation, Roko's Basilisk is an FAI that decided the good from bringing an FAI into the world a few days earlier (saving ~150,000 lives per day earlier it gets here)". The AI acausally blackmails people into building it sooner, not into building it at all. So failing to give into the blackmail results in the AI still being built but later and it is capable of punishing people.

I'll be sure to ask you the next time I need to write an imaginary comment.

I wasn't the pedant. I was the tangential-pedantry analyzer. Ask Lumifer.

It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

Your comment was fine. It would be true of most people; I'm not sure if Faul is one of the exceptions.

Realistically speaking?

Unfortunately this still suffers from the whole "Time Traveller visits you" part of the claim - our language doesn't handle it well. It's a realistic claim about the counterfactual response of a real brain to an unrealistic stimulus.

0ike
I'll be sure to ask you the next time I need to write an imaginary comment. It's not like anyone didn't know what I meant. What do you think of the actual content? How much do you trust faul_sname's claim that they wouldn't trust their own senses on a time-travel-like improbability?

This seems weird to me.

It seemed weird enough to me that it stuck in my memory more clearly than any of his anti-MIRI comments.

XiXiDu does not strike me as someone who is of average or below-average intelligence--quite the opposite, in fact.

I concur.

Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?

My best guess is an ethical compulsion towards sincere expression of reality as he perceives it. For what it is worth that sincerity did influence my evaluation of his behaviour and personality. XiXiDu... (read more)

I don't think it's literally factually :-D

I think you're right. It's closer to, say... "serious counterfactually speaking".

False humility? Countersignalling? Depression? I don't want to make an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

From the context I ruled out countersignalling, and for what it is worth my impression was that the humility was real, not false. Given that I err on the side of cynicism regarding hypocrisy and had found some of XiXiDu's comments disruptive, I give my positive evaluation of Xi's sincerity some weight.

I agree that the hypothesis of low intelligence is implausible d... (read more)

I gave two TEDx talks in two weeks (also a true statement: I gave two TEDx talks in 35 years), one on cosmic colonisation, one on xrisks and AI.

I'm impressed. (And will look them up when I get a chance.)

2Stuart_Armstrong
They are not out yet; the wheels of TEDx videos move slowly and mysteriously.
wedrifid-20

For what it's worth, I don't think anybody understands acausal trade.

It does get a tad tricky when combined with things like logical uncertainty and potentially multiple universes.

wedrifid-20

Precommitment isn't meaningless here just because we're talking about acausal trade.

Except in special cases which do not apply here, yes it is meaningless. I don't think you understand acausal trade. (Not your fault. The posts containing the requisite information were suppressed.)

What I described above doesn't require the AI to make its precommitment before you commit; rather, it requires the AI to make its precommitment before knowing what your commitment was.

The timing of this kind of decision is irrelevant.

2bogus
For what it's worth, I don't think anybody understands acausal trade. And I don't claim to understand it either.

The key is that the AI precommits to building it whether we refuse or not.

The 'it' bogus is referring to is the torture-AI itself. You cannot precommit to things until you exist, no matter your acausal reasoning powers.

0Jiro
If "built" refers to building the AI itself rather than the AI building a torture simulator, then refusing to be blackmailed doesn't prevent the AI from being built. The building of the AI, and the AI's deduction that it should precommit to torture, are two separate events. It is plausible (though not necessarily true) that refusing to be blackmailed acausally prevents the AI from becoming a torture AI, but it cannot prevent the AI from existing at all. How could it?

It's such a plausible conclusion that it makes sense to draw, even if it turns out to be mistaken. Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.

The best we can say is that it is a sufficiently predictable conclusion. Had the author not underestimated inferential distance he could easily have pre-empted your accusation with an additional word or two.

Nevertheless, it is still a naive (and incorrect) conclusion to draw based on the available evidence. Familiarity with human psychology (in general), inte... (read more)

False humility? Countersignalling? Depression? I don't want to make an internet diagnosis or mind reading, but from my view these options seem more likely than the hypothesis of low intelligence.

(Unless the context was something like "intelligence lower than extremely high"; i.e. something like "I have IQ 130, but compared with people with IQ 160 I feel stupid".)

3dxu
This seems weird to me. While I acknowledge that there are widespread social stigmas associated with broadcasting your own intelligence, it hardly seems productive to actively downplay your intelligence either. XiXiDu does not strike me as someone who is of average or below-average intelligence--quite the opposite, in fact. So it seems odd that he would choose to "repeatedly [claim] that he is not a smart person". Is there some advantage to be gained from saying that kind of thing that I'm just not seeing here?
wedrifid-10

I can't read minds

Yet you spoke as though you could, when many observers do not share your mind-reading conclusions. Hopefully in the future when you choose to do that you will not fail to see why you get downvotes. It's a rather predictable outcome.

5Jiro
It's such a plausible conclusion that it makes sense to draw, even if it turns out to be mistaken. Absent the ability to read minds and absent an explicit statement, we have to go on what is likely.
wedrifid-10

XiXiDu should discount this suggestion because it seems to be motivated reasoning.

The advice is good enough (and generalizable enough) that the correlation to the speaker's motives is more likely to be coincidental than causal.

Addicts tend to be hurt by exposing themselves to their addiction triggers.

When discussing transparent Newcomb, though, it's hard to see how this point maps to the latter two situations in a useful and/or interesting way.

Option 3 is of the most interest to me when discussing the Transparent variant. Many otherwise adamant One Boxers will advocate (what is in effect) 3 when first encountering the question. Since I advocate strategy 2 there is a more interesting theoretical disagreement. ie. From my perspective I get to argue with (literally) less-wrong wrong people, with a correspondingly higher chance that I'm the one who is c... (read more)

3dxu
This is a question that I find confusing due to conflicting intuitions. Fortunately, since I endorse reflective consistency, I can replace that question with the following one, which is equivalent in my decision framework, and which I find significantly less confusing: "What would you want to precommit to doing, if you encountered transparent Newcomb and found the big box (a.k.a. Box B) empty?"

My answer to this question would be dependent upon Omega's rule for rewarding players. If Omega only fills Box B if the player employs the strategy outlined in 2, then I would want to precommit to unconditional one-boxing--and since I would want to precommit to doing so, I would in fact do so. If Omega is willing to reward the player by filling Box B even if the player employs the strategy outlined in 3, then I would see nothing wrong with two-boxing, since I would have wanted to precommit to that strategy in advance. Personally, I find the former scenario--the one where Omega only rewards people who employ strategy 2--to be more in line with the original Newcomb's Problem, for some intuitive reason that I can't quite articulate.

What's interesting, though, is that some people two-box even upon hearing that Omega only rewards the strategy outlined in 2--upon hearing, in other words, that they are in the first scenario described in the above paragraph. I would imagine that their reasoning process goes something like this: "Omega has left Box B empty. Therefore he has predicted that I'm going to two-box. It is extremely unlikely a priori that Omega is wrong in his predictions, and besides, I stand to gain nothing from one-boxing now. Therefore, I should two-box, both because it nets me more money and because Omega predicted that I would do so." I disagree with this line of reasoning, however, because it is very similar to the line of reasoning that leads to self-fulfilling prophecies. As a rule, I don't do things just because somebody said I would do them, even if that so

Breaking the vicious cycle

I endorse this suggestion.

Don't Feed The Trolls!

If I consider my predictions of Omega's predictions, that cuts off more branches, in a way which prevents the choices from even having a ranking.

It sounds like your decision making strategy fails to produce a useful result. That is unfortunate for anyone who happens to attempt to employ it. You might consider changing it to something that works.

"Ha! What if I don't choose One box OR Two boxes! I can choose No Boxes out of indecision instead!" isn't a particularly useful objection.

It's me who has to run on a timer.

No, Nshepperd is right. Omega imposing computation limits on itself solves the problem (such as it is). You can waste as much time as you like. Omega is gone and so doesn't care whether you pick any boxes before the end of time. This is a standard solution for considering cooperation between bounded rational agents with shared source code.

When attempting to achieve mutual cooperation (essentially what Newcomblike problems are all about) making yourself difficult to analyse only helps against terribly naive intelligences. ie. It's a solved problem and essentially useless for all serious decision theory discussion about cooperation problems.
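As a toy illustration of why a computation limit dissolves the stalling objection (my own sketch; the fuel mechanism and the default to "two-box" are assumptions, not anything Omega is specified to do in the thread):

```python
# Toy sketch: the predictor runs the player's decision procedure with a
# fixed "fuel" budget and simply defaults if the player stalls.
# Stalling or being deliberately hard to analyse gains the player nothing.
class OutOfFuel(Exception):
    pass

def run_with_budget(player, fuel):
    """`player` is a generator function that yields once per unit of
    deliberation and finally returns "one-box" or "two-box"."""
    gen = player()
    try:
        for _ in range(fuel):
            next(gen)
    except StopIteration as finished:
        return finished.value
    raise OutOfFuel

def omega_predict(player, fuel=1000):
    try:
        return run_with_budget(player, fuel)
    except OutOfFuel:
        return "two-box"  # unanalysable players get treated as defectors

def quick_one_boxer():
    return "one-box"
    yield  # unreachable; only here to make this a generator function

def staller():
    while True:
        yield  # deliberates forever, never commits to a choice

print(omega_predict(quick_one_boxer))  # "one-box"  -> big box gets filled
print(omega_predict(staller))          # "two-box"  -> big box left empty
```

The predictor spends at most `fuel` steps per player and then moves on, so the player can "waste as much time as they like" without extracting anything from Omega.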

As I argued in this comment, however, the scenario as it currently is is not well-specified; we need some idea of what sort of rule Omega is using to fill the boxes based on his prediction.

Previous discussions of Transparent Newcomb's problem have been well specified. I seem to recall doing so in footnotes so as to avoid distraction.

I have not yet come up with a rule that would allow Omega to be consistent in such a scenario, though, and I'm not sure if consistency in this situation would even be possible for Omega. Any comments?

The problem (such a... (read more)

3dxu
I'd say that about hits the nail on the head. The permutations certainly are exhaustively specifiable. The problem is that I'm not sure how to specify some of the branches. Here are all four possibilities (written in pseudo-code following your example):

1. IF (Two box when empty And Two box when full) THEN X
2. IF (One box when empty And One box when full) THEN X
3. IF (Two box when empty And One box when full) THEN X
4. IF (One box when empty And Two box when full) THEN X

The rewards for 1 and 2 seem obvious; I'm having trouble, however, imagining what the rewards for 3 and 4 should be. The original Newcomb's Problem had a simple point to demonstrate, namely that logical connections should be respected along with causal connections. This point was made simple by the fact that there are two choices, but only one situation. When discussing transparent Newcomb, though, it's hard to see how this point maps to the latter two situations in a useful and/or interesting way.
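For what it's worth, here is how the four branches pay out under one possible filling rule - "Omega fills the big box iff it predicts the player will one-box upon seeing it full" - with the usual $1,000/$1,000,000 amounts assumed. This is a sketch under those assumptions, not the thread's own specification:

```python
# Sketch only: payoffs of the four conditional policies under one *assumed*
# rule -- Omega fills the big box iff it predicts the player will one-box
# upon seeing it full. A different rule gives a different table.
SMALL, BIG = 1_000, 1_000_000

policies = {
    1: {"empty": "two", "full": "two"},
    2: {"empty": "one", "full": "one"},
    3: {"empty": "two", "full": "one"},
    4: {"empty": "one", "full": "two"},
}

for n, policy in policies.items():
    box_full = policy["full"] == "one"   # the assumed filling rule
    state = "full" if box_full else "empty"
    action = policy[state]               # what this policy actually does
    if action == "one":
        payoff = BIG if box_full else 0
    else:
        payoff = SMALL + (BIG if box_full else 0)
    print(f"policy {n}: box is {state}, player {action}-boxes, payoff ${payoff:,}")
```

Under that rule, branch 3 collects the million and branch 4 walks away with nothing, which is one concrete way of assigning rewards to the two troublesome branches.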

I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept.

It may be the least convenient possible world. More specifically, it is the minor inconvenience of being careful to specify the problem correctly so as not to be distracted. Nshepperd gives some of the reasoning typically used in such cases.

Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have t

... (read more)

No, because that's fighting the hypothetical. Assume that he doesn't do that.

It is actually approximately the opposite of fighting the hypothetical. It is managing the people who are trying to fight the hypothetical. Precise wording of the details of the specification can be used to preempt such replies, but for casual definitions that assume good faith, sometimes explicit clauses for the distracting edge cases need to be added.

1Jiro
It is fighting the hypothetical because you are not the only one providing hypotheticals. I am too; I'm providing a hypothetical where the player's strategy makes this the least convenient possible world for people who claim that having such an Omega is a self-consistent concept. Saying "no, you can't use that strategy" is fighting the hypothetical. Moreover, the strategy "pick the opposite of what I predict Omega does" is a member of a class of strategies that have the same problem; it's just an example of such a strategy that is particularly clear-cut, and the fact that it is clear-cut and blatantly demonstrates the problem with the scenario is the very aspect that leads you to call it trolling Omega. "You can't troll Omega" becomes equivalent to "you can't pick a strategy that makes the flaw in the scenario too obvious".

While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of "rationality".

To be fair, while it is possible to have a coherent preference for death, far more often people have a cached heuristic to refrain from exactly the kind of (bloody obvious) reasoning that Boy 2 is explaining. Coherent preferences are a 'rationality' issue.

Since nothing in the quote prescribes the preference and instead merely illustrates reasoning that happens to follow from having preferences ... (read more)

when much of mainstream philosophy consists of what (I assume) you're calling "bad amateur philosophy".

No, much of it is bad professional philosophy. It's like bad amateur philosophy except that students are forced to pretend it matters.

Curiously enough, I made no claims about ideal CDT agents.

True. CDT is merely a steel-man of your position that you actively endorsed in order to claim prestigious affiliation.

The comparison is actually rather more generous than the one I would have made myself. CDT has no arbitrary discontinuity between p=1 and p=(1-e), for example.

That said, the grandparent's point applies just as well regardless of whether we consider CDT, EDT, the corrupted Lumifer variant of CDT or most other naive but not fundamentally insane decision algorithms. In the general cas... (read more)

Precommitment is loss of flexibility and while there are situations when you get benefits compensating for that loss, in the general case there is no reason to pre-commit.

Curiously, this particular claim is true only because Lumifer's primary claim is false. An ideal CDT agent released at time T with the capability to self-modify (or otherwise precommit) will, as rapidly as possible (at T + e), make a general precommitment to the entire class of things that can be regretted in advance only for the purpose of influencing decisions made after (T + e) (but... (read more)

-4Lumifer
Curiously enough, I made no claims about ideal CDT agents.

If Omega is just a skilled predictor, there is no certain outcome so you two-box.

Unless you like money and can multiply, in which case you one box and end up (almost but not quite certainly) richer.
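Doing the multiplication explicitly (a sketch; the $1,000/$1,000,000 amounts are the usual ones and the 99% accuracy figure is an illustrative assumption, not part of the problem statement):

```python
# Expected value of each choice against a merely "skilled" (imperfect)
# predictor. Amounts are the usual Newcomb values; the 99% accuracy
# figure is an illustrative assumption.
SMALL, BIG = 1_000, 1_000_000
accuracy = 0.99  # probability the predictor calls your decision correctly

ev_one_box = accuracy * BIG                # big box is full iff you were predicted to one-box
ev_two_box = SMALL + (1 - accuracy) * BIG  # you get the big box only on a misprediction

print(ev_one_box, ev_two_box)  # 990000.0 11000.0
# One-boxing has the higher expectation for any accuracy above ~50.05%.
```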


I retract my previous statement based on new evidence acquired.

-1RobinZ
I continue to endorse being selective in whom one spends time arguing with.

I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases which 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."

Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?

-1RobinZ
Is the long form also unclear? If so, could you elaborate on why it doesn't make sense?
-1RobinZ
I didn't propose that you should engage in detailed arguments with anyone - not even me. I proposed that you should accompany some downvotes with an explanation akin to the three-sentence example I gave. Another example of a sufficiently-elaborate downvote explanation: "I downvoted your reply because it mischaracterized my position more egregiously than any responsible person should." One sentence, long enough, no further argument required.

If there's something about "ability to learn" outside of this, I'd be interested to hear about it.

Skills, techniques and habits are also rather important.

0dxu
I agree that these things are also important, but I'm not sure they should be classified as "basic" traits the way memory, processing speed, and intelligence are. Then again, I could be mistaken.