All of antigonus's Comments + Replies

I agree with vallinder's point, and would also like to add that arguments for moral realism which aren't theistic or contractarian in nature typically appeal to moral intuitions. Thus, instead of providing positive arguments for realism, they at best merely show that arguments for the unreliability of realists' intuitions are unsound. (For example, IIRC, Russ Shafer-Landau in this book tries to use a parity argument between moral and logical intuitions, so that arguments against the former would have to also apply to the latter.) But clearly this is an es... (read more)

-3Eugine_Nier
Where would you put Kant's categorical imperative in this scheme?

If I scratch my nose, that action has no truth value. No color either.

The proposition "I scratched my nose" does have a truth value.

Bayesian epistemology maintains that probability is degree of belief. Assertions of probabilities are therefore assertions of degrees of belief, which are psychological claims and therefore obviously have or can have truth-value. Of course, Bayesians can be more nuanced and take some probability claims to be about degrees of belief in the minds of some idealized reasoner; but "the degree of belief of an ideal... (read more)

Nope, I wasn't familiar. Very interesting, thanks!

Probability assignments don't have truth value,

Sure they do. If you're a Bayesian, an agent truly asserts that the (or, better, his) probability of a claim is X iff his degree of belief in the claim is X, however you want to cash out "degree of belief". Of course, there are other questions about the "normatively correct" degrees of belief that anyone in the agent's position should possess, and maybe those lack determinate truth-value.

0buybuydandavis
If I scratch my nose, that action has no truth value. No color either. The proposition "I scratched my nose" does have a truth value. See the distinction. Don't hand-wave it away with "it's all the same", "that's just semantics", etc. You started by saying that this was more of a question. I've tried to clarify the answer for you.

I don't see the relation between the two. It seems like you're pointing out that Jaynes/people here don't believe there are "objectively correct" probability distributions that rationality compels us to adopt. But this is compatible with there being true probability claims, given one's own probability distribution - which is all that's required.

0buybuydandavis
There may be an objectively correct way to throw globs of paint at the wall if I wish to do it in a way that is consistent with certain desired properties given my state of knowledge. That would not make that correct way of throwing globs of paint "true". A la Jaynes, there is a correct way to assign degrees of belief based on your state of knowledge if you want your degrees of belief to be consistent with certain constraints, but that doesn't make any particular probability assignment "true". Probability assignments don't have truth value, they assign degrees of belief to propositions that do have truth value. It is a category error, under Jaynes perspective, to assert that a probability assignment is "true", or purple, or hairy, or smelly.

That statement is too imprecise to capture Jaynes's view of probability.

Of course; it wasn't intended to capture the difference between so-called objective Bayesianism vs. subjective Bayesianism. The tension, if it arises at all, arises from any sort of Bayesianism. That the rules prescribed by Jaynes don't pick out the "true" probability distributions on a certain question is compatible with probability claims like "It will probably rain tomorrow" having a truth-value.

0buybuydandavis
I was pointing out that your original statement characterizing "most people here" as asserting that "probability claims are true ..." is antithetical to Jaynes's approach, which I take as the canonical, if not universal, view on this list.

I don't understand where the tension is supposed to come in.

It just seems really weird to be able to correctly say that A caused B when, in fact, A had nothing to do with B. If that doesn't seem weird to you, then O.K.

The idea that causation is in the mind, not in the world is part of the Humean tradition

I think that's unclear; I side with those who think Hume was arguing for causal skepticism rather than some sort of subjectivism.

0Jayson_Virissimo
This point is completely independent of whether causation is "in the mind" or not. Also, correlated things do have something to do with each other (by definition!). What is at issue is whether this something is "out in the world" or "in your head". Right, there is probably no consensus on the interpretation of Hume. In any case, Hume would predict with near certainty that a billiard ball that was struck by a second billiard ball would make a sound and roll away in a regular manner, just the same as you would. But since he doesn't need this "causal necessity" thing "out in the world" somewhere in order to coherently make the same prediction, your web-of-belief real estate seems to have lower rent than Hume's.
0khafra
"Causation is in the mind" does not imply "correlation is in the mind," does it? I mean, assuming a deterministic interpretation of QM, causal determinism is pretty much a correct philosophical position. That means causality, in the Pearl sense, really is only in the mind. In the world, there are only interactions which happen according to mathematically regular rules. You might as well talk about causality along the X-axis instead of the time axis: "the state of the universe at any point along the X axis can be known, with unlimited computing power and complete knowledge of any other Y,Z,T hyperplane." If we were epistemically limited to a one-way view along the universe's X-axis, and could see in both directions along the time axis, this would make sense.

No considerations are given for the strength of the advantage

I wish this were stressed more often. It's really easy to think up selective pressures on any trait and really hard to pin down their magnitude. This means that most armchair EP explanations have very low prior probabilities by default, even if they seem intuitively reasonable.

The word "cult" never makes discussions like these easier. When people call LW cultish, they are mostly just expressing that they're creeped out by various aspects of the community - some perceived groupthink, say. Rather than trying to decide whether LW satisfies some normative definition of the word "cult," it may be more productive to simply inquire as to why these people are getting creeped out. (As other commenters have already been doing.)

0[anonymous]
This exactly. It's safe to assume that when most people say some organization strikes them as being cultish, they're not necessarily keeping a checklist.

I suppose I'd like to hear Solvent ask him about those.

Do you feel this is a full rebuttal to McDermott's paper? I agree that his generalized argument against "extendible methods" is a straw man; however, he has other points about Chalmers' failure to argue for existing extendible methods being "extendible enough."

4lukeprog
No, our paragraph does not rebut everything we disagree with in McDermott's paper. Chalmers' reply in the forthcoming "The Singularity: a reply" is adequate.

Questions on anything, or just topics that relate to the class? If the former, I'd like to hear his response to Drew McDermott's critique of his Singularity article in JCS, even though I think he's going to publish a response to it and others in the next issue.

lukeprog
190

The response Anna and I give in our forthcoming chapter "Intelligence Explosion: Evidence and Import" is the following:

Chalmers (2010) suggested that AI will lead to intelligence explosion if an AI is produced by an "extendible method," where an extendible method is "a method that can easily be improved, yielding more intelligent systems." McDermott (2012a, 2012b) replies that if P≠NP (see Goldreich 2010 for an explanation) then there is no extendible method. But McDermott's notion of an extendible method is not the one esse

... (read more)

I just noticed from that document that you listed Alexander Funcke as owner of "Zelta Deta." Googling his name, I think you meant "Zeta Delta?"

Yeah, you're correct. Wasn't thinking very hard.

I tell you that as long as I can conceive something better than myself I cannot be easy unless I am striving to bring it into existence or clearing the way for it.

-- G.B. Shaw, "Man and Superman"

Shaw evinces a really weird, teleological view of evolution in that play, but in doing so expresses some remarkable and remarkably early (1903) transhumanist sentiments.

I love that quote, but if it carries a rationality lesson, I fail to see it. Seems more like an appeal to the tastes of the audience here.

You may want to check out John Earman's Bayes or Bust?

For example, Aristotle proved lots of stuff based on the infallibility of sensation

I don't know much about Aristotle, but this claim sounds to me like a distortion of something Aristotle might have said.

-2Manfred
Hm, well, after a little looking into it I think my criticism wasn't the best characterization ever, but not entirely unfounded. The bad stuff is like this: http://classics.mit.edu/Aristotle/soul.html The reason my criticism was bad is because it was unspecific - a mostly undeserved general impugning rather than noting a specific problem. Which is mostly because I don't know enough to point at a specific problem.

No, never seen that before.

antigonus
250

Something has gone horribly wrong here.

4David_Gerard
The Crackpot Offer.
4JenniferRM
Is the apparent reference to David Stove's "What is Wrong with Our Thoughts?" intentional?
1windmil
Y'know, we came up with this idea for this institution and all the cool things we could do. We got so wrapped up in it that the name was kind of an afterthought.
antigonus
100

When I told people about the plan in #1, though, it was because I wanted them to listen to me. I was back off the brink for some reason, and I wanted to talk about where I'd been. Somebody who tells you they're suicidal isn't asking you to talk him out of it; he's asking you to listen.

Just wanted to say that I relate very strongly to this. When I was heavily mentally ill and suicidal, I was afraid of reaching out to other people precisely because that might mean I only wanted emotional support rather than being serious about killing myself. People who r... (read more)

5see
Not at all. A concise and relevant comment.

Of course it depends on the specific papers and the nature of the publications. "Publish more papers" seems like shorthand for "Demonstrate that you are capable of rigorously defending your novel/controversial ideas well enough that very many experts outside of the transhumanism movement will take them seriously." It seems to me that doing this would change a lot of people's behavior.

I don't imagine it would have nearly as much of an effect on people who aren't familiar with anime. But I would read that study in a heartbeat if it existed.

One is the asymmetry, which is the better one, but it has weird assumptions about personhood - reasonable views either seem to suggest immediate suicide (if there is no continuity of self and future person-moments are thus brought into existence, you are harming future-you by living)

I'm not sure I remember his arguments relying on those assumptions in his asymmetry argument. Maybe he needs them to justify not committing suicide, but I thought the badness of suicide wasn't central to his thesis.

I'm reading Benatar's Better Never To Have Been and I noticed that the actual arguments for categorical antinatalism aren't as strong as I thought and seem to hinge on either a pessimistic view of technological progress (which might well be justified)

I don't think this is true. Benatar's position is that any being that ever suffers is harmed by being created. This is not something that technological progress is very likely to relieve. Or are you thinking of some sort of wireheading?

or confusions about identity and personhood.

That sounds like an interesting criticism.

1[anonymous]
He has two main arguments. One is the asymmetry, which is the better one, but it has weird assumptions about personhood - reasonable views either seem to suggest immediate suicide (if there is no continuity of self and future person-moments are thus brought into existence, you are harming future-you by living) or need to rely on consent, but I see no reason why consent can't be given without instantiating a person. (But I'm still confused about consent.) The other argument is based on the low expected value of any life. Specifically, he argues that life is much worse than commonly thought (plausible) and addresses why common approaches can't justify the harm anyway. This relies on the assumption that the status quo will more-or-less continue. Justifiable, but unless he provides an argument to the contrary, transhumanists can still argue that you only need to engineer a world in which humans don't suffer (or even can't - the wireheading solution). If we all lived in a Post-Singularity Utopia, I'm sure his justifications for his specific comparison of harms and benefits would look much stranger to us.

I suppose one could draw from this a similar response to any Dutch book argument. Sure, if my "degree of belief" in a possible statement A is 2, I can be Dutch booked. But now that I'm licensed to disbelieve entailments (so long as I take myself to be ignorant that they're entailments), perhaps I justifiably believe that I can't be Dutch booked. So what rational constraints are there on any of my beliefs? Whatever argument you give me for a constraint C from premises P1, ..., Pn, I can always potentially justifiably believe the conditional "If the premises P1, ..., Pn are true, then C is correct" has low probability - even if the argument is purely deductive.
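To make the Dutch book concrete (a minimal illustration, assuming the standard betting interpretation of credences; the dollar stakes are invented for the example):

```latex
% Credence 2 in A means treating $2 as the fair price for a ticket
% that pays $1 if A is true and $0 otherwise. Buying at that price:
\text{net payoff} =
\begin{cases}
\$1 - \$2 = -\$1 & \text{if } A \text{ is true}\\
\$0 - \$2 = -\$2 & \text{if } A \text{ is false}
\end{cases}
% A guaranteed loss either way: the Dutch book.
```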

0prase
You are right. I think this is the tradeoff: either we demand logical omniscience, or we have to allow disbelief in entailment. Still, I don't see a big problem here because I think of Bayesian epistemology as a tool which I voluntarily adopt to improve my cognition - I have no reason to deliberately reject (assign a low probability to) a deductive argument when I know it, since I would harm myself that way (at least I believe so, because I trust deductive arguments in general). I am "licensed to disbelieve entailments" only in order to keep the system well defined; in practice I don't disbelieve them once I know their status. The "take myself to be ignorant that they're entailments" part is irrational. I must admit that I haven't a clear idea how to formalise this. I know what I do in practice: when I don't know that two facts are logically related, I treat them as independent and it works as an approximation. Perhaps the trust in logic should be incorporated in the prior somehow. Certainly I have to think about it more.

Logical omniscience comes from probability "statics," not conditionalization. When A is any propositional tautology, P(A) (note the lack of conditional) can be algebraically manipulated via the three Kolmogorov axioms to yield 1. Rejecting one of the axioms to avoid this result leaves you vulnerable to Dutch books. (Perhaps this is not so surprising, since reasoning about Dutch books assumes classical logic. I have no idea how one would handle Dutch book arguments if we relax this assumption.)
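For reference, a sketch of that algebra (using only normalization, non-negativity, and finite additivity):

```latex
% If A is a propositional tautology, \neg A denotes the empty event:
\emptyset = \emptyset \cup \emptyset \text{ (disjoint)}
  \;\Rightarrow\; P(\emptyset) = P(\emptyset) + P(\emptyset)
  \;\Rightarrow\; P(\emptyset) = 0,
% and hence, by additivity and normalization,
P(A) = P(A) + P(\neg A) = P(A \lor \neg A) = P(\Omega) = 1.
```

Note where the omniscience sneaks in: treating the sentence A as the sure event Ω already requires recognizing that A is a tautology.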

5prase
Of course, if I am inconsistent, I can be Dutch booked. If I believe that P(tautology) = 0.8 because I haven't realised it is a tautology, somebody who knows that will offer me a bet and I will lose. But, well, lack of knowledge leads to sub-optimal decisions - I don't see it as a fatal flaw.

Could you explain in more detail why Bayesian epistemology can't be built without such an assumption?

Well, could you explain how to build it that way? Bayesian epistemology begins by interpreting (correct) degrees of beliefs as probabilities satisfying the Kolmogorov axioms, which implies logical omniscience. If we don't assume our degrees of belief ought to satisfy the Kolmogorov axioms (or assume they satisfy some other axioms which entail Kolmogorov's), then we are no longer doing Bayesian epistemology.

0prase
Is there more to it than that it is the definition of Bayesian epistemology? Logical omniscience with respect to propositional logic is necessary if we require that p(A|B) = 1 if A is deducible from B. Relaxing this requirement leaves us with a still-working system. Of course, the reasoner should update his p(A|B) somewhere close to 1 after seeing the proof that B⇒A, but he needn't have this belief a priori.

One of the reasons given against peer review is that it takes a long time for articles to be published after acceptance. Is it not possible to make them available on your own website before they appear in the journal? (I really have barely any idea how these things work; but I know that in some fields you can do this.)

antigonus
100

You mentioned recently that SIAI is pushing toward publishing an "Open Problems in FAI" document. How much impact do you expect this document to have? Do you intend to keep track? If so, and if it's less impactful than expected, what lesson(s) might you draw from this?

I'm interested in what you have to say, and I'm sympathetic (I think), but I was hoping you could restate this in somewhat clearer terms. Several of your sentences are rather difficult to parse, like "And to be committed to false statements as being not-false would be absurd, such that it would alone be proper to aver that one has been defeated in having previously been committed to the truth of T despite that that committment was fundamentally invalid."

1argumzio
Read my latest comments. If you need further clarity, ask me specific questions and I will attempt to accommodate them. But to give some additional note on the quote you provide, look to reductio ad absurdum as a case where it would be incorrect to aver to the truth of what is really contradictory in nature. If it still isn't clear, ask yourself this: "does it make sense to say something is true when it is actually false?" Anyone who answers this in the affirmative is either being silly or needs to have their head checked (for some fascinating stuff, indeed).

Sorry, I'm not sure I understand what you mean. Could you elaborate?

0shokwave
It's just that logical omniscience is required to quickly identify the (pre-determined) truth value of incredibly complicated mathematical equations; if you want to exploit my not knowing the thousandth Mersenne prime, you have to know the thousandth Mersenne prime to do so, and humans generally don't encounter beings that have significantly more logical knowledge.
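To illustrate the asymmetry (a hedged sketch; the Lucas-Lehmer test below is the standard primality test for Mersenne numbers, though the comment's "thousandth Mersenne prime" is hypothetical, as far fewer are currently known):

```python
def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test for Mersenne numbers: for p > 2, M_p = 2**p - 1
    is prime iff s_{p-2} == 0, where s_0 = 4 and s_{k+1} = s_k**2 - 2 (mod M_p)."""
    if p == 2:
        return True  # M_2 = 3 is prime
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# To exploit someone's ignorance of whether 2**p - 1 is prime, the
# exploiter must run (and pay for) this computation themselves.
print([p for p in range(2, 30) if is_mersenne_prime(p)])  # [2, 3, 5, 7, 13, 17, 19]
```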

I think a lot of the replies here suggesting that Bayesian epistemology easily dissolves the puzzles are mistaken. In particular, the Bayesian-equivalent of (1) is the problem of logical omniscience. Traditional Bayesian epistemology assumes that reasoners are logically omniscient at least with respect to propositional logic. But (1), suitably understood, provides a plausible scenario where logical omniscience fails.

I do agree that the correct understanding of the puzzles is going to come from formal epistemology, but at present there are no agreed-upon solutions that handle all instances of the puzzles.

0Manfred
This can be treated for cases like problem (1) by saying that since the probabilities are computed with the brain, if the brain makes a mistake in the ordinary proof, the equivalent proof using probabilities will also contain the mistake. Dealing with limited (as opposed to imperfect) computational resources would be more interesting - I wonder what happens when you relax the consistency requirement to proofs smaller than some size N?
0prase
Could you explain in more detail why Bayesian epistemology can't be built without such an assumption? All arguments I have seen went along the lines "unless you are logically omniscient, you may end up having inconsistent probabilities". That may be aesthetically unpleasant when we think about ideal Bayesian agents, but doesn't seem to be a grave concern for Bayesianism as a prescriptive norm of human reasoning.
1shokwave
The formulations of "logical omniscience is a problem for Bayesian reasoners" that I have seen are not sufficiently worrying; actually creating a Dutch book would require the formulating party to have the logical omniscience the Bayesian lacks, which is not a situation we encounter very much.

Scroll to 4:40. I like his one argument: if we have finite neurons and thus cannot construct an infinite set in our "map", what makes you think that you can make it correspond to a (hypothetical) infinity in the territory?

I don't really see what this argument comes to. The map-territory metaphor is a metaphor; neural structures do not have to literally resemble the structures they have beliefs about. In fact, if they did, then the objection would work for any finite structure that had more members than there are synapses (or whatever) in the brain.

In that case, I'd say that your response involves special pleading. SI priors are uncomputable. If the fine structure constant is uncomputable, then any uncomputable prior that assigns probability 1 to the constant having its actual value will beat SI in the long run. What is illicit about the latter sort of uncomputable prior that doesn't apply to SI priors? Or am I simply confused somehow? (I'm certainly no expert on this subject.)

5cousin_it
SI belongs to a class of priors that could be described as "almost computable" in a certain technical sense. The term is lower-semicomputable semimeasure. An interesting thing about SI is that it's also optimal (up to a constant) within its own class, not just better than all puny computable priors. The uncomputable prior you mention does not belong to that class, in some sense it's "more uncomputable" than SI.

You will find that even if you're endowed with the privileged knowledge that the fine structure constant is a halting oracle, that knowledge provably can't help you win a prediction game against SI

We can frequently compute the first several terms in a non-computable sequence, so this statement seems false.

2cousin_it
When people talk about the impossibility of "winning" against SI, they usually mean it's impossible to win by more than a constant in the long run.
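For reference, the standard form of that guarantee (where M is the Solomonoff prior, μ is any lower-semicomputable semimeasure, and K(μ) is, roughly, the length of the shortest program computing μ):

```latex
% Dominance: M multiplicatively dominates every \mu in its class,
M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x) \qquad \text{for all finite strings } x,
% so M's cumulative log-loss exceeds \mu's by at most a constant:
-\log_2 M(x_{1:n}) \;\le\; -\log_2 \mu(x_{1:n}) + K(\mu).
```

The constant K(μ) is independent of the data, which is what "win by at most a constant in the long run" cashes out to.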

I'm having trouble seeing your point in the context of the rest of the discussion. Tyrrell claimed that the pre-theoretic notion of an infinite set - more charitably, perhaps, the notion of an infinite cardinality - is captured by Dedekind's formal definition. Here, "capture" presumably means something like "behaves sufficiently similarly so as to preserve the most basic intuitive properties of." Your response appears to be that there is a good metaphorical analysis of infinitude that accounts for this pre-theoretic usage as well as som... (read more)
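For reference, Dedekind's formal definition, which is presumably the one at issue:

```latex
\text{A set } S \text{ is Dedekind-infinite iff there is an injection }
f : S \to S \text{ with } f[S] \subsetneq S,
\quad \text{e.g. } f(n) = n + 1 \text{ on } \mathbb{N}.
```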

If they really honed their skills in crushing their opponents' arguments, and could transmit this skill to others successfully, then we wouldn't have so many open questions in philosophy

What is your basis for concluding this? "Philosophers are really good at demolishing unsound arguments" is compatible with "Philosophers are really bad at coming to agreement." The primary difference between philosophy and biology that explains the ideological diversity of the former and the consensus of the latter is not that philosophers are worse cri... (read more)

0Ronny Fernandez
I am going to look for problems that Analytics say have not been solved, let LW work on them, and then ask Analytics if they think LWers solved them. I'll be looking for problems that have not been settled in modern philosophy with two-thirds agreement, and seeing if we can reach two-thirds agreement here. I'll compare all of our solutions to analytic solutions of varying kinds. I'll try to randomize the Analytics I use as much as possible. I predict that LWers will not be stumped by many of the problems that are considered hard in analytic philosophy, and that they will be able to reach two-thirds consensus a few orders of magnitude faster than analytic philosophers. Also I predict that eventually Analytics will end up agreeing with us, if they ever do reach two-thirds consensus; it just takes them longer.

I haven't read their book, but an analysis of the pre-theoretic concept of the infinitude of a set needn't be taken as an analysis of the pre-theoretic concept of infinitude in general. "Unmarried man" doesn't define "bachelor" in "bachelor of the arts," but that doesn't mean it doesn't define it in ordinary contexts.

0bogus
Except that Lakoff and Núñez's pre-theoretic analysis does account for transfinite sets. There is a single pre-theoretic concept of infinity which accounts for a variety of formal definitions. This is unlike the word "bachelor" which is an ordinary word with multiple meanings.

But let us not forget that comparing molecular biology and philosophy is like comparing self-help and physics.

I'm comparing the review processes of molecular biology and philosophy. In both cases, experts with a deep grasp of most/all the relevant pitfalls provide extensive, specific, technical feedback regarding likely sources of error, failure to address existing objections and important points of clarification. That this is superior to a glorified Facebook "Like" button used by individuals with often highly limited familiarity with the su... (read more)

2Ronny Fernandez
If they really honed their skills in crushing their opponents' arguments, and could transmit this skill to others successfully, then we wouldn't have so many open questions in philosophy, and we would notice the sort of exponential growth of the power of our methods, like we see in molecular bio. I think philosophers are critical, but they still argue about things which they do not know how to settle far too often; at least when biologists or physicists argue, they can work on settling it right away nine times out of ten, instead of first spending time figuring out what procedure we could use to decide. This can make it as if philosophers aren't critical at all; if I don't know how to figure out which one of us is right, then if you critique me I won't have any reason to change my position, since I don't know if what you just said is independent of my position. What's worse is that sometimes we argue still without even trying to figure out a procedure that would decide amongst solutions. These problems are not as rampant in philosophy as they are in self-help, but those are the issues I was trying to get at. Well there is more; do not forget that most LWers are heavily sequenced, and that is nothing to disregard. It is part of my hypothesis which predicts that LW will do better than analytics: that being trained in the history of philosophy, and learning philosophical concepts through their history, inevitably makes them confusing. And that is the common practice in academic philosophy. Might you say that someone might have a better understanding of quantum physics after reading the sequence than after reading and completing a textbook on quantum physics for a university class? They are at least not too far off. And I have many friends who are qualified who have told me that the quantum physics sequence helped them understand quantum physics more than any class they have taken. But either way, these posts should help us decide how far off my optimism is, and how f
antigonus
100

I guess I can't really imagine how you came to that conclusion. You seem to be going preposterously overboard with your enthusiasm for LW here. Don't mean to offend, but that's the only way I know how to express the extent of my incredulity. Can you imagine a message board of dabblers in molecular biology congratulating each other over the advantages their board's upvoting system has over peer review?

2Ronny Fernandez
I know it sounds crazy; that is why I wanna test it. My probability that what I am saying is true is probably too high, I agree, and I suck for not being able to correct it right now. But if it is or isn't, I should have some better idea after these posts. If I didn't have the stark contrast between the friendly arguments I have with my classmates and professors, and the arguments I have here on LW, I would react to someone else saying what I am saying roughly as you are reacting. But let us not forget that comparing molecular biology and philosophy is like comparing self-help and physics. We should not be as surprised if a bunch of clever enthusiasts make better self-help than professionals as if a bunch of clever enthusiasts made better physics than physicists. This is because physicists are better at physics than self-help writers are at self-help; the same is true of biologists and philosophers respectively.

And because of our practices of constant focused argument, and karma selection, to select amongst positions, instead of the usual trend-method of philosophy.

I don't understand this. Are you saying that a casual voting system by a group of amateurs on a website consisting of informal blog posts is superior to rigorous peer-review by experts of literature-aware arguments?

1Ronny Fernandez
Yes, that is exactly what I am saying, if by "amateur" we mean non-professional. On the other hand, if by "amateur" we mean only slightly more competent than average, I would disagree that LWers are amateur.

I agree there's good reason to imagine that, had further selective pressure on increased intelligence been applied in our evolutionary history, we probably would've ended up more intelligent on average. What's substantially less clear is whether we would've ended up much outside the present observed range of intelligence variation had this happened. If current human brain architecture happens to be very close to a local maximum of intelligence, then raising the average IQ by 50 points still may not get us to any IQ 200 individuals. So while there likely is... (read more)

I didn't vote down your post (or even see it until just now), but it came across as a bit disdainful while being written rather confusingly. The former is going to poorly dispose people toward your message, and the latter is going to poorly dispose people toward taking the trouble to respond to it. If you try rephrasing in a clearer way, you might see more discussion.

1Randolf
Then maybe, instead of just downvoting, these persons should have asked him to clarify and rephrase his post. This would have actually led to an interesting discussion, while downvoting gave nobody anything. Maybe it should be possible to downvote a post only if you also reply to that post.

I had the same reaction. The post reads like singularity apologetics.

I think he's a sincere teenager who's very new to this sort of thing. They sound, behave and type like that.

Another thing: We need to distinguish between getting better at designing intelligences vs. getting better at designing intelligences which are in turn better than one's own. The claim that "the smarter you are, the better you are at designing intelligences" can be interpreted as stating that the function f(x, y) outlined above is decreasing for any fixed y. But the claim that the smarter you are, the easier it is to create an intelligence even smarter is totally different and equivalent to the aforementioned thesis about the shape of f(x, x+1).

I... (read more)

0torekp
Here's a line of reasoning that seems to suggest the possibility of an interesting region of decreasing f(x, x+1). It focuses on human evolution and evolutionary algorithms. Human intelligence appeared relatively recently through an evolutionary process. There doesn't seem to be much reason to believe that, if the evolutionary process were allowed to continue (instead of being largely pre-empted by memetic and technological evolution), future hominids wouldn't be considerably smarter. Suppose that evolutionary algorithms can be used to design a human-equivalent intelligence with minimal supervision/intervention by truly intelligent-design methods. In that case, we would expect with some substantial probability that carrying the evolution forward would lead to more intelligence. Since the evolutionary experiment is largely driven by brute-force computation, any increase in computing power underlying the evolutionary "playing field" would increase the rate of increase of intelligence of the evolving population. I'm not an expert on or even a practitioner of evolutionary design, so please criticize and correct this line of reasoning.
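A minimal sketch of the loop torekp describes (the structure and the toy fitness function are illustrative assumptions, not anything from the comment; the point is only that progress is driven by fitness evaluations, so more computing power means more candidates searched per generation):

```python
import random

def evolve(fitness, genome_len=32, pop_size=100, generations=1000):
    """Generic evolutionary loop: the brute-force cost is roughly
    pop_size * generations fitness evaluations."""
    pop = [[random.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # selection
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            i = random.randrange(genome_len)
            child[i] += random.gauss(0, 0.1)         # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy stand-in for "intelligence": maximize the sum of the genome.
best = evolve(fitness=sum)
print(round(sum(best), 2))
```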

For what it's worth, I've posted a fair number of things in my short time here that go against what I assume to be consensus, and I've mostly only been upvoted for them. (This includes posts that come close to making the cult comparison.)
