All of noen's Comments + Replies

noen00

"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."

Do you mean that people don't care if they are philosophical zombies or not? I think they care very much. I also think that you're eliding the point a bit by using "deep" as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not sp... (read more)

1JoshuaZ
If you look above, you'll note that the statement you've quoted was in response to your claim that "what people want is a living conscious artificial mind", and my sentence after the one you are quoting is also about AI. So if it helps, replace "it" with "functional general AI" and reread the above. (Although frankly, I'm confused by how you interpreted the question given that the rest of your paragraph deals with AI.) But I think it is actually worth touching on your question: Do people care if they are philosophical zombies? I suspect that by and large the answer is "no". While many people care about whether they have free will in any meaningful sense, the question of qualia simply isn't something that's widely discussed at all. Moreover, whether a given individual thinks that they have qualia in any useful sense almost certainly doesn't impact how they think they should be treated. If a problem is large, exploring false leads is going to be inevitable. This is true even for small problems. Moreover, I'm not sure what you mean by "strong AI proponents" in this context. Very few people actively work on research directly aimed at building strong AI, and the research that does go in that direction often turns out to be useful in weaker cases like machine learning. That's how, for example, we now have practical systems with neural nets that are quite helpful. So insisting that thinking has to occur in a specific substrate is not magical thinking, but self-improvement is? Bootstrapping doesn't involve physical processes arising out of nothing. The essential idea in most variants is self-modification producing a more and more powerful AI. There are precedents for this sort of thing. Human civilization, for example, has essentially self-modified itself, albeit at a slow rate, over time. I suspect this is a definitional issue. What do you think behaviorism says that is an attempt to explain consciousness and not just argue that it doesn't need an explanation? Ok. I think I'
noen-20

"Is this a variant of what it is like to be a bat?"

Is there something that it is like to be you? There are also decent arguments that qualia do matter. It is hardly a settled matter. If anything, the philosophical consensus is that qualia are important.

"Whether some AI has qualia or not doesn't change any of the external behavior,"

Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.

If I wr... (read more)

2JoshuaZ
I'm not sure this question is any better formed. "What it is like to be an X" doesn't seem to have any coherent meaning when one presses people about what they actually are talking about. Taking qualia seriously as a question is a distinct claim from qualia actually having anything substantial to do with consciousness. I'm not sure of specific acceptance levels of qualia, but the fact is that a majority of philosophers either accept physicalism or lean towards it. So I'm not sure how to reconcile that with your claim. On the contrary, most people don't care whether it is conscious in some deep philosophical sense. In fact, having functional AIs that are completely not conscious has certain advantages, such as being less of an ethical problem in sending them to be destroyed (say as robot soldiers, or as probes to other planets). Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very bad singularity. Whether the AI is truly conscious or not has nothing to do with that worry. Yes, for many purposes Wikipedia is quite useful and reasonably reliable as a source. In many fields (math and chemistry for example) articles have been written by actual experts in the fields. My primary intent for the link was for its use in the introduction, where it uses the fairly standard notion "that psychology should concern itself with the observable behavior of people and animals, not with unobservable events that take place in their minds." It is incidentally useful to understand that behaviorism, in most senses of the term, went away not due to arguments about things like qualia, but rather because advances in neuroscience and related areas allowed us to get much more direct access to what was going on inside. At some level, psychology is still controlled by behaviorism if one interprets that to include brain activity as behavior. And yes, I am familiar with behaviorism in the sense that is di
noen-40

That is correct, you don't know what semantic content is.

"I still don't know what makes you so sure conciousness is impossible on an emulator."

For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real consciousnesses.

Let us imagine that you go to your doctor and he says, "Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest th... (read more)

3loup-vaillant
Care to explain? Oh, and how do you know that? Assigned by us, I suppose? Then what makes us so special? Anyway, that's not the most important thing: Of course not: von Neumann machines have limitations that would make them too slow. But even in principle? I have a few questions for you: * Do you think it is impossible to build a simulation of the human brain on a von Neumann machine, accurate enough to predict the behaviour of an actual brain? * If it is possible, do you think it is impossible to link such a simulation to reality via an actual humanoid body? (The inputs would be the sensory system of the body, and the outputs would be the various actions performed by the body.) * If it is possible, do you think the result is conscious? Why not?
4Emile
I don't know what you mean by "physical" here - for any other "physical phenomenon" - light, heat, magnetism, momentum, etc. - I could imagine a device that measures / detects it. I have no idea how one would go about making a device that detects the presence of consciousness. In fact, I don't see anything "consciousness" has in common with light, heat, magnetism, friction etc. that warrants grouping them in the same category. It would be like having a category for "watersnail-eating fish, and Switzerland".
noen00

Meaning.

The words on this page mean things. They are intended to refer to other things.

noen-40

"Because the telegraph analogy is actually a pretty decent analogy."

No it isn't. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn't analogous to F=ma, it IS F=ma. If what you said is true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or "wire" between them. Neurons can communicate without any synaptic connection between them (See: "Neurons Ta... (read more)

2JoshuaZ
Science uses analogies all the time. For example, prior to the modern quantum mechanical model of the atom one had a variety of other models which were essentially analogies. The fact that analogies break down in some respects shouldn't be surprising: they are analogies, not exact copies. It might be useful to give as an example an analogy that is closely connected to my own thesis work of counting Artin representations. It turns out that this is closely connected to the behavior of the units (that is, elements that have inverses) in certain rings. For example, we can make the ring denoted as Z[2^(1/2)], which is formed by taking 1 and the square root of 2 and then taking all possible finite sums, differences and products of elements. Rings of this sort, where one takes all combinations of 1 with the square root of an integer, have been studied since the late 1700s. Now, it turns out that there are some not so obvious units in Z[2^(1/2)]. I claim that in this ring, 1+2^(1/2) is a unit. It turns out that if instead one takes a ring in the following way: take 1, and take 1/p for some prime p, and then form all products, sums and differences, one gets a ring that behaves in many ways similarly to the quadratic fields, but is much easier to analyze. The analogy breaks down pretty badly in some aspects, but in most ways is pretty good, to the point where large classes of results in one setting translate into almost identical results in the other setting (although the proofs are often different and require much more machinery in the quadratic case). So here we have in math, often seen as one of the most rigorous of disciplines, an analogy that is not just occurring at a pedagogical level but is actively helpful for research. You appear to be ignoring the bit where I noted "organized". But actually, even without that your statement is wrong. Often we do get critical masses where behavior becomes different on a large scale. Indeed, the term "critical mass" occurs precis
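The claimed unit can be checked directly; an element of Z[2^(1/2)] is a unit when its multiplicative inverse also lies in the ring:

```latex
(1+\sqrt{2})(\sqrt{2}-1) = 2 - \sqrt{2} + \sqrt{2} - 1 = 1,
\qquad\text{so}\qquad (1+\sqrt{2})^{-1} = \sqrt{2}-1 \in \mathbb{Z}[\sqrt{2}].
```

In fact 1+2^(1/2) is the fundamental unit of this ring: every unit is, up to sign, a power of it.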
noen-40

"What do you mean by "strong AI is refuted""

The strong AI hypothesis is that consciousness is the software running on the hardware of the brain. Therefore one does not need to know or understand how brains actually work to construct a living conscious mind. Thus any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others tha... (read more)

5Emile
How would one determine whether a given device/system has this "semantic content"? What kind of evidence should one look at? Inner structure? Only inputs and outputs? Something else?
2loup-vaillant
I second fubarobfusco. While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don't count as semantic content, I don't know what does. So, I still don't know what makes you so sure consciousness is impossible on an emulator. (Leaving aside the fact that using "strong AI" to talk about consciousness, instead of capabilities, is a bit strange.)
5fubarobfusco
What on earth is "semantic content"?
6khafra
A wild Aristotelian Teleologist appears! Phrasing claims in the passive voice to lend an air of authority is grating to the educated ear. Aside from stylistic concerns, though, I believe you're claiming that electronic circuits don't really mean anything. However, I'm not sure whether you're making the testable claim that no arrangement of electronic circuits will ever perform complicated cross-domain optimization better than a human, or the untestable claim that no electronic circuit will ever really be able to think.

Strong AI is refuted because syntax is insufficient for semantics.

Where the heck does that come from? What do you mean by "strong AI is refuted", "syntax is insufficient for semantics", and how does the former follow from the latter?

0JoshuaZ
Because the telegraph analogy is actually a pretty decent analogy. What makes you think a sufficiently large number of organized telegraph lines won't act like a brain? Note that whether the number may be too large to actually fit on Earth is beside the point.
4NancyLebovitz
What he's describing isn't normal aging.
-3Eugine_Nier
Once they have power, yes. In order to seize power, they need to appeal to "novelty seeking" liberals in order to destroy the existing order, especially all those annoying checks and balances designed to keep any one person from acquiring power.
7JoshuaZ
That may be one notion of what those words mean, but they aren't what people mean when they discuss their political ramifications. For example, how do attitudes towards abortion and free markets fall into this setting? Political alignments are to a large effect due to historical consequences, not due to any simple coherent philosophical positions.
noen-40

Vacuums exist. Nearly frictionless planes and more or less perfectly rigid bodies actually exist. There is nothing wrong with abstraction based on objective reality. Claiming that one is about to declare how economies ought to work is not an abstraction based on a preexisting reality. It is attempting to impose one's own subjective needs, wants and desires on reality.

That is not science, that is pseudoscience.

Spherical cow is not "how science is done". It is a joke. Jokes rely on reversing expectations, going counter to reality, for the surprise el... (read more)

8gwern
Partial vacuums exist. Somewhat frictionless planes, somewhat rigid bodies exist. I don't see any difference between the idealizing in either case. Really? I always thought it was a veiled criticism of abstraction gone wrong - sterile abstractions, abstractions which can't then be linked back to the real world. If you say so.
noen00

Is there something that it is like to be Siri? Still, Siri is a tool and potentially a powerful one. But I feel no need to be afraid of Siri as Siri any more than I am afraid of nuclear weapons in themselves. What frightens me is how people might misuse them. Not the tools themselves. Focusing on the tools then does not address the root issue. Which is human nature and what social structures we have in place to make sure some clown doesn't build a nuke in his basement.

Did ELIZA present the "dangers and promises" of AI? Weizenbaum's secretary thou... (read more)

0JoshuaZ
I'm not sure what you mean by this question. Is this a variant of what it is like to be a bat? There's a decent argument that such questions don't make sense. But this doesn't matter much: whether some AI has qualia or not doesn't change any of the external behavior, so for most purposes, like existential risk, it doesn't matter. This and most of the rest of your post are assertions, not arguments. First, what do you mean by behaviorism in this context? Behaviorism as that word is classically defined isn't an attempt to explain consciousness. It doesn't care about consciousness at all.
5[anonymous]
You should read the material linked to from this LW wiki article on Compartmentalization.
5JoshuaZ
My other reply got very long and this matter was essentially tangential, so I've broken this off into a separate comment. This seems to be more about word games than anything else. If someone believes that the Earth is round but they don't know why that's commonly accepted, they have a fact about the universe, and one that, if they think hard enough about it, probably pays rent. That they got to that result by "conformity" is both not obviously testable, and isn't relevant in this context. Understanding that astrology doesn't work is a perfect example of scientific knowledge. Moreover, I'm not completely sure what you mean by conformity. For example, I've never personally tested whether astrology works or not. Is it conformity to accept the broad set of scientific papers showing that it doesn't work?
5JoshuaZ
By the way, you can quote on Less Wrong by putting a ">" at the beginning of a paragraph. So if I write "> this" I get: Moving on: No. That's not the argument being made here. The argument being made is twofold: 1) Exceptions exist (which doesn't contradict the statistical claim) and 2) The statistics are actually weak effects. But if you prefer, consider the following situation: In many parliamentary systems one has a wide variety of different political parties. Israel for example has 14 parties with representation in the Knesset. Almost any two parties agree on at least one issue, and disagree on a variety of issues. That means that if a party is correct about all issues, then there have to be a large number (or even a majority) of people who are correct about that issue but wrong on many other issues. Even in a system like the US, people have a variety of different views and don't fall into two strict camps in many ways (here again is somewhere where the GSS data is worth looking at), so the claim that people are across the board irrational or rational just doesn't make sense. Sure, this is likely the cause of some of what is going on here, especially in regards to global warming. Moreover, more educated people are more likely to know what their own tribe is generally expected to believe and adjust their views accordingly. I'm citing GSS data which happens to be discussed in more detail at a certain set of blogs. Note that the GSS data is freely available so you can easily verify the claims yourself. Note also that phrasing this question as "authoritarian" v. "liberal" is even more misleading than your earlier statement about authoritarianism. The data in question is explicitly about self-identification as liberal or conservative, not about any metric of authoritarianism. Indeed, many viewpoints that are classically seen as "conservative" or "right-wing" are anti-authoritarian. For example, free market economics is a right-wing viewpoint. Yes, and there are
8[anonymous]
You might be interested in these two articles. Note that Moldbug's writing is cited as an example of insight porn in the comments. The "Dark Enlightenment" crowd differs from the Conservative crowd in that the former are more likely to be maladapted people wanting radical change in society and novelty in their ideas, while the latter are well-adjusted people who dislike change since it is cognitively expensive to deal with. Here is a left-wing take on the difference. I think this is a key problem of his readers since it biases them towards such ideas (this includes me, naturally). Arguably Marxism had a boosted appeal in the middle of the 20th century among Western intellectual elites for similar reasons. Moldbug himself probably only enjoys demolishing Universalism as much as he does because his grandparents were Communists, his parents were Liberals, and he moved in the university crowd in California, so the ideas he comes up with and the material he seeks out differ radically from what he was immersed in as a child, a teenager, and probably even now.
Nornagest100

In Chris Mooney's book "The Republican Brain" he makes a good case based on recent studies for why we should think of the totalitarianism of the former USSR as a right wing phenomenon. [...] Conservative personalities then acclimate themselves to the resulting bureaucracy and seek to freeze it in place. Then, being authoritarians, they accumulate power and use it as authoritarians always do.

It'd be hard for me to overstate my skepticism for the genre of popular political science books charging that their authors' enemies are innately evil. I... (read more)

noen-20

I cannot imagine why this:

"Here at UR, "economics" is not the study of how real economies work. It is the study of how economies should work "

should not bring to mind this:

"Here at Fantasy University, "physics" is not the study of how real physical principles work. It is the study of how physics should work."

or should not raise giant red flags that you are about to be fed a steaming pile of horse shit. I don't know about everyone else but for me the moment anyone purports to dictate how the world ought to be over and... (read more)

1Jank
What the first part of the quoted line means - I think - is that economies today (ie, "real economies") are monetarily mismanaged.
5James_Ernest
Actually, the is/ought distinction is omnipresent in the complete Moldbug thesis, as espoused in his, uh, sequences. Hence the reformulation of politics as an amoral engineering challenge. There's a lot of deliberately inflammatory language present, as well as a relatively high inferential distance; the inflammatory language mostly serves to filter the audience for that, or at least for a positive affect. Translated into English, all that statement says is "Here is a presentation of classical or Austrian economics. This is not practised at large anywhere on earth today (for reasons which will be divulged elsewhere)."
Shmi100

Spherical cow is how science is done. What you are complaining about is in the realm of engineering.

gwern100

"Here at Fantasy University, "physics" is not the study of how real physical principles work. It is the study of how physics should work." or should not raise giant red flags that you are about to be fed a steaming pile of horse shit.

Alright, let's start with the basics like Galileo and Newton's laws of motions. Assume a frictionless plane in a vacuum on which we place a perfectly rigid body - hey wait where are you going?

[anonymous]150

Well, fascist is roughly equivalent to authoritarian, which is the fancy-schmancy new term for right-wing reactionary.

You seem to be using nearly all the words in this sentence as mere boo lights.

7JoshuaZ
So, I agree with this at a very weak level. The question is how good an indicator is this? For example, I know a very successful mathematician who has extreme right-wing politics, and another who has extreme left-wing politics. I know a linguist who is a monarchist. The fact is that humans can be highly rational in one area while extremely irrational in another. Look for example at how much of the left has extreme anti-nuclear power, anti-GMO and pro-alt-med views that have little connection to evidence. The degree to which the left is "relatively free" has the word "relative" doing a lot of work in that sentence. Moreover, Moldbug's views don't fit into a standard notion of far-right. Another issue to point out is that the studies which show a difference between left-wing and right-wing cognition are to a large extent limited: The differences in populations are quite small. Moreover, by other metrics, conservatives have more science knowledge than liberals on average. In fact, the GSS data strongly suggests that in general the most stupid, ignorant people are actually the political moderates. They have lower average vocab, and on average perform more poorly at answering basic science questions. So I'm deeply confused by this statement. You seem to be asserting that "Person X who says A will be extremely unlikely to have anything useful to say." And asserting that "If Person Y thinks that Person X has interesting things to say about B despite X's declaration of A, that makes the person Y even less likely to have useful things to say?" I'm curious: if we had a Person Z who pointed out that Y had interesting things to say about issue C, would Z become even less useful to listen to?
Nornagest110

Well, fascist is roughly equivalent to authoritarian, which is the fancy-schmancy new term for right-wing reactionary

That's a reasonable description of the word's use as a slur -- though I might go further and say that in that context "fascist" means simply "bad" -- but in political science parlance it has a much narrower, though not entirely consistent, meaning. (I've actually heard some dispute over whether the Nazis properly count as fascist or are better given their own weird little category, although this is a somewhat fring... (read more)

Let me get this straight, you're accusing LessWrong of group think because we're willing to listen to fringe viewpoints and take them seriously if they seem to merit it?

JoshuaZ190

If a right wing fascist is admired here then I am probably in the wrong place.

Speaking as someone who really dislikes Moldbug's viewpoints, it requires a very non-standard notion of "fascist" to describe him that way. He has his own ideas for a model government, which don't look much like historical fascism or even any other sort of dictatorship. "Right-wing" is probably somewhat more accurate.

Moreover, if you haven't already, it may help to read Politics is the Mind-Killer. Disagreement about politics with people (in this case a vocal minor... (read more)

noen-20

I predict that the search for AI will continue to live up to its proud tradition of failing to produce a viable AI for the indefinite future. Since the Chinese Room argument does refute the strong AI hypothesis, no AI will be possible on current hardware. An artificial brain that duplicates the causal functioning of an organic brain is necessary before an AI can be constructed.

I further predict that AI researchers will continue to predict imminent AI in direct proportion to the research grant dollars they are able to attract. Corollary: a stable nuclear fusion reactor will be built before a truly conscious artificial mind is. Neither will happen in the lifetime of anyone reading this.

5JoshuaZ
There are a lot of objections to the Chinese room, but in this context the primary issue is that the Chinese room doesn't matter: even if the AI isn't conscious in some deep philosophical sense, if it has all the same results, then for humans the dangers and promises of strong AI are identical. "Continue" implies this is currently the case. Do you have evidence for this? My impression is that most AI research is going into practical machine learning, which is currently being used for many real-world applications. Many people in the machine learning world state that any form of general AI is extremely unlikely to happen soon, so what evidence for this claimed proportion is there? I don't see how this is a corollary. If you mean to state it as an example of a comparison to what sort of technology would be needed, that might make some sense. However, we actually already have stable fusion reactors. Examples include tabletop designs that can be made by hobbyists. Do you mean something like a fusion reactor that produces more useful energy than is put in?
noen20

Among candidate stars for going nova I would think you could treat it as a random event. But Sol is not a candidate and so doesn't even make it into the sample set. So it's a very badly constructed setup. It's like looking for a needle in 200 million haystacks but restricting yourself only to those haystacks you already know it cannot be in. Or do I have that wrong?

0Cyan
I'm going to try the Socratic method... Is a coin flip a random event?
noen-20

How about "the probability of our sun going nova is zero and 36 times zero is still zero"?

Although... continuing with the XKCD theme if you divide by zero perhaps that would increase the odds. ;)

2Cyan
Since the sun going nova is not a random event, strict frequentists deny that there is a probability to associate with it.
noen20

I think the null hypothesis is "the neutrino detector is lying" because the question we are most interested in is whether it is correctly telling us the sun has gone nova. If H0 is the null hypothesis, µ1 is the chance of a neutrino event, and µ2 is the odds of double sixes, then H0 is µ1 - µ2 = 0. Since the odds of two dice coming up sixes are vastly larger than the odds of the sun going nova in our lifetime, the test is not fair.

2gwern
I don't think one would simply ignore the dice, and what data is the frequentist drawing upon in the comic which specifies the null?
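The disagreement in this thread is easier to see with the Bayesian calculation spelled out. Here is a minimal sketch, where the prior probability of the sun going nova is an assumed, purely illustrative number:

```python
# Bayesian look at the comic's detector: the machine lies only when both
# dice come up six, i.e. with probability 1/36.
p_lie = 1.0 / 36.0
prior_nova = 1e-20  # assumed prior; any suitably tiny value gives the same conclusion

# Likelihoods of the detector answering "yes, the sun went nova":
p_yes_given_nova = 1.0 - p_lie   # detector tells the truth
p_yes_given_quiet = p_lie        # detector lies on double sixes

# Bayes' theorem: P(nova | detector says "yes")
p_yes = p_yes_given_nova * prior_nova + p_yes_given_quiet * (1.0 - prior_nova)
posterior_nova = p_yes_given_nova * prior_nova / p_yes

print(posterior_nova)  # still astronomically small: the tiny prior dominates
```

The point being contested above is exactly this: a 1-in-36 false-positive rate only looks decisive if one ignores the prior.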
noen00

Plants do not count and have no awareness of time or of anything at all. The exact method by which Venus flytraps activate is unknown, but it seems hard to me to attribute to them the ability to count. That kind of teleological explanation is something we are cognitively biased to give, but it fails to be explanatory.

Sunflowers do not turn their heads to face the sun because they want to catch more sunlight. They turn towards light because those cells that are in shadow receive more auxin which in turn stimulates the elongation of the cell walls causing th... (read more)

0[anonymous]
Please allow me to clarify. "This plant moves as if it can count and is aware of time." We are in agreement that the plant is not aware, and I was careful to say so. "Learning that caused me to re-think what it means to count, as did your essay. Except the plant-fact is interesting while your essay is useful." Hearing a random song on the radio can spark a thought, but it doesn't mean I'm thinking about that song. That's what I mean by an interesting re-think. The essay I praise is something I'm thinking about. That's what I mean by a useful re-think.
5MixedNuts
Where are you seeing any teleology? Counting is just switching to a different state whenever a thing happens and performing a certain behavior when a certain state is reached. Time-sensitive behavior is... basically ubiquitous. (Yeah, yeah, "awareness" was a poor word choice.) You can buy counters and clocks at the electronics store! They don't require any mysterious ghost of anthropomorphism!
noen00

I generally agree with point (1) but the point is irrelevant. Counting isn't what makes 2 + 2 = 4 true. Although that is how we all learn to do math, by counting and memorizing addition and multiplication tables. I owe it all to my 3rd grade teacher. ;)

On point (2): "on our macro scale of reality, on the scale of things we perceive with our senses, discrete, separate objects are a feature of the map, not the territory; they exist in your mind, not the reality. In the reality, there's just a lot of atoms everywhere"

There are no atoms at the macro ... (read more)

noen10

Ok, I do wonder how one would distinguish between perceived effects vs real effects. The real effects of say civil rights legislation was greater freedom and opportunity for minorities. We are a better more productive society when we, at least in theory, give everyone an equal chance to succeed. That's the real material result of the 60's civil rights movement.

The psychological effect of those who benefited was maybe "I am a valued member of society." I'm not sure how one teases that apart from the positive effect of simply being able to get a jo... (read more)

noen00

The confidence fairy has been shown not to exist. (The confidence fairy is the theory that the reason banks are not lending right now is a lack of confidence in the market.) So why should we believe that feelings of hopelessness or empowerment will affect the economy? (Productivity is an economic feature.) What seems to me more likely to affect productivity is whether or not one got a good night's sleep the night before and ate a decent breakfast.

If folk psychology (hope, despair) is epiphenomenal then there is no reason to believe it has causal effects in the world.

0wesley
To clarify (I didn't do a good job above), I meant to ask "do certain perceived psychological effects (which probably do correlate with neurophysiological mechanisms) correlate with voting events AND significant positive and negative effects on the populace in terms of perceived well-being and productivity?" I did not know that about banking, although I did not expressly believe the alternative either. I will definitely look at that a little more. Intuitively I also agree with the sentiment that many other seemingly mundane things probably have a greater overall impact on societal production than relatively uncommon events.
noen10

This is the wrong way to think about it. One's vote matters not because in rare circumstances it might be decisive in selecting a winner. One's vote matters because by voting you reaffirm the collective intentionality that voting is how we settle our differences. All states exist only through the consent of their people. By voting you are asserting your consent to the process and its results. Democracy is strengthened through the participation of the members of society. If people fail to participate, society itself suffers.

-1TraditionalRationali
Good that someone pointed this out! I think this is correct and an important point. Voting is to a large extent about expressing loyalty to king and land (or system and government, for those of you who do not live in constitutional monarchies). It is one of the processes that build trust in the society and thus enable efficient coordination. Looking just at who will win the election is too narrow a perspective to properly understand the effect of voting.