All of TheAncientGeek's Comments + Replies

Saying that some things are right and others wrong is pretty standard round here. I don't think I'm breaking any rules. And I don't think you avoid making plonking statements yourself.

We do have some laws that are explicit about scale. For instance, speed limits and blood alcohol levels. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.

3Dagon
Many laws incorporate scaling in terms of damage threshold or magnitude of single incident. We have very few laws that are explicit about scale in terms of overall frequency or number of participants in multiple incidents. City zoning may be one example of success in this area - only allowing so many residents in an area, without specifying who. There are very few criminal laws such that something is legal only when a few people are doing it, and becomes illegal if it's too popular. Much more common to just outlaw it and allow prosecutors/judges leeway in enforcing it. I'd argue that this choice gets exercised in ways that are harmful, but it does get the job (permitting low-level incidence while preventing large-scale infractions) done.

Possibly related:

The other day it was raining heavily. I chose to take an umbrella rather than shaking my fist at the sky. Shaking your fist at the sky seems pretty stupid, but people do analogous things all the time.

Complaining to your ingroup about your outgroup isn't going to change your outgroup. Complaining that you are misunderstood isn't going to make you understood. Changing the way you communicate might. You are not in control of how people interpret you, but you are in control of what you say.

It might be unfortunate that people have a hair-trigge... (read more)

That observation runs headlong into the problem, rather than solving it.

0entirelyuseless
Exactly. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist." Let's reword that. "The reality is undecatillion swarms of quarks not having any beliefs, and just BEING 'undecatillion swarms of quarks' not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks's mind." There seems to be a logic problem there.

Well, we don't know if they work magically, because we don't know that they work at all. They are just unavoidable.

It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning, it is that they have reasoned that they can't do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by &... (read more)

Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5

Plus 6: There is a preferred basis.

0gjm
In so far as I understand what the "preferred basis problem" is actually supposed to be, the existence of a preferred basis seems to me to be not an assumption necessary for Everettian QM to work but an empirical fact about the world; if it were false then the world would not, as it does, appear broadly classical when one doesn't look too closely. Without a preferred basis, you could still say "the wavefunction just evolves smoothly and there is no collapse"; it would no longer be a useful approximation to describe what happens in terms of "worlds", but for the same reason you could not e.g. adopt a "collapse" interpretation in which everything looks kinda-classical on a human scale apart from random jumps when "observations" or "measurements" happen. The world would look different in the absence of a preferred basis. But I am not very expert on this stuff. Do you think the above is wrong, and if so how?
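A minimal illustration of what is at stake (my addition, in standard QM notation; not part of the original exchange): the same entangled state decomposes into different candidate "worlds" depending on the basis chosen,

$$\tfrac{1}{\sqrt{2}}\bigl(|\uparrow\rangle_S|\uparrow\rangle_E + |\downarrow\rangle_S|\downarrow\rangle_E\bigr) \;=\; \tfrac{1}{\sqrt{2}}\bigl(|+\rangle_S|+\rangle_E + |-\rangle_S|-\rangle_E\bigr), \qquad |\pm\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\uparrow\rangle \pm |\downarrow\rangle\bigr).$$

Unless something (decoherence dynamics, or an extra postulate) singles out one expansion, "two worlds, up and down" and "two worlds, plus and minus" are equally legitimate readings of the same wavefunction; that is the sense in which talk of "worlds" presupposes a preferred basis.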

First, it's important to keep in mind that if MWI is "untestable" relative to non-MWI, then non-MWI is also "untestable" relative to MWI. To use this as an argument against MWI,

I think it's being used as an argument against beliefs paying rent.

MWI is testable insofar as QM itself is testable.

Since there is more than one interpretation of QM, empirically testing QM does not prove any one interpretation over the others. Whatever extra arguments are used to support a particular interpretation over the others are not going to be, ... (read more)

This is why there's a lot of emphasis on hard-to-test ("philosophical") questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions -- because sometimes [..] the answer matters a lot for our decision-making,

Which is one of the ways in which beliefs that don't pay rent do pay rent.

I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.

-3Erfeyah
Cool. Peterson is much clearer than Jung (about whom I don't have a clear opinion). I am not claiming that everything Peterson says is correct or that I agree with it. I am pointing to his argument for the basis of morality in cultural transmission through imitation, rituals, myth, stories etc., and the grounding of these structures in the evolutionary process, as the best rational explanation of morality I have come across. I have studied it in depth and I believe it to be correct. I am inviting engagement with the argument instead of biased rejection.

a state is good when it engages our moral sensibilities

Individually, or collectively?

We don't encode locks, but we do encode morality.

Individually or collectively?

Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it

The goodness-to-you or the objective goodness?

if you are going to say that morality "is" human value, you are faced with the fact that humans vary in their values..the fact that creates the suspicion of relati... (read more)

No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens and their showers would have to be way nicer than typical RV showers.

And they could relocate overnight. That raises the possibility of self-driving sleeper cars for business travellers who need to be somewhere by morning.

0chaosmage
Yes. I wonder how hard it'll be to sleep in the things. I find sleeper trains generally a bad place to sleep, but that's mostly because of the other passengers.

That amounts to "I can make my theory work if I keep on adding epicycles".

0Erfeyah
Your comment seems to me an indication that you don't understand what I am talking about. It is a complex subject and in order to formulate a coherent rational argument you will need to study it in some depth.

I can think of two possibilities:

[1] that morality is based on rational thought as expressed through language

[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition..

[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and when people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.

0Erfeyah
Sure, this is a valid hypothesis. But my assessment and the individual points I offered above can be applied to this possibility as well, uncovering the same issues with it. Novel situations can be seen through the lens of certain stories because the stories operate at such a level of abstraction that they are applicable to all human situations. The most universal and permanent levels of abstraction are considered archetypal. These would apply equally to a human living in a cave thousands of years ago and a Wall Street lawyer. Of course it is also true that the stories always need to be revisited to avoid their dissolution into dogma as the environment changes. Interestingly, it turns out that there are stories that recognize this need for 'revisiting' and deal with the strategies and pitfalls of the process.

That assumes he had nothing to learn from college, and the only function it could have provided is signalling and social credibility.

2Dr_Manhattan
Not necessarily, just that the costs of going to college were outweighed by other things (not saying that was EY's reasoning)

If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the algorithm for how exactly that would work.

It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don't feel pain?

but I don't necessarily understand what it would mean for a different kind of mind.

I've already told you what it would mean, but you have a self-imposed problem of tying meaning to ... (read more)

0tadasdatys
No, by "mind" I just mean any sort of information processing machine. I would have said "brain", but you used a more general "entity", so I went with "mind". The question of what is and isn't a mind is not very interesting to me. Where exactly? First of all, the meaningfulness of words depends on the observer. "Robot pain" is perfectly meaningful to people with precise definitions of "pain". So, in the worst case, the "thing" remains meaningless to the people discussing it, and it remains meaningful to the scientist (because you can't make a detector if you don't already know what exactly you're trying to detect). We could then simply say that that the people and the scientist are using the same word for different things. It's also possible that the "thing" was meaningful to everyone to begin with. I don't know what "dubious detectability" is. My bar for meaningfulness isn't as high as you may think, though. "Robot pain" has to fail very hard so as not to pass it. The idea that with models of physics, it might sometimes be hard to tell which features are detectable and which are just mathematical machinery, is in general a good one. Problem is that it requires good understanding of the model, which neither of us has. And I don't expect this sort of poking to cause problems that I couldn't patch, even in the worst case.

I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions".

It's not so much some emergent things, for a uniform definition of "emergent", as all things that come under a variant definition of "emergent".

I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments abou

... (read more)
0entirelyuseless
"physicalism is comfortable with spacial relations and causal interactions as being basic elements or reality" I am suggesting this is a psychological comfort, and there is actually no more reason to be comfortable with those things, than with consciousness or any other properties that combinations have that parts do not have.

There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.

Reduction has its problems too. Many writings on LW confuse the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.

Eg:-

(1) The explanatory power of a model is a function of its ingredients. (2) Reductionism

... (read more)
0entirelyuseless
I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions." I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments about reductionism. I would illustrate this with Viliam's example of the distance between two oranges. I do not see how the oranges explain the fact that they have a distance between them, at all. Consciousness may seem even less intelligible, but this is a difference of degree, not kind.

I asked you before to propose a meaningless statement of your own.

And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like "colourless green", or a category error, like "sleeping idea".

So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

Very low finite rather than infinitesimal or zero.

I don't see how this is helping. You have a chain o... (read more)

0tadasdatys
Obviously I agree this is meaningless, but I disagree about the reasoning. A long time ago I asked you to prove that "bitter purple" (or something) was a category error, and your answer was very underwhelming. I say that "sleeping idea" is meaningless, because I don't have a procedure for deciding if an idea is sleeping or not. However, we could easily agree on such procedures. For example we could say that only animals can sleep and for every idea, "is this idea sleeping" is answered with "no". It's just that I honestly don't have such a restriction. I use the exact same explanation for the meaninglessness of both "fgdghffgfc" and "robot pain". The question "is green colorless" has a perfectly good answer ("no, green is green"), unless you don't think that colors can have colors (in that case it's a category error too). But I'm nitpicking. Here you treat detectability as just some random property of a thing. I'm saying that if you don't know how to detect a thing, even in theory, then you know nothing about that thing. And if you know nothing about a thing, then you can't possibly say that it exists. My "unicorn ghost" example is flawed in that we know what the shape of a unicorn should be, and we could expect unicorn ghosts to have the same shape (even though I would argue against such expectations). So if you built a detector for some new particle, and it detected a unicorn-shaped obstacle, you could claim that you detected a ghost-unicorn, and then I'd have to make up an argument why this isn't the unicorn I was talking about. "Robot pain" has no such flaws - it is devoid of any traces of meaningfulness.

We can derive that model by looking at brain states and asking the brains which states are similar to which.

That is a start, but we can't gather data from entities that cannot speak, and we don't know how to arrive at general rules that apply across different classes of conscious entity.

They only need to know about robot pain if "robot pain" is a phrase that describes something.

As I have previously pointed out, you cannot assume meaninglessness as a default.

morality, which has many of the same problems as consciousness, and is even le

... (read more)
0tadasdatys
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge. Although I'm confident much can be said, even if I can't explain the algorithm for how exactly that would work. On the other hand, if the mind is so primitive that it cannot form the thought "X feels like Y", then does X actually feel like Y to it? And of course, the mind has to have feelings in the first place. Note, my previous answer (to ask the mind which feelings are similar) was only meant to work for human minds. I can vaguely understand what similarity of feelings is in a human mind, but I don't necessarily understand what it would mean for a different kind of mind. Are there classes of conscious entity? You cut off the word "objective" from my sentence yourself. Yes, I mean "objective morality". If "morality" means a set of rules, then it is perfectly well defined and clearly many of them exist (although I could nitpick). However if you're not talking about "objective morality", you can no longer be confident that those rules make any sense. You can't say that we need to talk about robot pain, just because maybe robot pain is mentioned in some moral system. The moral system might just be broken.

Those sound like fixable problems.

He's naive enough to reinvent LP. And since when was "coherent, therefore true" a precept of his epistemology?

You should do good things and not do bad things

You know that is not universally followed?

0Lumifer
I could never imagine such a thing! Next thing you'll be telling me that people do stupid things on a regular basis :-P

Not saying your epistemology can do things it can't do.

Motte: We can prove things about reality.

Bailey: We can predict observations.

0Lumifer
That doesn't seem to be meaningful advice given how "X should not claim it can do things it can't do" is right there near "You should do good things and not do bad things". And aren't your motte & bailey switched around?

Why would one care about correspondence to other maps?

It's worse than that, and they're not widely enough known.

0Lumifer
Eh. "All models are wrong but some are useful". Do you happen to have alternatives?

Usually, predictive accuracy is used as a proxy for correspondence to reality, because one cannot check map-territory correspondence by standing outside the map-territory relationship and observing (in)congruence directly.

0Lumifer
Right. There are caveats because e.g. you can never prove that a map is (entirely) correct, you can only prove that one is wrong -- but these are not new and are well-known.

If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that

We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.

If it refers to something else, then I'll need you to paraphrase.

If you want to know what "pain" means, sit on a thumbtack.

You can say "torture is wrong", but that has no implications about the physical world

That is comple... (read more)

0tadasdatys
We can derive that model by looking at brain states and asking the brains which states are similar to which. They only need to know about robot pain if "robot pain" is a phrase that describes something. They could also care a lot about the bitterness of colors, but that doesn't make it a real thing or an interesting philosophical question. It's interesting that you didn't reply directly about morality. I was already mentally prepared to drop the whole consciousness topic and switch to objective morality, which has many of the same problems as consciousness, and is even less defensible.

That's just another word for the same thing? What does one do operationally?

0Lumifer
One tests. Operationally. Think science/engineering/Popper/the usual stuff.

I can also use"ftoy ljhbxd drgfjh"

But you could not have used it to make a point about links between meaning, detectability, and falsehood.

If you have no arguments, then don't respond.

The implicit argument is that meaning/communication is not restricted to literal truth.

Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow?

What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any... (read more)

0tadasdatys
No, but I can use it to make a point about how low your bar for meaningfulness is. Does that not count for some reason? I asked you before to propose a meaningless statement of your own. Do none exist? Are none of them grammatically correct? ??? Yes, the unicorns don't have to be undetectable by definition. They're just undetectable by all methods that I'm aware of. If "invisible unicorns" have too much undetectability in the title, we can call them "ghost unicorns". But, of course, if you do detect some unicorns, I'll say that they aren't the unicorns I'm talking about and that you're just redefining this profound problem to suit you. Obviously this isn't a perfect analogue for your "robot pain", but I think it's alright. So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?

My take is that the LP is the official doctrine, and the MWI is an unwitting exception.

1ChristianKl
I don't think MWI is an exception to Eliezer's other stated views about epistemology. He isn't naive about epistemology and thinks that the fact that MWI is coherent in some sense is reason to believe in it even when there's no experiment that could be run to prove it.

Everyone builds their own maps and yes, they can be usefully ranked by how well they match the territory.

How do you detect that?

0Lumifer
In the usual way: by testing the congruence with reality.

In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"?

Well, you used it.

I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad.

It's bad because there's nothing inside the box. It's just an a priori argument.

0tadasdatys
I can also use"ftoy ljhbxd drgfjh". Is that not meaningless either? Seriously, if you have no arguments, then don't respond. Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow? To make a detector for a thing, that thing has to have known properties. If they did invent a robot pain detector tomorrow, how would you check that it really detects robot pain? You're supposed to be able to check that somehow.

That's harder to do when you have an explicit understanding.

Do you think anyone can understand anything? (And are simplifications lies?)

2Lumifer
Everyone builds their own maps and yes, they can be usefully ranked by how well they match the territory. Truth/lie is not a boolean, but a whole range of possible values between two obvious extremes. For any threshold that you might set, you can find and argue about uncertain edge cases. In practice, I find that "intent to deceive" works well, though, of course, there are situations when it breaks down.

Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?

Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject.

and also don't want to talk about consciousness.

What?

You keep saying it's a broken concept.

A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations,th

... (read more)
0tadasdatys
Yes. I consider that "talking about consciousness". What else is there to say about it? If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that. If it refers to something else, then I'll need you to paraphrase. Yes, if I had actually said that. By the way, matter exists in your universe too. Well, if we must. It should be obvious that my problem with morality is going to be pretty much the same as with consciousness. You can say "torture is wrong", but that has no implications about the physical world. What happens if I torture someone?

I also claim that meta-rationalists claim to be at level 3, while they are not.

Can you support that? I rather suspect you are confusing new in the historic sense with new-to-rationalists. Bay area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta rationalism is old, historically.

I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial.

There's a large literature on that sort of subject. Meta ratio... (read more)

0MrMind
Now I understand that we are talking with two completely different frames of reference. When I write about meta-rationalists, I'm specifically referring to Chapman and Gworley and the like. You have obviously a much wider tradition in mind, on which I don't necessarily have an opinion. Everything I said needs to be restricted to this much smaller context. On other points of your answer:
* yes, there are important antecedents, but also important novelties too;
* identification of what you consider to be the relevant corpus of 'old' meta-rationality would be appreciated, mainly of deity as a simplifying nontrivial hypothesis;
* about inherent mysteriousness, it's claimed in the linked post of this page, first paragraph: "I had come to terms with the idea that my thoughts might never be fully explicable".
2ChristianKl
I'm not sure that's true. CFAR (as one of the institutions of Bay Area rationalism) puts a lot of value on system I and system II being friends. Even when we just look at rationality!Eliezer, Eliezer argued for the Multiple World Hypothesis in a way that runs counter to logical positivism.

Obviously, anything can be of ethical concern, if you really want it to be

Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.

"pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.

You seem to be hinting that the only problem is going against preferences. That theory is contentious.

is "the concept of preference is simpler than the concept of consciousness", w

The simplest theory is t... (read more)

0tadasdatys
Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion? What? What facts am I failing to explain? That "pain hurts"? Give concrete examples. In this case, "definition" of a category is text that can be used to tell which objects belong to that category and which don't. No, I don't see how silly this is. I only complain about the words when your definition is obviously different from mine. It's actually perfectly fine not to have a word well defined. It's only a problem if you then assume that the word identifies some natural category. Not really, in many cases it could be omitted or replaced and I just use it because it sounds appropriate. That's how language works. You first asked about definitions after I used the phrase "other poorly defined concepts". Here "concept" could mean "category". Proper as not circular. I assume that, if you actually offered definitions, you'd define consciousness in terms of having experiences, and then define experiences in terms of being conscious.

What is stopping me from assigning them truth values?

The fact that you can't understand them.

You may prefer "for meaningless statements there are no arguments in favor or against them", but for statements "X exists", Occam's razor is often a good counter-argument.

If you can understand a statement as asserting the existence of something, it isn't meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don't.

I want you to decide whether "

... (read more)
0tadasdatys
I'm trying to understand your definitions and how they're different from mine. I see that for you "meaningless" is a very narrow concept. But does that agree with your stated definition? In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"? Also, can you offer a concrete meaningless statement yourself? Preferably one in the form "X exists". I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad. This is not normal. I think we need to talk about other currently immeasurable things. None of them work like this.

Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies.

I wasn't making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil...in the first sense.

the more people have independent access to the phenomenon, the more confidence I would give to its existence.

You need to distinguis... (read more)

2MrMind
Right. Let's say that there are (at least) three levels of noticing a discrepancy in a model:
1 - noticing, shrugging and moving on
2 - noticing and claiming that it's important
3 - noticing, claiming that it's important and creating something new about it ('something' can be a new institution, a new model, etc.)
We both agree that LW rationalists are mostly at stage 1. We both agree that meta-rationalists are at level 2. I also claim that meta-rationalists claim to be at level 3, while they are not. This is also right. But at the same time, I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial. I think this is our true disagreement. I reject your thesis: there is nothing that is inherently mysterious, not even relatively. I think that any idea is either incoherent, comprehensible or infinitely complex. Math is an illustration of this classification: it exists exactly at the level of being comprehensible. We see levels because we break down a lot of complexity in stages, so that you manipulate the simpler levels, and when you get used to them, you start with more complex matters. But the entire raison d'etre of mathematics is that everything is reducible to trivial, it just takes hundreds of pages more. Maybe meta-rationalists have yet to unpack their intuitions: it happens all the time that someone has a genius idea that only later gets unpacked into simpler components. So kudos to the idea of destroying inscrutability (I firmly believe that destroying inscrutability will destroy meta-rationalism), but claiming that something is inherently mysterious... that runs counter to epistemic hygiene.

Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot iron approaching metaphors), so I question that they usefully distinguish meta- from plain rationalists

Maybe the distinction is in noticing it enough and doing something about it. It is very common to say "yeah, that's a problem, let's put it in a box to be dealt with later" and then forget about it.

Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.

&qu... (read more)

2MrMind
Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies. It unified both models under one map. Do you feel that meta-rationalists have a model of intention-implementation and maps generation that is coherent with the naive model of a Bayesian agent? A meta-rationalist is like a physicist from the 19th century who, having noticed the dual nature of light, called himself a meta-physicist, because he uses two maps for the phenomenon of light. Instead the true revolution, quantum mechanics, happened when two conflicting models were united under one explanation. It's a matter of degree: the more people have independent access to the phenomenon, the more confidence I would give to its existence. If it's only one person and said person cannot communicate it nor behaves any differently... well, I would equate its existence to that of the invisible and intangible dragon.

No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here.

Is that a fact or an opinion?

What is it exactly?

"highly unpleasant physical sensation caused by illness or injury."

Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.

have you got an exact definition of "concept"?

Requiring extreme precision in all things tends to bite you.

0tadasdatys
Well, you quoted two statements, so the question has multiple interpretations. Obviously, anything can be of ethical concern, if you really want it to be. Also the opinion/fact separation is somewhat silly. Having said that: "pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong. "You don't have to involve consciousness here" - has two meanings: one is "the concept of preference is simpler than the concept of consciousness", which I would like to call a fact, although there are some problems with preference too. another is "consciousness is generally not necessary to explain morality", which is more of an opinion. Of course, now I'll say that I need "sensation" defined. I'd say it's one of the things brains do, along with feelings, memories, ideas, etc. I may be able to come up with a few suggestions how to tell them apart, but I don't want to bother. That's because I have never considered "Is X a concept" to be an interesting question. And, frankly, I use the word "concept" arbitrarily. It's you who thinks that "Can X feel pain" is an interesting question. At that point proper definitions become necessary. I don't think I'm being extreme at all.

Can you define "meaningless" for me, as you understand it? In

  1. Useless for communication.

  2. Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless).

So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods

Where is this going? You can't stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.

0tadasdatys
A bit too vague. Can I clarify that as "Useless for communication, because it transfers no information"? Even though that's a bit too strict. What is stopping me from assigning them truth values? I'm sure you meant, "meaningless statements cannot be proven or disproven". But "proof" is a problematic concept. You may prefer "for meaningless statements there are no arguments in favor or against them", but for statements "X exists", Occam's razor is often a good counter-argument. Anyway, isn't (1.) enough? It's still entirely about meaning, measurability and existence. I want you to decide whether "there is an invisible/undetectable unicorn in your room" is meaningless or false. This started when you said that "robots don't feel pain" does not follow from "we have no arguments suggesting that maybe 'robot pain' could be something measurable". I'm trying to understand why not and what it could follow from. Does "invisible unicorns do not exist" not follow from "invisible unicorns cannot be detected in any way"? Or maybe "invisible unicorns cannot be detected" does not follow from "we have no arguments suggesting that maybe 'invisible unicorns' could be something detectable"?

But this is not how it works. For certain definitions, meta-X is still a subset of X;

And for others, it isn't.

If a statement is true for all algorithms, it is also true for the "algorithm that tries several algorithms";

Theoretically, but there is no such algorithm.

Similarly, saying: "I don't have an epistemology; instead I have several epistemologies, and I use different ones in different situations" is a kind of epistemology.

But it's not a single algorithmic epistemology.

Also, some important details are swept under th

... (read more)

I am, at times, talking about alternative definition

Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.

Meaninglessness is not the default.

Well, it should be

That can't possibly work, as entirelyuseless has explained.

Sure, in a similar way that people discussing god or homeopathy bothers me.

God and homeopathy are meaningful, which is why people are able to mount arg... (read more)

0tadasdatys
No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here. You involve it, because you want to. Homeopathy is meaningful. God is meaningful only some of the time. But I didn't mean to imply that they are analogues. They're just other bad ideas that get way too much attention. What is it exactly? Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.

but you have brought in a bunch of different issues without explaining how they interrelate

Which issues exactly?

Meaningfulness, existence, etc.

Is this still about how you're uncomfortable saying that invisible unicorns don't exist?

Huh? It's perfectly good as a standalone statement, it's just that it doesn't have much to do with meaning or measurability.

Does "'robot pain' is meaningless" follow from the [we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lo

... (read more)
0tadasdatys
It is evident that this is a major source of our disagreement. Can you define "meaningless" for me, as you understand it? In particular, how it applies to grammatically correct statements. So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods.

So I assumed you understood that immeasurability is relevant here

I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate.

Expressed in plain terms "robots do not feel pain" does not follow from "we do not know how to measure robot pain".

No, but it follows from "we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific".

N... (read more)

0tadasdatys
Which issues exactly? Why not? Is this still about how you're uncomfortable saying that invisible unicorns don't exist? Does "'robot pain' is meaningless" follow from the same better?

You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta-rationality, just as someone can transition from pre-rationality to rationality. Pre-rationality has limitations because yelling tribal slogans at people who aren't in your tribe doesn't work.

What does meta-rationality even imply, for the real world?

What does rationality imply? You can't actually run Solomonoff Induction, so FAPP you are stuck with a messy plurality of approximations. And if you notice that problem...
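To make that concrete, here is a toy sketch (entirely my own illustration; the bit-string "machine" toy_run is a made-up stand-in, not a universal computer). Solomonoff induction proper sums over all programs of a universal machine and is uncomputable, so anything you can actually run has to truncate both the program space and the machine, and different truncations give different, disagreeing approximations -- the "messy plurality" in question.

import itertools

def toy_run(program: str, out_len: int) -> str:
    """Toy 'machine': interpret a bit-string program as a pattern that
    repeats forever, truncated to out_len bits. Deliberately not a
    universal machine -- just enough structure for the illustration."""
    reps = -(-out_len // len(program))  # ceiling division
    return (program * reps)[:out_len]

def approx_solomonoff_weight(observed: str, max_len: int = 12) -> float:
    """Resource-bounded caricature of Solomonoff induction: enumerate
    every program up to max_len bits, keep those that reproduce the
    observed bits on the toy machine, and weight each by 2**-length."""
    total = 0.0
    for length in range(1, max_len + 1):
        for bits in itertools.product("01", repeat=length):
            program = "".join(bits)
            if toy_run(program, len(observed)) == observed:
                total += 2.0 ** -length
    return total

# A regular sequence accumulates far more prior weight than an irregular one:
print(approx_solomonoff_weight("0101010101"))  # large-ish: "01" already explains it
print(approx_solomonoff_weight("0110001101"))  # tiny: only near-full-length programs fit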

It's usually the case that the rank and file are a lot worse than the leaders.
