We do have some laws that are explicit about scale, for instance speed limits and blood alcohol levels. However, not everything is easily quantified. Money changing hands can be a proxy for something reaching too large a scale.
Possibly related:
The other day it was raining heavily. I chose to take an umbrella rather than shaking my fist at the sky. Shaking your fist at the sky seems pretty stupid, but people do analogous things all the time.
Complaining to your ingroup about your outgroup isn't going to change your outgroup. Complaining that you are misunderstood isn't going to make you understood. Changing the way you communicate might. You are not in control of how people interpret you, but you are in control of what you say.
It might be unfortunate that people have a hair-trigge...
That observation runs headlong into the problem, rather than solving it.
Well, we don't know if they work magically, because we don't know that they work at all. They are just unavoidable.
It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning, it is that they have reasoned that they can't do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by "...
Many-worlds-flavored QM, on the other hand, is the conjunction of 1 and 2, plus the negation of 5.
Plus 6: There is a preferred basis.
First, it's important to keep in mind that if MWI is "untestable" relative to non-MWI, then non-MWI is also "untestable" relative to MWI. To use this as an argument against MWI,
I think it's being used as an argument against beliefs paying rent.
MWI is testable insofar as QM itself is testable.
Since there is more than one interpretation of QM, empirically testing QM does not prove any one interpretation over the others. Whatever extra arguments are used to support a particular interpretation over the others are not going to be, ...
This is why there's a lot of emphasis on hard-to-test ("philosophical") questions in the Sequences, even though people are notorious for getting those wrong more often than scientific questions -- because sometimes [..] the answer matters a lot for our decision-making,
Which is one of the ways in which beliefs that don't pay rent do pay rent.
I am not familiar with Peterson specifically, but I recognise the underpinning in terms of Jung, monomyth theory, and so on.
, a state is good when it engages our moral sensibilities s
Individually, or collectively?
We don't encode locks, but we do encode morality.
Individually or collectively?
Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it
The goodness-to-you or the objective goodness?
if you are going say that morality "is" human value, you are faced with the fact that humans vary in their values... the fact that creates the suspicion of relati...
No, we're in a world where tourists generally don't mind going slowly and enjoying the view. These things would be pretty big on the outside, at least RV size, but they wouldn't be RVs. They wouldn't usually have kitchens and their showers would have to be way nicer than typical RV showers.
And they could relocate overnight. That raises the possibility of self-driving sleeper cars for business travellers who need to be somewhere by morning.
That amounts to "I can make my theory work if I keep on adding epicycles".
I can think of two possibilities:
[1] that morality is based on rational thought as expressed through language
[2] that morality has a computational basis implemented somewhere in the brain and accessed through the conscious mind as an intuition...
[3] Some mixture. Morality doesn't have to be one thing, or achieved in one way. In particular, novel technologies and social situations provoke novel moral quandaries that intuition is not well equipped to handle, and where people debate such things, they tend to use a broadly rationalist style, trying to find common principles and noting undesirable consequences.
That assumes he had nothing to learn from college, and the only function it could have provided is signalling and social credibility.
If you have a mind that cannot communicate, figuring out what it feels is not your biggest problem. Saying anything about such a mind is a challenge, although I'm confident much can be said, even if I can't explain exactly how that would work.
It seems you are no longer ruling out a science of other minds. Are you still insisting that robots don't feel pain?
but I don't necessarily understand what it would mean for a different kind of mind.
I've already told you what it would mean, but you have a self-imposed problem of tying meaning to ...
I am questioning the implicit premise that some kinds of emergent things are "reductively understandable in terms of the parts and their interactions".
It's not so much some emergent things, for a uniform definition of "emergent", as all things that come under a variant definition of "emergent".
...I think humans have a basic problem with getting any grasp at all on the idea of things being made of other things, and therefore you have arguments like those of Parmenides, Zeno, etc., which are basically a mirror of modern arguments abou
There are positions between those. Medium-strength emergentism would have it that some systems are conscious, that consciousness is not a property of their parts, and that it is not reductively understandable in terms of the parts and their interactions, but that it is by no means inevitable.
Reduction has its problems too. Much writing on LW confuses the claim that things are understandable in terms of their parts with the claim that they are merely made of parts.
E.g.:
...(1) The explanatory power of a model is a function of its ingredients. (2) Reductionism
I asked you before to propose a meaningless statement of your own.
And what I said before is that a well-formed sentence can robustly be said to be meaningless if it embeds a contradiction, like "colourless green", or a category error, like "sleeping idea".
So, what you're saying, is that you don't know if "ghost unicorns" exist? Why would Occam's razor not apply here? How would you evaluate the likelihood that they exist?
Very low and finite, rather than infinitesimal or zero.
I don't see how this is helping. You have a chain o...
We can derive that model by looking at brain states and asking the brains which states are similar to which.
That is a start, but we can't gather data from entities that cannot speak, and we don't know how to arrive at general rules that apply across different classes of conscious entity.
They only need to know about robot pain if "robot pain" is a phrase that describes something.
As I have previously pointed out, you cannot assume meaninglessness as a default.
...morality, which has many of the same problems as consciousness, and is even le
Those sound like fixable problems.
He's naive enough to reinvent LP. And since when was "coherent, therefore true" a precept of his epistemology?
You should do good things and not do bad things
You know that is not universally followed?
Not saying your epistemology can do things it can't do.
Motte: We can prove things about reality.
Bailey: We can predict observations.
Why would one care about correspondence to other maps?
It's worse than that, and they're not widely enough known.
Usually, predictive accuracy is used as a proxy for correspondence to reality, because one cannot check map-territory correspondence by standing outside the map-territory relationship and observing (in)congruence directly.
If "like" refers to similarity of some experiences, a physicalist model is fine for explaining that
We can't compare experiences qua experiences using a physicalist model, because we don't have a model that tells us which subset or aspect of neurological functioning corresponds to which experience.
If it refers to something else, then I'll need you to paraphrase.
If you want to know what "pain" means, sit on a thumbtack.
You can say "torture is wrong", but that has no implications about the physical world
That is comple...
That's just another word for the same thing? What does one do operationally?
I can also use "ftoy ljhbxd drgfjh"
But you could not have used it to make a point about links between meaning, detectability, and falsehood.
If you have no arguments, then don't respond.
The implicit argument is that meaning/communication is not restricted to literal truth.
Let me answer that differently. You said invisible unicorns don't exist. What happens if an invisible unicorn detector is invented tomorrow?
What would happen is that you are changing the hypothesis. Originally, you stipulated an invisible unicorn as undetectable in any...
My take is that the LP is the official doctrine, and the MWI is an unwitting exception.
Everyone builds their own maps and yes, they can be usefully ranked by how well they match the territory.
How do you detect that?
In what way is "there is an invisible/undetectable unicorn in your room" not "useless for communication"?
Well, you used it.
I can give you a robot pain detector today. It only works on robots though. The detector always says "no". The point is that you have no arguments why this detector is bad.
It's bad because there's nothing inside the box. It's just an a priori argument.
That's harder to do when you have an explicit understanding.
Yes, that's one of the prime examples.
Do you think anyone can understand anything? (And are simplifications lies?)
Yes, I said it's not a fact, and I don't want to talk about morality because it's a huge tangent. Do you feel that morality is relevant to our general discussion?
Yes: it's relevant because "torturing robots is wrong" is a test case of whether your definitions are solving the problem or changing the subject.
and also don't want to talk about consciousness.
What?
You keep saying it's a broken concept.
...A theory should be as simple as possible while still explaining the facts. There are prima facie facts about conscious sensations, th
I also claim that meta-rationalists claim to be at level 3, while they are not.
Can you support that? I rather suspect you are confusing new in the historic sense with new-to-rationalists. Bay area rationalism claims to be new, but is in many respects a rehash of old ideas like logical positivism. Likewise, meta rationalism is old, historically.
...I haven't seen any proof that meta-rationalists have offered god as a simplifying hypothesis of some unexplained phenomenon that wasn't trivial.
There's a large literature on that sort of subject. Meta ratio...
Obviously, anything can be of ethical concern, if you really want it to be.
Nitpicking about edge cases and minority concerns does not address the main thrust of the issue.
"pain is of ethical concern because you don't like it" is a trivial fact in the sense that, if you loved pain, hurting you would likely not be morally wrong.
You seem to be hinting that the only problem is going against preferences. That theory is contentious.
is "the concept of preference is simpler than the concept of consciousness", w
The simplest theory is t...
What is stopping me from assigning them truth values?
The fact that you can't understand them.
You may prefer "for meaningless statements there are no arguments in favor or against them", but for statements "X exists", Occam's razor is often a good counter-argument.
If you can't understand a statement as asserting the existence of something, it isn't meaningless by my definition. What I have asserted makes sense with my definitions. If you are interpreting it in terms of your own definitions... don't.
...I want you to decide whether "...
Your example, in my opinion, disproves your point. Einstein did not simply notice them: he constructed a coherent explanation that accounted for both the old model and the discrepancies.
I wasn't making a point about meta-rationality versus rationality, I was making a point about noticing-and-putting-on-a-shelf versus noticing-and-taking-seriously. Every Christian has noticed the problem of evil...in the first sense.
the more people have independent access to the phenomenon, the more confidence I would give to its existence.
You need to distinguis...
Points 1 and 2 are critiques of the rationalist community that have been around since the inception of LW (as witnessed by the straw Vulcan / hot iron approaching metaphors), so I question that they usefully distinguish meta- from plain rationalists.
Maybe the distinction is in noticing it enough and doing something about it. It is very common to say "yeah, that's a problem, let's put it in a box to be dealt with later" and then forget about it.
Lots of people noticed the Newton/Maxwell disparities in the 1900s, but Einstein noticed them enough.
"...
No, pain is of ethical concern because you don't like it. You don't have to involve consciousness here.
Is that a fact or an opinion?
What is it exactly?
"highly unpleasant physical sensation caused by illness or injury."
Obviously, I expect that it either will not be a definition or will rely on other poorly defined concepts.
have you got an exact definition of "concept"?
Requiring extreme precision in all things tends to bite you.
Can you define "meaningless" for me, as you understand it? In
Useless for communication.
Meaningless statements cannot have truth values assigned to them. (But not all statements without truth values are meaningless.)
So you agree that invisible unicorns indeed do not exist? How do you know? Obviously, the unicorns I'm talking about are not just undetectable by light, they're also undetectable by all other methods
Where is this going? You can't stipulate that robot pain is forever immeasurable without begging the question. It is not analogous to your invisible unicorns.
But this is not how it works. For certain definitions, meta-X is still a subset of X;
And for others, it isn't.
If a statement is true for all algorithms, it is also true for the "algorithm that tries several algorithms";
Theoretically, but there is no such algorithm.
Similarly, saying: "I don't have an epistemology; instead I have several epistemologies, and I use different ones in different situations" is a kind of epistemology.
But it's not a single algorithmic epistemology.
...Also, some important details are swept under th
I am, at times, talking about alternative definition
Robot pain is of ethical concern because pain hurts. If you redefine pain into a schmain that is just a behavioural twitch without hurting or any other sensory quality, then it is no longer of ethical interest. That is the fraud.
Meaninglessness is not the default.
Well, it should be
That can't possibly work, as entirelyuseless has explained.
Sure, in a similar way that people discussing god or homeopathy bothers me.
God and homeopathy are meaningful, which is why people are able to mount arg...
but you have brought in a bunch of different issues without explaining how they interrelate. Which issues exactly?
Meaningfulness, existence, etc.
Is this still about how you're uncomfortable saying that invisible unicorns don't exist?
Huh? It's perfectly good as a standalone statement, it's just that it doesn't have much to do with meaning or measurability.
...Does "'robot pain' is meaningless" follow from the [we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lo
So I assumed you understood that immeasurability is relevant here.
I might be able to follow an argument based on immeasurability alone, but you have brought in a bunch of different issues without explaining how they interrelate. You
Expressed in plain terms "robots do not feel pain" does not follow from "we do not know how to measure robot pain".
No, but it follows from "we have no arguments suggesting that maybe 'robot pain' could be something measurable, unless we redefine pain to mean something a lot more specific".
N...
You are taking the inscrutability thing a bit too strongly. Someone can transition from rationality to meta rationality, just as someone can transition from pre rationality to rationality. Pre rationality has limitations because yelling tribal slogans at people who aren't in your tribe doesn't work.
What does meta-rationality even imply, for the real world?
What does rationality imply? You can't actually run Solomonoff Induction, so FAPP you are stuck with a messy plurality of approximations. And if you notice that problem...
It's usually the case that the rank and file are a lot worse than the leaders.
Saying that some things are right and others wrong is pretty standard round here. I don't think I'm breaking any rules. And I don't think you avoid making plonking statements yourself.