Maybe because someone wants to?
It might fit one's preferences better than programming, and the pay isn't different by orders of magnitude.
Yes, but the question is: why do they want to? :)
I worked in elderly care myself a long time ago, when I was around 15 years old, which I imagine is quite comparable to being a nurse, but I found the work very hard both physically and emotionally (a lot of suffering, and occasionally death, to deal with). In fact it inspired me to do better in school just so I wouldn't have to do work that hard for what I back then envisioned as "the rest of my life".
In Germany you finish school after 9, 10 or 12 (back then 13) years, and you could only study at a university (without jumping through hoops) after attaining your 12/13-year school diploma. I was in the 10-year school type, and working in elderly care was pretty much the type of work I might have had to do if I had left school after 10 years. My grades improved, and I switched schools after 10 years and did another 3 just to "escape" hard work like that.
It sort of fits a (not very common) idiomatic pattern where the compliment is empty-to-sarcastic, but it seems pretty obvious that you didn't intend it that way, and I can't actually think of any examples I learned the idiom from.
I get it. Makes sense; actually, now that you point it out, I think I've also seen this phrase employed as a "pseudo-compliment". Rest assured that it wasn't intended that way.
It seems like you're trying to ask this nicely, which is good, and I don't know how Swimmer963 feels about this, so I'm not upset on her behalf, but in general I read this sort of comment as less insulting when it doesn't use a phrase like "someone as smart as you".
...I don't understand how that part is insulting. I don't use "smart" as a weak form of "intelligent", if that's what you mean; quite the opposite, in fact. I'm sorry, maybe I'm missing some finer point of the English language, as I'm not a native speaker, but I would really like you, or someone, to try to explain how that part could possibly be interpreted as insulting, because I honestly don't see it.
Edit: I'm also not implying that it's unworthy work or anything like that. I'm honestly just genuinely curious why she chose that profession, because where I'm from it's a respected job, since people know (or imagine they know) how hard the work is, but simultaneously it's also a job that's very much at the bottom of the food chain in terms of pay and status. I'm simply curious why she chose it.
I feel almost ashamed for asking that question, partly because it's quite impolite and inappropriate to ask a question like that (at least outside of LW), and maybe also because it might betray some kind of deeply rooted egghead-elitism on my part that I still can't quite manage to shake off, but I simply can't resist this attempt to satisfy my raging curiosity: What's the reason why someone as smart as you chooses to become a nurse?
Also: Do you think of your perfectionism as largely useful, largely a hindrance, or kind-of-a-mixed-bag?
[Part 1]
I like this post; I also doubt there is much coherence, let alone usefulness, to be found in most of the currently prevailing concepts of what consciousness is.
I prefer to think of words and the definitions of those words as micro-models of reality that can be evaluated in terms of their usefulness, especially in building more complex models capable of predictions. As in your excellent example of gender, words and definitions basically carve complex features of reality into manageable chunks at the cost of losing information - there is a trade-off, and getting it right enhances the usefulness of words and the concepts behind them. In 99.9% of cases the concept of biological gender is perfectly applicable to everyday life and totally a "good enough" model of reality, as long as you have the insight that hermaphrodites are actually also a real thing. In a case where you have to deal with one, the correct reaction is to adopt a more complex model of reality instead of trying to fit a complex reality into a model that is designed to compress information into categories with some inevitable information loss.

Biological gender is a really good high-level model of reality because it draws its imaginary line in an area where very few exceptions actually exist in reality. It's an especially sharp distinction if you think of biological gender as having testicular/ovarian tissue, but in very rare instances this model will still fail to encompass special cases where the complexity of reality defies your model. "Mental gender" seems to be a rather different yet in most cases more useful everyday concept of gender, because we usually care more about creating models of other people's minds than about whether or not they have testicular or ovarian tissue - outside of curiosity or a medical context.

The lesson here is that your model of reality will always fall short no matter where you draw the line - at least at "higher levels" of reality; "low-level" models of atoms or particles are much more precise and unambiguous than models of "higher-level" things like "persons" (when exactly does a fetus become a person?) or "societies" (are two people a society? how about eight?) - and I'm quite sure any model of whatever "consciousness" is faces the same problem.
In other words, I think it's better to think of models/maps in terms of their usefulness, not in terms of right or wrong. In my opinion the job of a model is to make something understandable and more predictable; the job of a model is not to "reflect reality as closely as possible", especially for complex higher-level things. The "perfect model" in the latter sense would essentially be a perfect carbon copy of the real thing, and would tell you exactly as much - or as little - as the thing you're trying to model already does anyway. The usefulness of a model lies in compacting information enough to become understandable while also predicting outcomes better than competing models.
If you accept that notion, the question really becomes: how should the term consciousness be defined to be useful and to describe/differentiate something we actually care about? We'd like to represent a part of reality we care about in a way that compresses information while retaining a high level of usefulness - meaning we can understand it without cutting away vital parts, and ideally we can make predictions if we integrate the concept of consciousness into models with the potential to predict outcomes.
So which part of reality should the term consciousness try to model in order to be useful? I find it highly problematic, and close to maximally useless, to think of consciousness as some kind of continuum on which we rank information processing in living things/agents. Some people really do think of consciousness that way: rocks having 0, bacteria having perhaps 0.001, bees 0.01, rats 0.1, dogs maybe 0.2, humans perhaps 0.5. Maximally useless, I would argue, because it tells you nothing. Why not just substitute some notion of "maximum calculations per second" for consciousness, then, and reserve the term consciousness for something we actually care about, instead of wasting such a nice word on something we don't really care so much about - and, more importantly, on something we can already express with other words and concepts like "information processing"?
What's funny about consciousness is that no one really agrees what exactly the definition should be, but somehow everyone agrees that it's really, really important. Why do we care so much about something we seemingly know close to nothing about? Seriously though, why do we?
Look at all those hilarious "quantum consciousness" or "become more conscious" concepts peddled by the self-help industrial complex. Possessing consciousness seems really high status nowadays, unlike, say... all those lousy low-status life forms like frogs and bees and mice. The idea that you can somehow improve your consciousness seems very appealing, because if insects and birds have little or none of that thing called consciousness, and people surely have some of it, then logically, if I can get more of that awesome "consciousness" than my neighbor, I'm superior to him in just the same way I'm superior to a frog. Really, self-help opened my eyes to how unconsciously I once lived my life, and nowadays I feel strongly about helping all those low-consciousness people realize their full potential, and I do my best to help them become more conscious beings...
It sounds ridiculous, but couldn't that be part of why consciousness is so damn important to us even if we have no clue what exactly it is? I may have no idea what consciousness is, but somehow I really insist that I have it; I mean, if everyone else says they have it, I surely have it too, can't be left out. Whatever consciousness is, we usually agree that bacteria don't have it and we do, so it must be important if some kinds of life have it and others don't.
Okay, let's get serious again. What distinct features of minds do we actually explicitly (and perhaps implicitly) care about when we attempt to employ that murky concept of consciousness? Hmm... well, if we care about it, we might gain insight into what exactly it is we care about by thinking about which specific situations make us choose to employ that word, and maybe from there we can distill why we seem to care so much.
Well, whatever consciousness means, most people agree the concept of awareness seems highly related or somehow relevant to it. Consciousness is often used as a synonym for self-awareness, but what on earth is that exactly? (And why would we ever need two words for the exact same thing?) For some people it means having "internal experiences", for others "being aware of having internal experiences", which don't quite seem to be the same thing from where I stand... but where do these intuitions about something I seemingly know nothing about come from? Probably personal experiences...
Sometimes I read a paragraph and my mind starts wandering and daydreaming until I snap out of it and think to myself, "Jesus, I was totally gone for a second, where was I again?" I realize my eyes are at the bottom of the paragraph already, and it seems like I semi-remember that they kept wandering over the letters and words as if I was actually reading them... without being aware. Moreover, my mind reawakening seems to have been triggered by arriving at the end of that paragraph and going "now what?", seemingly out of habit, because I usually stop at the end of a paragraph and consider whether I actually "got" what I read there. And sure enough, upon rereading the paragraph it seems very familiar to me... but I was not at all sure whether I had read it or not just a few seconds ago, and I'd say whatever the word "self-aware" means shouldn't really include that experience (or non-experience?) I just described. But did I lack "awareness" or just "self-awareness" in that example? Hmm...
[Part 2]
If I drive a car (especially on known routes) my "auto-pilot" takes over sometimes. I stop at a red light, but my mind is primarily focused on visually modeling the buttocks of my girlfriend in various undergarments or none at all. Am I actually "aware" of having stopped at the red light? Probably I was as much "aware" of the red light as a cheetah is aware of eating the carcass of a gazelle. Interestingly, my mind seems capable of visually modeling buttocks in my mind's eye while also reading real visual cues like red lights and habitually reacting to them - all at the same time. It seems I was more aware of my internal visual modeling than of the external visual cue, however. In a sense I was aware of both, yet I'm not sure I was "self-aware" at any point, because whatever that means, I feel like being self-aware in that situation would actually result in me going "Jesus, I should pay more attention to driving, I can still enjoy those buttocks in real life once I actually manage to arrive home unharmed".
So what's self-awareness then? I suppose I use that term to mean something roughly like: "thoughts that include a model of myself while modeling a part of reality on-the-fly based on current sensory input". If my mind is predominantly preoccupied with "daydreaming", i.e. creating and playing with a visual or sensory model that is based on manipulating memories rather than real sensory inputs, I don't feel like the term "self-awareness" should apply, even if that daydreaming encompasses a mental model of myself slapping a booty or whatever.
That's surely still quite ill-defined and far from maximally useful, but whenever I'm tempted to use the word self-aware I seem to roughly think of something like that definition. So if we were to use "consciousness" as a synonym for self-awareness (which I'm not a fan of, but quite a few people seem to be), maybe my attempt at a definition is a start toward something more useful, and it includes at least some of the "mental features" we seem to care about, like "a model of oneself" and "interpreting sensory input to create a model of reality".
The problem is that rats can construct models of reality as well, and these models outlive sensory inputs too, which is pretty clear from experiments that put rats in mazes. They are stuck for some time in a maze without any exit or any rewards present, but during that time they learn the layout of that maze, even though it's empty and they are not externally rewarded for doing so. Once you drop a treat into the maze, the rats that were able to wander around it beforehand know exactly how to get there as fast as possible, while rats new to that particular maze do not (the "cognitive revolution" in psychology). Presumably their rat-mind also features some kind of model of themselves, presumably one that mainly features their body, not so much their mind.
So, to make the concept of self-awareness, and perhaps consciousness, more useful: maybe what we really care about in the end is a mind being able to feature a model of its own mind (and thus of what we call "ourselves").
This is quite interesting... young children, and for example gorillas who were taught to communicate in sign language, seem to lack a fully developed "theory of mind". Meaning it seems they can't conceive of the possibility that other minds contain things theirs does not... well, kind of. If they do model other minds, they seem to model them a lot like copies of their own mind, or perhaps just slightly altered copies. Gorillas that can communicate in sign language are perfectly capable of answering questions about, e.g., their mood... implying self-awareness that goes somewhat beyond just recognizing their physical reflection in a mirror, and includes being aware of their own feelings, i.e. internal experiences. But they never, ever seem to get the brilliant idea of asking you a question, presumably because they can't conceive of the possibility that you know something they don't. Perhaps here we can draw a sensible line that differentiates between the terms self-awareness and consciousness, where the latter includes the ability to make complex models of the models contained in minds other than your own. I want to stress the word complex, as it doesn't seem like gorillas have no theory of mind at all, just some kind of more primitive version. It seems they model other minds as versions of their own minds in different states, aided by mirror neurons. Actually, upon reflection it's not so clear humans do it all that differently, seeing how prone we are to anthropomorphism. You know what I'm talking about if you gained new insights from "Three Worlds Collide" - it seems hard to conceive of nonhuman minds, and sometimes you end up with real nonsense like King Kong falling in love with a tiny female human because she has the "universally recognized property" called "beautiful". Also, I sometimes catch myself implicitly modeling other human minds in terms of "like me except for x, y, and z".
So maybe the reason why gorillas don't ask questions isn't really that they lack a theory of mind, but that their theory of mind does not include the other mind's model of reality. They seem quite capable when it comes to modeling the emotional states and needs of other minds, but they just seem to lack the insight that those minds also contain different perspectives on reality. Maybe that is what the term consciousness should describe... being able to create a model of a mind other than your own, including the fact that that mind has a different model of reality than your own. Yeah, I think this is it...
This seems to me like a genuinely more useful definition of what consciousness is, because it includes distinguishing features of minds you could actually test, with meaningful results as outcomes. At some point children start to riddle you with questions, but for gorillas capable of sign language that point just doesn't seem to arrive. The kinds of "questions" they ask are more along the lines of "Can I get X", or maybe rather "I want you to give me permission to do X".
Naturally not everyone can be happy with that definition because they really, really want to be able to say "my dog was unconscious when we visited the vet, but then it regained consciousness when it woke up", but I submit usefulness should trump habits of speech. Also I can totally conceive of other minds putting forth even more detailed and useful definitions of what the term consciousness should describe, so define away.
With pleasure!
Ok, so the old definition of "knowledge" was "justified true belief". Then it turned out that there were times when you could believe something true, but have the justification be mere coincidence. I could believe "Someone is coming to see me today" because I expect to see my adviser, but instead my girlfriend shows up. The statement as I believed it was correct, but for a completely different reason than I thought. So Alvin Goldman changed this to say, "knowledge is true belief caused by the truth of the proposition believed-in." This makes philosophers very unhappy but Bayesian probability theorists very happy indeed.
Where do causal and noncausal statistical models come in here? Well, right here, actually: Bayesian inference is just a logic of plausible reasoning, which means it's a way of moving belief around from one proposition to another, which just means that it works on any set of propositions for which there exists a mutually-consistent assignment of probabilities.
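To make that concrete, here is a minimal sketch in Python, with numbers I made up, reusing the "someone is coming to see me today" example from above. Nothing in it describes how either visitor would cause the evidence; the machinery only needs a mutually consistent probability assignment:

```python
# Minimal sketch of "moving belief around" between propositions.
# All numbers are made up for illustration.
prior = {"adviser is coming": 0.7, "girlfriend is coming": 0.3}

# Hypothetical likelihood of the evidence "a knock in the evening"
# under each proposition.
likelihood = {"adviser is coming": 0.2, "girlfriend is coming": 0.9}

# Bayes: posterior is proportional to prior * likelihood, then normalize.
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # belief has shifted toward "girlfriend is coming"
# Nothing above encodes a causal story about knocks; the update works
# on any set of propositions with a consistent probability assignment.
```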
This means that quite often, even the best Bayesians (and frequentists as well) construct models (let's switch to saying "map" and "territory") which not only are not caused by reality, but don't even contain enough causal machinery to describe how reality could have caused the statistical data.
This happens most often with propositions of the form "There exists X such that P(X)" or "X or Y" and so forth. These are the propositions where belief can be deduced without constructive proof: without being able to actually exhibit the object the proposition applies to. Unfortunately, if you can't exhibit the object via constructive proof (note that constructive proofs are isomorphic to algorithms for actually generating the relevant objects), I'm fairly sure you cannot possess a proper description of the causal mechanisms producing the data you see. This means that not only might your hypotheses be wrong, your entire hypothesis space might be wrong, which could make your inferences Not Even Wrong, or merely confounded.
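As a toy illustration of that point (a trivial example, not tied to any particular data): under the proofs-as-programs view, a constructive proof of an existential statement literally contains the witness, so it doubles as an algorithm for producing the object, whereas a purely classical existence proof need not.

```lean
-- Toy example: a constructive proof of an existential statement
-- carries its witness (here, the number 6), so the proof term is
-- also a recipe for producing the object it talks about.
example : ∃ n : Nat, n * n = 36 := ⟨6, rfl⟩

-- A classical proof could instead derive a contradiction from
-- "no such n exists"; that would certify existence without
-- containing any recipe for exhibiting the n.
```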
(I can't provide mathematics showing any formal tie between causation/causal modeling and constructive proof, but I think this might be because I'm too much of an amateur at the moment. My intuitions say that in a universe where incomputable things don't generate results in real-time and things don't happen for no reason at all, any data I see must come from a finitely-describable causal process, which means there must exist a constructive description of that process -- even if classical logic could prove the existence of, and proper value for, the data without encoding that constructive description!)
What can also happen, again particularly if you use classical logic, is that you perform sound inference over your propositions, but the propositions themselves are not conceptually coherent in terms of grounding themselves in causal explanations of real things.
So to use my former example of the Great Filter Hypothesis: sure, it makes predictions; sure, we can assign probabilities; sure, we can do updates. But nothing about the Great Filter Hypothesis is constructive or causal; nothing about it tells us what to expect the Filter to do or how it actually works. Which means it's not actually telling us much at all, as far as I can tell.
(In relation to Overcoming Bias, I've ranted similarly about explaining all possible human behaviors in terms of signalling, status, wealth, and power. Paging /u/Quirinus_Quirrell... If they see a man flirting with a woman at a party, Quirrell and Hanson will seem to explain it in terms of signalling and status, while I will deftly and neatly predict that the man wants to have sex with the woman. Their explanation sounds good until you try to read its source code, look at the causal machinery working, and find that it dissolves into cloud around the edges. My explanation grounds itself in hormonal biology and previous observation of situations where similar things occurred.)
The problem with the signaling hypothesis is that in everyday life there is essentially no observation you could possibly make that could disprove it. What's that? This guy is not actually signaling right now? No way, he's really just signaling that he is so über-cool that he doesn't even need to signal to anyone. Wait, there's not even anyone else in the room? Well, through this behavior he is signaling to himself how cool he is, to make himself believe it even more.
I guess the only way to find out is if we can actually identify "the signaling circuit" and do functional brain scans. I would actually expect signaling to explain an obscene amount of human behavior... but really everything? As I said, I can't think of any possible observation, outside of functional brain scans, that we could make that has the potential to disprove the signaling hypothesis of human behavior. (A brain scan where we actually know what we are looking at and are measuring the right construct, obviously.)
So Alvin Goldman changed this to say, "knowledge is true belief caused by the truth of the proposition believed-in." This makes philosophers very unhappy but Bayesian probability theorists very happy indeed.
If I am insane and think I'm the Roman emperor Nero, and then reason "I know that according to the history books the emperor Nero is insane, and I am Nero, so I must be insane", do I have knowledge that I am insane?
Interesting thought, but surely the answer is no. If I take the word "knowledge" in this context to mean having a model that reasonably depicts reality in its contextually relevant features, then the word "insane", in this specific instance, actually depicts two very different albeit related brain patterns.
Simply put, the brain pattern (wiring + process) that makes the person think they are Nero is a different, though surely related, physical object from the brain pattern that depicts what that person thinks "Nero being insane" might actually look like in terms of beliefs and behaviors. In light of the context we can say the person doesn't have any knowledge about being insane, since that person's knowledge does not include (or take seriously) the belief that depicts the presumably correct reality/model of that person not actually being Nero.
Put even more simply, we use the same concept/word to model two related but fundamentally different things. Does that person have knowledge about being insane? It's the tree-falling-in-the-forest problem: the word insane is describing two fundamentally different things yet is wrongly taken to mean the same thing. I'd claim any reasonable concept of the word insane results in you concluding that that person does not have knowledge about being insane in the sense that is contextually relevant in this scenario, while the person might actually have roughly true knowledge about how Nero might have been insane and how that manifested itself. But those are two different things, and the latter is not the contextually relevant knowledge about insanity here.
I admire the restraint involved in waiting nearly five years before selecting a favorite.
Well, too bad he didn't wait a year longer then ;). I think preferring torture is the wrong answer for the same reason that I think universal health care is a good idea. The financial cost of serious illness and injury is distributed over the taxpaying population, so no single individual has to deal with a spike in medical costs ruining their life. And I think it's still the correct moral choice regardless of whether universal health care happens to be more expensive or not.
Analogously, I think the exact same applies to dust vs. torture. I don't think the correct moral choice is about minimizing the total area under the pain-curve at all; it's about avoiding severe pain-spikes for any given individual, even at the cost of having a larger area under the curve. I don't think "shut up and multiply" applies here in its simplistic conception, the way it might apply in the scenario where you have to choose whether 400 people live for sure or 500 people live with .9 probability (and die with .1 probability).
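To make the two rules explicit, here is a minimal sketch with made-up pain numbers (nothing from the original thought experiment): total-pain minimization and worst-spike minimization can rank the very same pair of scenarios in opposite ways.

```python
# Made-up per-person pain values, purely to contrast two aggregation rules.
dust    = [1.0] * 1_000_000   # tiny pain for very many people
torture = [100_000.0]         # huge pain for a single person

def total_pain(scenario):     # "area under the pain-curve"
    return sum(scenario)

def worst_spike(scenario):    # pain of the worst-off individual
    return max(scenario)

print(total_pain(dust), total_pain(torture))    # 1000000.0 vs 100000.0
print(worst_spike(dust), worst_spike(torture))  # 1.0 vs 100000.0
# Minimizing the total prefers torture here; minimizing the worst
# spike prefers dust, no matter how many people get a speck.
```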
Irrespective of the former, however, the thought experiment is a bit problematic because it's more complex than it appears at first, if we really take it seriously. Eliezer said the dust specks are "barely noticed", but being conscious or aware of something isn't an either-or thing; awareness falls on a continuum, so whatever "pain" the dust specks cause has to be multiplied by how aware the person really is. If someone is tortured, that person is presumably very aware of the physical and emotional pain.
Other possible consequences like lasting damage or social repercussions aside, I don't really care all that much about any kind of pain that happens to me while I'm not aware of it. I could probably figure out whether or not pain is actually registered in my brain during my upcoming operation under anesthesia, but the fact that I won't bother tells me very clearly that awareness of pain is an important weight we have to multiply, in some fashion, with the actual pain-registration in the brain.
That's just an additional consideration, though: even if we simplify and imagine the pain is directly comparable, with no difference in quality at all, while the total quantity of pain is excessively higher in the dust scenario compared to the torture scenario, it changes nothing about my current choice.
So what does that tell me about the relationship between utility and morality? I don't accept that morality is just about the total lump sums of utility and disutility; I think we also have to consider their distribution across any given population. Why is that, I ask myself, and my brain offers the following answer:
If I were the only agent in the entire universe and had to pick torture vs. dust for myself (and obviously if I were immortal or had a long enough life to experience all those dust specks), I would still prefer the larger area under the curve over the pain-spike, even if I assume direct comparability of the two kinds of pain. I suspect the reason for this choice is a type of time-discounting my brain does: I'd rather suffer a little pain every day for a trillion years than a big spike for 50 years. Considering that, briefly speaking, utility is (or at least I think should be defined as) something that only results from the interaction of minds and environments, my mind and its workings are definitely part of the equation that says what has utility and what doesn't. And my mind wants to suffer low disutility evenly distributed over a long time period rather than great disutility in a 50-year spike (assuming a trillion-year lifetime).
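A minimal sketch of that time-discounting intuition, with entirely made-up numbers and a plain exponential discount (not a claim about how brains actually discount): the undiscounted "area under the curve" is vastly larger for the eternal tiny pain, yet the discounted total comes out smaller.

```python
# Yearly disutility, discounted exponentially per year; all numbers
# are made up just to show how discounting can flip the ranking.
def discounted_total(pain_per_year, years, discount=0.99):
    return pain_per_year * (1 - discount ** years) / (1 - discount)

small_forever = discounted_total(1.0, 10**12)   # tiny pain, ~a trillion years
big_spike     = discounted_total(1000.0, 50)    # huge pain, 50 years

# Undiscounted totals: 1e12 vs 5e4 (the tiny pain "wins" by area).
# Discounted totals: roughly 100 vs 39500, so the spread-out tiny
# pain is now the preferred (smaller) option.
print(small_forever, big_spike)
```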
Well, why does anyone want to do anything? Your question implied that one might want to "do better", which strikes me as underinformed.
EDIT: I just figured out something really interesting, but I'm almost out of charge on the computer; will update in a bit.
Maybe it was, looking at that 50,000 €/year number solipsist quotes. In Germany you earn barely half of that before tax.
But that's not at all the main reason why I ask, to be perfectly honest. I remember Swimmer portraying herself as having some form of social anxiety, so this job strikes me as a particularly counterintuitive choice.