All of jeronimo196's Comments + Replies

"The mystery is why the community doesn't implement obvious solutions. Hiring PR people is an obvious solution. There's a posting somewhere in which Anna Salamon argues that there is some sort of moral hazard involved in professional PR, but never explains why, and everyone agrees with her anyway."

 

""You", plural, the collective, can speak as freely as you like ...in private."

 

Suppose a large part of the community wants to speak as freely as it likes in public, and the mystery is solved.

We even managed to touch upon the moral hazard involved in professional PR - insofar as it is a filter between what you believe and what you say publicly.

1TAG
There's a hazard in having no filters, as well. One thing being bad doesn't make another good.

Semantics. 

Good PR requires you to put a filter between what you think is true and what you say.

1TAG
It requires you to filter what you publicly and officially say. "You", plural, the collective, can speak as freely as you like ...in private. But if you, individually, want to be able to say anything you like to anyone, you had better accept the consequences.

The level of PR you aim for puts an upper limit to how much "radical" honesty you can have.

If you aim for perfect PR, you can have 0 honesty.

If you aim for perfect honesty, you can have no PR. LessWrong doesn't go that far, by a long shot - even without a PR team present.

 

Most organizations do not aim for honesty at all.

 

The question is where we draw the line.

 

Which brings us to "Disliking racism isn't some weird idiosyncratic thing that only Gerard has." 

 

From what I understand, Gerard left because he doesn't like discussions ab...

1TAG
honesty=/=frankness. Good PR does not require you to lie.

"It's sad that our Earth couldn't be one of the more dignified planets that makes a real effort, correctly pinpointing the actual real difficult problems and then allocating thousands of the sort of brilliant kids that our Earth steers into wasting their lives on theoretical physics.  But better MIRI's effort than nothing."

 

To be fair, a lot of philosophers and ethicists have been trying to discover what "good" means and how humans should go about aligning with it.

Furthermore, a lot of effort has gone into trying to align goals and incentives ...

For any statement one can make, there will be people "alienated" (=offended?) by it. 

 

David Gerard was alienated by a race/IQ discussion and you think that should've been avoided. 

But someone was surely equally alienated by discussions of religion, evolution, economics, education and our ability to usefully define words. 

 

Do we value David Gerard so far above any given creationist, that we should hire a PR department to cater to him and people like him specifically? 

 

There is an ongoing effort to avoid overtly political t...

4TAG
If you also assume that nothing is available except perfection, that's a fully general argument against PR, not just against the possibility of LW/MIRI having good PR. If you don't assume that, LW/MIRI can have good PR, by avoiding just the most significant bad PR. Disliking racism isn't some weird idiosyncratic thing that only Gerard has.

Tolstoy sounds ignorant of game theory - probably because he was dead when it was formulated.

Long story short, non-cooperating organisms regularly got throttled by cooperating ones, which is how we evolved to cooperate.

14 years too late, but I can never pass on an opportunity to recommend "Essence of Calculus" by 3blue1brown on youtube.

It is a series of short clips, explaining Calculus concepts and core ideas without too much formalism and with plenty of geometric examples.

"Dear God" by XTC is my favourite atheist hymn. On the other hand, "Transcendence" with Johnny Depp made me feel empathy for Christians watching Bible flicks - I so wanted to like the damn thing.

As to OP's main point, "politics is the art killer" has recently entered the discourse of almost every fandom (if the franchise is still ongoing). Congratulations on pointing out yet another problem years before it became so exacerbated that people can no longer ignore it.

Reverse stupidity is not wisdom. Here we have reversed ad populum (aka The Hipster's Fallacy). Pepsi and Macs are not strictly superior to their more popular counterparts by dint of existing. Rather, their existence is explained by comparative advantage in some cases for some users.

I've heard Peterson accuse feminists of disregarding what is true in the name of ideology on many occasions.

Sam Harris initially spent an hour arguing against Peterson's redefinition of "truth" to include a "moral dimension". They've clashed about it since, with no effect. Afaik, "the Bible is true because it is useful" is a central component of Peterson's worldview.

To be fair, I believe Peterson has managed to honestly delude himself on this point and is not outright lying about his beliefs.

Nevertheless, when prompted to think of a "General Defense of Fail", attempting to redefine the word "truth" in order to protect one's ideology came to mind very quickly.

If we accept MWI, cryonics is a backdoor to Quantum Immortality, one which waiting and hoping may not offer.

Parents getting to their 9 to 5 jobs on time is more important.

Going any further would require tabooing "task".

I agree your reading explains the differences in responses given in the survey.

1Дмитрий Зеленский
Unfortunately, it is quite difficult to taboo a term when discussing how (mis)interpretation of said term influenced a survey.

Creating an AI that does linguistic analysis of a given dataset better than me is easier than creating an AI that is a better linguist than me, because the latter actually requires additional tasks such as writing academic papers.

If AI is not better than you at the task "write an academic paper", it is not at the level specified in the question.

If a task requires output for both the end result and the analysis used to reach it, both shall be outputted. At least that is how I understand "better at every task".

2Дмитрий Зеленский
Moreover, even if my understanding is ultimately not what the survey-makers had in mind, the responding researchers having the same understanding as me would be enough to get the results in the OP.
2Дмитрий Зеленский
I would say that, in ideal world, the relevant skill/task is "given the analysis already at hand, write a paper that conveys it well" (and it is alarming that this skill becomes much more valuable than the analysis itself, so people get credit for others' analyses even when they clearly state that they merely retell it). And I fully believe that both the task of scientific analysis (outputting the results of the analysis, not its procedure, because that's what needed for non-meta-purposes!) and the task outlined above will be achieved earlier than an AI that can actually combine them to write a paper from scratch. AND that each new simple task in the line to the occupation further removes their combination even after the simple task itself is achieved.

Thank you for the link.

Right, none of our models are philosophically grounded. But, does that make them all equal? That's what the post sounds like to me:

Well maybe: deny the concept of objective truth, of which there can only be one, and affirm subjectivism and pluralism.

To me, this seems like the ultimate Fallacy of Gray.

Then again, I am not well read in philosophy, so my comments might be isomorphic to "Yay pragmatism! Go objectivity!", while those may or may not be compatible.

The IoT (internet of things) comes to mind. Why not experience WiFi connectivity issues while trying to use the washing machine?

Everything trying to become a subscription service is another example (possibly related to IoT). My favourite is a motorcycle lifesaving airbag vest, which won't activate during a motorcycle crash if the user misses a monthly payment. The company is called Klim, and in fairness, the user can check whether the airbag is ready for use before getting on their bike.

Extractable internal data is only needed during troubleshooting. During normal operation, only the task result is needed.

As for the time/process-flow management, I already consider it a separate task - and probably the one that would benefit most drastically from being automated, at least in my case.

1Дмитрий Зеленский
Well, that's not quite true. Let's go to the initial example: you need to write a linguistic paper. To do this, you need at least two things: perform the linguistic analysis of some data and actually put it in words. Yet the latter needs the internal structure of the former, not just the end result (as would most currently-practical applications of a machine that does a linguistic analysis). The logic behind trees, for instance, not just a tree-parsed syntactic corpus. A neural network (RNN or something) making better and quicker tree-parsed syntactic corpora than me would just shrug (metaphorically) if asked for the procedure of tree-making. I am near-certain other sciences would show the same pattern for their papers. A managing AI would also have to manually handle information flow between the other AIs more generally, which is kinda "automatic" for human minds (though with some important exceptions, leading to the whole idea of mental modules a la Fodor).

Yes, there probably is an in-universe explanation for why organic pilots are necessary. I think droids were shown to be worse fighters than clones (too slow/stupid ?) in the Prequels.

However, the implied prediction that FTL travel will be discovered before AI pilots superior to humans still seems unlikely.

1Дмитрий Зеленский
Well, it was specifically the B1 mass-production droids which were made incredibly cheap, and so with, let's say, not the best AI ever. A rare model like HK-47 was superior to usual (neither Force-amplified nor decades-of-training-behind-Mandalore) humans; and the latter case could also be a difference in available weaponry (if your weapon cannot penetrate amplified beskar armor and you only find this out in the moment of attack, you'd need to be very smart to immediately find a way to win or retreat before the Battle Reflexes guy shuts you off). As for FTL - I wouldn't be so sure; the history of research sometimes makes strange jumps. The Romans were this close to going all steampunk, and a naive modern observer could say "having steam machines without gunpowder seems unlikely". Currently we don't know what, if anything, could provide FTL, and the solution could jump on us unexpectedly and unrelatedly to AI development.

I don't see how acknowledging that different models work in different contexts necessitates giving up the search for objective truth.

Let's say that in order to reduce complexity, we separate Physics into two fields - Relativistic Mechanics and Quantum Mechanics - whose models currently don't mesh together. I think we can achieve that without appealing to subjectivity, or abandoning the search for a unifying model. Acknowledging the limitations of our current models seems enough.

3Gordon Seidoh Worley
You're right, simply acknowledging differences is insufficient. You have to run it all the way to ground to discover there was never any ground as solid as you thought there was.

After the training begins, something like 80% of the recruits drop out during Hell Week. SEALs are selected for their motivation, which is not available to everyone headed for a warzone.

On the other hand, if you'd really like an existential threat to get you going, you may consider looking into the problem of goal alignment in AGI, or aging.

I'd ask for everyone to be given Wolverine's healing factor. This would be really helpful and enough to start taking the scenario seriously.

As a side note, Teenage Matrix Overlords are indistinguishable from god from where I am standing.

The cure for bystander apathy is getting one person to lead by example. Since in this case there are several prominent such examples, a Tragedy of the Commons scenario seems more likely to me.

You are right, it's not possible to tell if this happens implicitly or explicitly (in which case there is nothing to be done anyway).

I'll be damned - after all these years, a solution to the problem of evil.

I notice I am far less moved than I expected to be. Then again, I didn't expect this to happen at all.

I don't know that. I was taught about the asteroid years ago and haven't had a reason to doubt it. The last time I came across the subject was a video by Kurzgesagt, explaining that the asteroid wrecked stuff up for years to come, causing earthquakes and volcanic eruptions, among other things.

3blue1brown has an excellent YouTube series, "Essence of Calculus", which presents the main intuitions geometrically, in a way that finally helped me remember the formulas for longer than a week. Each video is 15 minutes long and I haven't seen a better introduction to the subject.

Edit: I realise the post I comment on might've been written before the Sequence on Changing Your Mind I so proudly point to.

I was recently talking to Ozy about a group who believe that society billing thin people is fatphobic, and that everyone needs to admit obese people can be just as attractive and date more of them, and that anyone who preferentially dates thinner people is Problematic. They also want people to stop talking about nutrition and exercise publicly. I sympathize with these people, especially having recently read a study showing that obes

...

Ideally, I agree with the premise as a long-term strategy, as opposed to the short-term tactics used in a campaign. But I am not convinced any of the active actors and policymakers would want the waterline to rise. Why change the system which elected you? What politician wants an electorate that would actually hold them accountable? What suicidal newspaper would want the question of gun violence answered?

It is possible the educational system is doing exactly what it is supposed to do, and raising the sanity waterline will have to be achieved on people's ow...

A sane person calling himself a feminist - not something one often sees represented in the media. Then again, this is true of sane people in general - a fact I tend to forget. I wish you luck in defending the movement from the believe-all-women crowd. From the outside, it looks like they've already won.

Leaving feminism aside, there is one area where liberalism doesn't seem to win - namely income inequality.

And sure, as long as the majority is not squeezed into poverty, this may not be a problem. But since this is the first generation of US citizens to see their average lifespan get shorter, I am not so certain we are headed for liberal utopia after all.

Limited field of view and slow decision making, reliant on multiple complex systems for life support.

If a droid or a computer could fly an X-wing, it should.

2Дмитрий Зеленский
It can and usually does. Note that we do see some scenes where a pilot leaves the ship and it, seemingly by itself, flies away to park or something (for instance, R4 does it in Episode III to Obi-Wan's ship IIRC). It might actually be a funny story of each side using organic pilots because the other side uses human pilots and astrodroids are not that good in predicting organics' behavior, so it is just a Pareto equilibrium.

I never said it was just Islam. But you are right - it is not Christians, but rather white people, that are held to a higher standard in this regard (at least by USA liberals).

Fallacies leading to an inability to act in accordance with one's values are one explanation for people's apathy.

Another is that they simply prefer their own short-term comfort over most other values they would care to espouse. I know this to be the case for at least one person, and I am pretty sure there are more.

I am somehow convinced that a perceived loon like Elon Musk, opening 20 positions for AI safety researchers at a $10 million yearly salary, will have much better luck recruiting than an elite university offering $100,000 (or the potential ca...

3Emiya
Since I wrote my comment I had lots of chances to prod at the apathy of people to act against imminent horrible doom.

I do believe that a large obstacle is that going "well, maybe I should do something about it, then. Let's actually do that" requires a sudden level of mental effort and responsibility that's... well, it's not quite as unlikely as oxygen turning into gold, but you shouldn't just expect people to do that (it took me a ridiculous amount of time before starting to do so). People are going to require a lot of prodding, or an environment where taking personal responsibility for a collective crisis is the social norm, to get moving.

10 million would count as a lot of prodding, yeah. 100k... eh, I'd guess lots of people would still jump at that, but not many of those who are paid the same amount or more. So a calculation like "I can enjoy my life more by doing nothing, lots of other people can try to save the world in my place" might be involved, even if not explicitly. It's a mixture of the Tragedy of the Commons and of Bystander Apathy, two psychological mechanisms with plenty of literature.

If my job consists of 20 different tasks, and for each of them there is a separate narrow AI able to outperform me in them, combining them to automate me should not be that difficult.

1Дмитрий Зеленский
I am afraid I cannot agree. For one, this would require a 21st AI, the "managing AI", that does the combining. Moreover, the data exchange between these narrow AIs may be slower and/or worse (especially considering that many of the strong domain-specific AIs don't really have extractable internal data of any use).
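The disagreement above can be made concrete with a sketch. Below is a minimal "managing AI" layer: a dispatcher that routes each task name to a narrow specialist and passes only end results between them. All names and the two stand-in "specialists" are hypothetical illustrations, not anything from the thread; note that the router sees only outputs, which is exactly the "no extractable internal data" limitation being raised.

```python
# Sketch of a "managing AI": route each task to a narrow specialist,
# chaining only their end results (never their internal reasoning).
from typing import Callable, Dict

def make_manager(specialists: Dict[str, Callable[[str], str]]) -> Callable[[str, str], str]:
    """Return a dispatcher that hands a named task to its narrow specialist."""
    def manage(task: str, payload: str) -> str:
        if task not in specialists:
            raise ValueError(f"no specialist for task: {task}")
        return specialists[task](payload)
    return manage

# Stand-ins for narrow AIs (purely illustrative).
specialists = {
    "parse": lambda text: f"parsed({text})",       # e.g. syntactic analysis
    "write": lambda analysis: f"paper-from({analysis})",  # e.g. paper writing
}
manage = make_manager(specialists)

# Chain two specialists: the "write" step only ever sees the end result
# of the "parse" step, not how the parse was produced.
print(manage("write", manage("parse", "corpus")))  # → paper-from(parsed(corpus))
```

Whether such a thin layer suffices is the whole dispute: it works only when end results carry all the information the next task needs.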

"and that one reason we’re not smarter may be that it’s too hard to squeeze a bigger brain through the birth canal" - should be pretty much obliterated by the modern Caesarean section, but do we see a burst of intelligence in recent decades?

Reliable contraceptives, combined with unprecedented safety, mean that intelligence is not the evolutionary advantage it once was. People unable or unwilling to use condoms are selected for. Idiocracy is upon us.

Another possibility is that the modern Caesarean section has not been widespread enough, for long enough, for its effect on intelligen...

It is only their culture that's under "siege" and it's a different kind of siege involving no laws or planned attempts to erase their cultural ways...

A redneck has seen gay marriage legalised in his lifetime, while homosexuality is still illegal in 71 countries. Islam seems to get a lot more leniency on this topic, compared to Christianity.

Rural British and American Rednecks aren't certainly seeing their resources appropriated by the powers behind the immigrants.

If I remember my history correctly, the Industrial Revolution didn't go so smoothly for ...

2Teerth Aloke
Not just Islam. It was illegal in India 3 years back. Also, Christian-majority Barbados, Antigua, Cameroon, Burundi, Zambia, Zimbabwe, Namibia and a few other African countries ban homosexuality.

After reading these, I am updating from "greedy capitalists + corrupt officials" to "greedy capitalists + corrupt officials + litigation costs". I expect the administrative bloat is more or less the same in the US and Europe, while litigation and official, legal lobbying are not.

Inflation isn't calculated correctly and the market isn't free.

It's what you'd expect to see in an oligarchy - politicians promising less regulation for new businesses or universal healthcare both won't deliver. Unlike OP, who delivers consistently.

On the inflation point, googling CPI: "The Controversy: Originally, the CPI was determined by comparing the price of a fixed basket of goods and services spanning two different periods. In this case, the CPI was a cost of goods index (COGI). However, over time, the U.S. Congress embraced the view that the CPI sho...
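The fixed-basket (COGI) calculation described in that quote is simple enough to sketch. The basket contents, quantities, and prices below are made-up numbers for illustration only:

```python
# Fixed-basket price index: quantities are held constant across periods
# (a cost-of-goods / Laspeyres-style index, as in the quoted COGI description).
basket = {"bread": 10, "milk": 8, "fuel": 5}                 # quantities (illustrative)
prices_base = {"bread": 2.00, "milk": 1.50, "fuel": 3.00}    # base-period prices
prices_now  = {"bread": 2.20, "milk": 1.65, "fuel": 3.60}    # current prices

def fixed_basket_index(basket, p0, p1):
    """Cost of the same basket now, relative to the base period, scaled to 100."""
    cost0 = sum(q * p0[item] for item, q in basket.items())
    cost1 = sum(q * p1[item] for item, q in basket.items())
    return 100 * cost1 / cost0

print(round(fixed_basket_index(basket, prices_base, prices_now), 1))  # → 113.2
```

The controversy in the quote is precisely about abandoning this fixed basket: once quantities are allowed to shift toward cheaper substitutes, the measured index rises more slowly.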

I once listened to a fellow arguing that "truth" should have a moral component in its definition, so that only statements beneficial to humanity could be considered "true". On the other hand, dangerous knowledge of civilization-ending viruses was harmful, and could only be considered "technically correct", but never "true". He was carving reality so haphazardly as to only be able to call "true" The Bible and "Crime and Punishment" by Dostoevsky. Although how he imagines having achieved this before humanity has ended escapes me.

I haven't listened to hi...

Great post!

However, I have the following problem with the scenario - I have a hard time trusting a doctor who prescribes a diet pill and a consultation with a surgeon, but omits healthy diet and exercise. (Genetic predisposition does not trump the laws of thermodynamics!)
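The thermodynamics point is just bookkeeping. A back-of-the-envelope sketch, assuming the common rule of thumb of roughly 7700 kcal per kg of body fat (a rough approximation, not a clinical model; the intake and expenditure figures are made up):

```python
# Back-of-the-envelope energy balance: weight change follows the
# calorie surplus/deficit, using the ~7700 kcal-per-kg-of-fat rule of thumb.
KCAL_PER_KG = 7700.0  # rough approximation, not a clinical constant

def weight_change_kg(daily_intake_kcal, daily_expenditure_kcal, days):
    """Estimated body-mass change over `days` from a constant daily balance."""
    surplus = (daily_intake_kcal - daily_expenditure_kcal) * days
    return surplus / KCAL_PER_KG

# 30 days at a +300 kcal/day surplus (illustrative numbers):
print(round(weight_change_kg(2500, 2200, 30), 2))  # → 1.17
```

Genetics can shift the expenditure side of the ledger, but it cannot make the ledger stop balancing - which is why omitting diet and exercise from the prescription seems suspect.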

In general, I don't know of any existing medicine that can effectively replace willpower when treating addiction - which is why treatment is so difficult in the first place.

Psychology tells us that, on the individual level, encouragement works better than blame. Although both have far less impact than one would hope.

I think the official title is "motivated scepticism".

There is more to productivity than not engaging in pleasurable hobbies. I am willing to extend EY the benefit of the doubt and believe he has done some cost/benefit analysis regarding his time management.

In any case, the point is moot - he is not publishing fiction anymore.

If it is any consolation, I remember reading a post or an Author's Note from EY, saying he won't be publishing any new fiction for fear of reputational losses.

This is why we can't have nice things.

-2TAG
It's not writing fanfic that's the problem, it's not doing other things.

For every mental strength we confidently point to, there will be an excellent physical strength we could also point to as a proximate cause, and vice versa.

I agree with you. I just find the particulars oddly inspiring - even if we are not the fastest land hunters, we are genetically the most persistent. This is a lesson from biology that bears thinking about.

Also, we could point to our physical strengths, but people usually don't. We collectively have this body image of ourselves as being "squishy", big brains compensating for weak, frail bodies. I like disabusing that notion.

I see your point. But if water didn’t always boil at the same temperature, why would we bother inventing thermometers?

We have more need to measure the unpredictable than the predictable.

If there was nothing with constant temperature, thermometers would work very differently. My first instinct was to say they wouldn't work at all. But then I remembered the entire field of economics, so your point stands.

Not everyone sees things that way. The more hardline claims require the physical map to exclude others.

Good luck with that. I couldn't calculat

...

Thank you for this discussion.

I was wrong about grammar and the views of Chalmers, which is worse. Since I couldn't be bothered to read him myself, I shouldn't have parroted the interpretations of someone else.

I now have a better understanding of your position, which is, in fact, falsifiable.

We do agree on the importance of the question of consciousness. And even if we expect the solution to have different shape, we both expect it to be embedded in physics (old or new).

I hope I've somewhat clarified my own views. But if not, I don't expect to do better in future comments, so I will bow out.

Again, thank you for the discussion.

1DPiepgrass
Yeah, this was a good discussion, though unfortunately I didn't understand your position beyond a simple level like "it's all quarks".

On the question of "where does a virtual grenade explode", to me this question just highlights the problem. I see a grenade explosion or a "death" as another bit pattern changing in the computer, which, from the computer's perspective, is of no more significance than the color of the screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act like it cares about "beauty" and "love" and "being in pain", but it seems to me that nothing can really matter to the computer because it can't really feel anything. I once wrote software which actually had a concept that I called "pain". So there were "pain" variables and of course, I am confident this caused no meaningful pain in the computer.

I intuit that at least one part* of human brains is different, and if I am wrong it seems that I must be wrong either in the direction of "nothing really matters: suffering is just an illusion" or, less likely, "pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter", though I have no idea how this could be true.

* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word "elephant" comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain's computations: a holistic sense of elephant-ness (and I feel as though I "understand" this output—even though I don't understand what "understanding" is). I have no insight into what computations happened, nor how. My interpretation of this

But note that Linux is a noun and "conscious" is an adjective—another type error—so your analogy doesn't communicate clearly.

Linux is also an adjective - linux game/shell/word processor.

Still, let me rephrase then - I don't need a wet cpu to simulate water. Why would I need a conscious cpu to simulate consciousness?

AFAIK, you are correct that we have no falsifiable predictions as of yet.

Do you expect this to change? Chalmers doesn't. In fact, expecting to have falsifiable predictions is itself a falsifiable prediction. So you should drop the "yet".

...
2DPiepgrass
"Car" isn't an adjective just because there's a "Car factory"; consider: *"the factory is tall, car, and red".

Yes, but I expect it to take a long time because it's so hard to inspect living human brains non-destructively. But various people theorize about the early universe all the time despite our inability to see beyond the surface of last scattering... ideas about consciousness should at least be more testable than ideas about how the universe began.

Hard problems often suffer delays; my favorite example is the delay between the Michelson–Morley experiment's negative result and the explanation of that negative result (Einstein's Special Relativity). Here, even knowing with certainty that something major was missing from physics, it still took 18 years to find an explanation (though I see here an ad-hoc explanation was given by George FitzGerald in 1889 which pointed in the right direction). Today we also have a long-standing paradox where quantum physics doesn't fit together with relativity, and dark matter and dark energy remain mysterious... just knowing there's a problem doesn't always quickly lead to a solution. So, while I directly sense a conflict between my experience and purely reductive consciousness, that doesn't mean I expect an easy solution. Assuming illusionism, I wouldn't expect a full explanation of that to be found anytime soon either.

It was just postulation. I wouldn't rule out panpsychism.

Chalmers seems not to believe in a consciousness without physical effects - see his 80000 hours interview. So Yudkowsky's description of Chalmers' beliefs seems to be either flat-out wrong, or just outdated.

I do hope we solve this before letting AGIs take over the world, since, if I'm right, they won't be "truly" conscious unless we can replicate whatever is going on in humans. Whether EAs should care about insect welfare, or even chicken welfare, also hinges on the answer to this question.

How do you know that water always boils at the same temperature?

I remember reading it somewhere...

I see your point. But if water didn't always boil at the same temperature, why would we bother inventing thermometers?

The moral of the story is not so much that science always works, it's that it works in a way that's more coherentist than foundationalist.

Right. And since science does work, coherentism gets a big boost in probability, right until the sun stops rising every day.

And the downside of coherentism is that you can have more than one equally c

...
2TAG
We have more need to measure the unpredictable than the predictable. Not everyone sees things that way. The more hardline claims require the physical map to exclude others.

Edit: Now I see Sister_Y addressed my point in the very next paragraph, so this entire comment is a reading comprehension fail more than anything.

Necroing:

poke - my friend likes to explain this to his undergrads by asking them how they would verify that a thermometer is accurate (check it against another thermometer, but how do you know that one is accurate . . . etc.) until they figure out that thermometers are only "accurate" according to custom or consensus. Then he asks them how they know their eyes work. And their memories.

Some of them cry.

Go t

...
5TAG
How do you know that water always boils at the same temperature? Well, you could use a reliable thermometer... The moral of the story is not so much that science always works, it's that it works in a way that's more coherentist than foundationalist. And the downside of coherentism is that you can have more than one equally coherent worldview...
A good "atheistic hymn" is simply a song about anything worth singing about that doesn't happen to be religious.

No, that's a good non-religious song. Without religion there would be no atheism, only the much broader scepticism. Atheism is a response to religion - to be considered "atheistic", a song could not avoid the topic. (Alternatively, we'd have to consider "Fear of the Dark" a great a-spiderman song).

The best atheistic song I've heard is "Dear God" by XTC - the last prayer of many a new at...

Because I believe things are what they are. Therefore if I introspect and see choice, then it really truly is choice. The other article might explain it, but an explanation can not change what a thing is, it can only say why it is.

An example of the mind projection fallacy so pure that even I could recognise it. Ian believes "he believes things are what they are". If Ian actually believed things are what they are, he would possess an unobtainable level of rationality and we would do well to use him as an oracle. In reality, Ian believes things are what they seem to be (to him), which is understandable, but far less impressive.

I think of consciousness as a process (software) run on our brains (wetware), with the theoretical potential to be run on other hardware. I thought you understood my position. Asking me to pinpoint the hardware component which would contain suffering tells me you don't.

To me, saying the CPU (or the GPU) is conscious sounds like saying the CPU is Linux - this is a type error. A PC can be running Linux. A PC cannot actually be Linux, even if "running" is often omitted.

But if one doesn't know "running" is omitted, one could ask where does the Linux-ness come

...
0DPiepgrass
So, I think we've cleared up the distinction between illusionism and non-illusionism (not sure if the latter has its own name), yay for that.

But note that Linux is a noun and "conscious" is an adjective—another type error—so your analogy doesn't communicate clearly.

I can't be sure of that. AFAIK, you are correct that we have no falsifiable predictions as of yet—it's called the "hard problem" for a reason. But illusionism has its own problems. The most obvious problem—that there is no "objective" subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a "boundary" or "experience", but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me.

I think you're saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you're fine with it; I'm saying I don't get it.

But perhaps illusionism's consequences are a problem? In particular, in a future world filled with AGIs, I don't see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering "more" than a human, or than another AGI with different code? (I'm not asking for an answer, just asserting that a problem exists.)