Interesting proposal.
I would suggest one modification: a "probation" period for content, changing the rule "Content ratings above 2 never go down, except to 0; they only go up." to "Once a piece of content has stayed at level 2 or above for long enough (two days? one week?), it can never go down, only up", to make the system less vulnerable to the order in which content gets rated.
Something important is missing from the list, IMHO: no new physics was discovered at the LHC, even running at 14 TeV. No SUSY, no new particle, nothing but a confirmation of all the predictions of the Standard Model.
It's relatively easy to miss because it's a "negative" discovery (nothing new), but since many were expecting some hints of new physics from the 2016 LHC runs, the confirmation of the Standard Model (and the death sentence it represents for many theories, like many forms of SUSY) is news.
Answer 1 is not always possible: it works when you're answering on IRC or an Internet forum, but usually not in a real-life conversation.
As for #3, it is sometimes justified: there are people out there who will use unnecessarily obscure words just to appear smarter or impress people, or who will deliberately use unnecessarily complex language just to obfuscate the flaws of their reasoning.
You're right that #1 is (when available) nearly always the best reaction, and that the cases where #3 is true (unless you're speaking to someone trying to sell you homeopathy, or to some politicians) are rare, but people having miscalibrated heuristics is sadly a reality we have to deal with.
Sounds like a good idea, but from a practical point of view, how do you count those 12 seconds? I can count 12 seconds more or less accurately, but I can't do that as a background process while trying to think hard. Do you use some kind of timer/watch/clock? Or does the person asking the question count on their fingers?
I know the "12 seconds" isn't a magical number; if it ends up being "10" or "15" it won't change much. But if you give a precise number (not just "think before answering"), you have to somehow try to stick to it.
I expect that the utility per unit time of future life is significantly higher than what we have today, even taking into account the loss of one's social network.
Perhaps, but that's highly debatable. Anyway, my main point was that the two scenarios (bullet / cryonics) are not anywhere near being mathematically equivalent; there are a lot of differences, both for and against cryonics, and pretending they don't exist is not helping. If anything, it just reinforces the Hollywood stereotype of the "Vulcan rationalist" who doesn't have any feeling or em...
Hmm, first, I find your numbers very unlikely: cryonics costs more than $1/day, and definitely has less than a 10% chance of working (between the brain damage done by the freezing, the chance that the freezing can't be done in time, disaster striking the storage place before resurrection, the risk of societal collapse, the unwillingness of future people to resurrect you, ...).
Then, the "bullet" scenario isn't comparable to cryonics, because it completely ignores all the context and the social network. A significant part of why I don't want to die (not the o...
Well, I would consider it worrying if a major public advocate of antideathism were also publicly advocating a sexuality that is considered disgusting by most people, like, say, pedophilia or zoophilia.
It is an unfortunate state of the world, because sexual (or political) preferences shouldn't have any significant impact on how you evaluate someone's position on unrelated topics, but that's how the world works.
Consider someone who never really thought about antideathism, who opens the newspaper in the morning and reads about that person who publicly advocates disgust...
"Infinite" is only well-defined as the precise limit of a finite process. When you say "infinite" in absolute, it's a vague notion that is very hard to manipulate without making mistakes. One of my university-level maths teacher kept saying that speaking of "infinite" without having precise limit of something finite is equivalent to dividing by zero.
I am, and not just about MIRI/AI safety, but also about other topics like anti-deathism. Just today I read, in a major French newspaper, an article explaining how Peter Thiel is the only one from Silicon Valley to support the "populist demagogue Trump", and, in the same article, that he also has this weird idea that death might ultimately be a curable disease...
I know that reversed stupidity isn't intelligence, and about the halo effect, and that Peter Thiel having disgusting (to me, and to most French citizens) political tastes has no bearing on him being right or wrong about death, but many people will end up associating antideathism with being a Trump-supporting lunatic :/
Imagine a cookie identical to an Oreo down to the last atom, except that it's deadly poisonous, weighs 100 tons, and runs away when scared.
Well, I honestly can't. When you tell me that, I picture a real Oreo, and then, at its side, a cartoonish Oreo with all those weird properties; but then trying to assume that the microscopic structure of the cartoonish Oreo is the same as that of a real Oreo just fails.
It's as if you told me to imagine an equilateral triangle which is also a right triangle. Knowing non-Euclidean geometry, I can certainly cheat around it, but assuming I don't know abou...
My impression was that this was pretty much tinujin's point: saying "imagine something atom-for-atom identical to you but with entirely different subjective experience" is like saying "imagine something atom-for-atom identical to an Oreo except that it weighs 100 tons etc.": it only seems imaginable as long as you aren't thinking about it too carefully.
Because consciousness supervenes upon physical states, and other brains have similar physical states.
But why, and how? If consciousness is not a direct product of physical states, if p-zombies are possible, how can you tell apart the hypotheses "every other human is conscious", "only some humans are conscious", "I'm the only one conscious, by luck", and "everything, including rocks, is conscious"?
It definitely does matter.
If you build a human-like robot, remotely controlled by a living human (or by a brain-in-a-vat), and interact with the robot, it'll appear to be conscious but isn't, and yet it wouldn't be a zombie in any way: what actually produces the responses about being conscious would be the human (or the brain), not the robot.
If the GLUT was produced by a conscious human (or a conscious human simulation), then it's akin to a telepresence robot, only slightly more remote (just as the telepresence robot is only slightly more remote than a phone). ...
Did you read the GAZP vs. GLUT article? In the GLUT setup, the conscious entity is the conscious human (or actually, more like a googolplex of conscious humans) that produced the GLUT, and the robot replaying the GLUT is no more conscious than a phone transmitting an answer from one conscious human to another, which is basically what it is doing: replaying the answer given by a previous, conscious human for the same input.
Not having a solution doesn't prevent you from criticizing a hypothesis or theory on the subject. I don't know what the prime factors of 4567613486214 are, but I know that "5" is not a valid answer (numbers with 5 among their prime factors end in 5 or 0) and that "blue" doesn't even have the shape of a valid answer. So saying that p-zombism and epiphenomenalism aren't valid answers to the "hard problem of consciousness" doesn't require having a solution to it.
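(For what it's worth, the divisibility claim is trivial to check mechanically; a throwaway Python sketch, not part of the argument itself:)

```python
n = 4567613486214

# A number has 5 among its prime factors iff it is divisible by 5,
# i.e. iff its last decimal digit is 0 or 5.
print(str(n)[-1] in "05")   # False -> 5 cannot be a prime factor
print(n % 5 == 0)           # False, the same conclusion checked directly
```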
I would like to suggest zombies of the second kind. This is a person with an inverted spectrum. It could even be a copy of me, which speaks all the same philosophical nonsense as me, but any time I see green, he sees red, yet names it green. Is he possible? I can imagine such an atom-exact copy of me, but with an inverted spectrum.
I can't.
As a reductionist and materialist, I don't think that makes sense: the feelings of "red" and "green" are a consequence of the way your brain is wired and structured, so an atom-exact copy would have the same feelings.
But letti...
Another, more directly worrying question is why, or whether, the p-zombie philosopher postulates that other people have consciousness.
After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one who isn't a zombie, and therefore let go of all forms of caring for others and all morality?
The fact that Chalmers and people like him still behave as if they consider other people to be as conscious as they are probably points to them having belief-in-belief, more than actual belief, in the possibility of zombieness.
I agree with your point in general, and it does speak against an immaterial soul surviving death, but I don't think it necessarily applies to p-zombies. The p-zombie hypothesis is that the consciousness "property" has no causal power over the physical world, but it doesn't say that there is no causality the other way around: that the state of the physical brain can't affect the consciousness. So a traumatic brain injury would (through some unexplained, mysterious mechanism) be reflected in that immaterial consciousness.
But sure, it's yet more epicycles.
No, it is much simpler than that: "green" is a wavelength of light, and "the feeling of green" is how the information "green" is encoded in your information processing system. That's it: no special ontology for qualia or whatever. Qualia aren't a fundamental component of the universe like quarks and photons are; they're only an encoding of information in your brain.
But yes, how reality is encoded in an information system sometimes doesn't match the external world; the information system can be wrong. That's a natural, direct con...
First, "Social justice" is a broad and very diverse movement of people wanting to reduce the amount of (real or perceived) injustice people face for a variety of reasons (skin color, gender, sexual orientation, place of birth, economical position, disability, ...). Like in any such broad political movement, subparts of the movement are less rational than others.
Overall, "social justice" is still mostly a force of reason and rationality against the most frequent and pervasive forms of irrationality in society, which are mostly religion-b...
Overall, "social justice" is still mostly a force of reason and rationality against the most frequent and pervasive forms of irrationality in society
Citation needed.
it might be very rational to make irrational demands
This is true. But then are you claiming that the irrational demands we are discussing in this thread are the result of such gaming of negotiations or dark-arting of the memesphere?
One issue I have with statements like "~50% of the variation is heritable and ~50% is due to non-shared environment" is that they assume the two kinds of factors are unrelated, and that you can take an arithmetic average of the two.
But very often the effects are not unrelated, and it works more like a geometric average. In many ways it's more that genetics gives you a potential, an ease of learning/training yourself, but whether you actually develop that potential or not depends on your environment. Someone with a very high "genetic IQ" but w...
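A toy illustration of the additive vs. multiplicative intuition (the numbers are entirely made up, just to show the shape of the argument):

```python
import math

# Purely hypothetical scores, both on a 0..1 scale.
genetic_potential = 0.9
environment       = 0.2

arithmetic_mean = (genetic_potential + environment) / 2        # 0.55
geometric_mean  = math.sqrt(genetic_potential * environment)   # ~0.42

print(arithmetic_mean, geometric_mean)
# Under the multiplicative/geometric view, a poor environment drags the
# outcome down much more than the arithmetic average would suggest.
```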
The experience of green has nothing to do with wavelengths of light. Wavelengths of light are completely incidental to the experience.
Not at all. The experience of green is the way our information processing system internally represents "light of green wavelength", nothing else. The fact that deliberately messing with your cognitive hardware by taking drugs, or background maintenance tasks, or "bugs" in the processing system, can lead to an "experience of green" when there is no real green to be perceived doesn't cha...
Do you think there's something wrong about all that? Because it seems obviously reasonable to me.
Well, perhaps it is a matter of "cognitive simplicity", but it really feels like a very artificial line when someone refuses to eat meat in every situation, with all the associated consequences (like being invited to relatives for Christmas Eve dinner and not eating meat, putting an extra burden on the host who now has to cook a secondary vegetarian meal for them), and yet doesn't care much about the rats that are killed regularly in the basement of ...
I guess the average driver kills at most one animal ever by bumping into them, whereas the average meat-eater may consume thousands of animals.
That touches on another problem with the "no meat eating" thing: where do you draw the line? Would people who refuse to eat chicken and beef be OK with eating shrimp or insects? What about fish, is it "meat" and therefore unethical? Because whenever you drive, you kill hundreds of flies and butterflies and the like, which are animals.
So where do you draw the line, at vertebrates? Would eating shrimp and insects be fine? But it's not like a chicken or a cow has lots of cognitive abilities either, so it feels quite arbitrary to me.
I always felt that argument 1 is a bit hypocritical and not very rational. We kill animals constantly for many reasons: even farming vegetables requires killing rodents and birds to prevent them from eating the crops, we kill rats and other pests in our buildings to keep them from transmitting disease and damaging cables, we regularly kill animals by bumping into them when we drive a car or take a train or a plane, ... And of course, we massively take living space away from animals, leading them to die.
So why stop eating meat, and yet disregard all the oth...
Regular sleep may not suspend consciousness (although it can very well be argued that in some phases of sleep it does), but anesthesia, deep hypothermia, coma, ... definitely do, and they are very valid examples to bring forward in the "teleporter" debate.
I've yet to see a definition of consciousness that doesn't have problems with all those "deep sleep" states (which most people don't have any trouble with), while still saying it's not "the same person" in the teleporter case.
There is no objective absolute morality that exists in a vacuum. Our morality is a byproduct of evolution and culture. Of course we should use rationality to streamline and improve it, not limit ourselves to the intuitive version that our genes and education gave us. But that doesn't mean we can streamline it to the point of a simple average or sum, and yet have it remain even roughly compatible with our intuitive morality.
Utility theory, the prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, form...
In the same way that human values are complicated and can't be summarized as "seek happiness!", the way we should aggregate utility is complicated and can't be summarized with just a sum or an average. Using too simple a metric will lead to ridiculous cases (the utility monster, ...). The formula we should use to aggregate individual utilities is likely to involve the total, the median, the average, the Gini coefficient, and probably other statistical tools, and finding it is a significant part of finding our CEV.
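As a rough illustration of why a single statistic isn't enough (toy numbers of my own, not a proposed aggregation formula):

```python
from statistics import mean, median

def gini(xs):
    """Gini coefficient: 0 = perfectly equal, 1 = maximally unequal."""
    n, mu = len(xs), mean(xs)
    return sum(abs(a - b) for a in xs for b in xs) / (2 * n * n * mu)

# Two hypothetical utility distributions with the same total and average.
equal_world   = [5, 5, 5, 5]
monster_world = [17, 1, 1, 1]   # a "utility monster" absorbs almost everything

for world in (equal_world, monster_world):
    print(sum(world), mean(world), median(world), round(gini(world), 2))
# Sum and average can't tell the two worlds apart; median and Gini can.
```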
The MWI doesn't necessarily mean that every possible event, however unlikely, "exists". As long as we don't know where the Born rule comes from, we just don't know.
Worlds in MWI aren't discrete and completely isolated from each other; they are more like ink stains on paper than clearly delimited blobs, where "counting the blobs" can't be defined in an unambiguous way. There are hypotheses (sometimes called "mangled worlds") under which worlds of too small a probability (ink stains that aren't thick enough) become unstable and "contaminated"...
Personally, I liked LW for being an integrated place with all of that: the Sequences, interesting posts and discussions between rationalists/transhumanists (be it original thoughts/viewpoints/analyses, news related to those topics, links to related fanfiction, book suggestions, ...), and the meetup organization (I went to several meetups in Paris).
If that were to be replaced by many different things (one for news, one or more for discussion, one for meetups, ...) I probably wouldn't bother.
Also, I'm not on Facebook and would not consider going there. I think r...
This won't work, just like all other similar schemes, because you can't "prove" the gatekeeper down to the quark level of its hardware (so you're vulnerable to some kind of side-channel attack, like the memory bit-flipping attack that was discussed recently), nor shield the AI from being able to communicate through side channels (like varying the temperature of its processing unit, which in turn will influence the air conditioning system, ...).
And that's not even considering that the AI could actually discover new physics (new part...
To be fair, the DRAM bit flipping thing doesn't work on ECC RAM, and any half-decent server (especially if you run an AI on it) should have ECC RAM.
But the main idea remains, yes: even a program proven to be secure can be defeated by attacking one of the assumptions made in the proof (such as the hardware being 100% reliable, which it rarely is). Proving a program secure all the way down to applying Schrödinger's equation to the quarks and electrons the computer is made of is way beyond our current abilities, and will remain so for a very long time.
I see your point, but I think you're confusing a partial overlap with an identity.
There are many bugs/uncertainties that appear as agency, but there are also many bugs/uncertainties that don't appear as agency (as you said about true randomness), and there are also behaviors that are actually smart and appear as agency because of that smartness (like the way I was delighted with Emacs the first time I realized that if I asked it to replace "blue" with "red", it would replace "Blue" with "Red" and "BLUE" w...
I'm really skeptical of claims like "the 'thinking unit' is really the whole body"; they tend to discard quantitative considerations for purely qualitative ones.
Yes, the brain is influenced by, and influences, the whole body. But that doesn't mean the whole body has the same importance in thinking. The brain is also influenced by lots of external factors (such as ambient light or sounds, ...). If, as soon as there is a "connection" between two parts, you say "it's the whole system that does the processing", you'll just end up ...
A little nitpick about the "2 dice" thing: usually when you throw two dice, it doesn't matter which die gives which result. Sure, you could use colored dice and have "blue 2, red 3" be different from "blue 3, red 2", but that's very rarely the case. Usually you take the sum (or look for patterns like doubles), but "2, 3" and "3, 2" are equivalent, and in that case the entropy isn't double, but lower.
What you wrote is technically right, but it goes against the common usage of dice, so it would be worth adding a footnote or clarification about that, IMHO.
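A quick sanity check of the numbers (a throwaway Python sketch; the exact value depends on whether you keep the full unordered pair or only the sum):

```python
from collections import Counter
from itertools import product
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as outcome counts."""
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

rolls = list(product(range(1, 7), repeat=2))            # 36 ordered outcomes

one_die   = Counter(range(1, 7))                        # a single die
ordered   = Counter(rolls)                              # (2, 3) != (3, 2)
unordered = Counter(tuple(sorted(r)) for r in rolls)    # (2, 3) == (3, 2)
sums      = Counter(a + b for a, b in rolls)            # only the total matters

print(entropy(one_die))    # ~2.58 bits
print(entropy(ordered))    # ~5.17 bits (exactly double)
print(entropy(unordered))  # ~4.34 bits (less than double)
print(entropy(sums))       # ~3.27 bits (even lower)
```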
I'm not really sure the issue is about "direction"; it's more about people who have enough time and ideas to write awesome (or at least interesting) posts like the Sequences (the initial ones by Eliezer or the additional ones by various contributors).
What I would like to see are sequences of posts that build on each other, starting from the basics and going into deep topics (a bit like the Sequences). It could be a collective work (and would then need a "direction"), but it could also be the work of a single person.
As for myself, I did write a few p...
I don't see why it's likely that one of the numbers has to be big. There really are lots of complicated steps you need to get through to go from inert matter to space-faring civilizations; it's very easy to point to a dozen such steps that could fail in various ways or just take too long, and there are many disasters that can wipe everything out.
If you have a long ridge to climb in a limited time and most people fail to do it, it's not very likely that there is one very specific part of it which is very hard, but (unless you have actual data that most people fail at ...
There is one thing that really upsets me about the "Great Filter" idea/terminology: it implies that it's a single event (which is either in the past or in the future).
My view on the "Fermi paradox" is not that there is a single filter cutting ~10 orders of magnitude (i.e., from the ~10 billion planets in our galaxy that could host life down to just one), but rather a combination of many small filters, each taking its cut.
To have intelligent space-faring life, we need a lot of things to happen without any disaster (nearby supernova, too big...
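Just to make the arithmetic behind "many small filters" explicit (a toy sketch with made-up probabilities, not a model of the actual steps):

```python
# Ten independent "small filters" (hypothetical values), each cutting only
# one order of magnitude, already add up to the full ~10 orders of magnitude
# without any single "Great Filter".
filters = [0.1] * 10            # each step passed by ~10% of candidates

survival = 1.0
for p in filters:
    survival *= p

planets = 10_000_000_000        # ~10 billion candidate planets, as above
print(survival)                 # ~1e-10
print(planets * survival)       # ~1 civilization left
```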
Nicely put for an introduction, but of course in reality things are not as clear-cut, with "rationality" changing the direction and "desire" the magnitude.
Rationality can make you realize contradictions between your desires, and force you to change them. It can also make you realize that what you truly desire isn't what you thought you desired. Or it can make you desire whole new things that you didn't initially believe to be possible.
Desire will affect the magnitude because it'll affect how much effort you put into your endeavor. Wi
Yes, it is a bit suspicious - but then Azkaban and Dementors are so terrible that it's worth the risk, IMHO.
And I don't think Harry is counting just on the Horcrux. I think he's counting on the Horcrux as a last fallback, counting on the unicorn blood and on "she knows death can be defeated because she did come back from death", and maybe even on Hermione calling a phoenix.
Chapter 122 in itself was good, I liked it, but I feel a bit disappointed that it's the end of the whole of HPMOR.
Not to be unfairly critical, it's still a great story and many thanks to Eliezer for writing it, but... there are way too many remaining unanswered questions, too much unfinished business, ... for it to be the complete end. It feels more like "end of season 1, see season 2 for the rest" than "and now it's over".
First, I would really have liked a "something to protect" about Harry's parents.
But mostly, there are lots of unan...
I don't really see the point of the antimatter suicide. It won't kill Voldemort, due to the Horcrux network, so it'll just kill the Death Eaters while leaving Voldemort in power, and Voldemort would be so pissed off that he would do the worst he could to Harry's family and friends... How is that any better than letting Voldemort kill Harry and managing to save a couple of people by telling him a few secrets?
If I remember correctly, it's not just about a "person", but about information. I can't use a Time-Turner to go 6 hours back into the past, give a piece of paper to someone (or a piece of information to that person), and have that person go back another 6 hours.
So while it is an interesting hypothesis, it would require no information to be carried... and isn't the fact that the Stone still exists and works information in itself? Or is that nitpicking?
Just a small nitpicking correction: the metric system wasn't invented in the 1600s, but in the late 1700s, during the French Revolution.