All of kilobug's Comments + Replies

Just a small nitpicking correction: the metric system wasn't created in the 1600s, but in the late 1700s, during the French Revolution.

Interesting proposal.

I would suggest one modification: a "probation" period for content, changing the rule "Content ratings above 2 never go down, except to 0; they only go up." to "Once a piece of content has stayed long enough (two days? one week?) at level 2 or above, it can never go down, only up", to make the system less vulnerable to the order in which content gets rated.
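To make the proposed rule concrete, here is a minimal sketch in Python (the function and parameter names are hypothetical, not from the original proposal):

```python
from datetime import timedelta

PROBATION = timedelta(days=2)  # "two days? one week?" -- the exact length is left open

def rating_change_allowed(current_rating, proposed_rating, time_at_level_2_plus):
    """Sketch of the proposed probation rule (names are illustrative).

    time_at_level_2_plus: how long the content has held a rating of 2 or above.
    Returns True if the moderation system should accept the change.
    """
    if proposed_rating == 0:
        return True                      # zeroing (removal) stays always allowed
    if current_rating < 2:
        return True                      # below the threshold, ratings move freely
    if time_at_level_2_plus < PROBATION:
        return True                      # still in probation: can go down as well as up
    return proposed_rating >= current_rating  # out of probation: only upward moves
```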

0Paul Crowley
That makes sense. I'd like people to know when what they're seeing is out of probation, so I'd rather say that even if you have set the slider to 4, you might still see some 3-rated comments that are expected to go to 4 later, and they'll be marked as such, but that's just a different way of saying the same thing.

The same as "Content ratings above 2 never go down, except to 0", once content as been promoted to level 3 (or 4 or 5) once, it'll never go lower than that.

2Paul Crowley
Yes, exactly. I don't think I've done as good a job of being clear as I'd like, so I'm glad you were able to parse this out!

Something important, IMHO, is missing from the list: no new physics was discovered at the LHC, even running at 13 TeV - no SUSY, no new particle, nothing but a confirmation of all the predictions of the Standard Model.

It's relatively easy to miss because it's a "negative" discovery (nothing new), but since many were expecting some hints of new physics from the 2016 LHC runs, the confirmation of the Standard Model (and the death sentence it delivers to many theories, like many forms of SUSY) is news.

Answer 1 is not always possible - it works when you're answering on IRC or an Internet forum, but usually not in a real-life conversation.

As for #3, it is sometimes justified - there are people out there who will use unnecessarily obscure words just to appear smarter or to impress people, or who will deliberately use unnecessarily complex language just to obfuscate the flaws in their reasoning.

You're right that #1 is (when available) nearly always the best reaction, and that the cases where #3 is true (unless you're speaking to someone trying to sell you homeopathy, or to some politicians) are rare, but people having mis-calibrated heuristics is sadly a reality we have to deal with.

Sounds like a good idea, but from a practical point of view, how do you count those 12 seconds? I can count 12 seconds more or less accurately, but I can't do that as a background process while trying to think hard. Do you use some kind of timer/watch/clock? Or does the one asking the question count on their fingers?

I know the "12 seconds" isn't a magical number, if it ends up being "10" or "15" it won't change much, but if you give a precise number (not just "think before answering") you've to somehow try to respect it.

0Raemon
"12 seconds" was chosen mostly to be easier to remember. I think it's totally fine if people end up taking 10 or 15. (If people tend to get fixated on the number 12 we can come up with some other name, but so far "think before answering" had seemed less memorable than "12 second rule")

I expect that the utility per unit time of future life is significantly higher than what we have today, even taking into account loss of social network.

Perhaps, but that's highly debatable. Anyway, my main point was that the two scenarios (bullet / cryonics) are not anywhere near being mathematically equivalent; there are a lot of differences, both in favor of and against cryonics, and pretending they don't exist is not helping. If anything, it just reinforces the Hollywood stereotype of the "Vulcan rationalist" who doesn't have any feeling or em... (read more)

Hum, first, I find your numbers very unlikely - cryonics costs more than $1/day, and definitely has less than a 10% chance of working (between the brain damage done by the freezing, the chance that the freezing can't be done in time, disaster striking the storage place before resurrection, the risk of societal collapse, the unwillingness of future people to resurrect you, ...).

Then, the "bullet" scenario isn't comparable to cryonics, because it completely forgets all the context and social network. A significant part of why I don't want to die (not the o... (read more)

0The_Jaded_One
TBH I think this works out fairly heavily in favour of the future; I expect that the utility per unit time of future life is significantly higher than what we have today, even taking into account the loss of social network. Of course this asymmetry goes away if you persuade your friends and family to sign up too. I suppose your mileage may reasonably vary depending on how much of a nerd you are and how good your present day relationships are.

Personally, if cryonics was 100% and a positive future to wake up in was also 100% (both of which are false by a large margin), I would go to the future right now and start enjoying the delights it has to offer.

I have spent some time thinking about how good the best possible human life is. It's somewhat hard to tell as it is an under-researched area, but I think it's probably 2-10 times better in utility than the best we have today.
0The_Jaded_One
Nope: A term life insurance policy in the amount of the minimum fee often costs around $30 per month for a person starting their policy in good health at middle age.

Well, I would consider it worrying if a major public advocate of antideathism were also publicly advocating a sexuality that is considered disgusting by most people - like, say, pedophilia or zoophilia.

It is an unfortunate state of the world, because sexual (or political) preferences shouldn't have any significant impact on how you evaluate someone's position on unrelated topics, but that's how the world works.

Consider someone who never really thought about antideathism, who opens the newspaper in the morning and reads about that person who publicly advocates disgust... (read more)

0ChristianKl
I think you overrate the impact of reading a newspaper article. It doesn't trigger strong feelings.

"Infinite" is only well-defined as the precise limit of a finite process. When you say "infinite" in absolute, it's a vague notion that is very hard to manipulate without making mistakes. One of my university-level maths teacher kept saying that speaking of "infinite" without having precise limit of something finite is equivalent to dividing by zero.

I am, and not just about MIRI/AI safety, but also about other topics like anti-deathism. Just today I read, in a major French newspaper, an article explaining how Peter Thiel is the only one from Silicon Valley to support the "populist demagogue Trump", and, in the same article, that he also has this weird idea that death might ultimately be a curable disease...

I know that reverse stupidity isn't intelligence, and about the halo effect, and that Peter Thiel having disgusting (to me, and to most French citizens) political tastes has no bearing on whether he is right or wrong about death, but many people will end up associating antideathism with being a Trump-supporting lunatic :/

4Lumifer
So in which way are you different from someone who, say, thinks that Peter Thiel has disgusting (to him and a lot of other people) tastes in sex and so will end up associating antideathism with being a moral degenerate?

Imagine a cookie like Oreo to the last atom, except that it's deadly poisonous, weighs 100 tons and runs away when scared.

Well, I honestly can't. When you tell me that, I picture a real Oreo, and then at its side a cartoonish Oreo with all those weird properties; but then trying to assume that the microscopic structure of the cartoonish Oreo is the same as that of a real Oreo just fails.

It's like if you tell me to imagine an equilateral triangle which is also a right triangle. Knowing non-Euclidean geometry, I sure can cheat around it, but assuming I don't know abou... (read more)

5Good_Burning_Plastic
O.O.O.O O..O..O

My impression was that this was pretty much tinujin's point: saying "imagine something atom-for-atom identical to you but with entirely different subjective experience" is like saying "imagine something atom-for-atom identical to an Oreo except that it weighs 100 tons etc.": it only seems imaginable as long as you aren't thinking about it too carefully.

Because consciousness supervenes upon physical states, and other brains have similar physical states.

But why, and how? If consciousness is not a direct product of physical states, if p-zombies are possible, how can you tell apart the hypotheses "every other human is conscious", "only some humans are conscious", "I'm the only one conscious, by luck", and "everything, including rocks, is conscious"?

0UmamiSalami
Chalmers does believe that consciousness is a direct product of physical states. The dispute is about whether consciousness is identical to physical states. Chalmers does not believe that p-zombies are possible in the sense that you could make one in the universe. He only believes it's possible that under a different set of psychophysical laws, they could exist.

Is "it" zombies, or epiphenomenalism?

The hypothesis I was answering to, the "person with inverted spectrum".

It definitely does matter.

If you build a human-like robot, remotely controlled by a living human (or by a brain-in-a-vat), and interact with the robot, it'll appear to be conscious but isn't - and yet it wouldn't be a zombie in any way: what actually produces the responses about being conscious would be the human (or the brain), not the robot.

If the GLUT was produced by a conscious human (or a conscious human simulation), then it's akin to a telepresence robot, only slightly more remote (like the telepresence robot is only slightly more remote than a phone). ... (read more)

0Houshalter
The question is whether the GLUT is conscious. I don't believe that it is. Perhaps it was created by a conscious process. But that process is gone now. I don't believe that torturing the GLUT is wrong, for example, because the conscious entity has already been tortured. Nothing I do to the GLUT can causally interact with the conscious process that created it. This is why I say the origin of the GLUT doesn't matter.

I'm not saying that I believe GLUTs are actually likely to exist, let alone appear from randomness. But the origin of a thing shouldn't matter to the question of whether or not it is conscious. If we can observe every part of the GLUT, but know nothing about its origin, we should still be able to determine if it's conscious or not. The question shouldn't depend on its past history, but only its current state.

I believe it might be possible for a non-conscious entity to create a GLUT, or at least fake consciousness. Like a simple machine learning algorithm that imitates human speech or text. Or AIXI with its unlimited computing power, that doesn't do anything other than brute force. I wouldn't feel bad about deleting an artificial neural network, or destroying an AIXI.

The question that bothers me is: what about a bigger, more human-like neural network? Or a more approximate, less brute-force version of AIXI? When does an intelligence algorithm gain moral weight? This question bothers me a lot, and I think it's what people are trying to get at when they talk about GLUTs.

Did you read the GAZP vs GLUT article? In the GLUT setup, the conscious entity is the conscious human (or actually, more like a googolplex of conscious humans) that produced the GLUT, and the robot replaying the GLUT is no more conscious than a phone transmitting the answer from a conscious human to another - which is basically what it is doing: replaying the answer given, for the same input, by a previous, conscious human.

-1Houshalter
I don't think the origin of the GLUT matters at all. It could have sprung up out of pure randomness. The point is that it exists, and appears to be conscious by every outward measure, but isn't.

Sorry to go meta, but could someone explain to me how "Welcome back!" can be at -1 (0 after my upvote) and yet "Seconded." at +2?

Doesn't sound like very consistent scoring...

9NancyLebovitz
We have a karma troll.

I am plagued by our resident troll Eugine because I am the mod that keeps banning him. Working on alternative solutions.

Not having a solution doesn't prevent one from criticizing a hypothesis or theory on the subject. I don't know what the prime factors of 4567613486214 are, but I know that "5" is not a valid answer (numbers having 5 among their prime factors end in 5 or 0), and that "blue" doesn't even have the shape of a valid answer. So saying that p-zombism and epiphenomenalism aren't valid answers to the "hard problem of consciousness" doesn't require having a solution to it.
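A quick sanity check of that divisibility claim, as a runnable sketch (the number is the one from the comment):

```python
# Any positive multiple of 5 ends in 0 or 5 in base ten, so a number
# ending in 4 cannot have 5 among its prime factors.
n = 4567613486214
assert n % 10 == 4      # last digit is 4...
assert n % 5 != 0       # ...so 5 does not divide it

# The rule itself: m is divisible by 5 iff its last digit is 0 or 5.
for m in range(1, 10_000):
    assert (m % 5 == 0) == (m % 10 in (0, 5))
```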

-4TheAncientGeek
There's saying it and saying it. If you say that a particular solution is particularly bad, that kind of implies a better solution somewhere. If you wanted to say that all known solutions are bad, you would presumably say something about all known solutions.
8gjm
Quite true, but if you follow the link in Furcas's last paragraph (which may not have been there when you wrote your comment) you will see Eliezer more or less explicitly claiming to have a solution.

Or more likely:

d) the term "qualia" isn't properly defined, and what turchin means by "qualia" isn't exactly what VAuroch means by "qualia" - basically an illusion-of-transparency / distance-of-inference issue.

2VAuroch
No one defines qualia clearly. If they did, I'd have a conclusion one way or the other.

I would like to suggest zombies of second kind. This is a person with inverted spectrum. It even could be my copy, which speaks all the same philosophical nonsense as me, but any time I see green, he sees red, but names it green. Is he possible? I could imagine such atom-exact copy of me, but with inverted spectrum.

I can't.

As a reductionist and materialist, I'd say it doesn't make sense - the feelings of "red" and "green" are a consequence of the way your brain is wired and structured, and an atom-exact copy would have the same feelings.

But letti... (read more)

-1TheAncientGeek
Is "it" zombies, or epiphenomenalism?

Another, more directly worrying question is why - or whether - the p-zombie philosopher postulates that other persons have consciousness.

After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one not being a zombie, and therefore let go of all forms of caring for others and all morality?

The fact that Chalmers and people like him still behave like they consider other people to be as conscious as they are probably points to them having belief-in-belief, more than actual belief, in the possibility of zombieness.

1buybuydandavis
A wonderful way to dehumanize. The meat bag you ride will let go of caring, or not. Under the theory, the observer chooses nothing in the physical world. The meatbag produces experiences of caring for you, or not, according to his meatbag reasons for action in the world.
-1UmamiSalami
Because consciousness supervenes upon physical states, and other brains have similar physical states.

I agree with your point in general, and it does speak against an immaterial soul surviving death, but I don't think it necessarily applies to p-zombies. The p-zombie hypothesis is that the consciousness "property" has no causality over the physical world, but it doesn't say that there is no causality the other way around: that the state of the physical brain can't affect the consciousness. So a traumatic brain injury would (under some unexplained, mysterious mechanism) be reflected in that immaterial consciousness.

But sure, it's yet more epicycles.

0buybuydandavis
You're watching a POV movie of a meat bag living out its life. When the meat bag falls apart, the movie gets crapped up.

No, it is much simpler than that - "green" is a wavelength of light, and "the feeling of green" is how the information "green" is encoded in your information-processing system; that's it. No special ontology for qualia or whatever. Qualia aren't a fundamental component of the universe like quarks and photons are; they're only an encoding of information in your brain.

But yes, how reality is encoded in an information system sometimes doesn't match the external world; the information system can be wrong. That's a natural, direct con... (read more)

1Jakub Supeł
Green is not a wavelength of light. Last time I checked, wavelength is measured in units of length, not in words. We might call light of wavelength 520nm "green" if we want, and we do BECAUSE we are conscious and we have the qualia of green whenever we see light of wavelength 520nm. But this is only a shorthand, a convention. For all I know, other people might see light of wavelength 520nm as red (i.e. what I describe as red, i.e. light of wavelength 700nm), but refer to it as green because there is no direct way to compare the qualia.

First, "Social justice" is a broad and very diverse movement of people wanting to reduce the amount of (real or perceived) injustice people face for a variety of reasons (skin color, gender, sexual orientation, place of birth, economical position, disability, ...). Like in any such broad political movement, subparts of the movement are less rational than others.

Overall, "social justice" is still mostly a force of reason and rationality against the most frequent and pervasive forms of irrationality in society, which are mostly religion-b... (read more)

0bogus
True, inasmuch as almost all modern worldviews may be called 'a byproduct of the Enlightenment'. It certainly applies to Marxism, which SJ is a fairly direct successor of.
2The_Jaded_One
Are you living in the same universe as me or have the LW admins enabled some kind of cross-branch commenting capability and you're here from an alternate reality?

Overall, "social justice" is still mostly a force of reason and rationality against the most frequent and pervasive forms of irrationality in society

Citation needed.

it might be very rational to make irrational demands

This is true. But then are you claiming that the irrational demands we are discussing in this thread are the result of such gaming of negotiations or dark-arting of the memesphere?

One issue I have with statements like "~50% of the variation is heritable and ~50% is due to non-shared environment" is that they assume the two kinds of factors are unrelated, and that you can take an arithmetic average of the two.

But very often the effects are not unrelated, and it works more like a geometric average. In many ways, genetics gives you a potential, an ease of learning/training yourself, but then it depends on your environment whether you actually develop that potential or not. Someone with a very high "genetic IQ" but w... (read more)
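A toy numerical illustration of the difference between the two averaging assumptions (all numbers invented): under an additive model, strong genes can largely compensate for a poor environment; under a multiplicative one, they can't.

```python
# Purely illustrative toy model: score genetic "potential" and environmental
# "realization" in [0, 1] and compare the two ways of combining them.
genes = 0.9        # hypothetical very high genetic potential
environment = 0.1  # hypothetical very poor environment

arithmetic = (genes + environment) / 2      # 0.50: environment half-compensated
geometric = (genes * environment) ** 0.5    # 0.30: a near-zero factor dominates

print(f"arithmetic mean: {arithmetic:.2f}")
print(f"geometric mean:  {geometric:.2f}")
```

Under the geometric (interaction) model, driving either factor toward zero drags the whole product toward zero, which is the comment's point about unrealized potential.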

The experience of green has nothing to with wavelengths of light. Wavelengths of light are completely incidental to the experience.

Not at all. The experience of green is the way our information-processing system internally represents "light of green wavelength", nothing else. That voluntarily messing with your cognitive hardware by taking drugs, or background maintenance tasks, or "bugs" in the processing system, can lead to an "experience of green" when there is no real green to be perceived doesn't cha... (read more)

1algekalipso
I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that.

So whereas in "standard" naïve realism people find it hard to dissociate their experiences from an existing mind-independent world, they then go on to perceive everything as "seeing the world directly, nothing else, nothing more." Naïve realists will interpret their experiences as direct, unmediated impressions of the real world. Of course this is a problematic view, and there are killer arguments against it. For instance, hallucinations. However, naïve realists can still come back and say that you are talking about cases of "misapprehension", where you don't really perceive the world directly anymore. That does not mean you "weren't perceiving the world directly before." But here the naïve realist has simply not integrated the argument in a rational way. If you need to explain hallucinations as "failed representations of true objects" you don't, anymore, need to in addition restate one's previous belief in "perceiving the world directly." Now you end up having two ontologies instead of one: inner representations and also direct perception. And yet, you only need one: inner representations.

Analogously, I would describe your argument as naïve functionalist realism. Here you first see a certain function associated to an experience, and you decide to skip the experience altogether and simply focus on the function. In itself, this is reasonable, since the data can be accounted for with no problem. But when I mention LSD and dreams, suddenly that is part of another category like a "bug" in one's mind. So here you have two ontologies, where you can certainly explain it all with just one. Namely, the green is a particular qualia, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without

Do you think there's something wrong about all that? Because it seems obviously reasonable to me.

Well, perhaps it is a matter of "cognitive simplicity", but it really feels like a very artificial line when someone refuses to eat meat in every situation, with all the associated consequences - like being invited to relatives' for Christmas Eve dinner and not eating meat, putting an extra burden on the person inviting them, who has to cook a secondary vegetarian meal - and yet not caring much about the rats that are killed regularly in the basement of ... (read more)

1gjm
There are reasons why religions tend to have rules, rather than e.g. just saying "whenever you have a decision to make, consider deeply which option seems like it would please the gods most and do that". One of those reasons is that while following the rules may be challenging, applying deep consideration to every single moral decision would be pretty much impossible. Another is that if you allow yourself flexibility then you will probably overuse it. Another is that if you are known to allow yourself flexibility then others won't know when you're ignoring the rules, reducing the power of social pressure to help you keep them.

If you happen to be (1) unusually smart (hence, better able to apply deep consideration to individual cases without getting overwhelmed) and (2) unusually principled (hence, better able to resist the temptation to abuse flexibility) then, indeed, you may well do better to be flexible about your rules. (But, of course, everyone likes to think they're unusually smart and unusually principled, especially when thinking so offers the prospect of more freedom to bend your moral rules.)

I agree with you that many vegetarians' values would be better maximized by being flexible about their non-meat-eating in some circumstances like the ones you mention, if we consider each occasion in isolation. But it may still be a better value-maximizing strategy to have a strict policy of not breaking the rules.

(For the particular cases you describe, where a vegetarian's self-imposed rules are inconvenient for other people, there's a further consideration: they may want their vegetarianism to be highly visible, in the hope of making other people consider imitating it. Their relatives may think "bah, how selfish of them" -- but they may also think "wow, they're really serious about this; perhaps they may actually have a point".)

I guess the average driver kills at most one animal ever by bumping into them, whereas the average meat-eater may consume thousands of animals.

There we touch another problem with the "no meat eating" thing: where do you draw the line? Would people who refuse to eat chicken and beef be OK with eating shrimp or insects? What about fish - is it "meat", and unethical? Because whenever you drive, you kill hundreds of flies and butterflies and the like, which are animals.

So where do you draw the line - vertebrates? Eating shrimp and insects would be fine? But it's not like a chicken or a cow has lots of cognitive abilities either, so it feels quite arbitrary to me.

1gjm
Somewhere that's easy to evaluate and that generally gives results that match reasonably well with those of careful case-by-case deliberation. For most vegetarians, pigs will be on one side and spiders on the other; the exact location of the line will vary. It doesn't need to give results that match perfectly in every case; no one has the time or mental energy to make every moral decision optimally. And it doesn't have to be deduced from universal general principles; the point of drawing a line is to provide an "easier" approximation to the results one gets by applying one's general principles carefully case by case.

So, e.g., the simplest vegetarian policy says something like: "Don't deliberately eat animals." This will surely be too restrictive for most vegetarians' actual values; e.g., I bet most vegetarians would have no moral objection to eating insects. But so what? It's a nice simple policy, easy to apply and easy to explain, and if it means you sometimes have to eat vegetables when you had the option of eating insects, well, that's not necessarily a problem.

Someone inclined towards vegetarianism who decides, after careful reflection, that most fish aren't sufficiently capable of suffering to worry much about (and/or just really likes eating fish) may choose a more permissive policy along the lines of "no animals other than fish" or "no animals other than seafood". That might be too permissive for their actual values in some cases -- e.g., they might not actually be willing to eat octopus. But, again, that's OK; if they see octopus on the menu they can decide not to eat it on the basis of actual thought rather than just applying their overall policy, much as a non-vegetarian might if they see monkey meat on a menu. Or they might just always defer to the overall policy and accept that sometimes it will lead them to eat something that overall they'd prefer not to have eaten. (I would expect the first of those options to be much more common.) Another vegetar
0polymathwannabe
Short of telepathy, we can only guess. Chickens do appear to be able to manifest visible signs of distress, whereas the nervous system of a shrimp is too simple for that.

I always felt that argument 1 is a bit hypocritical and not very rational. We kill animals constantly for many reasons - farming, even of vegetables, requires killing rodents and birds to prevent them from eating the crops; we kill rats and other pests in our buildings to keep them from transmitting disease and damaging cables; we regularly kill animals by bumping into them when we drive a car or take a train or a plane; ... And of course, we massively take living space away from animals, leading them to die.

So why stop eating meat, and yet disregard all the oth... (read more)

0Echarmion
Doesn't intent matter? I cannot control the entirety of society with my will, nor can I control which animals I unknowingly kill, but I can react to the things I know with my own actions. It also seems irrational to let the better be the enemy of the good. There is no rule that says that unless I solve all the problems at once, solving one problem is being hypocritical. A single decision doesn't become irrational just because I am not actually making 100% rational decisions all the time. That would only be hypocritical if I claimed that all my decisions are 100% rational when they are not.
0gjm
Killing birds? Really? I'd have thought keeping them away would be much more practical. You have more control over whether to eat meat than over those other things. And some of them are much smaller -- e.g., I guess the average driver kills at most one animal ever by bumping into them, whereas the average meat-eater may consume thousands of animals.

Regular sleep may not suspend consciousness (although it can very well be argued that in some phases of sleep it does), but anesthesia, deep hypothermia, coma, ... definitely do, and they are very valid examples to bring forward in the "teleport" debate.

I've yet to see a definition of consciousness that doesn't have problems with all those "deep sleep" states (which most people don't have any trouble with), while still saying it's not "the same person" for the teleporter.

0casebash
"Anesthesia, deep hypothermia, coma, ... definitely do" - don't people have dreams or at least some thoughts occur during these?

+1 for something like "no more than 5 downvotes/week for content which is more than a month old", but be careful that a new comment on an old article is not old content.

There is no objective, absolute morality that exists in a vacuum. Our morality is a byproduct of evolution and culture. Of course we should use rationality to streamline and improve it, not limit ourselves to the intuitive version that our genes and education gave us. But that doesn't mean we can streamline it down to a simple average or sum and yet have it remain even roughly compatible with our intuitive morality.

Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, form... (read more)

-2UmamiSalami
No, that's highly contentious, and even if it's true, it doesn't grant a license to promote any odd utility rule as ideal. The anti-realist also may have reason to prefer a simpler version of morality. There are much more relevant factors in building and choosing moral systems than those mathematical structures, whose relevance to moral epistemology is dubious in the first place. It's not obvious that we would be more likely to believe anything in particular if we knew more and were more what we wished we were. CEV is a nice way of making different people's values and goals fit together, but it makes no sense to propose it as a method of actual moral epistemology.

The same way that human values are complicated and can't be summarized as "seek happiness!", the way we should aggregate utility is complicated and can't be summarized with just a sum or an average. Trying to use too simple a metric will lead to ridiculous cases (utility monster, ...). The formula we should use to aggregate individual utilities is likely to involve the total, the median, the average, the Gini coefficient, and probably other statistical tools, and finding it is a significant part of finding our CEV.
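A small sketch (population and utility numbers invented) of how these aggregators can disagree on the same pair of distributions; the utility-monster case is exactly where they come apart:

```python
import statistics

# Two toy utility distributions over a five-person population.
equal = [10, 10, 10, 10, 10]
monster = [1, 1, 1, 1, 60]  # a "utility monster" captures almost all the utility

def gini(xs):
    """Gini coefficient: 0 = perfect equality, approaching 1 = maximal inequality."""
    xs = sorted(xs)
    n = len(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * sum(xs)) - (n + 1) / n

for name, xs in [("equal", equal), ("monster", monster)]:
    print(f"{name:8s} total={sum(xs):3d} mean={statistics.mean(xs):5.1f} "
          f"median={statistics.median(xs):5.1f} gini={gini(xs):.2f}")
# Total and mean prefer the monster world (64 > 50, 12.8 > 10);
# median (1 vs 10) and Gini (0.74 vs 0.00) strongly prefer the equal one.
```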

-2UmamiSalami
The problem is that by doing that you are making your position that much more arbitrary and contrived. It would be better if we could find a moral theory that has solid parsimonious basis, and it would be surprising if the fabric of morality involved complicated formulas.
1casebash
"The way we should aggregate utility is complicated and can't be summarized with just a sum or an average" - why? I'm not convinced that the argument by analogy is persuasive here.

The MWI doesn't necessarily mean that every possible event, however unlikely, "exists". As long as we don't know where the Born rule comes from, we just don't know.

Worlds in MWI aren't discrete and completely isolated from each other; they are more like ink stains on paper than clearly delimited blobs, where "counting the blobs" can't be defined in a non-ambiguous way. There are hypotheses (sometimes called "mangled worlds") under which worlds of too-small probability (ink stains not thick enough) would be unstable and "contaged"... (read more)

1qmotus
We don't know how to derive the Born Rule in MWI, or even if it is possible to derive it. However, uncertainty goes both ways, and that's definitely no way to dismiss QI. Is there any actual reason to suspect that MWI is true, but QI isn't (apart from, maybe, mangled worlds)? Because I lack the necessary mathematical understanding, I've never really understood what mangled worlds actually says. What does it mean when you say that a world's probability is "too small", and does mangled worlds say that these worlds never actually come into existence, or just that they eventually disappear? Also, is there something wrong with Sean Carroll's attempt?

Personally, I liked LW for being an integrated place with all of that: the Sequences, interesting posts and discussions between rationalists/transhumanists (be it original thoughts/viewpoints/analyses, news related to those topics, links to related fanfiction, book suggestions, ...), and the meetup organization (I went to several meetups in Paris).

If that were to be replaced by many different things (one for news, one or more for discussion, one for meetups, ...) I probably wouldn't bother.

Also, I'm not on Facebook and would not consider going there. I think r... (read more)

2passive_fist
Disclaimer: politics is the mind-killer. LW used to be politically neutral; I'm not sure it is so anymore. A large part of the user base is American, and the current presidential election season is spilling into LW far more than previous seasons ever did. And the current wave of populist, nationalistic, libertarian/individualist ideology which seems to be very popular in the USA is being represented in the general atmosphere of LW. It would be great if a temporary ban on political subjects could be set and enforced until at least the current election season is over.
0Viliam
The advantage of Facebook is that you don't have to code anything. The disadvantage is that if you disagree with how certain things work, there is nothing you can do about it (other than leave Facebook).

This won't work, like all other similar schemes, because you can't "prove" the gatekeeper down to the quark level of what makes up its hardware (so you're vulnerable to some kind of side attack, like the memory bit-flipping attack that was discussed here recently), nor shield the AI from being able to communicate through side channels (like varying the temperature of its internal processing unit, which in turn will influence the air conditioning system, ...).

And that's not even considering that the AI could actually discover new physics (new part... (read more)

0JoshuaFox
You're quite right--these are among the standard objections for boxing, as mentioned in the post. However, AI boxing may have value as a stopgap in an early stage, so I'm wondering about the idea's value in that context.

To be fair, the DRAM bit-flipping attack doesn't work on ECC RAM, and any half-decent server (especially one you run an AI on) should have ECC RAM.

But the main idea remains, yes: even a program proven to be secure can be defeated by attacking one of the assumptions made in the proof (such as the hardware being 100% reliable, which it rarely is). Proving a program secure starting from Schrödinger's equation applied to the quarks and electrons the computer is made of is way beyond our current abilities, and will remain so for a very long time.

3Meni_Rosenfeld
Challenge accepted! We can do this, we just need some help from a provably friendly artificial superintelligence! Oh wait...

I see your point, but I think you're confusing a partial overlap with an identity.

There are many bugs/uncertainties that appear as agency, but there are also many bugs/uncertainties that don't appear as agency (as you said about true randomness), and there are also behaviors that are actually smart and that appear as agency because of smartness (like the way I was delighted with Emacs the first time I realized that if I asked it to replace "blue" with "red", it would replace "Blue" with "Red" and "BLUE" w... (read more)

0Shmi
I don't know if I would call it "mis-"attribute. My point, confirmed by spxtr and some other commenters, is that agency is relative to the observer, that there is no absolute difference between a "true" agency and an "apparent agency". I think most of this statement follows from its last part, "ability to explore solution-space in a way that will end up surprising us". Once that happens, we assign the rest of the agenty attributes to whatever has surprised us. I guess this is the crux of our disagreement. To a superintelligence, we are CPUs without agency.

I'm really skeptical of claims like « the "thinking unit" is really the whole body »; they tend to discard quantitative considerations for purely qualitative ones.

Yes, the brain is influenced by, and influences, the whole body. But that doesn't mean the whole body has the same importance in the thinking. The brain is also influenced by lots of external factors (such as ambient light or sounds, ...); if, as soon as there is a "connection" between two parts, you say "it's the whole system that does the processing", you'll just end up ... (read more)

A little nitpicking about the "2 dice" thing: usually, when you throw two dice, it doesn't matter which die gives which result. Sure, you could use colored dice and have "blue 2, red 3" be different from "blue 3, red 2", but that's very rarely the case. Usually you take the sum (or look for patterns like doubles), so "2, 3" and "3, 2" are equivalent, and in that case the entropy isn't double, but lower.

What you wrote is technically right - but it goes against the common usage of dice, so it would be worth adding a footnote or clarification about that, IMHO.
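For reference, a quick numerical check of the entropy claim (a self-contained sketch):

```python
from collections import Counter
from itertools import product
from math import log2

def entropy(counts):
    """Shannon entropy in bits of a distribution given as outcome counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts)

rolls = list(product(range(1, 7), repeat=2))          # 36 ordered outcomes

ordered = Counter(rolls)                              # colored dice: (2,3) != (3,2)
unordered = Counter(tuple(sorted(r)) for r in rolls)  # plain dice: (2,3) == (3,2)
sums = Counter(a + b for a, b in rolls)               # the usual "roll and add"

for name, dist in [("ordered", ordered), ("unordered", unordered), ("sum", sums)]:
    print(f"{name:9s} {entropy(dist.values()):.2f} bits")
# ordered:   5.17 bits -- exactly double the 2.58 bits of a single die
# unordered: 4.34 bits -- less than double
# sum:       3.27 bits -- even lower
```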

0Davidmanheim
I wanted to avoid going too deep into that example - the other LW and linked posts are better, but I wanted to at least introduce it. Thanks for the feedback.

I'm not really sure the issue is about "direction"; it's more about people who have enough time and ideas to write awesome (or at least interesting) posts like the Sequences (the initial ones by Eliezer or the additional ones by various contributors).

What I would like to see are sequences of posts that build on each other, starting from the basics and going into deep things (a bit like the Sequences). It could be a collective work (which would then need a "direction"), but it could also be the work of a single person.

As for myself, I did write a few p... (read more)

I don't see why it's likely that one of the numbers has to be big. There really are lots of complicated steps you need to cross to go from inert matter to space-faring civilizations; it's very easy to point to a dozen such steps that could fail in various ways or just take too long, and there are many disasters that can happen to blow everything down.

If you have a long ridge to climb in a limited time and most people fail to do it, it's not very likely that there is one very specific part of it which is very hard, but (unless you have actual data that most people fail at ... (read more)

1Vaniver
This is a statement about my priors on the number of filters and the size of a filter, and I'm not sure I can shortly communicate why I have that prior. Maybe it's a statement on conceptual clumpiness.

To me, your claim is a statement that the number of planets at each step follows a fairly smooth exponential, and a specific hard part means you would have a smooth exponential before a huge decrease, then another smooth exponential. But we don't know what the distribution of life on planets looks like, so we can't settle that argument.

Similarly, we know about the planning fallacy because we make many plans and complete many projects--if there was only one project ever that completed, we probably could not tell in retrospect which parts were easy and which were hard, because we must have gotten lucky even on the "hard" components. Hanson wrote a paper on this in 1996 that doesn't appear to be on his website anymore, but it's a straightforward integration given exponential distributions over time to completion, with 'hardness' determining the rate parameter, and conditioning on early success.

I would instead look at the various steps in the filter, and generalize the parameters of those steps, which then generate universes with various levels of noise / age at first space-colonizing civilization. If you have fat-tailed priors on those parameters, I think you'll get that it's more likely for there to be one dominant factor in the filter. Maybe I should make the effort to formalize that argument.

There is a thing which really upsets me about the "Great Filter" idea/terminology: it implies that it's a single event (which is either in the past or in the future).

My view on the "Fermi paradox" is not that there is a single filter cutting ~10 orders of magnitude (i.e., from 10 billion planets in our galaxy that could have life down to just one), but more a combination of many small filters, each taking its cut.

To have intelligent space-faring life, we need a lot of things to happen without any disaster (a nearby supernova, a too-big... (read more)

5Vaniver
I don't think that the Great Filter implies only one filter, but I think that if you're multiplying several numbers together and they come out to at least 10^10, it's likely that at least one of the numbers is big. (And if one of the numbers is big, that makes it less necessary for the other numbers to be big.) Put another way, it seems more likely to me that there is one component filter of size 10^6 than two component filters each of size 10^3, both of which seem much more likely than that there are 20 component filters of size 2.

Nicely put for an introduction, but of course in reality things are not as clear-cut as "rationality" changing the direction and "desire" the magnitude.

  1. Rationality can make you realize some contradictions between your desires, and force you to change them. It can also make you realize that what you truly desire isn't what you thought you desired. Or it can make you desire whole new things, that you didn't believe to be possible initially.

  2. Desire will affect the magnitude because it'll affect how much effort you put in your endeavor. Wi

... (read more)

Interesting idea, thanks for doing it, but sadly many questions are very US-centric. It would be nice to have some "tags" on the questions and let users select which kinds of questions they want (for example, non-US people could remove the US-specific ones).

Yes, it is a bit suspicious - but then, Azkaban and the Dementors are so terrible that it's worth the risk, IMHO.

And I don't think Harry is counting just on the Horcrux; I think he's counting on the Horcrux as a last fallback, counting on the unicorn blood and on "she knows death can be defeated because she did come back from death", and maybe even on Hermione calling a phoenix.

0lwmdw45
I agree that it's worth the risk, but apparently Harry doesn't. '"I thought..." Hermione said. She sounded uncertain. "I thought for sure that after this, you and Professor McGonagall wouldn't... you know... let me do anything the least bit dangerous ever again." 'Harry said nothing, feeling guilty about the false relationship credit he was getting. It was in fact the case that Hermione was modeling him with tremendous accuracy, and that if not for Hermione having a horcrux, the surface of the planet Venus would have dropped to fractional-Kelvin temperatures before Harry tried this.' I agree with you that unicorn blood is more likely to be significant than the Horcrux in this scenario, and until this last chapter was posted I expected HARRY to think the same way, which is why his thinking stuck out to me as memorably optimistic.

Chapter 122 in itself was good, I liked it, but I feel a bit disappointed that it's the end of the whole HPMOR.

Not to be unfairly critical - it's still a very great story, and many thanks to Eliezer for writing it - but... there are way too many remaining unanswered questions and unfinished business for this to be the complete end. It feels more like "end of season 1, see season 2 for the rest" than "and now it's over".

First, I would really have liked a "something to protect" about Harry's parents.

But mostly, there are lots of unan... (read more)

3raecai
You see, now EY provokes you into writing some rationalist HPMOR fanfiction before he publishes the epilogue.
2lwmdw45
Isn't it a little out of character for Harry to blithely assume that Hermione can't possibly die in her dementor mission? He doesn't even know how Horcrux 2.0 works--is there any good reason to think that the Horcrux will preserve your life if you deliberately fuel your magic with your life to kill dementors? (It's basically just a body-hopping spell, not a life-preservation spell.) Would a horcrux restore to Harry the life and magic he used to revive Hermione? It just seems suspiciously out of character that Harry has now suddenly turned into an optimist with regard to Hermione's survival. He even says to himself he would never let her risk the mission if he thought it was actually dangerous, which means that he apparently actually fully buys into her immortality. It will be tragic for Harry if she is dead again, for real, next week. Not because death is tragic per se, but because it will utterly blindside him.

I don't really see the point of antimatter suiciding. It won't kill Voldemort, due to the Horcrux network, so it would just kill the Death Eaters while leaving Voldemort in power, and Voldemort would be so pissed off that he would do the worst he could to Harry's family and friends... how is that any better than letting Voldemort kill Harry, and managing to save a couple of people by telling him a few secrets?

0TobyBartels
The longer explanation said that it was bungled, that the antimatter blew before the transfiguration was finished.
1BrindIf
The point is to destroy the Stone and Voldemort's body, which should buy the Order time to react.

If I remember well, it's not just "persons", but information. I can't use a Time-Turner to go 6 hours back into the past, give a piece of paper to someone (or a piece of information to that person), and have that person go back 6 more hours.

So while it is an interesting hypothesis, it would require no information to be carried... and isn't the fact that the Stone still exists and works information in itself? Or is that nitpicking?

2DanielLC
You can't do that to send information back more than six hours. I don't think the limitation applies to sending it through the same six hours repeatedly, although that would explain the whole DON'T MESS WITH TIME thing.
3wobster109
My feeling is things that are overwhelmingly likely do not get treated as information. For example, Harry's clothes go with him, but "Time" doesn't consider that to be information of his clothes still existing. It feels like that there's a Deus ex Machina aspect to how "Time" works and deals with information. Sometimes when you try to time-turn you just encounter Paradox. So based on that I'd predict that if you try to time-turn with intention to get more uses out of the stone, you will encounter Paradox.
3Subbak
Then when someone says "I have information from 6 hours in the future", that would be information in and of itself. It means that 6 hours in the future life is still sustainable.