
Comment author: kilobug 08 January 2016 02:18:45PM 1 point [-]

Regular sleep may not suspend consciousness (although it can be argued that it does during some phases of sleep), but anesthesia, deep hypothermia, coma, ... definitely do, and they are perfectly valid examples to bring forward in the "teleport" debate.

I've yet to see a definition of consciousness that makes the teleported copy "not the same person" without also having problems with all those states of "deep sleep" (which most people have no trouble with).

In response to Voiceofra is banned
Comment author: kilobug 24 December 2015 08:49:08AM 1 point [-]

+1 for something like "no more than 5 downvotes/week on content which is more than a month old", but be careful that a new comment on an old article does not count as old content.

Comment author: UmamiSalami 23 December 2015 12:45:48AM *  1 point [-]

The problem is that by doing that you are making your position that much more arbitrary and contrived. It would be better if we could find a moral theory with a solid, parsimonious basis, and it would be surprising if the fabric of morality involved complicated formulas.

Comment author: kilobug 23 December 2015 08:39:13AM 1 point [-]

There is no objective, absolute morality that exists in a vacuum. Our morality is a byproduct of evolution and culture. Of course we should use rationality to streamline and improve it, not limit ourselves to the intuitive version that our genes and education gave us. But that doesn't mean we can streamline it down to a simple average or sum and still have it remain even roughly compatible with our intuitive morality.

Utility theory, the prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to look like. But they can't and won't pinpoint a single formula in the huge hypothesis space of moralities; in the end, we'll always have to rely heavily on our intuitive morality. And that one isn't simple, and can't be made that simple.

That's the whole point of CEV: finding a "better morality" that we would follow if we knew more and were more what we wished we were, but one that remains rooted in intuitive morality.

Comment author: kilobug 22 December 2015 10:12:32AM 4 points [-]

The same way that human values are complicated and can't be summarized as "seek happiness!", the way we should aggregate utility is complicated and can't be summarized with just a sum or an average. Using too simple a metric leads to absurd cases (the utility monster, ...). The formula we should use to aggregate individual utilities is likely to involve the total, the median, the average, the Gini coefficient, and probably other statistical tools, and finding it is a significant part of finding our CEV.
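To make those statistical ingredients concrete, here is a minimal Python sketch (illustration only: the utility values below are made up, and the real aggregation formula is exactly what we don't have yet):

```python
import statistics

def gini(utilities):
    """Gini coefficient of a utility distribution:
    0 = perfectly equal, approaching 1 = maximally unequal."""
    xs = sorted(utilities)
    n = len(xs)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted_sum) / (n * sum(xs)) - (n + 1) / n

utilities = [3.0, 5.0, 5.0, 9.0, 40.0]  # hypothetical individual utilities

print(sum(utilities))                # total (what sum-utilitarianism maximizes)
print(statistics.mean(utilities))    # average (what average-utilitarianism maximizes)
print(statistics.median(utilities))  # median (insensitive to a single utility monster)
print(gini(utilities))               # inequality measure a formula might penalize
```

Note how the one large value dominates the total and the average but barely moves the median, which is exactly why a single statistic makes a poor aggregation rule.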

Comment author: kilobug 07 December 2015 08:46:30AM 2 points [-]

The MWI doesn't necessarily mean that every possible event, however unlikely, "exists". As long as we don't know where the Born rule comes from, we just don't know.
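For reference, the rule in question: expanding the quantum state in the measurement basis, the Born rule assigns each outcome a probability equal to its squared amplitude, and so far this has to be postulated rather than derived within MWI:

```latex
% Born rule: state expanded in the measurement basis |i>
\[
  |\psi\rangle = \sum_i c_i \,|i\rangle
  \quad\Longrightarrow\quad
  P(i) = |c_i|^2 , \qquad \sum_i |c_i|^2 = 1 .
\]
```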

Worlds in MWI aren't discrete and completely isolated from each other; they are more like ink stains on paper than clearly delimited blobs, where "counting the blobs" can't be defined in an unambiguous way. There are hypotheses (sometimes called "mangled worlds") under which worlds of too small a probability (ink stains that aren't thick enough) would be unstable and "contaminated" by "nearby" high-probability worlds.

But the main issue is that as long as we don't have a formal derivation of the Born rule inside MWI, we can't make any formal analysis of things like QI (quantum immortality). We are left with, at best, a semi-intuitive analysis of what MWI means; and since QI is highly counter-intuitive, a semi-intuitive analysis breaks down there.

In response to LessWrong 2.0
Comment author: kilobug 03 December 2015 12:32:54PM 37 points [-]

Personally, I liked LW for being an integrated place with all of that: the Sequences, interesting posts and discussions between rationalists/transhumanists (be it original thoughts/viewpoints/analyses, news related to those topics, links to related fanfiction, book suggestions, ...), and the meetup organization (I went to several meetups in Paris).

If that were to be replaced by many different things (one for news, one or more for discussion, one for meetups, ...), I probably wouldn't bother.

Also, I'm not on Facebook and would not consider going there. I think replacing the open ecosystem of the Internet with a proprietary platform is a very dangerous trend for the future of innovation, and I oppose the global surveillance that Facebook is part of. I know we are entering politics, which is considered "dirty" by many here, but politics is part of the Many Causes, and I don't think we should alienate people for political reasons. The current LW is politically neutral and allows "socialists" to discuss without much friction with "libertarians", which is part of its merits, and we should keep that.

In response to Gatekeeper variation
Comment author: kilobug 10 August 2015 01:41:27PM 1 point [-]

This won't work, like all other similar schemes, because you can't "prove" the gatekeeper down to the quark level of its hardware (so you're vulnerable to some kind of side attack, like the memory bit-flipping attack that was discussed here recently), nor shield the AI from communicating through side channels (for example, varying the temperature of its processing unit, which in turn will influence the air conditioning system, ...).
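As a toy illustration of that kind of side channel (a purely hypothetical sketch: it encodes bits as bursts of CPU load; in the scenario above, the observable would be the resulting heat and its effect on the air conditioning, not anything on the machine itself):

```python
import time

def busy_wait(seconds):
    """Burn CPU for `seconds`, raising load and, physically, core temperature."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass

def leak_bits(bits, slot=1.0):
    """Encode each bit as one time slot of high or low CPU load.
    Any observer of a load-correlated signal (temperature, fan speed,
    power draw, air-conditioning duty cycle) can decode the message,
    with no network access involved at all."""
    for bit in bits:
        if bit:
            busy_wait(slot)    # '1': hot slot
        else:
            time.sleep(slot)   # '0': cool slot

leak_bits([1, 0, 1, 1, 0])  # transmits "10110" through the physical side channel
```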

And that's not even considering that the AI could actually discover new physics (new particles, ...) and have some ability to manipulate them with its own hardware.

This whole class of approaches can't work, because there are just too many possible side attacks and side channels of communication, and you can't formally prove that none of them is available without making a proof over the whole system (AI + gatekeeper + power generator + air conditioner + ...) down at the level of the Schrödinger equation.

Comment author: kilobug 02 August 2015 07:22:58AM 0 points [-]

To be fair, the DRAM bit-flipping attack doesn't work on ECC RAM, and any half-decent server (especially one you run an AI on) should have ECC RAM.

But the main idea stands: even a program proven to be secure can be defeated by attacking one of the assumptions made in the proof (such as the hardware being 100% reliable, which it rarely is). Proving a program secure starting from the Schrödinger equation applied to the quarks and electrons the computer is made of is far beyond our current abilities, and will remain so for a very long time.

Comment author: kilobug 08 June 2015 02:31:05PM 2 points [-]

I see your point, but I think you're confusing a partial overlap with an identity.

There are many bugs/uncertainties that appear as agency, but there are also many bugs/uncertainties that don't appear as agency (as you said about true randomness), and there are also behaviors that are actually smart and appear as agency because of that smartness. For example, I was delighted the first time I realized that if I asked Emacs to replace "blue" with "red", it would also replace "Blue" with "Red" and "BLUE" with "RED"; I got the same "feeling of agency" there that I could get from bugs.
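(As a side note, that case-preserving behavior fits in a few lines; here is a minimal Python sketch of the technique, not Emacs' actual implementation:)

```python
import re

def replace_preserving_case(text, old, new):
    """Replace `old` with `new`, mirroring the case pattern of each match,
    in the spirit of Emacs' case-aware query-replace."""
    def match_case(m):
        found = m.group(0)
        if found.isupper():
            return new.upper()       # BLUE -> RED
        if found[:1].isupper():
            return new.capitalize()  # Blue -> Red
        return new                   # blue -> red
    return re.sub(re.escape(old), match_case, text, flags=re.IGNORECASE)

print(replace_preserving_case("blue Blue BLUE", "blue", "red"))
# -> red Red RED
```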

So I wouldn't say that agency is bugs, but that we have evolved to over-attribute agency to things that are dangerous/unpleasant (because it's safer to attribute agency to something that doesn't have it than to fail to attribute it to something that does), the same way our ancestors used to see the sun, storms, volcanoes, ... as having agency.

Agency is something different, hard to pinpoint exactly (philosophers have been at it for centuries), but it involves the ability to hold a representation of reality, to plan ahead toward a goal, and enough complexity of representation and solution-space exploration to end up surprising us, not because of bugs, but because of its inherent complexity. And we have evolved to mis-attribute agency to things that behave in unexpected ways. But that's a bug of our own agency detection, not a feature of agency itself.

Comment author: gurugeorge 07 June 2015 12:09:56AM 1 point [-]

I remember reading a book many years ago which talked about the "hormonal bath" in the body being actually part of cognition, such that thinking of the brain/CNS as the functional unit is wrong (it's necessary but not sufficient).

This ties in with the philosophical position of Externalism (I'm very much into the Process Externalism of Riccardo Manzotti). The "thinking unit" is really the whole body - and actually finally the whole world (not in the Panpsychist sense, quite, but rather in the sense of any individual instance of cognition being the peak of a pyramid that has roots that go all the way through the whole).

I'm as intrigued and hopeful about the possibility of uploading, etc., as the next nerd, but this sort of stuff has always led me to be cautious about the prospects of it.

There may also be a lot more to be discovered about the brain and body, in the area of a connection between the fascia and the immune system (cf. the anecdotal connection between things like yoga and "internal" martial arts and health).

Comment author: kilobug 08 June 2015 01:22:52PM 1 point [-]

I'm really skeptical of claims like "the 'thinking unit' is really the whole body"; they tend to discard quantitative considerations in favor of purely qualitative ones.

Yes, the brain influences, and is influenced by, the whole body. But that doesn't mean the whole body has the same importance in thinking. The brain is also influenced by lots of external factors (such as ambient light or sounds, ...). If, as soon as there is a "connection" between two parts, you say "it's the whole system that does the processing", you'll just end up considering the solar system as a whole, or even the entire sphere within our event horizon.

There is abundant evidence that, while your body and your environment have a significant influence on your thinking, it's just influence, not a fundamental part of the cognition. For example, people who have grafts or amputations rarely change personality, memory, or cognitive abilities in any way comparable to what brain damage can do.
