Here's our place to discuss Less Wrong topics that have not appeared in recent posts. Have fun building smaller brains inside of your brains (or not, as you please).
What are some examples of recent progress in AI?
In several of Eliezer's talks, such as this one, he's mentioned that AI research has been progressing at around the expected rate for problems of similar difficulty. He also mentioned that we've reached around the intelligence level of a lizard so far.
Ideally I'd like to have some examples I can give to people when they say things like "AI is never going to work" - the only examples I've been able to come up with so far have been AI in games, but they don't seem to think that counts because "it...
In the previous open thread, there was a request that we put together The Simple Math of Everything. There is now a wiki page, but it only has one section. Please contribute.
Do you know who the real heroes are? The guys who wake up every morning, and go into their normal jobs, and get a distress call from the commissioner, and take off their glasses and change into capes and fly around fighting crime. Those are the real heroes.
Some questions about the site:
1) How come there's no place for a user profile? Or am I just too stupid to find it? I know there was a thread a while back to post about yourself, and I joined LW on facebook, but it would be much easier for people to see a profile when they click on someone's name.
2) What's with the default settings for what comments "float to the top" of the comment list? Not to whine or anything, but I made a comment that got modded to 11 on the last Perceptual Control theory thread, followed up on by a few other highly-modded...
Some people commented on the "inner circuits" discussion that they didn't want this site to turn into a self-help or self-improvement forum, which made me wonder whether there are any open and relatively high quality discussion forums or communities to discuss self-improvement, in general and in specific?
Inspired by Yvain's post on Dr. Ramachandran's model of two different reasoning styles located in the two hemispheres, I am considering the hypothesis that in my normal everyday interactions, I am a walking, talking, right-brain confabulating apologist. I do not update my model of how the world works unless I discover a logical inconsistency. Instead, I will find a way to fit all evidence into my preexisting model.
I'm a theist, and I've spent time on Less Wrong trying to be critical of this view without success. I've already ascertained that God's existenc...
In my opinion, too many comments lately have mentioned their authors' votes in passing; I think it distracts from the actual topic, and metadiscussion ought to go in separate comments.
What are some suggestions for approaching life rationally when you know that most of your behavior will be counter to your goals, that you'll know this behavior is counter to your goals, and you DON'T know whether ending this division between what you want and what you do (i.e., forgetting about your goals and why what you're doing is irrational, and just doing it) would have a net harmful or helpful effect?
I'm referring to my anxiety disorder. My therapist recently told me something along the lines of, "But you have a very mild form of conversion disorde...
So, I'm looking for some advice.
I seem to have finally reached that stage in my life where I find myself in need of an income. I'm not interested in a particularly large income; at the moment, I only want just enough to feed a Magic: the Gathering and video game habit, and maybe pay for medical insurance. Something like $8,000 a year, after taxes, would be more than enough, as long as I can continue to live in my parents' house rent-free.
The usual method of getting an income is to get a full-time job. However, I don't find that appealing, not one bit. I...
Is there a way to undelete posts?
That might seem a weird question - just submit it again - but it turns out that "deleting" a post doesn't actually delete it. The post just moves to a netherworld where people can view it, link to it, discuss it in the comments, etc., but: a) it doesn't show in the sidebar, b) it doesn't show on the user's submitted page, and c) it says "deleted" where the poster's username should be. Editing and saving doesn't help.
This calamity has just befallen a post of mine that I submitted by mistake, then killed, but p...
Suppose you found yourself suddenly diagnosed with a progressive, fatal neurological disease. You have only a few years to live, possibly only a few months of good health. Do the insights discussed here offer any unique perspectives on what actions would be reasonable and appropriate?
Anders Sandberg - Swine Flu, Black Swans, and Geneva-eating Dragons (video/youtube)
Anders Sandberg on what statistics tells us we should (not) be worried about. Catastrophic risks, etc.
An interesting book is out: Information, Physics and Computation by Andrea Montanari and Marc Mézard. See this blog post for more detail.
Sorry, I sort of asked this question in a thread here, but I'm interested enough in answers that I'm going to ask it again.
Does it seem like a good idea for the long-term future of humanity for me to become a math teacher or producer of educational math software? Will having a generation of better math and science people be good or bad for humanity on net?
If I included a bit about existential risks in my lecturing/math software would that cause people to take them more seriously or less seriously?
A terribly trivial first post, but as an anchor it'll do: is there a way to change the timezone in which timestamps are displayed? I'd also prefer the YYYY-MM-DD HH:MM:SS 24-hour format over the current one, but it doesn't really matter all that much. (If the timezone turns out to match up with BST here, then forget that, I guess.)
Edit: UTC, it seems. I can live with that.
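(For illustration only: here is a minimal Python sketch of the timestamp format being asked for above, assuming UTC. The helper function and its name are hypothetical; this is not how the site actually renders times.)

```python
from datetime import datetime, timezone

# Hypothetical helper, purely to illustrate the requested
# "YYYY-MM-DD HH:MM:SS" 24-hour format, rendered in UTC.
def format_timestamp(dt: datetime) -> str:
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(format_timestamp(datetime.now(timezone.utc)))
```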
A long chain of reasoning leads me to conclude that the UFAI problem would be completely averted if this question were answered--to use the vernacular, I feel like that's the case.
But seriously. Whenever we think the thought "I want to think about apples", we then go on to think about apples. How the heck does that work? What is the proximate cause of our control over our thoughts?
What do you guys think of the Omega Point? Perhaps more importantly, what do you think of Tipler's claim that we've known the correct quantum gravity theory since 1962?
My previous attempt at asking this question failed in a manner that confuses me greatly, so I'm going to attempt to repair the question.
Suppose I'm taking a math test. I see that one of the questions is "Find the derivative of 1/cos(x^2)." I conclude that I should find the derivative of 1/cos(x^2). I then go on to actually do so. What is it that causes me (specifically, the proximate cause, not the ultimate) to go from concluding that I should do something to attempting to do it?
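(Purely as a check on the example itself, not the point of the question: by the chain rule, that derivative works out to

$$\frac{d}{dx}\,\frac{1}{\cos(x^2)} \;=\; \frac{d}{dx}\,\sec(x^2) \;=\; 2x\,\sec(x^2)\tan(x^2) \;=\; \frac{2x\,\sin(x^2)}{\cos^2(x^2)}.$$

)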
Eliezer_Yudkowsky said:

It is only the Mind Projection Fallacy that makes some people talk as if the higher levels could have a separate existence - different levels of organization can have separate representations in human maps, but the territory itself is a single unified low-level mathematical object. Suppose this were wrong. Suppose that the Mind Projection Fallacy was not a fallacy, but simply true. Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747. What experimental observations would you expect to make, if you found yourself in such a universe? If you can't come up with a good answer to that, it's not observation that's ruling out "non-reductionist" beliefs, but a priori logical incoherence. If you can't say what predictions the "non-reductionist" model makes, how can you say that experimental evidence rules it out?
This comes from a post from almost a year ago, Excluding the Supernatural. I quote it because I was hoping to revive some discussion on it: to me, this argument seems dead wrong.
The counter-argument might go like this:
Reductionism is anything but a priori logically necessary-- it's something that must be verified with extensive empirical data and inductive, probabilistic reasoning. That is, we observe that the attributes of many entities can be explained with laws describing their internal relations. Occam's razor tells us that we don't need both the higher and lower order model to actually exist, so we unify our theory. The repeated experience of this success leads us to extrapolate that this can be done with all entities. Perhaps some entities present obstacles to this goal, but we then infer that their irreducibility is in the map (our model for understanding them), not in the territory (the entity itself). But again, we infer this by assuring ourselves that they just haven't been explained YET--which implies it's reasonable, based on inductive reasoning from the past, to assume that they will be reduced. Or we describe some element of the entity's complexity that makes "irreducibility in practice" something to be expected. We therefore preserve its reducibility in principle.
But we do not (it seems to me) merely exclude its irreducibility based on a priori necessity. Why would we? It's perfectly conceivable. Eliezer describes in an earlier post the "small, hard, opaque black ball" that is a non-reductionist explanation of an entity. He claims it's just a placeholder, something that fools us into thinking there's a causal chain where nothing has actually been clarified.
But it's perfectly conceivable that such a "black ball" could exist. I suppose there's no way to prove that it's irreducible, and not just unreduced as of yet, in the same way that one can't prove a negative. But this just presupposes that the default position ought to be reductionism. We should assume innocent until proven guilty. But which is innocent in this case: reducible or non-reducible?
So what if we come across something that appears to be a "black ball"? We attempt with all our mental and technological acuity to analyze it in terms of more fundamental laws, and every attempt fails. I would argue this is a good example of empirical evidence against materialist reductionism. We indeed have an entity that obeys laws which we can describe and predict--it just has laws that can't be reconciled with the physical laws of everything else, and, when interacting with anything else, it violates those laws.
Occam's razor is indeed strong here: we recognize that, given the faintest hope of reduction, we should throw out irreducibility in favor of having as few types of "stuff" as possible. This happens in the case of "elan vital." But it seems perfectly conceivable to me that there might be an entity that's truly a black ball.
Now this seems so massively incorrect that I fear I'm misunderstanding Eliezer. Does anyone have any feedback? I'd love to make a post about this, once I generate some karma.
I didn't get the 'and so' above at first, but I think it makes sense for the following reason: you can only ever "construct models made of interacting simple things" (possibly elaborated upon and abstracted to such an extent that they no longer seem simple or physical) in that universe because any model you could possibly make in that universe would be causally deter...