Wanted to add an insight from Neil Gershenfeld which I think captures how we should frame these problems:
"We've already had a digital revolution; we don't need to keep having it. The next big thing in computers will be literally outside the box, as we bring the programmability of the digital world to the rest of the world."
He was talking about personal fabrication in this context, but the 'digitization' of the physical world is just as applicable to the sustainability goals I mentioned. By applying operations research, loosely coupled distributed architectures, nature-inspired routing algorithms, and other tricks of the IT trade to natural resources, we can finally transition to a sustainable world.
Surprised no one has mentioned anything involving sustainable/clean tech (energy, food, water, materials). This site does stress existential threats, and given that many (most?) societal collapses in the past were precipitated, at least in part, by resource collapse, I'd want to concentrate much of the startup activity around trying to disrupt our short-term, wasteful systems. Large pushes to innovate and disrupt the big four (energy, food, water, materials) would do more than anything I can think of to improve the condition of our world and min...
Ok, so we can say with confidence that humans and other organisms with developed neural systems experience the world subjectively, maybe not in exactly the same ways, but conscious experience seems likely for these systems unless you are a radical skeptic or solipsist. Based on our current physical and mathematical laws, we can reductively analyse these systems and see how each subsystem functions, and eventually, with sufficient technology, we'll be able to have a map of the neural correlates that are active in certain environments and which produce certa...
That is the $64,000 question.
To summarize (mostly for my sake so I know I haven't misunderstood the OP):
We have evolved moral intuitions such as empathy and compassion that underlie what we consider to be right or wrong. These intuitions only work because we consciously internalize another agent's subjective experience and identify with it. In other words, without the various qualia that we experience we would have no foundation to act ethically. An unconscious AI that does not experience these qualia could, in theory, act the way we think it should act by mimicking behaviors from a repertoire of rules (and ways to create further rules) that we give it, but that is a very brittle and complicated route, and it is the route the SIAI has been taking because they have discounted qualia, which is what this post is really all about.
"How an algorithm feels from inside" discusses a particular quale, that of the intuitive feeling of holding a correct answer from inside the cognizing agent. It does not touch upon what types of physically realizable systems can have qualia.
"If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality."
This is a 'why' question, not a 'how' question, and though some 'why' questions may not be amenable to deeper explanations, 'how' questions are always solvable by science. Explaining how neuronal patterns generate systems with subjective experiences of green is a straightforward, though complex, scientific problem. One day we may unders...
In my opinion, the most relevant article was from Drew McDermott, and I'm surprised that such an emphasis on analyzing the computational complexity of approaches to 'friendliness' and self-improving AI has not been more common. For that matter, I think computational complexity has more to tell us about cognition, intelligence, and friendliness in general, not just in the special case of self-improving optimization/learning algorithms, and could completely modify the foundational assumptions underlying ideas about intelligence/cognition and the singulari...
I think 'extra properties outside of physics' conveys a stronger notion than what this view actually tries to explain. Property dualism, such as emergent materialism or epiphenomenalism, doesn't really posit any extra properties other than the standard physical ones; it is just that when those physical properties are arranged and interact in a certain way, they manifest what we experience as subjective experience and qualia, and those phenomena aren't further reducible in an explanatory sense, even though they are reducible in the standard sense of ...
Reading his essay here: http://edge.org/conversation/the-argumentative-theory it appears that he does indeed come off as pessimistic with regard to raising the sanity line for individuals (i.e., teaching individuals to reason better and become more rational on their own). However, he does also offer a way forward by emphasizing group reasoning, such as what the entire enterprise of science (peer review, etc.) encourages and is structured for. I suspect he thinks that even though most people might be able to understand that their reasoning is flawed and that...
An alternative to making things fun is to make things unconscious and/or automatic. No healthy individual complains about insulin production because their pancreas does it for them unconsciously, but diabetic patients must actively intervene with unpleasant, routine injections. One option would be to make the injections less unpleasant (make the process fun and/or less painful), but a better option would be to bring them in line with non-diabetic people and make the process unconscious and automatic again.
The location in space was fine; the location in time, however, was problematic. Friday afternoon, especially in that area, has probably the most congested traffic anywhere on earth. By the time I finally got there I was so frustrated that I ended up parking in a structure that cost me $16 for two hours. Maybe the next meetup can happen at a later time (after 6pm) on a weekday other than Friday.
Also, a little more structure would have been nice in order to massage the strained conversations onto a more productive path. For the next meetup it might be interesting to ask prospective attendees to suggest a list of discussion topics which we could vote on.
Other than that, nice meeting you all!
I'll be attending.
In my experience, the inability to be satisfied with a materialistic world-view comes down to simple ego preservation, meaning, fear of death and the annihilation of our selves. The idea that everything we are and have ever known will be wiped out without a trace is literally inconceivable to many. The one common factor in all religions or spiritual ideologies is some sort of preservation of 'soul', whether it be a fully platonic heaven like the Christian belief, a more material resurrection like the Jewish idea, or more abstract ideas found in Eastern a...
I may be overlooking something, but I'd certainly consider Robin's estimate of 1-2 week doublings a FOOM. Is that really a big difference compared with Eliezer's estimates? Maybe the point in contention is not the time it takes for super-intelligence to surpass human ability, but the local vs. global nature of the singularity event; the local event taking place in some lab, and the global event taking place in a distributed fashion among different corporations, hobbyists, and/or governments through market-mediated participation. Even this difference isn...
There is a web-based tool being worked on at MIT's collective intelligence lab. Couldn't find the direct link to the project, but here's a video overview: Deliberatorium
Is your pursuit of a theory of FAI similar to, say, Hutter's AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend to arrive at the actual blueprints for constructing such systems? I'm still not 100% certain of your goals at SIAI.
"People are not base animals, but people, about 90% animal and 10% something new and different. Religion can be looked on as an act of rebellion by the 90% animal against the 10% new and different (most often within the same person)."
That way of looking at it is attractive, but I don't think it is accurate. Most of religion is the outcome of that extra 10% and definitely part of what we identify as 'person'. Rejecting religion and other equivalent institutions is an act of rebellion by 2% against the other 8%.
The Wikipedia article on abductive reasoning claims this sort of privileging the hypothesis can be seen as an instance of affirming the consequent.
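To make the connection concrete, here is my own minimal rendering of the schema (not Wikipedia's wording), plus the Bayesian reason it only goes wrong when treated as a deduction:

```latex
% Affirming the consequent, which abduction structurally resembles:
%   "if H then E; E is observed; therefore H" -- deductively invalid:
\{\, H \to E,\;\; E \,\} \not\models H
% ...yet E still counts as evidence for H whenever H predicts E better
% than the background does:
P(H \mid E) > P(H) \iff P(E \mid H) > P(E)
```

In other words, the inference is fine as a graded probabilistic update but not as a proof.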
Aside from learning as a way to acquire useful skills, there are certain things I learn in order to change the way I think. Echoing similar comments, programming seems to have altered my perspective as a kid and continues to do so. One example is learning Lisp. It's become popular to learn Lisp not because it is practically useful in day-to-day coding (though it can be), but because it changes the way you think about how to program.
Similarly, studying abstract algebra might be a waste of my time (though I'll understand Lie groups and hence theoretica...
"It is useless to attempt to reason a man out of a thing he was never reasoned into." (Jonathan Swift )
I agree, and I admit laziness on my part for hoping someone else would insightfully reflect on my problem instead of offering at least a minimal solution myself to start things off. Ironically, I can't seem to make time to analyze how I can make more time!
What you describe is what Tim Gallwey calls the 'inner game'. It is, to simplify a bit, training your intuitive subconscious without letting your conscious awareness interfere. Here is a video of him coaching a woman who has never touched a tennis racket to serve using the technique.
Another similar technique is Drawing on the Right Side of the Brain.
The post was supposed to be in the spirit of the many self-improvement posts regarding akrasia, rationality, etc. It seemed logical that managing your information is an important component alongside the rest of the mental hygiene practices discussed here. If I was mistaken, I apologize.
You might have misunderstood me. I did not limit akrasia to only things we enjoy. I said actually getting going on the task, whether inherently enjoyable or not, is what 'feels wonderful'. I hate going to the dentist, but actually engaging in the process of going to the office and getting it over with feels pretty good as an accomplishment.
And forming the habit of not procrastinating is a very big part of it, IMO. To stop putting things off and automatically jump into a task is a positive habit that does a great deal against akrasia. Why do you think juvenile delinquents get sent off to boot camp or some other long period of regimented experience? To form those habits which will mold their character accordingly.
The Cynic's Theory may in fact describe a true state of mind, but it is not describing akrasia. The Cynic's Theory might better describe those minds whose preferences were put in place by exterior influences and conflict with their internal, consciously hidden preferences. An example might be someone who always thought they wanted to be a doctor but deep down knew they wanted to be an artist.
However, when I think of Akrasia, I don't think of incompatible goals or hidden preferences, I think of compatible goals but an inability to consciously exert control of y...
Yes, a big problem is the human tendency to identify strongly with beliefs so that they become a part of your identity. When I once got into an argument with a particularly stubborn friend regarding religion, I tried to depersonalize the arguments as much as possible by writing them down and having an impartial third party blindly check for inconsistencies and biases using a type of scoring system. How'd it turn out? He gave up all right, but still retained his beliefs!
The important thing to point out is that the information signal we experience as pain is an instructive signal more than just an indicative one. I mean that pain's purpose is to make the organism react against whatever is hurting it, not just become aware of it. Since conscious decision-making in humans is delayed by at least 500ms (and sometimes up to 10 seconds!), signals such as pain have to be the result of low-level cybernetic reactions in the nervous system and not just a conscious experience after the fact. I'm sure if an intelligent designer...
I voted up Robin Hanson, but I would love either Cory Doctorow or Bruce Sterling because they are both smart sci-fi authors who are vocally skeptical of something like the singularity happening.
Whoever it is, in my opinion the best discussions would consist of people who share very similar worldviews yet strongly differ on some critical ideas. We don't need to see another religion debate, that's for sure.
Warren Buffett seems to fit all the criteria of the counterexample Eliezer asked for. And if you doubt the fanaticism of his fandom, just look over some videos of his annual shareholders' meeting/convention.
Shouldn't this be in the domain of psychological research? The positive psychology movement seems to have a lot of momentum, and many young researchers are pursuing lines of questioning in these areas. If you really want a rigorous, empirically verified, general-purpose theory, that seems to be the best bet.
'cleaning my room' is still abstract. If you decompose that into 'pick up clothes off floor, then make my bed, then vacuum the carpet, .....', then those are concrete tasks.
Well, from reading the comments it seems the most popular type of akrasia that hinders this group is procrastination. I'm sure other weaknesses of will are common, but procrastination seems to be an overwhelmingly common nuisance. This paper http://www.uni-konstanz.de/FuF/SozWiss/fg-psy/gollwitzer/PUBLICATIONS/McCreaetal.PsychSci09.pdf might hint at why this is so. The gist is that the more abstract the tasks/projects/goals are, the more you will procrastinate. As the tasks become more concrete, the procrastination is eliminated. An example is the abs...
Good post and important issues: How similar are other human minds to my own? How can I discuss academically what other minds should or should not do or believe if they are so different from my own? It is much like trying to argue over the aesthetics of a colored art piece with a person born blind.
It would be constructive to be able to deduce which attributes a person has and which they're lacking, and in what proportions.
In group #2, where everybody at all levels understands all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who does the ordering, because that is the rational response. The maximally optimal rational response might be a self-organized system where the lines are blurred between those who do the ordering and those who follow the orders, and roles may alternate in round-robin fashion or by some other protocol. That boils down to a technical problem in operatio...
My issue isn't with cryonics, it is with the whole notion of self identity and post-singularity personhood. I guess this ties into EY's 'fun theory' and underscores the importance of a working positive theory of 'fun' as a prerequisite for immortality as we currently define it.
Assume cryonics works; furthermore, assume that your brain is scanned with enough resolution to capture all salient features of what you consider to be your mind. You are now an uploaded entity, and your mind is as malleable as any other piece of software. There are only so many...
Let's use one of Polya's 'How to Solve It' strategies and see if the inverse helps: Irrationalists should lose. Irrationality is systematized losing.
On another note, rationality can refer to either beliefs or behaviors. Does being a rationalist mean your beliefs are rational, your behaviors are rational, or both? I think behaving rationally, even with high probability priors, is still very hard for us humans in a lot of circumstances. Until we have full control of our subconscious minds and can reprogram our cognitive systems, it is a struggle to wi...
I'm in Pasadena.
Los Angeles, CA
Kiva.org has the distinct honor of being the only charity that has given me maximum utilons for my money, with an unexpected bonus of the most fuzzies I've ever experienced. Seeing my money being repaid and knowing that it was possible only because my charity dollars worked, that the recipient of my funds actually put the dollars to use effectively enough to thrive and pay me back, well, goddamn it felt good.
Kiva feels suspiciously well-optimized on three counts -- there's the utilons (which, given that you're incentivizing industry and entrepreneurship, are pretty darn good), the warm fuzzies you mentioned, and the fact that it seems it could also help me overcome some akrasia with regard to savings. If I loan money out of my paycheck to Kiva each month, and reinvest all money repaid, then (assuming a decent repayment rate) the money cycling should tend to increase, meaning that if I need to, say, put a down payment on a house one day, I can take some out,...
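A toy sketch of the cycling I have in mind, with entirely made-up numbers (a $50 monthly contribution, 95% repayment, 12-month terms); a rough back-of-the-envelope, not a claim about Kiva's actual loan terms:

```python
# Toy model of recycling microloan repayments (all numbers hypothetical).
MONTHLY_IN = 50.0    # new money added from each paycheck
REPAY_RATE = 0.95    # fraction of lent principal expected back
TERM_MONTHS = 12     # assume each loan repays evenly over a year

outstanding = []     # remaining monthly repayments for each active loan
balance = 0.0        # cash on hand, ready to re-lend

for month in range(1, 61):                                # simulate five years
    balance += MONTHLY_IN
    balance += sum(loan.pop(0) for loan in outstanding)   # collect this month's repayments
    outstanding = [loan for loan in outstanding if loan]  # drop fully repaid loans
    # re-lend everything on hand as one new loan
    outstanding.append([balance * REPAY_RATE / TERM_MONTHS] * TERM_MONTHS)
    balance = 0.0
    if month % 12 == 0:
        pool = sum(sum(loan) for loan in outstanding)
        print(f"year {month // 12}: expected repayments in the pipeline = ${pool:,.0f}")
```

Under these assumptions the pool keeps growing (toward a ceiling set by the repayment losses), which is the forced-savings effect I'm hoping for.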
We don't want to create a new religion, but whatever we create to take its place needs to offer at least as much as that which it replaces, so we might end up actually needing a new 'religion' whether we like it or not. If indeed there is a biological predisposition for humans to want to engage in 'worship', then we might as well worship rationally. I hesitate to call this new organization a religion or the practice worship, since those are the things being replaced, but those words get my idea across.
How about we create a church-like organization t...
I like EY's writings, but I don't hold them up as gospel. For instance, I think this guy's summary of Bayes' theorem (http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem) is much more readable and succinct than EY's much longer essay (http://yudkowsky.net/rational/bayes).
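For anyone comparing the two, the calculation both essays build up to is short enough to sketch directly; the screening-test numbers below are mine, made up for illustration rather than quoted from either essay:

```python
# Bayes' theorem on a toy screening test (all numbers hypothetical).
p_disease = 0.01             # prior: 1% of the population has the condition
p_pos_given_disease = 0.80   # test sensitivity
p_pos_given_healthy = 0.10   # false-positive rate

# total probability of a positive result
p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

# posterior: P(disease | positive test)
p_disease_given_pos = p_pos_given_disease * p_disease / p_positive
print(f"P(disease | positive) = {p_disease_given_pos:.1%}")   # about 7.5%
```

The punchline, at least in EY's version, is that the posterior (about 7.5% with these numbers) is far lower than the 80% sensitivity would intuitively suggest.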
There is a recent trend of 'serious games', which use video games to teach and train people in various capacities, including the military, health care, and management, as well as traditional schooling. I see no reason why this couldn't be applied to rationality training.
I always liked adventure-style games as a kid, such as King's Quest or Myst, and wondered why they aren't around any more. They seemed to be testing rationality in that you would need to guide the character through many interconnected puzzles while figuring out the model of the world and how b...
Isn't this a description of what a liberal arts education is supposed to provide? The skills of 'how to think', not 'what to think'? I'm not too familiar with the curriculum since I did not attend a liberal arts college; instead I was conned into an overpriced private university. If anyone has more info, please chime in.
I just read a nice blog post at neurowhoa.blogspot.com/2009/03/believer-brains-different-from-non.html covering research on brain differences between believers and non-believers. The takeaway from the recent study was that "religious conviction is associated with reduced neural responsivity to uncertainty and error". I'm hesitant to read too much into this particular study, but if there is something to it, then the best way to spread rational thought would be to try to correct for this deficiency. Practicing not to let uncertainty or errors slide by, no matter how small, would build a positive habit and develop rationality skills.
Some commenters said that theory revision sessions such as brainstorming are in fact pleasant to most rationalists and don't necessarily induce sadness. Indeed, I really enjoy arguing and learning new things, or else I wouldn't continue to do them. However, there is a difference between the loose juggling of ideas that we aren't very attached to and the kind of continual self-checking of core beliefs that strict rationalists try to do. In order to operate effectively in the world and achieve goals, we need a solid belief foundation to p...
practice, practice, practice
I think this post and the related ones really hit home why it is hard for our minds to function fully rationally at all times. Like Jon Haidt's metaphor that our conscious awareness is akin to a person riding on top of an elephant, our conscious attempt at rational behavior is an effort to tame this bundle of evolved mechanisms lying below 'us'. Just think of the preposterous notion of 'telling yourself' to believe or not believe in something. Who are you telling this to? How is cognitive dissonance even possible?
I remember the point when I fina...
From what I've read of the source writings within the contemplative traditions, modern neuroscience studies and theories on meditation, and my own experiences and thoughts on the subject, I've come to view the practice of meditation as serving three different but interconnected purposes: (1) ego loss, (2) cultivation of compassion, and (3) experience of non-dual reality.
Ego loss means inhibiting or eliminating the internal self-critic by changing the way you perceive the target of that critic, namely the concept of a stable 'self' that you identify w...