Altruism doesn't only mean preventing suffering; it also means increasing happiness. If all suffering were ended, altruists would still have purpose in providing creativity, novelty, happiness, etc. Suffering then becomes redefined as not experiencing unthinkable levels of insert_positive_emotion_here, and philanthropists will be devoted to ensuring that all sentient entities experience all they can. The post-singularity Make-A-Wish Foundation would see rapid growth, expanding its services as well as its volunteer base as it operates full-time with repeat customers.
Doesn't this line of thinking make the case for Intelligence Augmentation (IA) over FAI? And let me qualify that when I say IA, I really mean friendly intelligence augmentation, relative to friendly artificial intelligence. If you could 'level up' all of humanity to the wisdom and moral ethos of 'friendliness', wouldn't that be the most important step to take first and foremost? If you could reorganize society and reeducate humans in such a way as to make a friendly system at our current level of scientific knowledge and technology, that would almo...
What you describe as targets over '4D states' reminds me of Finite and Infinite Games by James Carse. For example, playing a game of basketball with a winner and loser after an hour of play is a finite game; the sport of basketball as a whole, however, is an infinite game. Likewise, playing a specific video game to reach a score or pass the final level is a finite game, but being a 'gamer' is an infinite game, allowing ever more types of gaming to take place.
Prediction can't be anything but a naive attempt at extrapolating past and current trends out into the future. Most of Kurzweil's accurate predictions are just trends about technology that most people can easily notice. Social trends are much more complex, and Kurzweil's predictions about those are off. Also, the occasional black swan is unpredictable by definition, and is usually what causes the most significant changes in our societies.
I like how sci-fi authors talk about their writings as not predicting what the future is going to look like (that's impos...
It is interesting that no one from this group of empirical materialists has suggested looking at this problem from the perspective of human physiology. If I tried painting a house for hours on end I would need to rest--my hand would be sore, I'd be weak and tired, and generally would lack the energy to continue. Why would exercising your brain be significantly different? If Eli is doing truly mentally strenuous work for hours, it is not simply a problem of willpower, but mental energy. Maintaining a high level of cognitive load will physically wear yo...
How do periods of stagnant growth, such as extinction-level events in Earth's history, affect the graphs? As the dinosaurs went extinct, did we jump straight to the start of the mammalian s-curve, or was there a prolonged growth plateau that, when averaged out in the combined s-curve meta-graph, doesn't show up as significant?
A singularity-type phase shift being so steep, even if growth were to grind down in the near future and stay stagnant for hundreds of years, wouldn't the meta-graph still show an overall fit when averaged out, provided the singularity occurred after some global catastrophe?
I guess I want to know what effect periods of zero or negative growth have on these meta-graphs.
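To make the question concrete, here is a toy sketch (entirely my own, with made-up numbers, not taken from the actual graphs) of two s-curves separated by a long stagnation plateau; fitting a single exponential trend to the combined curve gives a rough sense of how much such a plateau registers as deviation from the overall fit.

```python
# Hypothetical illustration: two stacked logistic s-curves with a long plateau
# between them, compared against a single exponential trend fitted over the
# whole span. All numbers are arbitrary.
import numpy as np

def logistic(t, midpoint, rate, ceiling):
    """Single s-curve: slow start, rapid growth, saturation at `ceiling`."""
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

t = np.linspace(0, 1000, 2001)                              # arbitrary time units
early = logistic(t, midpoint=200, rate=0.05, ceiling=1.0)   # first growth regime
late = logistic(t, midpoint=800, rate=0.05, ceiling=100.0)  # later, much larger regime
growth = 1e-3 + early + late                                # combined curve with a long mid-span plateau

# Fit a straight line to log(growth), i.e. a single exponential trend.
slope, intercept = np.polyfit(t, np.log(growth), deg=1)
residual = np.log(growth) - (slope * t + intercept)

print(f"fitted exponential rate: {slope:.4f} per time unit")
print(f"largest deviation from the trend (log units): {np.abs(residual).max():.2f}")
```

The intent is just to make 'averaged out' quantifiable: how large is the residual during a plateau compared to the spread of the rest of the curve?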
I've only been reading this blog consistently for a few months, but if there weren't thoughtful mini-essay-style posts from EY, Hanson, or someone similar, I doubt I'd stay. I actually think a weekly frequency, as opposed to daily, would be slightly better, since my attention and schedule are increasingly being taxed. The most important value this blog provides is, first, the quality of the posts, and second, the quality of the comments and discussions pertaining to them. Don't create a community for the sake of creating a community; maintain quality at all costs. That is your competitive advantage. If that isn't likely, then it's better to freeze the site at its height and leave it for posterity than to tarnish it.
So are you claiming that Brooks' whole plan was, on a whim, to just do the opposite of what the neats had been doing up till then? I thought his inspiration for the subsumption architecture was nature, the embodied intelligence of evolved biological organisms, the only existence proof of higher intelligence we have so far. To me it seems like the neats are the ones searching a larger design space, not the other way around. The scruffies have identified some kind of solution to creating intelligent machines in nature and are targeting a constrained design space inspired by it; the neats, on the other hand, are trying to create intelligence seemingly out of the Platonic world of forms.
Jeff Hawkins, in his book On Intelligence, says something similar to Eliezer: he says intelligence IS prediction. But Eliezer says intelligence is steering the future, not just predicting it. Steering is a behavior of agency, and if you cannot peer into the source code but only see the behaviors of an agent, then intelligence would necessarily be a measure of how well it steers the future according to its preference functions. This is behaviorism, is it not? I thought behaviorism had been deprecated as a useful framework of inquiry in the cognitive sciences?
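To make the distinction concrete, here is a minimal sketch (my own toy example, not anyone's actual formalism) in which a pure predictor and a steering agent share the same world model, and the only thing the steering agent adds is a preference function used to choose actions.

```python
# Toy contrast between prediction (Hawkins) and steering (Eliezer), using a
# shared, made-up world model. Everything here is illustrative.
import random

def world_model(state, action):
    """Predict the next state given an action."""
    return state + action + random.gauss(0, 0.1)

def predict(state):
    """Prediction alone: forecast what happens if no action is taken."""
    return world_model(state, action=0)

def steer(state, preference, actions=(-1, 0, 1)):
    """Steering: use the same predictive model, but pick the action whose
    predicted outcome scores highest under the preference function."""
    return max(actions, key=lambda a: preference(world_model(state, a)))

# Example preference: futures where the state is close to 10 are better.
prefer_ten = lambda s: -abs(s - 10)

state = 0.0
for _ in range(20):
    state = world_model(state, steer(state, prefer_ten))
print(f"state after 20 steps of steering toward 10: {state:.2f}")
```

From the outside you only ever see the chosen actions and the resulting trajectory, which is why measuring intelligence this way ends up looking behaviorist.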
I can see whe...
Rand's objectivism and capitalism are criticized by people who reflexively see the word 'selfish' and equate it with greed and all the problems of capitalism. But those critics are delusional, or they just don't understand human nature and our built-in modules fo...
A new method of 'lie detection' is being perfected using functional near infrared imaging of the prefrontal cortex:
http://www.biomed.drexel.edu/fNIR/Contents/deception/
In this technique the device actually measures whether a certain memory is being recalled or being generated on the spot. For example, if you are interrogating a criminal who denies ever being at a crime scene, and you show them a picture of the scene, you can deduce whether they have actually seen it by measuring whether their brain is recalling sensory data from memory or newly creating and storing it.
What you are getting at is that the ends justify the means only when the means don't affect the ends. In the case of a human as part of the means, carrying out the means may affect the human and thus affect the ends. In summary, reflexivity is a bitch. This is a reason why social science and economics are so hard: the subjects being modeled change as a result of the modeling process.
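A throwaway toy example (scenario and numbers are mine, purely illustrative) of that reflexivity: a naive forecast that people react to keeps overshooting, because publishing the model changes the thing being modeled.

```python
# Hypothetical traffic example: the model forecasts next week's congestion as
# this week's, the forecast is published, and some drivers reroute in response.
congestion = 0.9  # fraction of drivers on the main road (made-up number)

for week in range(5):
    forecast = congestion                       # naive model: next week = this week
    reaction = 0.5 * max(forecast - 0.5, 0.0)   # high published forecasts push drivers away
    congestion = max(congestion - reaction, 0.1)
    print(f"week {week}: forecast={forecast:.2f}, actual={congestion:.2f}")
```

The forecast is always stale by exactly the amount of behavior it induced, which is the modeling-changes-the-subject problem in miniature.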
This is a problem with any sufficiently self-reflective mind, not with AIs that do not change their own rules. A simple mechanical narrow AI that is programmed to roam about c...
Sorry for the triple post; one more addition. Larry Lessig just gave a lecture on corruption and the monetary causes of certain types of corruption prevalent in our society.
http://www.lessig.org/blog/2007/10/corruption_lecture_alpha_versi_1.html
Short addition to my previous post.
I've been thinking about how to apply the notion of recursive self-improvement to social structures instead of machines. I think it actually offers (though unintuitively) a simpler case for thinking about friendliness optimization. If anyone else is interested, feel free to email me. I'm planning on putting up a site/wiki about this topic and may need help.
haig51 AT google mail
That is why systems consisting of checks and balances were eventually created (i.e., democracy). Such social systems try to quell the potential for power aggregation and abuse, though as current events show, there will always be ways for power-hungry people to game the system (and the best way to game the system is to run it and change the rules in your favor, creating the illusion that you still abide by them).
I always felt that the best system would be one of two extremes: 1.) a benevolent dictator (friendly superintelligence?) or 2.) massively dece...
Did Einstein try to do the impossible? No, yet looking back it seems like he accomplished an impossible (for that time) feat, doesn't it? So what exactly did he do? He worked on something he felt was 1.) important and, probably more to the point, 2.) something he was passionate about.
Did he run the probabilities of whether he would accomplish his goal? I don't think so; if anything, he used the fact that the problem had not been solved so far, and was of such difficulty, only to fuel his curiosity and desire to work on it even more. He worked at it eve...
I think most people's feedback threshold requires some return on their efforts within a relatively short time period. It takes monk-like patience to work on something indefinitely without any intermediate returns. So I don't think the point in contention is whether people are willing to make extraordinary effort; it is whether they are willing to make extraordinary effort without extraordinary returns within a time span relative to their feedback threshold. Even in Eastern cultures, where many people believe that enlightenment in the strong sense is possible by meditating your whole life, there is a reason why there are only a few practicing monks.
On the existential question of our pointless existence in a pointless universe, my perspective tends to oscillate between two extremes:
1.) In the more pessimistic (and currently the only rationally defensible) case, I view my mind and existence as just a pattern of information processing on top of messy organic wetware, and that is all 'I' will ever be. Uploading is not immortality; it's just duplicating that specific mind pattern at that specific instant. An epsilon unit of time after the 'upload' event, that mind pattern is no longer 'me' and will ...
I'm relatively new to this site and have been trying to read the backlog this past week, so maybe I've missed some things, but from my vantage point it seems like what you are trying to do, Eliezer, is come up with a formalized theory of friendly AGI that will later be implemented in code using, I assume, current software development tools on current computer architectures. Also, your approach to this AGI is some sort of Bayesian optimization process that is 'aligned' properly so as to 'level up' in such a way as to become and stay 'friendly' or benevolent toward...
Again, ambiguous language seems to derail the conversation. I'm sure she doesn't mean stop caring about Africa, turn a blind eye, go about your way, and we'll take care of ourselves (though the data may suggest that such a course of action might have been more productive). She means stop blindly donating money and goods that at first seem to help but in reality do more harm than good, with the exception of satisfying the donors' commiseration. It would follow that she would love for people to think of more rational ways to help, to think about the end results of charity more than the act of being charitable.