haig2

Again, ambiguous language seems to derail the conversation. I'm sure she doesn't mean stop caring about Africa, turn a blind eye, go about your way, and we'll take care of ourselves (though the data may suggest that such a course of action might have been more productive). She means stop blindly donating money and goods that at first seem to help but in reality do more harm than good, with the exception of satisfying the donor's sense of commiseration. It would follow that she would love for people to think of more rational ways to help, to think about the end results of charity more than the act of being charitable.

haig2

Altruism doesn't only mean preventing suffering; it also means increasing happiness. If all suffering were ended, altruists would still have a purpose in providing creativity, novelty, happiness, etc. Suffering then becomes not experiencing unthinkable levels of [insert positive emotion here], and philanthropists will be devoted to ensuring that all sentient entities experience all they can. The post-singularity Make-a-Wish Foundation would experience rapid growth and expand both its services and its volunteer ranks as it operates full-time with repeat customers.

haig2

Doesn't this line of thinking make the case for Intelligence Augmentation (IA) over FAI? Let me qualify that when I say IA, I really mean friendly intelligence augmentation, as opposed to friendly artificial intelligence. If you could 'level up' all of humanity to the wisdom and moral ethos of 'friendliness', wouldn't that be the most important step to take first and foremost? If you could reorganize society and reeducate humans so as to make a friendly system at our current level of scientific knowledge and technology, that would almost (not entirely, but as best as we can) cut the probability of existential threats to a minimum and allow for a sustainable, eudaimonic increase of intelligence toward a positive singularity outcome. Yes, that is a hard problem, but surely not harder than FAI (probably a lot less hard). It will probably take generations, and we might have to take a few steps backward before we take further steps forward (and non-existential catastrophes might provide those backward steps regardless of our choosing), but it seems like the best path. The only reasons to choose an FAI plan are that you (1) think an existential threat is likely to occur very soon, (2) want to be alive for the singularity and don't want to risk cryonics, or (3) just fancy the FAI idea for personal, non-rational reasons.

haig2

What you describe as targets over '4D states' reminds me of Finite and Infinite Games by James Carse. For example, playing a game of basketball with a winner and a loser after an hour of play is a finite game, but the sport of basketball overall is an infinite game. Likewise, playing a specific video game to reach a score or pass the final level is a finite game, while being a 'gamer' is an infinite game, allowing ever more types of gaming to take place.
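
A toy sketch of how I read the distinction (all names and numbers are made up for illustration, not taken from Carse): a finite game has a terminal win condition over a bounded state, while an infinite game has no terminal state and only keeps generating more play.

```python
from itertools import islice

def finite_game(target_score: int = 21) -> str:
    """A finite game: play stops the moment the win condition is met."""
    scores = {"home": 0, "away": 0}
    while max(scores.values()) < target_score:
        scores["home"] += 2          # stand-ins for actual play
        scores["away"] += 3
    return max(scores, key=scores.get)   # ends with a winner and a loser

def infinite_game():
    """An infinite game: no terminal state, the aim is to keep play going."""
    known_games = ["basketball"]
    while True:                          # never resolves to a winner
        yield list(known_games)
        known_games.append(f"new_game_{len(known_games)}")  # play begets more play

print("finite winner:", finite_game())
for snapshot in islice(infinite_game(), 3):   # an infinite game can only be sampled
    print("still playing:", snapshot)
```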

haig2

Prediction can't be anything but a naive attempt at extrapolating past and current trends out into the future. Most of Kurzweil's accurate predictions are just trends about technology that most people can easily notice. Social trends are much more complex, and those predictions of Kurzweil's are off. Also, the occasional black swan is unpredictable by definition, and is usually what causes the most significant changes in our societies.

I like how sci-fi authors talk about their writings as not predicting what the future is going to look like (that's impossible and getting more so), but as using the future to critique the present.

Lastly, Alan Kay's quote always comes in handy when talking about future forecasting: "The best way to predict the future is to invent it."

haig2

It is interesting that no one from this group of empirical materialists has suggested looking at this problem from the perspective of human physiology. If I tried painting a house for hours on end I would need to rest: my hand would be sore, I'd be weak and tired, and I would generally lack the energy to continue. Why would exercising your brain be significantly different? If Eli is doing truly mentally strenuous work for hours, it is not simply a problem of willpower, but of mental energy. Maintaining a high level of cognitive load will physically wear you out. The US military is experimenting with fNIRS-based neuroimaging devices to see if they can measure how much cognitive load they can put on workers in high-performance mental situations, the same way you would measure how much weight a person can lift or how far someone can run.

If the problem were that he could not get going at all, then it would be more of a psychological problem such as procrastination. But it seems he just wants to sustain long stretches of high-performance cognitive work, which unfortunately the brain cannot do. Switching to watching a video or browsing the web is your brain stopping the run and walking until it has rested enough.

haig2

How do periods of stagnant growth, such as extinction-level events in Earth's history, affect the graphs? As the dinosaurs went extinct, did we jump straight to the start of the mammalian s-curve, or was there a prolonged growth plateau that, when averaged out in the combined s-curve meta-graph, doesn't show up as significant?

A singularity-type phase shift being so steep, even if growth were to grind down in the near future and stay stagnant for hundreds of years, wouldn't the meta-graph still show an overall fit when averaged out, if the singularity occurred after some global catastrophe?

I guess I want to know what effect periods of <= 0 growth have on these meta-graphs.
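
To make the question concrete, here is a rough numerical sketch (purely illustrative parameters, not Kurzweil's actual data): grow a level at a constant rate, freeze it for a 300-year stall, and compare the long-horizon averaged trend with and without the stall.

```python
import numpy as np

def growth_series(years, rate=0.02, plateau=None):
    """Cumulative level growing at a constant rate, optionally frozen over a span."""
    levels = [1.0]
    for t in range(1, years):
        if plateau and plateau[0] <= t < plateau[1]:
            levels.append(levels[-1])                 # stagnation: zero growth
        else:
            levels.append(levels[-1] * (1 + rate))
    return np.array(levels)

horizon = 10_000                                      # long meta-graph horizon
smooth = growth_series(horizon)
stalled = growth_series(horizon, plateau=(5_000, 5_300))   # a 300-year stall

# Fit a straight line to log(level) vs. time: the "averaged-out" trend.
slope_smooth, _ = np.polyfit(np.arange(horizon), np.log(smooth), 1)
slope_stalled, _ = np.polyfit(np.arange(horizon), np.log(stalled), 1)

print(f"trend without plateau:   {slope_smooth:.5f} per year (log scale)")
print(f"trend with 300-yr stall: {slope_stalled:.5f} per year (log scale)")
# The slopes differ by only a few percent: a centuries-long stall is nearly
# invisible once averaged into a ten-millennium trend.
```

If that toy picture carries over, a long stagnation (or even a setback) would barely register on the meta-graph, which is exactly what makes me wonder how informative the averaged fit really is.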

haig2

I've only been reading this blog consistently for a few months, but if there weren't thoughtful mini-essay-style posts from EY, Hanson, or someone similar, I doubt I'd stay. I actually think a weekly frequency, as opposed to daily, would be slightly better, since my attention and schedule are increasingly being taxed. The most important value this blog provides is, first, the quality of the posts and, second, the quality of the comments and discussions pertaining to them. Don't create a community for the sake of creating a community; maintain quality at all costs. That is your competitive advantage. If that isn't likely, then it is better to freeze the site at its height and leave it for posterity than to tarnish it.

haig2

So are you claiming that Brooks' whole plan was, on a whim, to just do the opposite of what the neats had been doing up until then? I thought his inspiration for the subsumption architecture was nature, the embodied intelligence of evolved biological organisms, the only existence proof of higher intelligence we have so far. To me it seems the neats are the ones searching the larger design space, not the other way around. The scruffies have identified some kind of solution to creating intelligent machines in nature and are targeting a constrained design space inspired by it; the neats, on the other hand, are trying to create intelligence seemingly out of the Platonic world of forms.

haig2

Jeff Hawkins, in his book On Intelligence, says something similar to Eliezer: he says intelligence IS prediction. But Eliezer says intelligence is steering the future, not just predicting it. Steering is a behavior of agency, and if you cannot peer into the source code but can only see the behaviors of an agent, then intelligence would necessarily be a measure of how well it steers the future according to its preference functions. This is behaviorism, is it not? I thought behaviorism had been deprecated as a useful framework in the cognitive sciences?
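
A toy sketch of the distinction as I understand it (the world model, preference function, and numbers are all made up for illustration): a predictor only forecasts the next state, while a steering agent uses those forecasts plus a preference function to pick actions that pull the future toward preferred states.

```python
def predict(state: float, action: float) -> float:
    """World model: forecast the next state given a state and an action."""
    return state + action                 # deliberately trivial dynamics

def preference(state: float, target: float = 10.0) -> float:
    """Higher is better: prefer states close to the target."""
    return -abs(state - target)

def steer(state: float, actions=(-1.0, 0.0, 1.0)) -> float:
    """Pick the action whose predicted outcome scores best under the preference."""
    return max(actions, key=lambda a: preference(predict(state, a)))

state = 0.0
for _ in range(12):
    state = predict(state, steer(state))   # act, then the world updates
print(state)   # the trajectory is pulled to, and held at, the target of 10.0
```

The predictor alone never moves anything; only the preference-driven choice of actions makes the future land where the agent wants it, which is why judging intelligence from the outside ends up being a judgment about steering.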

I can see where Eliezer is going with all this. The most moral/ethical/friendly AGI cannot take orders from any human, let alone be modeled on human agency to a large degree itself, and we also definitely do not want this agency to be a result of the same horrendous process of natural selection red in tooth and claw that created us.

That rules out an anthropomorphic AI, rules out evolution through natural selection, and rules out an unchecked oracle/genie-type wish-granting intelligent system (though I personally feel that a controlled (friendly?) version of the oracle AI is the best option, because I am skeptical of Eliezer or anyone else coming up with a formal theory of friendliness imparted to an autonomous agent). (Can an oracle-type AI create a friendly AI agent? Is that a better path towards friendliness?)

Adam's comment above is misplaced, because I think Eliezer's recursively self-improving friendly intelligence optimization is a type of evolution, just not as blind as the natural selection that has played out through natural history on our Earth.
