Would you be willing to show a reference or back-of-the-envelope calculation for this?
The last time I checked, the manufacture of large photovoltaic panels was energy-intensive and low-yield (their current price suggests that these problems persist.) They were also rated for a useful life of around two decades.
I do not believe that these problems have been corrected in any panel currently on the market. There is no shortage of vaporware.
Could you give more examples of things you like about Mathematica?
1) Mathematica's programming language does not confine you to a particular style of thinking. If you are a Lisp fancier, you can write entirely Lispy code. Likewise Haskell. There is even a capability for relatively painless dataflow programming (all three styles are sketched below.)
2) Wolfram Inc. took great pains to make interfacing with the outside world from within the app as seamless as possible. For example, you can suck a spreadsheet file directly into a multidimensional array. There is import and export capabil...
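To make point 1 (and the spreadsheet claim in point 2) concrete, here is a minimal sketch in Mathematica itself; the spreadsheet file name is hypothetical:

    (* Lisp-flavored: everything is an expression, mapped and folded freely *)
    Fold[Plus, 0, Map[#^2 &, Range[10]]]     (* sum of squares: 385 *)

    (* Haskell-flavored: definition by pattern-matching rewrite rules *)
    fib[0] = 0; fib[1] = 1;
    fib[n_Integer] := fib[n - 1] + fib[n - 2]

    (* dataflow-flavored: chain the steps with the postfix // operator *)
    Map[#^2 &, Range[10]] // Total           (* 385 again *)

    (* point 2: a spreadsheet pulled straight into a nested array *)
    data = Import["sales.xlsx"];   (* hypothetical file; one matrix per sheet *)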
Do you mean that organizations aren't very good at selecting the best person for each job?
Actually, no. What I mean is that human society isn't very good at realizing that it would be in its best interest to assign as many high-IQ persons as possible the job of "being themselves" full-time and freely developing their ideas - without having to justify the short-term benefit of their work.
Hell, forget "as many as possible", we don't even have a Bell Labs any more.
But there is no evidence that any pill can raise the average person's IQ by 10 points
Please read this short review of the state of the art of chemical intelligence enhancement.
We probably cannot reliably guarantee 10 added points for every subject yet. Quite far from it, in fact. But there are some promising leads.
if some simple chemical balance adjustment could have such a dramatic effect on fitness
Others have made these points before, but I will summarize: fitness in a prehistoric environment is a very different thing from fitness in the world of ...
I will accept that "AGI-now" proponents should carry the blame for a hypothetical Paperclip apocalypse when Friendliness proponents accept similar blame for an Earth-bound humanity flattened by a rogue asteroid - or leveled by any of the various threats that a superintelligence (or, say, the output of a purely human AI research community unburdened by Friendliness worries) might be able to counter. I previously gave Orlov's petrocollapse as yet another example.
I cannot pin down this idea as rigorously as I would like, but there seems to exist such a trait as liking to think abstractly, and this trait is mostly orthogonal to IQ as we understand it (although a "you must be this tall to ride" effect applies.) With that in mind, I do not think that any but the most outlandishly powerful and at the same time effortless intelligence amplifier will be of much interest to the bulk of the population.
ASCII - the onus is on you to give compelling arguments that the risks you are taking are worth it
Status quo bias, anyone?
I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I'm thinking of Yudkowsky's Super-Happies.)
It's highly non-obvious that it would have significant effects
The effects may well be profound if sufficiently increased intelligence will produce changes in an individual's values and goal system, as I suspect it might.
At the risk of "argument from fictional evidence", I would like to bring up Poul Anderson's Brain Wave, an exploration of this idea (among others.)
Software programs for individuals.... prime association formation at a later time.... some short-term memory aid that works better than scratch paper
I have been obsessively researching this idea for several years. One of my conclusions is that an intelligence-amplification tool must be "incestuously" user-modifiable ("turtles all the way down", possessing what programming language designers call reflectivity) in order to be of any profound use, at least to me personally.
...Or just biting the bullet and learning Mathematica to an expert level.
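To make "reflectivity" concrete, consider Mathematica (which is why I keep returning to it): a program's own definitions are ordinary data, inspectable and rewritable from inside the running system. A minimal sketch - the function is mine, purely for illustration:

    square[x_] := x^2
    DownValues[square]
    (* -> {HoldPattern[square[x_]] :> x^2} *)

    (* the tool rewritten from inside the tool: *)
    DownValues[square] = {HoldPattern[square[x_]] :> x^3};
    square[2]                      (* now returns 8 *)

A genuinely reflective amplifier would extend this property to every layer of itself, interface included.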
I have located a paper describing Lenat's "Representation Language Language", in which he wrote Eurisko. Since no one has brought it up in this thread, I will assume that it is not well-known, and may be of interest to Eurisko-resurrection enthusiasts. It appears that a somewhat more detailed report on RLL is floating around public archives; I have not yet been able to track down a copy.
How are truly fundamental breakthroughs made?
Usually by accident, by one or a few people. This is a fine example.
ought to be more difficult than building an operating system
I personally suspect that the creation of the first artificial mind will be more akin to a mathematician's "aha!" moment than to a vast pyramid-building campaign. This is simply my educated guess, however, and my sole justification for it is that a number of pyramid-style AGI projects of heroic proportions have been attempted and all failed miserably. I disagree with Le...
Do you agree that you hold a small minority opinion?
Yes, of course.
Do you have any references where the arguments are spelled out in greater detail?
I was persuaded by the writings of one Dmitry Orlov. His work focuses on the impending collapse of the U.S.A. in particular, but I believe that much of what he wrote is applicable to the modern economy at large.
the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research
Science as priestcraft: a historic dead end, the Pythagoreans and the few genuine finds of the alchemists notwithstanding. I am astounded by the arrogance of people who consider themselves worthy of membership in such a secret club, believing themselves more qualified than "the rabble" to decide the fate of all mankind.
if you are not dead as a result
I am profoundly skeptical of the link between Hard Takeoff and "everybody dies instantly."
ad-hoc tinkering is expected to lead to disaster
This is the assumption which I question. I also question the other major assumption of Friendly AI advocates: that all of their philosophizing and (thankfully half-hearted and ineffective) campaign to prevent the "premature" development of AGI will lead to a future containing Friendly AI, rather than no AI plus an earthbound human race dead from natural causes.
Ad-...
Thank you for the link.
I concede that a post-collapse society might successfully organize and attempt to resurrect civilization. However, what I have read regarding surface-mineral depletion and the mining industry's forced reliance on modern energy sources leads me to believe that if our attempt at civilization sinks, the game may be permanently over.
I view the teenager's success as simultaneously more probable and more desirable than that of a centralized bureaucracy. I should have made that more clear. And my "goal" in this case is simply the creation of superintelligence. I believe the entire notion of pre-AGI-discovery Friendliness research to be absurd, as I already explained in other comments.
The logic of mutually assured destruction would be clear and compelling even to the general public
When was the last time a government polled the general public before plunging the nation into war?
Now that I think about it, the American public, for instance, has already voted for petrowar: with its dollars, by purchasing SUVs and continuing to expand the familiar suburban madness which fuels the cult of the automobile.
we have nuclear, wind, solar and other fossil fuels
Petrocollapse is about more than simply energy. Much of modern industry relies on petrochemical feedstock. This includes the production and recycling of the storage batteries which wind/solar enthusiasts rely on. On top of that, do not forget modern agriculture's non-negotiable dependence on synthetic fertilizers.
Personally I think that the bulk of the coming civilization-demolishing chaos will stem from the inevitable cataclysmic warfare over the last remaining drops of oil, rather than from direct effects of the shortage itself.
AGI is a really hard problem
It has successfully resisted solution thus far, but I suspect that it will seem laughably easy in retrospect when it finally falls.
If it ever gets accomplished, it's going to be by a team of geniuses who have been working on the project for years
This is not how truly fundamental breakthroughs are made.
Will they be so immersed in the math that they won't have read the deep philosophical tracts?
Here is where I agree with you - anyone both qualified and motivated to work on AGI will have no time or inclination to pontifi...
This is not how truly fundamental breakthroughs are made.
Hmm - now that you mention it, I realize my domain knowledge here is weak. How are truly fundamental breakthroughs made? I would guess that it depends on the kind of breakthrough - that there are some things that can be solved by a relatively small number of core insights (think Albert Einstein in the patent office) and some things that are big collective endeavors (think Human Genome Project). I would guess furthermore that in many ways AGI is more like the latter than the former, see below.
...Why
The intro section of my site (Part 1, Part 2) outlines some of my thoughts regarding Engelbartian intelligence amplification. For what I regard as persuasive arguments in favor of the imminence of petrocollapse, I recommend Dmitry Orlov's blog and dead-tree book.
As for my thoughts regarding AGI/FAI, I have not spoken publicly on the issue until yesterday, so there is little to read. My current view is that Friendly AI enthusiasts are doing the equivalent of inventing the circuit breaker before discovering electricity. Yudkowsky stresses the importance of ...
How is blindly looking for AGI in a vast search space better than stagnation?
No amount of aimless blundering beats deliberate caution and moderation (see the 15th-century China example) for maintaining technological stagnation.
How does working on FAI qualify as "stagnation"?
It is a distraction from doing things which are actually useful in the creation of our successors.
You are trying to invent the circuit breaker before discovering electricity; the airbag before the horseless carriage. I firmly believe that all of the effort currently put int...
How about thinking about ways to enhance human intelligence?
I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears to be capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.
If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.
I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves to be catastrophically stupid as a mass and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to create a universal weapon against viruses. Not to mention our failure of the ultimate planetary IQ test - space colonization.
catastrophic social collapse seems to require something like famine
Not necessarily. When the last petroleum is refined, rest assured that the tanks and warplanes will be the very last vehicles to run out of gas. And bullets will continue to be produced long after it is no longer possible to buy a steel fork.
R&D... efficient services... economy of scale... new technologies will appear
Your belief that something like present technological advancement could continue after a cataclysmic collapse boggles my mind. The one historical precedent we have ...
permanently put us back in the stone age
Exactly. The surface-accessible minerals are entirely gone, and pre-modern mining will have no access to what remains. Even meaningful landfill harvesting requires substantial energy and may be beyond the reach of people attempting to "pick up the pieces" of totally collapsed civilization.
resource depletion (as alluded to by RWallace) is a strong possible threat. But so is a negative singularity.
Resource depletion is as real and immediate as gravity. You can pick up a pencil and draw a line straight through present trends to a horse-and-cart world (or the smoking, depopulated ruins from a cataclysmic resource war.) The negative singularity, on the other hand, is an entirely hypothetical concept. I do not believe the two are at all comparable.
Would you have hidden it?
You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI. Especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded of the possibility of Lenat having danced on the edge of creating hard takeoff gives me more interest than ever before in a re-implementation.
Reading "value is fragile" almost had me persuaded that blindly pursuing AGI is wrong, but shortly after, &...
Would you have hidden it?
I hope so. It was the right decision in hindsight, since the Nazi nuclear weapons program shut down when the Allies, at the cost of some civilian lives, destroyed their source of deuterium. If they'd known they could've used purified graphite... well, they probably still wouldn't have gotten nuclear weapons in this Everett branch, but they might have somewhere else.
Before 2001 I would probably have been on Fermi's side, but that's when I still believed deep down that no true harm could come to someone who was only faithfully trying...
I was going to reply, but it appears that someone has eloquently written the reply for me.
I'd like to take my chances of being cooked vs. a world without furnaces, thermostats or even rooms - something I believe we're headed for by default, in the very near future.
Aside from that: if I had been following your writings more carefully, I might already have learned the answer to this. But: just why do you prioritize formalizing Friendly AI over achieving AI in the first place?
This was addressed in "Value is Fragile."
It seems to me that if any intelligence, regardless of its origin, is capable of wrenching the universe out of our control, it deserves it.
I don't think you understand the paperclip maximizer scenario. An UnFriendly AI is not necessarily conscious; it's just this device that tiles the light...
We can temporarily disrupt language processing through magnetically induced electric currents in the brain. As far as anyone can tell, the study subjects suffer no permanent impairment of any kind. Would you be willing to try an anosognosia version of the experiment?