In response to Timeless Control
Comment author: Phillip_Huggan 08 June 2008 01:43:32AM 0 points [-]

Er, to try to simplify my above point: in my model, energy (say, an atom) at time-sequence t1 sums up all its interactions with the rest of its local universe (such as a CNS, if it is a brain atom), and this "calculation" affects the weighting of the sick-of-ice-cream t2, t2a, t2b world-lines. In claiming MWI is a block universe, you are accepting that t1 ping-pongs to the subsequent split world-lines t2, t2a, t2b without any "calculation" as described.

Ultimately it is a question of what limits are imposed on the splitting off of new world-lines in the multiverse. The speed of light, yes. I don't see why the physics of mind couldn't also qualify.

In response to Timeless Control
Comment author: Phillip_Huggan 08 June 2008 01:26:43AM -1 points [-]

"In Thou Art Physics, I pointed out that since you are within physics, anything you control is necessarily controlled by physics."

I could just as easily argue that since I'm within my past self's future light cone, anything I control is/was necessarily controlled by (a younger) me. In both cases we are playing with words and muddying the waters rather than learning or teaching.

I don't see why you can't just reverse the logic and claim that since everything in my mind is controlled by physics, thought is an act of my free will. I don't believe in strong free will. But I do believe that by the time a toddler can form ideals that aren't real (desires ice cream), some free will is already at work. The theory of MWI may be deterministic (math is not subject to General Relativity, and the "deterministic" in this description has nothing to do with the "deterministic" used to describe human actions), but playing with English-language words suggests actors can't choose their world-lines by using the physics of their minds to cascade synchronized neural firing patterns that activate the parts of our brains producing minds. Maybe there is no free will, but I'd need to see a convincing theory of consciousness absent circular reasoning.

The Plinko disc may fall deterministically, but if the Plinko chip had a human CNS and accurate memories of past drops, I bet it might try to rotate into a preferred fall path, and if the Plinko chip based its decision on reflected ideals, I'd say there is some free will there (neuron firing seems to be at a small enough scale to harness some of the quantum-spooky-stuff that causes universes to split off, for instance). I think our brains can control the % of world-lines that decide whether to binge eat ice cream. Equating a block universe to MWI assumes there is an end state where the total ratio of all time-space co-ordinates is known. In reality, this end state does not exist (time breaks down outside reality, as when forming the mathematical concept of a block universe). There are many random events that control which world-line an individual experiences, but I don't see why volitions can't be among them. Few people defending free will really mean to defend their right to bring about their own birth.

In response to That Alien Message
Comment author: Phillip_Huggan 23 May 2008 01:20:28AM -1 points [-]

Patrick, my quantum-key-encrypted supercomputer (assuming this is what is needed to build an AGI) is an intranet and not accessible by anyone outside the system. You could try to corrupt the employees, but that would be akin to trying to pursue a suitcase nuke: 9 out of 10 buyers are really CIA or whoever. Has a nuclear submarine ever been hacked? How will an AGI, even with the resources of the entire Multiverse, hack into a quantum-encrypted communications line (a laser and fibre optics)? It can't.
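
The intuition behind the eavesdropping claim can be sketched with a toy BB84 simulation. This is a classical Monte Carlo sketch of the protocol's statistics only, not real quantum physics, and all names in it are mine rather than from the comment: an intercept-resend attacker who measures and re-sends each qubit raises the error rate in the sifted key to roughly 25%, which the communicating parties can detect by comparing a sample.

```python
import random

def bb84_run(n_bits, eavesdrop, rng):
    """Return the sifted-key error rate for one toy BB84 exchange."""
    # Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    channel = list(zip(alice_bits, alice_bases))

    if eavesdrop:
        # Eve measures each qubit in a random basis; measuring in the
        # wrong basis randomizes the bit she re-sends, disturbing the state.
        intercepted = []
        for bit, basis in channel:
            eve_basis = rng.randint(0, 1)
            eve_bit = bit if eve_basis == basis else rng.randint(0, 1)
            intercepted.append((eve_bit, eve_basis))
        channel = intercepted

    # Bob measures each incoming qubit in his own random basis; a basis
    # mismatch with the sender's basis yields a random result.
    bob_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bits = [bit if b_basis == basis else rng.randint(0, 1)
                for (bit, basis), b_basis in zip(channel, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases agree,
    # then estimate the error rate over the sifted key.
    sifted = [(a, b) for a, ab, b, bb in
              zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return errors / len(sifted)

rng = random.Random(0)
print(bb84_run(10000, eavesdrop=False, rng=rng))  # 0.0: agreeing bases give agreeing bits
print(bb84_run(10000, eavesdrop=True, rng=rng))   # roughly 0.25: Eve is detectable
```

With no eavesdropper, matching bases guarantee matching bits, so the sifted error rate is exactly zero; with an intercept-resend attack, Eve picks the wrong basis half the time, and each of those positions flips Bob's bit half the time, giving the ~25% signature.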

I'm trying to brainstorm exactly what physical infrastructures would suffice to make an AGI impotent, assuming the long term. For instance, put all protein products in a long queue with neutron bombs nearby and inspect every product protein-by-protein... just neutron-bomb all protein products if an anomaly is detected. Same for the 2050 world's computer infrastructures: have computers all wired to self-destruct, with backups in a bomb shelter. If the antivirus program (which might not even be necessary if quantum computers are ubiquitous) detects an anomaly, there go all the computers. I'm smarter than a grizzly or Ebola, but I'm still probably dead against either. That disproves your argument. More importantly, drafting such defenses probably has a higher EV of societal good than defending against AGI alone, because humans will almost certainly try these sorts of attacks.

I'm not saying every defense will work, but please specifically disprove the defenses I've written. It might help e-security some day. There is the opportunity here to do this, as I don't know that these conversations are happening in many other forums, but singularitarians are dropping the ball because of a political cognitive bias: they want to build their software, like it or not.

Another defense: once/if a science of AGI is established, determine the minimum run-time needed on the most powerful computers not under surveillance to make an AGI. Have all computers built to radioactively decay before that run-time is achieved. Another run-time defense: don't allow distributed computing applications to use beyond a certain number of nodes. I can understand dismissing the after-AGI defenses, but to categorically dismiss the pre-AGI defenses...

My thesis is that the computer hardware required for AGI is so advanced that the technology of the day can ensure surveillance wins, if it is desired not to construct an AGI. Once you get beyond the cognitive bias that thought is computation, you start to appreciate how far into the future AGI is, and that the prime threat of this nature is from conventional AI programmes.

bambi, I don't know anything about hacking culture, but I doubt kids need to read a decision-theory blog to learn what a logic bomb is (whatever that is). Posting specific software code, on the other hand...

In response to That Alien Message
Comment author: Phillip_Huggan 22 May 2008 04:59:06PM 1 point [-]

...as for the 3rd-last paragraph: yes, once a 2008 AGI has the ability to contact 2008 humans, humanity is doomed if the AGI deems fit. But I don't see why a 2050 world couldn't merely use quantum-encryption communications, monitored for AGI. And monitor supercomputing applications. Even the specific method describing how the AGI gets protein nanorobots might be flawed in a world certainly ravaged by designer-pandemic terrorist attacks. All chemists (and other 2050 WMD professions) are likely to be monitored with RF tags. All labs, even the types of at-home PCR biochemistry of today, are likely to be monitored. Maybe there are other methods the Bayesian AGI could use to escape (such as?). Wouldn't X-raying mail for beakers, and treating the protein medium agar like plutonium is now treated, suffice? Communications-jamming equipment uniformly distributed throughout Earth might permanently box an AGI that somehow (magic?!) escapes a supercomputer application screen. If the AGI needs computer hardware/software made in the next two or three decades, it might be unstoppable. Beyond that, humans will already be using such AGI hardware requirements to commission WMDs, and the muscular NSA of 2050 will already be attentive to such phenomena.

In response to That Alien Message
Comment author: Phillip_Huggan 22 May 2008 04:28:58PM 0 points [-]

Two conclusions from the specific example: 1) The aliens are toying with us. This is unsettling in that it is hard to do anything good to prove our worth to aliens that can't meet even a human level of ethics. 2) The aliens/future-humans/creator(s)-of-the-universe are limited in their technological capabilities. Consider Martians who witness the occasional rover land. They might be wondering what it all means when we really have no grand scheme; we are merely trying not to mix up Imperial and Metric units in landing. Such precise stellar phenomena are maybe evidence of a conscious creator, in that they suggest an artificial limit being run up against by the signals (which may themselves be the conscious creator). A GUT would determine whether the signal is "significant" in terms of physics. Inducing ET via Anthropic Principle reasoning gives me a headache. I much prefer to stick to trying to fill in the blanks of the Rare Earth hypothesis.

In response to Einstein's Speed
Comment author: Phillip_Huggan 22 May 2008 02:08:32AM 0 points [-]

Typo. Sorry. Should say GUT where I wrote lasers. I'll proofredafjkdsf all my posts in future.

In response to Einstein's Speed
Comment author: Phillip_Huggan 22 May 2008 02:02:42AM 0 points [-]

"I have to find an actual physicist to discuss this with, but there appears to be nothing wrong with Einstein's quest for a unified theory; he simply didn't have the prerequisite information of QM at the time (Feynman, Dyson, etc. didn't develop renormalization until the 1940s). MWI wasn't proposed until several years after Einstein's death."

I can't recall what renormalization is. I think there is something wrong with Einstein's quest; it was akin to Aristotle's atom theory. The Sung Dynasty was about the earliest that atoms could be empirically uncovered, and a GUT is about as far away from Einstein in terms of knowledge base. I actually think Einstein's biggest accomplishment was political: writing to FDR about the possibility of a nuke. Einstein is responsible in this regard for a year of robotics, car, and computer progress, along with tens of millions of present Japanese and American lives.

I think the two characteristics that allowed Einstein to make three huge discoveries (Brownian motion, SR, GR) were his rich family, which got him his patent clerk job, and his willingness to be aloof and not follow the Popper-ian knowledge base of the time. I doubt he was the first to notice something wrong with phlogiston, but no one else had the spare time and the determination to retool the knowledge base from ground zero (has anyone else ever taken an eight-year diversion into mathematics to solve a single physics problem?). I don't think he had the same respect for quantum theory, despite founding it, that he did for GR. It seemed like he was trying to graft "quantum effects that functioned as non-local wormholes" onto GR, rather than genuinely finding a GUT by respecting quantum theory. No doubt he would have immediately championed MWI, but it seems like he was genuinely trying to undercut the Copenhagen Interpretation rather than building upon it (this is in response to EY's MWI comment in the thread starter). All I'm saying is that if he had realized the limits of his deductive method, he might have made even more contributions in his later years and been the greatest thinker ever, instead of sharing the mantle with a handful of others.

Maybe the most cutting-edge scientific field is genetics. Someone might be able to deduce a science of the behaviour of animal-human hybrids by studying the input animal temperaments and physiologies, but a better avenue would be to become a protein-folding scientist and learn how to cure cancer or diabetes or something. I don't want to speak for Einstein's study strengths and weaknesses, but maybe we'd have optical computers now if Einstein had transitioned to optics instead of lasers. I can't think of any physical knowledge areas now that are in as bad shape as cosmology was pre-Einstein. The next Einstein will come from the social science fields, probably (which is why I mentioned M.Yunus). With computers, everything in physics is research teams nowadays. Maybe M.Lazaridis funding a quantum computer research park is the closest anyone now can come to advancing a theoretical physics field as much as Einstein (cosmology) did.

In response to Einstein's Speed
Comment author: Phillip_Huggan 21 May 2008 06:17:37PM 0 points [-]

"As of now, at least, reasoning based on scanty evidence is something that modern-day science cannot reliably train modern-day scientists to do at all."

By definition, scientists must use induction. I meant to say thinkers. I don't know why thinkers mostly use induction now: maybe because the scientific funding model seems to work okay, or because once you induce too far ahead, the content becomes useless if new research deviates the course a bit. For instance, all GUT/TOE physicists use Einstein-ian deduction in their elegant models. Einstein was lucky to be redeemed so quickly, in that novel observatories were just being constructed. It is more expensive (maybe risky too) to turn the galaxy into a giant particle accelerator. In the social science fields, there is deduction. M.Yunus stimulated microfinance with a $26? loan by deducing that collateral isn't a primary motivator in debt repayment (primary are entrepreneurial drive and quality-of-living gains). Drexler's nanotechnology vision was deduction. Many political programmes are deductions.

I agree with the general body content that deduction is underappreciated. On reflection, the reason may be that an act of deduction almost always occurs in fields where there is no competing induction (i.e. R.Freitas's simulations probably render much of E.Drexler's deductions obsolete). Thus deduction is a proxy to unearth low-hanging fruit? Deductive GUTs are fine, but will certainly be eclipsed by induced particle-accelerator engineering blueprints one day. Deduction is free and addresses the issue of hypothesis generation somewhat.

I disagree strongly with the suggestion that Einstein was a proponent of MWI. In fact, the overemphasis on deduction (defined here as induction from few a priori assumptions) caused him to waste the remaining 2/3 of his life attempting to disprove quantum phenomena, no?

Ignoring ethics, cloning people for whatever reason will only secure one of three character-trait determinants (even less, considering genetic mutations) for whatever eugenics you are practising. There is nurture, and there is personal inspiration (which could probably be defined here as intensity of rationality). If there is no Earth Summit in 1992, I probably don't pick up a bunch of environmental pamphlets one weekend then. My decade-later clone, exposed to Fox News, maybe even exacerbates the leading extinction threat. Maybe if I don't grow up with cats, I don't make the inspired choice to value living beings; maybe my Fox News clone values killing Muslims and other "infidels" instead? If Eliezer doesn't read whichever sci-fi story inspired him, does he make the choice to focus upon AGI?

Comment author: Phillip_Huggan 20 May 2008 04:09:50PM 1 point [-]

My thoughts on the future of mankind:

1) Near-term primary goal: maximize productive person/yrs.

2) Rearrange capital flows to prevent productive person/yrs from being lost to obvious causes (i.e. UN Millennium Development Goals and invoking sin-taxes), with effort to offer pride-saving win-win situations. Re-educate said workforce. Determine optimum resource allocation towards civilization redundancy efforts based upon negative-externality-accounting revised (higher) economic growth projections. Isolate states exporting anarchy or not attempting to participate in the globalized workforce. Begin measuring the purchasing-parity-adjusted annual cost to provide a Guaranteed Annual Income (GAI) in various nations.

3) Brainstorming of industries required to maximize longevity, and to handle technologies and wield social systems essential for safely transitioning first to a medical/health, then to a leisure society.

4) Begin reworking bilateral and global trade agreements to reward actors who subsequently trend towards #3. Begin building a multilateral GAI fund to reward actors who initiate #5.

5) Mass education of society towards health/medical and other #3 sectors. Begin dispensing GAI to the poor who are trending towards education/employment relevant to #3 sectors.

6) Conversion of non-essential workforces to health/medical R+D and other #3 sectors. Hopefully the education GAI load will fall and the fund can focus upon growing to encompass a larger GAI population base in anticipation of the ensuing leisure society.

7) Climax of the medical/health R+D workforce.

8) Mature medical ethics needed. Mature medical AI safeguards needed. Education in all medical-AI-relevant sectors. Begin measuring AI medical R+D advances vs. human researcher medical R+D advances.

9) Point of inflection where it becomes vastly more efficient to develop AI medical R+D systems rather than educating researchers (or not, if something like real-time human trials bottlenecks software R+D). Subsequent surplus medical/health labour-force necessitates a global GAI by now at the latest. AI medical R+D systems become a critical societal infrastructure, and human progress in the near term will be limited by the efficacy and safety (i.e. from computer viruses) of these programs.

10) Leisure society begins. Diminishing returns from additional resource allocations towards AI medical R+D. Maximum rate of annual longevity gains.

11) Intensive study of mental health problems in preparation for #13. Brainstorming of surveillance infrastructures needed to wield engineering technologies as powerful as Drexler-ian nanotechnology. Living spaces will resemble the nested security protocols of a modern microbiology lab. Potentially powerful occupations and consumer goods will require increased surveillance. Brainstorming metrics to determine the most responsible handlers of a #13 technology (I suggest something like the CDI Index as a ranking).

12) Design blueprints for surveillance tools like quantum-key encryption and various sensors must be ready either before powerful engineering technologies are developed, or be among the first products created using the powerful technology. To maintain security for some applications it may be necessary to engineer entire cities from scratch. Sensors should be designed to maximize human privacy rights. There is a heightened risk of WWIII from this period on until just after the technology is developed.

13) A powerful engineering technology is developed (or not). The risk of global tyranny is highest since 1940. Civilization-wide surveillance achieved to ensure no WMDs are unleashed and no dangerous technological experiments are run. A technology like the ability to cheaply manufacture precision diamond products could unleash many sci-fi-ish applications, including interstellar space travel and the hardware required for recursively improving AI software (AGI). This technology would signal the end of capitalism and patent regimes. A protocol for encountering technologically inferior ETs might be required. Safe AGI/AI software programs would be needed before desired humane applications should be used. Need mature sciences of psychology and psychiatry to assist the benevolent administration of this technology. Basic human rights, goods, and services should be administered to all where tyrannical regimes don't possess military parity.

14) Weaponry, surveillance, communications, and spacecraft developed to expand the outer perimeter of surveillance beyond the Solar System. Twin objectives: to ensure no WMDs such as rogue AGI/AI programs, super-high-energy physics experiments, kinetic-impactor meteors, etc., are created; and to keep open the possibility of harvesting resources required to harness the most powerful energy resources in the universe. The latter objective may require the development of physics experiments and/or AGI that conflicts with the former objective. The latter objective will require a GUT/TOE. Developing a GUT may require the construction of a physics experimental apparatus that should be safe to use. Need a protocol for dealing with malevolent ETs at approximate technological parity with humanity. Need a protocol to accelerate the development of dangerous technologies like AGI and Time Machines if the risks from these are deemed less than the threat from aliens; there are many game-theoretic encounter scenarios to consider. This protocol may be analogous to how to deal with malevolent/inept conscious or software actors that escape the WMD surveillance perimeter.

16) If mapping the energy stores of the universe is itself safe/sustainable, or if using the technologies needed to do so is safe, begin expanding a universe energy survey perimeter, treating those who attempt to poison future energy resources as pirates.

17) If actually harnessing massive energy resources or using the technologies required to do so is dangerous, a morality will need to be defined that determines a tradeoff of person/yrs lost vs. potential energy resources lost. The potential to unleash Hell Worlds, Heavens, and permanent "in-betweens" is of prime consideration. Assuming harnessing massive energy resources is safe (doesn't end the local universe) and holds a negligible risk of increasing the odds of a Hell World or "in-betweens", I suggest at this point invoking a Utilitarian system like Mark Walker's "Angelic Hierarchy", whereby from this point on, conscious actors begin amassing "survival credits". As safe energy resources dry up towards the latter part of a closed universe (or when atoms decay), trillions of years from now, actors who don't act to maximize this dwindling resource base will be killed to free up resources required to later mine potentially uncertain/dangerous massive energy resources. Same thing if the risk of unleashing Hell Worlds or destroying reality is deemed too high to pursue mining the energy resource: a finite resource base suggests those hundred-trillion-year-old actors with high survival-credit totals live closer to the end of the universe, as long as enforcing such a morality is itself not energy intensive. A Tipler-ian Time Machine may be the lever here; using it or not might determine the net remaining harvestable energy resources and the quality-of-living hazard level in taking different courses of action.

18a) An indefinite Hell World.

18b) An indefinite Heaven World.

18c) End of the universe for conscious actors, possibly earlier than necessary because of a decision that fails to harness a dangerous energy source. If enforcing a "survival credit" administrative regime is energy intensive, the Moral system will be abandoned at some point and society might degenerate into cannibalism.

Comment author: Phillip_Huggan 20 May 2008 06:05:44AM 0 points [-]

For what it's worth I'm posting my thoughts about the future of mankind on B.Goertzel's AGIRI forum tomorrow. The content may be of interest to the FHI.
