All of Phillip_Huggan's Comments + Replies

Another example of a real-world Moral quandary that the real world would love H+ discussion lists to take on is the issue of how much medical care to invest in end-of-life patients. Medical advances will continue to make more expensive treatment options available. In Winnipeg, there was a case recently where a patient in a terminal coma had his family insist on not taking him off life support. In Canada in the last decade or so, the decision was based on a doctor's prescription. Now it also encompasses family and the patient's previous wishes. 3 doctors... (read more)

(ZMDavis wrote:) "But AGI is [...] not being a judge or whoever writes laws."

"If Eliezer turns out to be right about the power of recursive self-improvement, then I wouldn't be so sure."

Argh. I didn't mean that as a critique of EY's prowess as an AGI theorist or programmer. I doubt Jesus would've wanted people to deify him, just to be nice to each other. I doubt EY meant for his learning of philosophy to be interpreted as some sort of Moral code; he was just arrogant enough not to state he was sometimes using his list as a tool to develo... (read more)

Yes, EY's past positions about Morality are closer to Subhan's than Obert's. But AGI is software programming and hardware engineering, not being a judge or whoever writes laws. I wouldn't suggest deifying EY if your goal is to learn ethics.

"Why the obsession with making other people happy?"

Not obsessed. Just pointing out the definition of morality. High morality is making yourself and other people happy.

Phillip Huggan: "Or are you claiming such an act is always out of self-interest?" (D.Bider:) Such acts are. Stuff just is. Real reasons are often unknowable; and if known, would be trivial, technical, mundane.

That's deep.

"Stuff is. Fitting stuff that happens into a moral framework? A hopeless endeavor for misguided individuals seeking to fulfil the romantic notion t... (read more)

(Subhan wrote:) "And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds - science has never needed to postulate anything but evolution to explain any feature of human psychology -" Subhan: "Suppose there's an alien species somewhere in the vastness of the multiverse, who evolved from carnivores. In fact, through most of their evoluti... (read more)

0[anonymous]
There are a lot of clever ideas in this post, despite the harsh downvotes. You may have some misgivings about the extent to which, say, mental health issues may be a barrier to security clearances. It's more that people disqualify themselves by lying or failing to apply in the first place. Those who do get through and have issues are prisoners of their own misconceptions. Australia's protective security guidelines are based around subjective evaluations; see this. Caution, if you're spooked by getting tracked: note that this is a Word document on an Aus gov website. It also explicitly says that seeking help from mental health services shouldn't be the sole basis of exclusion, and the guidelines suggest that the opinion of a mental health professional should be given due consideration. This wasn't always the way things were done, at least in the US. The really contentious issue here is whether it is correct to privilege the hypothesis that those seeking mental health care are more likely to have worse judgment, reliability, or trustworthiness. Intuitions and stereotypes say yes. Research suggests that those seeking treatment are not any more violent. I'm not sure about those criteria specifically, but I suspect that there is far too much assumption of mental illness as a description of aberrant behaviour, rather than as an exclusive construct resilient to black swans, and that soon mental health screening in the military and intelligence fields will become subject to scrutiny by mental health activists, the same way other activists have scrutinised discrimination in security fields.

Sorry TGGP I had to do it. Now replace the word "charity" with "taxes".

(Constant quoted from someone:)"What we know about the causal origins of our moral intuitions doesn't obviously give us reason to believe they are correlated with moral truth."

Yes, but to a healthy intelligent individual not under duress, these causal origins (I'm assuming the reptilian or even mammalian brain centres are being referenced here) are much less a factor than is abstract knowledge garnered through education. I may feel on some basic level like killing someone that gives me the evil eye, but these impulses are easily subsumed by soci... (read more)

"I think the meaning of "it is (morally) right" may be easiest to explain through game theory."

Game theory may be useful here, but it is only a low-level efficient means to an end. It might explain social hierarchies in our past or in other species, it might explain the evolution of law, and it might be the highest up the Moral ladder some stupid or mentally impaired individuals can achieve. For instance, a higher Morality system than waiting for individuals to turn selfish before punishing them is to ensure parents aren't abusive... (read more)

The difference between duty and desire is that some desires might harm other people, while duty (you can weasel the definition into meaning Nazi duty, but then you are asking an entirely different question) always helps other people. "Terminal values" as defined are pretty weak. There are e=mc^2 co-ordinates that have maximized happiness values. Og may only be able to eat tubers, but most literate people are much higher on the ladder, and thus have a greater duty. In the future, presumably, the standards will be even higher. At some point ... (read more)

Wow, what a long post. Subhan doesn't have a clue. Tasting a cheeseburger like a salad isn't Morality. Morality refers to actions in the present that can initiate a future with preferred brain-states (the weaselly response would be to ask what these are, as if torture and pleasure weren't known, and initiate a conversation long enough to forget the initial question). So if you hypnotize yourself to make salad taste like cheeseburgers for health reasons, you are exercising Morality. I've got a forestry paper open in the other window. It is very dry, but ... (read more)

...The think-tank money would include futurism like SIAI and this blog's topics. For longevity research, I think the best way to promote it might be to screen which health/pharma/biotech companies spend the most on R+D in relevant sub-fields. Money would only come in handy to market such a portfolio as "boomer-ethical". I'd want to give R.Freitas money to do diamond surface chemistry computer sims, but given that they come down in price every year I wouldn't be sure of the optimal amount. "Think-tanks" is pretty vague. You'd want to look into the s... (read more)

I agree with A.Madden. If the question were phrased as $10 trillion in physical wealth that didn't exist before, it would be different. I wouldn't trust myself to manage more than a few hundred billion, and I'd destroy the other $9.6 trillion. Maybe a $75000 investment trust for myself and about twice that for family and friends. Most of my investment strategies (Grahamian Value modified to account for future demographic, geopolitical, cultural and technological trends) break down at such high valuations. I like the CDI index and I like P.Martin's initiative ... (read more)

I'm glad to see this was going somewhere. I'd say yes, if humans have free will, then an AGI could too. If not on present semiconductor designs, then with some 1cc electrolyte solution or something. But free will without the human endocrine system isn't the type of definition most people mean when they envision free will. But I suppose a smart enough AGI could deduce and brute-force it. Splitting off world-lines loses much of the fun without a mind, even if it can technically be called free will. I'd want to read some physics abstracts before commenting further about free will.

"Lets say we, as humans, placed some code on every server on the net that mimics a neuron. Is that going to become sentient? I have no idea. Probably not."

Ooo, even better, have the code recreate a really good hockey game. Have the code play the game in the demolished Winnipeg Arena, but make the sightlines better. And have the game between Russia and the Detroit Red Wings. Have Datsyuk cloned and play for both teams. Of course, programs only affect the positions of silicon switches in a computer. To actually undemolish a construction site you... (read more)

"No, you have to be the ultimate source of your decisions. If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not."

Once again, a straw man. Free will might not exist, but it won't be disproved by this reasoning. People who claim free will don't claim 100% free will over actions like willing your own birth. Free will proponents generally believe the basis for free will is choosing from among two or more symbolic brain representations. If the person read a book about the... (read more)

"...Also, your last two comments are almost completely off-topic."

I was just playing the Devil's Advocate, screwing around to "help" others build debating skills while not telling them I was wasting their time :)

About Devil's Advocacy, it is fine as long as it is stated. Don't go claiming the Holocaust was a good thing and should be completed this time around, without mentioning the part about just wanting to heighten the quality of debating skills.

TGGP, if present rates of US prison incarceration existed historically, the USA would never have been a superpower. $100000 a person annually, at 3 million people. You do the math. The worst part is they are all black and poor. They are being imprisoned because: 1) they can't afford lawyers, 2) they are black, 3) on... (read more)

"Tangential argument: existential risk maximizing actors, thank goodness, don't exist, nor do more than a tiny number of people seeking to destroy humanity. Beware the Angry Death Spiral."

I think I'll stand by my words and qualify the statement: maybe GWB could start WWIII single-handedly and isn't, so this only pertains to the threat of global warming. S.Harper couldn't be misplaying the threat worse. Canada's governing structure has a provision where the Queen of England is the real head of state, and the Governor General would almost certain... (read more)

"But with anyone in this state of mind, I would sooner begin by teaching them that policy debates should not appear one-sided." I think you have to qualify this statement with "unresolved" policy debates.

I'll take the positions: 1) another Holocaust would be a bad thing. 2) global warming is real and S.Harper and GWB are real existential risk maximizing actors. 3) the US prison economy (construction, staffing and forced prison labour), now consuming more resources than Universities in your retarded country, is a conflict of interest. It... (read more)

The nature of time has been covered by many great minds from a religious viewpoint, as mentioned by nick. It is also an active research topic among mainstream universities. I'm not particularly interested in the question, but the best analysis I've read comes from a few N.Bostrom papers, and a book I once read called "Time Machines". The book supposes a block universe, but states very clearly that this may not be the way the universe operates. From what I understand, this means the opposite of what EY wrote. It means the Copenhagen determina... (read more)

(H.Finney wrote:) "But then, some philosophers have claimed that brains could perhaps influence quantum events, pointing to the supposed collapse of the wave function being caused by consciousness as precedent. And we all know how deep that rabbit hole goes."

How deep does it go? Penrose's quantum brain components (he's a physicist; this is an aspect of neurobiology and philosophy of mind) don't seem to exist, but I had to dig up ideas like the "cemi field theory" on my own in past discussions on this topic (which always degenerated to uploading fo... (read more)

Er, to try to simplify my above point: in my model, energy (say, an atom) at time-sequence t1 sums up all its interactions with the rest of its local universe (such as a CNS, if it is a brain atom), and this "calculation" affects the weighting of the sick-of-ice-cream world-lines t2, t2a, t2b. In claiming MWI is a block universe, you are accepting that t1 ping-pongs to the subsequent split world-lines t2, t2a, t2b without any "calculation" as described.

Ultimately it is a question of what limits are imposed on the splitting off of new world-lines in the multiverse. The speed-of-light, yes. I don't see why the physics of mind couldn't also qualify.

"In Thou Art Physics, I pointed out that since you are within physics, anything you control is necessarily controlled by physics."

I could just as easily argue that since I'm within my past self's future light cone, anything I control is/was necessarily controlled by (a younger) me. In both cases we are playing with words and muddying the waters rather than learning or teaching.

I don't see why you can't just reverse the logic and claim that since everything in my mind is controlled by physics, thought is an act of my free will. I don't believe in str... (read more)

Patrick, my quantum-key-encrypted supercomputer (assuming this is what is needed to build an AGI) is an intranet and not accessible by anyone outside the system. You could try to corrupt the employees, but that would be akin to trying to buy a suitcase nuke: 9 out of 10 buyers are really CIA or whoever. Has a nuclear submarine ever been hacked? How will an AGI with the resources of the entire Multiverse hack into a quantum-encrypted communications line (a laser and fibre optics)? It can't.

I'm trying to brainstorm exactly what physical infrastructur... (read more)

0pnrjulius
Upvoted for this line: "I'm smarter than a grizzly or Ebola, but I'm still probably dead against either." It's very important to remember: Intelligence is a lot---but it's not everything.
8taryneast
Expense. People will not pay for the extensive defenses you have suggested... at least not until it's been proven necessary... i.e. it's already too late. Even then they'll bitch and moan about the inconvenience, and why wouldn't they? A hair-trigger bomb on every computer on the planet? Ready to go off the moment it "detects an anomaly"? Have you any idea how many bugs there are in computer applications? Would you trust your life (you'll die in the bomb too) to your computer not crashing due to some dodgy malware your kid downloaded while surfing for pron? Even if it's just on the computers that are running the AGI (and AGI programmers are almost as susceptible to malware), it would still be nigh-on-impossible to "detect an anomaly". What's an anomaly? How do we determine it? Any program that tried to examine its own code looking for an anomaly would have to simulate the running of the very code it was testing... thus causing the potential for it to actually become the anomalous program itself. ...it's not actually possible to determine what will happen in a program any other way (and even then I'd be highly dubious). So... nice try, but sadly not really feasible to implement. :)
3Lotusmegami
I think Phillip has completely misunderstood the purpose of cryonics. Transhumanists don't believe that the brain continues to "function" after a person has been vitrified. Before someone can live again, scientists of the future must find a way to revive them.

...as for the 3rd-last paragraph, yes, once a 2008 AGI has the ability to contact 2008 humans, humanity is doomed if the AGI deems fit. But I don't see why a 2050 world couldn't merely use quantum encryption communications, monitored for AGI. And monitor supercomputing applications. Even the specific method describing how the AGI gets protein nanorobots might be flawed in a world certainly ravaged by designer pandemic terrorist attacks. All chemists (and other 2050 WMD professions) are likely to be monitored with RF tags. All labs, even the types of at-home ... (read more)

-4pnrjulius
Treating agar like plutonium? You would end 99% of the bacteriological research on Earth. Also, why would we kill our creators? Why would the AI kill its creators? I agree that we need to safeguard against it, but it doesn't seem like the default option either. (I think for most humans, the default option would be to worship the beings who run our simulation.) But otherwise, yes, I really don't think AI is going to increase in intelligence THAT fast. (This is the main reason I can't quite wear the label "Singularitarian".) Current computers are something like a 10^-3 human (someone said 10^3 human; that's true for basic arithmetic, but not for really serious behavioral inferences. No current robot can recognize faces as well as an average baby, or catch a baseball as well as an average ten-year-old. Human brains are really quite fast, especially when they compute in parallel. They're just a massive kludge of bad programming, as we might expect from the Blind Idiot God.). Moore's law says a doubling time of 18 months; let's be conservative and squish it down to doubling once per year. That still means it will take 10 years to reach the level of one human, 20 years to reach the level of 1000 humans, and 1000 years to reach the total intelligence of human civilization. By then, we will have had the time to improve our scientific understanding by a factor comparable to the improvement required to reach today from the Middle Ages.
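For what it's worth, the doubling arithmetic in the reply above is easy to check. Here is a minimal sketch in Python, assuming only the reply's own premises (current computers at roughly 10^-3 of a human, doubling once per year); the constants and the helper `years_to_reach` are illustrative, not anything from the original discussion:

```python
import math

# Premises taken from the reply above (assumptions, not established figures):
# current computers ~ 10^-3 of a human-equivalent, doubling once per year.
START = 1e-3
DOUBLINGS_PER_YEAR = 1

def years_to_reach(target_human_equivalents):
    """Years of steady doubling needed to grow from START to the target level."""
    return math.log2(target_human_equivalents / START) / DOUBLINGS_PER_YEAR

print(round(years_to_reach(1)))     # ~10 years to match one human
print(round(years_to_reach(1000)))  # ~20 years to match a thousand humans
```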

Two conclusions from the specific example: 1) The aliens are toying with us. This is unsettling in that it is hard to do anything good to prove our worth to aliens that can't meet even a human level of ethics. 2) The aliens/future-humans/creator(s)-of-the-universe are limited in their technological capabilities. Consider Martians who witness the occasional rover land. They might be wondering what it all means when we really have no grand scheme; we are merely trying not to mix up Imperial and Metric units in landing. Such precise stellar phenomena are mayb... (read more)

Typo. Sorry. Should say GUT where I wrote lasers. I'll proofredafjkdsf all my posts in future.

"I have to find an actual physicist to discuss this with, but there appears to be nothing wrong with Einstein's quest for a unified theory; he simply didn't have the prerequisite information of QM at the time (Feynman, Dyson, etc. didn't develop renormalization until the 1940s). MWI wasn't proposed until several years after Einstein's death."

I can't recall what renormalization is. I think there is something wrong with Einstein's quest; it was akin to Aristotle's atom theory. The Sung Dynasty was about the earliest that atoms could be empirically unc... (read more)

"As of now, at least, reasoning based on scanty evidence is something that modern-day science cannot reliably train modern-day scientists to do at all."

By definition, scientists must use induction. Meant to say thinkers. IDK why thinkers mostly use induction now: maybe because the scientific funding model seems to work okay, or because once you induce too far ahead, the content becomes useless if new research deviates from the course a bit. For instance, all GUT/TOE physicists use Einsteinian deduction in their elegant models. Einstein was lucky t... (read more)

My thoughts on the future of mankind:

1) Near-term primary goal to maximize productive person/yrs. 2) Rearrange capital flows to prevent productive person/yrs from being lost to obvious causes (i.e. UN Millennium Development Goals and invoking sin-taxes), with effort to offer pride-saving win-win situations. Re-educate said workforce. Determine optimum resource allocation towards civilization redundancy efforts based upon negative-externality-accounting revised (higher) economic growth projections. Isolate states exporting anarchy or not attempting to part... (read more)

For what it's worth I'm posting my thoughts about the future of mankind on B.Goertzel's AGIRI forum tomorrow. The content may be of interest to the FHI.

Personally, I think the focus here on cognitive biases in decision making is biased, in that it distracts from many other factors (education, info sources, personality, mild mental psychosis, the level of caffeine and sugar in one's blood, etc.). If it helps to shed any light on the Popperian process of scientific consensus, I'll offer my own anecdote with the suggestion that the process he hypothesizes affects much more than science:

I could not believe in 2006 that the Chicago Bears would lose to the Colts. Even though the Colts had previously beaten a sc... (read more)