The OP is basically the fairly standard basis of American-style libertarianism.
It doesn't particularly "defy consequentialism" any more than listing the primary precepts of utilitarian consequentialist groups defies deontology.
But I don't think the moral intuitions you list are terribly universal.
The closest parallel I can think of is someone listing contemporary American copyright law and presenting its norms as if they were some kind of universally accepted system of morals.
"but you are definitely not allowed to kill one"
Johnny Thousand Livers is of course an exception.
Or put another way, if you say to most people,
"ok, so you're in a scenario a little bit like the films Armageddon or deep impact. Things have gone wrong but it's a smaller rock and and all you can do at this point is divert it or not, it's on course for new york city, ten million+ will die, you have the choice to divert it to a sparsely populated area of the rocky mountains... but there's at least one person living there"
Most of the people who would normally declare that the trolley problem with 1vs5 makes it unethical to throw that one person in front of the trolley... will change their view once the difference in the trade is large enough.
1 vs 5 isn't big enough for them, but the idea of tens of millions will suddenly turn them into consequentialists.
"You are not required to save a random person"
Also, this is a very not-universal viewpoint. Show people that video of the Chinese kid being run over repeatedly while people walk past ignoring her cries, and many will declare that the passers-by who ignored the child committed a very clear moral infraction.
"Duty of care" is not popular in american libertarianism but it and variations is a common concept in many countries.
The deliberate failure to provide assistance in the event of an accident is a criminal offence in France.
In many countries if you become aware of a child suffering sexual abuse there are explicit duties to report.
And once you accept the fairly commonly held concept of "duty of care", the idea that you actually do have positive duties to others, the absolutist property stuff largely falls apart: it becomes entirely reasonable to require some people to give up some fraction of their property to provide care for those around them, just as it's reasonable to expect them to help an injured toddler out of the street, to help the victim of a car accident, or to let the authorities know if they find out that a kid is being raped.
"Duty" or similar "social contract" precepts that imply that you have some positive duties purely by dint of being a human with the capacity to intervene tend to be rejected by the american libertarian viewpoint but it's a very very common aspect of the moral intuitions of a large fraction of the worlds population.
It's not unlimited and it tends towards Newtonian Ethics but moral intuitions aren't known for being perfectly fair.
Yes, our ancestors could not build a nuclear reactor; the Australian natives spent 40 thousand years without constructing a bow and arrow. Neither the Australian natives nor anyone else has built a cold fusion reactor. Running half way doesn't mean you've won the race.
Putting ourselves in the category of "entities who can build anything" is like putting yourself in the category "people who've been on the moon" when you've never actually been to the moon but really really want to be an astronaut one day. You might even one day become an astronaut but aspirations don't put you in the category with Armstrong until you actually do the thing.
Your pet collie might dream vaguely of building cars, and perhaps in 5,000,000 years its descendants might have self-selected for intelligence and we'll have collie engineers, but that doesn't make it an engineer today.
Currently by the definition in that book humans are not universal constructors, at best we might one day be universal constructors if we don't all get wiped out by something first. It would be nice if we became such one day. But right now we're merely closer to being universal constructors than unusually bright ravens and collies.
Feelings are not fact. Hopes are not reality.
Assuming that nothing will stop us based on a thin sliver of history is shaky extrapolation:
Adam and Eve AIs. The pair are designed such that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.
Rather than being designed to do X with yeast, it's basically told "go look at yeast", then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already-known genetic information and discovered new information about a number of genes.
http://www.dailygalaxy.com/my_weblog/2009/04/1st-artificially-intelligent-adam-and-eve-created.html
https://www.newscientist.com/article/dn16890-robot-scientist-makes-discoveries-without-human-help/
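To make the hypothesis-falsification loop concrete, here is a minimal sketch of that kind of active experiment selection (my own illustration with hypothetical names, not the actual Adam/Eve code): each hypothesis predicts an outcome for each candidate experiment, and the system keeps choosing whichever experiment is guaranteed to falsify the most hypotheses no matter how it comes out.

```python
# Minimal sketch of hypothesis-elimination experiment selection.
# A "hypothesis" is modelled as a function: experiment -> predicted outcome.
# An observed outcome falsifies every hypothesis that predicted something else.

def guaranteed_eliminations(experiment, hypotheses):
    """Worst-case number of hypotheses this experiment will falsify."""
    groups = {}
    for h in hypotheses:
        groups.setdefault(h(experiment), []).append(h)
    return len(hypotheses) - max(len(g) for g in groups.values())

def discovery_loop(hypotheses, experiments, run_in_lab, budget=10):
    """Pick the most informative experiment, run it, discard contradicted
    hypotheses, and repeat until the budget runs out or one survivor remains."""
    for _ in range(budget):
        if len(hypotheses) <= 1 or not experiments:
            break
        best = max(experiments, key=lambda e: guaranteed_eliminations(e, hypotheses))
        experiments.remove(best)
        observed = run_in_lab(best)   # stands in for the automated lab step
        hypotheses = [h for h in hypotheses if h(best) == observed]
    return hypotheses
```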
It's a remarkable system and could be extremely useful for scientists in many sectors but it's a 1.1 on the 1 to 10 scale where 10 is a credible paperclipper or Culture-Mind style AI.
This AI is not a pianist robot and doesn't play chess but has broad potential applications across many areas of science.
It blows a hole in the side of the "Universal Knowledge Creator" idea, since it's a knowledge creator beyond most humans in a number of areas but is never going to be controlling a pianist robot or running a nail salon, because the belief that there's some magical UKC line or category (which humans technically don't qualify for yet anyway) is based on literally nothing except feelings. There's not an ounce of logic or evidence behind it.
It's pretty common for groups of people to band together around confused beliefs.
Millions of people have incorrect beliefs about vaccines, millions more are part of New Age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "observer" as used in physics), and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...
The man who replaced me on the commission said, “That book was approved by sixty-five engineers at the Such-and-such Aircraft Company!”
I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty-five other guys, certainly!
I couldn’t get through to him, and the book was approved by the board.
— from “Surely You’re Joking, Mr. Feynman” (Adventures of a Curious Character)
This again feels like one of those definitions that creeps the second anyone points to examples.
If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science, and then demand that it also be able to control pianist robots, scuba dive, and run a nail salon.
Nothing to see here everyone.
This is just yet another boring iteration of the forever shifting goalposts of AI.
First: If I propose that humans can sing any possible song, or that humans are universal jumpers and can jump any height, the burden is not on everyone else to prove that humans cannot, because I'm the one making the absurd proposition.
He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as we currently are cannot construct; we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann probe right now than the world's smartest border collie.
He merely makes the guess that we'll be able to do so in future, or that we'll be able to build something that will be able to build something that will be able to, but that border collies never will. (That is based on little more than faith.)
From this he concludes we're "universal constructors" despite us quite trivially falling short of the definition of 'universal constructor' he proposes.
When you start talking about "reach" you utterly, utterly cancel out all the claims made about AI in the OP. If a superhuman AI with a brain the size of a planet, made of pure computation, can just barely manage to comprehend some horribly complex problem, and there's a slim chance that humans might one day be able to build AIs which might be able to build AIs which might be able to build AIs that might be able to build that AI, that doesn't mean that humans have fully comprehended that thing, or could fully comprehend that thing, any more than slime mould could be said to comprehend the building of a nuclear power station because it could potentially produce offspring which produce offspring which produce offspring... [repeat many times] who could potentially design and build one.
His arguments are full of gaping holes. How does this not jump out at other readers?
This argument seems chosen to make it utterly unfalsifiable.
If someone provides examples of animal X solving novel problems in creative ways you can just say "that's just the 'some flexibility' bit"
You're describing what's known as General game playing.
You program an AI which will play a set of games without knowing in advance what the rules of those games will be: build an AI which can accept a set of rules for a game and then teach itself to play.
This is in fact a field in AI.
Also note recent news that AlphaGoZero has been converted to AlphaZero, which can handle other games and rapidly taught itself how to play chess, shogi, and Go (beating its ancestor AlphaGoZero), hinting that they're generalising it very successfully.
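As a bare-bones illustration of what "accept a set of rules, then teach itself to play" can mean (my own sketch assuming a simple rules interface; this is nothing like how AlphaZero is actually implemented), the agent below has no game-specific knowledge and just estimates moves by random rollouts against whatever rules object it is handed:

```python
import random

class Rules:
    """Interface a general game player receives at runtime; a concrete game
    (tic-tac-toe, chess, ...) supplies the implementations."""
    def current_player(self, state): raise NotImplementedError
    def legal_moves(self, state): raise NotImplementedError
    def next_state(self, state, move): raise NotImplementedError
    def is_terminal(self, state): raise NotImplementedError
    def score(self, state, player): raise NotImplementedError  # payoff at a terminal state

def rollout_value(rules, state, player, n_rollouts=50):
    """Estimate a state's value for `player` by finishing the game with random play."""
    total = 0.0
    for _ in range(n_rollouts):
        s = state
        while not rules.is_terminal(s):
            s = rules.next_state(s, random.choice(rules.legal_moves(s)))
        total += rules.score(s, player)
    return total / n_rollouts

def choose_move(rules, state):
    """Pick the legal move with the best rollout estimate; there is no
    hand-coded strategy for any particular game."""
    player = rules.current_player(state)
    return max(rules.legal_moves(state),
               key=lambda m: rollout_value(rules, rules.next_state(state, m), player))
```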
...ok so I don't get to find the arguments out unless I buy a copy of the book?
right... looking at a pirated copy of the book, the phrase "universal knowledge creator" appears nowhere in it, nor does "knowledge creator".
But let's have a read of the chapter "Artificial Creativity".
big long spiel about ELIZA being crap. Same generic qualia arguments as ever.
One minor gem in there for which the author deserves to be commended:
"I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature of consciousness (or any other computational task): if you can’t program it, you haven’t understood it"
...
Claim that genetic algorithms and similar learning systems aren't really inventing or discovering anything because they reach local maxima, and thus the design is really just coming from the programmer. (Presumably, then, the developers of AlphaGo must be the world's best grandmaster Go players.)
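For what it's worth, a toy genetic algorithm makes both halves of that dispute visible (a minimal sketch, assuming a simple bit-string problem): the programmer writes only the fitness function and the mutate/select loop, not the genome the search eventually lands on, yet on a deceptive landscape the search really can get stuck at a local maximum.

```python
import random

def evolve(fitness, genome_length=40, pop_size=100, generations=200, mutation_rate=0.02):
    """Plain genetic algorithm: two-way tournament selection plus point mutation.
    The returned genome is found by the search, not written by the programmer,
    though it may only be a local maximum of `fitness`."""
    population = [[random.randint(0, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        next_gen = []
        for _ in range(pop_size):
            a, b = random.sample(population, 2)
            parent = a if fitness(a) >= fitness(b) else b
            child = [1 - gene if random.random() < mutation_rate else gene
                     for gene in parent]
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

# A deceptive landscape: all-ones is the global optimum, but every step towards
# it from a mostly-zero genome looks worse, so the search tends to settle on
# the local maximum at all-zeros.
def trap_fitness(genome):
    ones = sum(genome)
    return 1000 if ones == len(genome) else len(genome) - ones

if __name__ == "__main__":
    best = evolve(trap_fitness)
    print(sum(best), trap_fitness(best))  # usually near 0 ones: a local maximum
```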
I see the phrase "universal constructors", where the author claims that human bodies are able to turn anything into anything. This argument appears to rest squarely on the idea that while there may be some things we actually can't do or ideas we actually can't handle, we should, one day, be able to either alter ourselves or build machines (AIs?) that can handle them. Thus we are universal constructors and can do anything.
On a related note, I am in fact an office block, because while I may not actually be 12 stories tall and covered in glass, I could in theory build machines which build machines which could be used to build an office block, and thus, by this book's logic, that makes me an office block. From this point forward in the comments we can make arguments based on the assumption that I can contain at least 75 office workers along with their desks and equipment.
The fact that we haven't actually managed to create machines that can turn anything into anything yet strangely doesn't get a look-in in the argument about why we're currently universal constructors but dolphins are not.
The author brings up the idea of things we may genuinely simply not be able to understand and just dismisses it, with literally nothing except the objection that it claims things could be inexplicable and hence should be dismissed. (On a related note, the president of the tautology club is the president of the tautology club.)
Summary: I'd give it a C- but upgrade it to C for being better than the geocities website selling it.
Also, the book doesn't actually address my objections.
perhaps a more real-life simple medical-model example:
If a student is short-sighted, society could accommodate them to make it not-a-disability by employing someone to sit with short-sighted students to take notes for them, employing someone to dictate all material too far away for them to see, providing a navigator so they don't need to read distant signs....
Or society could just expect them to wear glasses or get lasik.
Society seems to fall so far on the side of the latter that it seems like pure medical-model.