The OP is basically the standard basis of American-style libertarianism.
It doesn't particularly "defy consequentialism" any more than listing the primary precepts of utilitarian consequentialist groups defies deontology.
But I don't think the moral intuitions you list are terribly universal.
The closest parallel I can think of is someone listing contemporary American copyright law and presenting its norms as if they were some kind of universally accepted system of morals.
"but you are definitely not allowed to kill one...
Yes, our ancestors could not build a nuclear reactor; the Australian natives spent 40 thousand years without constructing a bow and arrow. Neither the Australian natives nor anyone else has built a cold fusion reactor. Running halfway doesn't mean you've won the race.
Putting ourselves in the category of "entities who can build anything" is like putting yourself in the category "people who've been on the moon" when you've never actually been to the moon but really really want to be an astronaut one day. You might even one day become an...
Adam and Eve AIs. The pair are designed so that they can automatically generate large numbers of hypotheses, design experiments that could falsify the maximum possible number of hypotheses, and then run those experiments in an automated lab.
Rather than being designed to do X with yeast, it's basically told "go look at yeast"; it then develops hypotheses about yeast and yeast biology, and it successfully re-discovered a number of elements of cell biology. Later iterations were given access to databases of already known genetic information and dis...
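The loop described above can be sketched abstractly: keep a pool of candidate hypotheses, always run whichever experiment bears on the most of them, and prune the falsified ones. Everything below (the `Experiment` class, the toy "lab") is my own illustration of that loop, not the actual Adam/Eve architecture.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Experiment:
    name: str
    falsifies: frozenset  # candidate hypotheses this experiment bears on

def discovery_loop(hypotheses, experiments, run_experiment):
    """Greedy falsification loop: always run the experiment that could
    rule out the most surviving hypotheses, then prune accordingly."""
    hypotheses = set(hypotheses)
    experiments = list(experiments)
    while len(hypotheses) > 1 and experiments:
        # Pick the experiment bearing on the most still-live hypotheses.
        best = max(experiments, key=lambda e: len(e.falsifies & hypotheses))
        experiments.remove(best)
        hypotheses -= run_experiment(best)  # the automated lab stands in here
    return hypotheses

# Toy "lab": suppose H2 is the truth, so each experiment's result rules out
# every candidate it bears on except H2.
lab = lambda exp: exp.falsifies - {"H2"}
survivors = discovery_loop(
    {"H1", "H2", "H3"},
    [Experiment("growth assay", frozenset({"H1", "H2"})),
     Experiment("knockout test", frozenset({"H3"}))],
    lab,
)
print(survivors)  # -> {'H2'}
```

The greedy "falsify the maximum possible number" selection is the key step; the real system replaces the toy lab with physical robotics.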
It's pretty common for groups of people to band together around confused beliefs.
Millions of people have incorrect beliefs about vaccines; millions more are part of New Age groups which have embraced confused and wrong beliefs about quantum physics (often related to utterly misunderstanding the term "Observer" as used in physics); and millions more have banded together around incorrect beliefs about biology. Are you smarter than all of those people combined? Are you smarter than every single individual in those groups? Probably not, but...
...The m
This again feels like one of those things that creep the second anyone points you to examples.
If someone points to an AI that can generate scientific hypotheses, design novel experiments to attempt to falsify them, and run those experiments in ways that could be applied to chemistry, cancer research, and cryonics, you'd just declare that those weren't different enough domains because they're all science, and then demand that it also be able to control pianist robots, scuba dive, and run a nail salon.
Nothing to see here, everyone.
This is just yet another boring iteration of the forever-shifting goalposts of AI.
First: if I propose that humans can sing any possible song, or that humans are universal jumpers who can jump any height, the burden is not on everyone else to prove that humans cannot, because I'm the one making the absurd proposition.
He proposes that humans are universal constructors, able to build anything. Observation: there are some things humans as they currently are cannot construct; as we currently are, we cannot actually arbitrarily order atoms any way we like to perform any task we like. The world's smartest human can no more build a von Neumann pro...
This argument seems chosen to make it utterly unfalsifiable.
If someone provides examples of animal X solving novel problems in creative ways you can just say "that's just the 'some flexibility' bit"
You're describing what's known as General game playing.
You program an AI which will play a set of games without knowing in advance what the rules of those games will be. Build an AI which can accept a set of rules for a game and then teach itself to play.
This is in fact a field in AI.
Also note the recent news that AlphaGo Zero has been converted into AlphaZero, which can handle other games and rapidly taught itself how to play chess, shogi, and Go (beating its ancestor AlphaGo Zero), hinting that they're generalising it very successfully.
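The setup can be made concrete with a toy rules interface: the agent only sees generic `legal_moves` / `apply` / `is_terminal` methods and picks moves by random self-play rollouts. The `NimRules` game and flat-Monte-Carlo player below are my own minimal sketch of the general-game-playing idea, not how AlphaZero (which uses learned networks plus tree search) actually works.

```python
import random

class NimRules:
    """Example rules object: one heap, take 1-3 stones, taking the last wins."""
    def initial_state(self):      return 10
    def legal_moves(self, state): return [m for m in (1, 2, 3) if m <= state]
    def apply(self, state, move): return state - move
    def is_terminal(self, state): return state == 0

def choose_move(rules, state, rollouts=200):
    """Flat Monte Carlo: score each legal move by random playouts."""
    def playout(s, our_turn):
        last_mover_ours = not our_turn  # overwritten on the first move
        while not rules.is_terminal(s):
            s = rules.apply(s, random.choice(rules.legal_moves(s)))
            last_mover_ours, our_turn = our_turn, not our_turn
        return 1 if last_mover_ours else 0  # whoever moved last took the win

    def score(move):
        nxt = rules.apply(state, move)
        if rules.is_terminal(nxt):          # our move ends the game: we win
            return rollouts
        return sum(playout(nxt, our_turn=False) for _ in range(rollouts))

    return max(rules.legal_moves(state), key=score)

random.seed(0)
print(choose_move(NimRules(), 3))  # -> 3 (take everything and win outright)
```

The point of the interface split is that `choose_move` never mentions Nim; swap in a different rules object and the same player applies unchanged.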
...ok so I don't get to find the arguments out unless I buy a copy of the book?
Right... looking at a pirated copy of the book, the phrase "universal knowledge creator" appears nowhere in it, nor does "knowledge creator".
But let's have a read of the chapter "Artificial Creativity".
A big long spiel about ELIZA being crap. The same generic qualia arguments as ever.
One minor gem in there for which the author deserves to be commended:
..."I have settled on a simple test for judging claims, including Dennett’s, to have explained the nature o
I started this post off trying to be charitable but gradually became less so.
"This means we can create any knowledge which it is possible to create."
Is there any proof that this is true? Anything rigorous? The human mind could have some notable blind spots. For all we know there could be concepts that happen to cause normal human minds to suffer lethal epileptic fits, similar to how certain patterns of flashing light can for some people. Or simple concepts that could be incredibly inefficient to encode in a normal human mind that could be easi...
It's improbable but, if they ever behave anything like dogs, not 100% impossible.
I've encountered an older dog that really, really wanted to have puppies; she stole a kitten from a litter and tried to raise it and feed it, and made no attempt to eat it.
And there appear to be real reports of domesticated dogs adopting and nursing neglected children.
https://www.thedodo.com/dog-breastfeeds-child-1336838906.html
Of course dogs have the aggression dialed way way down such that they may be way way more likely to do that.
I'd argue that a she-wolf that's recently lo...
The quickest way to make me start viewing a sci-fi *topia as a dystopia is to have suicide banned in a world of (potential) immortals. To me the "right to death" is essential once immortality is possible.
Still, I get the impression that saying they'll die at some point anyway is a bit of a dodge of the challenge. After all, nothing is truly infinite. Eventually entropy will necessitate an end to any simulated hell.
This sounds like the standard argument around negative utility.
If you weight negative utility quite highly, then you could also come to the conclusion that the moral thing to do is to set to work on a virus to kill all humans as fast as possible.
You don't even need mind-uploading. If you weight suffering highly enough, you could decide that the right thing to do is to take a trip to a refugee camp full of people who, on average, are likely to have hard, painful lives, and leave a sarin gas bomb.
Put another way: if you encountered an infant with epidermolysis bullosa would you try to kill them, even against their wishes?
"You may take ternary numeral system (base 3) and three basic instructions"
Wait, are we supposed to make up arbitrary operations for higher bases?
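For reference, ternary just means digits 0-2 with place values in powers of three; which three instructions the quoted proposal intends isn't specified here, so the sketch below only shows the numeral system itself.

```python
def to_base3(n):
    """Render a non-negative integer in ternary (base 3)."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, 3)   # peel off the least-significant base-3 digit
        digits.append(str(r))
    return "".join(reversed(digits))

# 42 = 1*27 + 1*9 + 2*3 + 0*1
print(to_base3(42))           # -> 1120
print(int(to_base3(42), 3))   # round-trips back to 42
```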
OK, there aren't really enough data points to do proper stats, but let's give it a go anyway.
Let's consider the possibility that the ad campaign did nothing. Some ad campaigns are actually damaging, so let's try to get an idea of how much it varies from month to month.
Mean = 50.5, standard deviation = 6.05.
So about 1 and 2/3 SDs above the mean.
Sure, October is a little higher than normal but not by much.
Or put another way, imagine that the ad campaign had been put into effect in April but actually did absolutely nothing. They would have seen an increase o...
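For anyone redoing the arithmetic: "1 and 2/3 SDs above the mean" is just a z-score. The actual October figure isn't reproduced in this excerpt, so the 60.6 below is merely the value that would produce that z-score given the stated mean and SD.

```python
def z_score(x, mean, sd):
    """How many standard deviations x sits above (or below) the mean."""
    return (x - mean) / sd

# Stated monthly mean and SD; 60.6 is only an implied, illustrative figure.
print(round(z_score(60.6, 50.5, 6.05), 2))  # -> 1.67
```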
And the point of this post is? Do you want us to critique the dark-arts rhetoric in the document? Discuss the subject of ideological purity? Tribalism?
If the typical pattern holds:
Step one: new trick is discovered solving some problem X which couldn't be handled before.
Step two: people try to apply it to everything that the old styles didn't work on like problem Y which is sort of in the same problem class. At this stage overly enthusiastic people may over-promise. "I'm sure it will work amazingly on Y"
Step three: "Bah! These CS types never deliver, Y will always be better done by humans."
Step four: Interest and funding flees as the news stops paying attention, a few people keep chi...
I remember having a similar discussion about HIV and anti-retroviral drugs.
In short, it's an easy position to take if you and the people you care about aren't currently in the firing line, and making policy choices based on assumptions about future discoveries that we can't guarantee is ethically problematic.
There are about 3,200 species of mosquito. Fewer than 200 bite humans, and perhaps a dozen are major disease vectors for humans.
We drive about 150 species extinct per day without really trying. Increasing the number of species we push to extinction by 10% for a single day would save half a million lives per year.
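As a sanity check on those numbers (the half-million annual deaths is the comment's own claim; the rest of the arithmetic follows from its figures):

```python
species_total = 3200          # mosquito species overall
human_biters = 200            # fewer than this number bite humans
major_vectors = 12            # "perhaps a dozen" major disease vectors
background_extinctions = 150  # species already lost per day

# A 10% bump to one day's extinctions covers every major vector species.
extra_species = background_extinctions * 0.10
print(extra_species, extra_species >= major_vectors)  # -> 15.0 True
```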
Perhaps a more real-life, simple medical-model example:
If a student is short-sighted, society could accommodate them to make it not-a-disability by employing someone to sit with short-sighted students to take notes for them, employing someone to dictate all material too far away for them to see, and providing a navigator so they don't need to read distant signs....
Or society could just expect them to wear glasses or get lasik.
Society seems to fall so far on the side of the latter that it seems like pure medical-model.