Comment author: manuelg 16 December 2007 01:43:56AM 0 points [-]

> The perfect age of the past, according to our best anthropological evidence, never existed.

Minor point: in defense of the esteemed Taoist, I would argue Chuang Tzu was speaking of the time when humans lived in small groups of hunter-gatherers, based on my understanding of Jared Diamond's "Agriculture: The Worst Mistake in the History of the Human Race".

Back on the point of your post. I am not ashamed to say I listen to Zig Ziglar tapes (I probably should be). His folksy way of putting it is "Do you want to be a learner, or learned?" With "learned" implying that you have mastered a system of thought perfectly suited for a receding past.

Comment author: RPMcMurphy 25 January 2015 02:04:40PM *  0 points [-]

.

Comment author: Jiro 06 October 2014 10:37:27PM *  4 points [-]

I am not very impressed by that.

"Would you change your mind if you were convinced of X" carries the connotation "if I managed to give you an argument for X, and you couldn't rebut it, would you change your mind?" The answer to that should be "no" for many values of X even if the answer to the original question is "yes". The fact that you couldn't rebut the argument may mean that it's true. It may also just mean the argument is full of holes but the person is really good at convincing you. How do you know that the person who convinced you of X isn't another case of Eliezer convincing you to let the AI out of a box?

If a lot of scientists or other experts vetted the claim of such an X and it was not only personally convincing, but had a substantial following in the community of experts, then I might change my mind.

Comment author: RPMcMurphy 12 December 2014 04:04:15PM *  -2 points [-]

.

Comment author: RPMcMurphy 06 October 2014 09:30:17PM *  0 points [-]

.

Comment author: MrMind 23 September 2014 10:15:23AM *  2 points [-]

> Culture, thought, human DNA, human values, etc. have been stripped to their functional carbon and hydrogen atoms and everything now just optimizes for paperclip manufacturing or whatever. D(u/r) = D(u)

I contest this derivation. Whatever process produced humanity also made humanity produce an unsafe supercontroller. This may mean that whatever the supercontroller is optimized for is part of the process that produced humanity, and so it does not make g(u,h) go to zero.

Of course, without a concrete model, it's impossible to say for certain.

Comment author: RPMcMurphy 04 October 2014 10:14:25AM -4 points [-]

> Of course, without a concrete model, it's impossible to say for certain.

So, if humanity produces an ultra-intelligence that eats humanity and produces a giant turd, then humanity was --mathematically speaking-- the biological boot loader of a giant turd.

"Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." -Elon Musk

...Especially if algebra nerds ignore F. A. Hayek, S. Milgram, Clay Conrad, etc., and specialize in a very narrow domain (Bayesian logic), while trying to figure out how to create a single superintelligence.

Multiple instantiations, multiple environments, certain core readings imparting a sense of "many" human values. ...That will either be enough to kill us all, or get us to the "next level," ...rinse and repeat. ...This will happen anyway, whether you like it or not, given the existence of Hawkins, Kurzweil, Honda, Google, (some guy in a factory in Thailand) etc.

PS: The message keeps popping up: "You are trying to win too fast. try again in 1 minute." ...How much Karma (= how many months of making innocuous sycophantic remarks) do I need in order to post more quickly?

Comment author: Randaly 25 September 2014 12:55:53AM *  2 points [-]

Thanks for your response!

1) Hmmm. OK, this is pretty counter-intuitive to me.

2) I'm not totally sure what you mean here. But, to give a concrete example, suppose that the most moral thing to do would be to tile the universe with very happy kittens (or something). CEV, as I understand it, would create as many of these as possible with its finite resources; whereas g/g* would try to create much more complicated structures than kittens.

3) Sorry, I don't think I was very clear. To clarify: once you've specified h, a superset of human essence, why would you apply the particular functions g/g* to h? Why not just directly program in 'do not let h cease to exist'? g/g* do get around the problem of specifying 'cease to exist', but this seems pretty insignificant compared to the difficulty of specifying h. And unlike programming a supercontroller to preserve an entire superset of human essence, g/g* might wind up with the supercontroller focused on some parts of h that are not part of the human essence, so it doesn't completely solve the definition of 'cease to exist'.

(You said above that h is an improvement because it is a superset of human essence. But we can equally program a supercontroller not to let a superset of human essence cease to exist, once we've specified said superset.)

Comment author: RPMcMurphy 04 October 2014 10:04:01AM -4 points [-]

> with very happy kittens (or something)

If this is the case, then the ultra-intelligence wouldn't even be as parallel as a human is, it would be some algebraic freak not found in nature. Why wouldn't we design a smart, emergent, massively-parallel brain that was "taught" to be human? It seems this is most likely. Peter Voss is doing this now. He will achieve superintelligence within 10 years if he hasn't already. This is the pathway the entire industry is taking: brainlike emulation, rising above human. Then, superhuman brain design by already-superhuman massively parallel brains.

I'm sure that some thoughts in those superhuman brainlike minds will "outvote" others. Which is why there will be a lot of them, from a lot of backgrounds. Some will be 2,000 IQs surrounded by gardens, others by war-zones. All will be taught all human literature, including F.A. Hayek's and Clay Conrad's works. Unlike stupid humans, they will likely prioritize these works, since they alleviate human suffering.

That won't take much intelligence, but it will take enough intelligence to avoid papering the universe with toxoplasma-laced kitties, or paperclips, or nukes, or whatever.

PS: Yet again, this site is giving me a message that interferes with the steady drip of endorphins into my primate-model braincase. "You are trying to submit too fast. try again in 4 minutes." ...And here, I thought I was writing a bunch of messages about how I'm a radical libertarian who will never submit. I stand 'rected! And don't ask if it's "E" or "Co" just because I'm an ecologist. Have I blathered enough mind-killed enlightenment to regain my ability to post? ...Ahhh, there we go! I didn't even have to alternate between watching TED talks and this page. I just wasted all that time, typing this blather.

Comment author: ChristianKl 17 September 2014 05:30:30PM 0 points [-]

> The very idea that we cannot obtain TRUE advertising about medical goods and services (where the truth is no defense! ...Throwing out the jury supremacy hard-won from over 300 years of jurisprudence and civil disobedience!) is antithetical to the social existence of anything other than slaves.

Mixing factual questions with what you want to be true is a bad idea. Whether or not getting rid of the FDA will result in no clinical trials is a factual question. On LW the common word to describe that kind of reasoning is 'mind-killed'.

> How can a law that has no valid corpus delicti ("body of crime") be enforced in court, when the common law (which all precedent to this date states that the 6th Amendment is referring to, when it refers to "due process") demands that all criminal prosecutions contain a valid "corpus," and where the 4th amendment also maintains the same?

The common moral framework on LW is that people are utilitarians or consequentialists. Most of us don't believe in God-given "natural law" but think that laws are entirely man-made. We can discuss which laws are good and which aren't; just because some Christians considered certain laws naturally produced by God doesn't imply that they are binding in the 21st century.

The thing is that I would like to eat more tuna. Mercury content in tuna is unfortunately high enough that the European food safety authority advises against daily tuna consumption. Under the Obama administration the EPA calculated the cost of the decreased IQ of children in the US and found that it's cost-effective to put barriers on the ability of the free market to produce mercury emissions. If you sit down and calculate, children's IQ is just worth more.

I like that the EPA stops the free market from producing mercury pollution. Hopefully that means that some day in the future I can regularly eat tuna.

> Yes, the reason drug companies run expensive trials is because they are coerced into doing so, and because they are complicit in the final result of anyone having not done so being banned from the marketplace by the realistic threat of violence

No, Big Pharma likes to have the standards at the level where they are. They don't always lobby for the standards for clinical trials to be lower, and sometimes even lobby against lowering of standards.

The basic idea of dealing with issues of the tragedy of the commons is to come to a common agreement and then enforce that agreement by punishing people who violate it.

Comment author: RPMcMurphy 04 October 2014 08:26:24AM *  0 points [-]

.

Comment author: ChristianKl 14 September 2014 09:12:50PM 0 points [-]

> Of course, I favor putting "medical freedom" I & R on the ballot,

Our present system is very much broken; on the other hand, evidence-based medicine needs expensive trials. Currently, the reason drug companies run expensive trials is that otherwise the FDA wouldn't approve their products.

> and think that putting legalized "Transhumanism" on the ballot simply invites needless controversy.

What do you mean by "legalized transhumanism"? I don't remember anyone outlawing transhumanism.

Comment author: RPMcMurphy 17 September 2014 03:35:53PM *  0 points [-]

.

Comment author: devas 14 September 2014 08:29:21AM 3 points [-]

I second this proposal. On the sites where I've seen it implemented, I've found it extremely useful.

Comment author: RPMcMurphy 14 September 2014 09:34:45AM *  0 points [-]

.

Comment author: RPMcMurphy 14 September 2014 09:29:49AM *  -2 points [-]

.

Comment author: RPMcMurphy 14 September 2014 09:20:27AM *  3 points [-]

.
