Are you reinventing Asimov's Three Laws of Robotics?
tomorrow
That's not conventionally considered to be "in the long run".
We don't have any theory that would stop AI from doing that
The primary reason is that we don't have any theory about what a post-singularity AI might or might not do. Doing some pretty basic decision theory focused on the corner cases is not "progress".
It seems weird that you'd deterministically two-box against such an Omega
Even in the case when the random noise dominates and the signal is imperceptibly small?
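To make the noisy-Omega point concrete, here is a rough expected-value sketch (Python, assuming the standard $1,000,000 / $1,000 payoffs; the numbers are illustrative, not anything specified in the thread):

```python
# Expected value of one-boxing vs two-boxing against a noisy predictor.
# p is the probability that the predictor correctly predicts your choice.
BIG = 1_000_000   # opaque box: filled only if one-boxing was predicted
SMALL = 1_000     # transparent box: always there

def ev_one_box(p: float) -> float:
    # One-boxer gets the big prize only when correctly predicted.
    return p * BIG

def ev_two_box(p: float) -> float:
    # Two-boxer always gets the small prize, plus the big one when mispredicted.
    return SMALL + (1 - p) * BIG

for p in (0.5, 0.5005, 0.501, 0.9, 0.999):
    print(f"p={p:6.4f}  one-box EV={ev_one_box(p):>11,.0f}  two-box EV={ev_two_box(p):>11,.0f}")
```

With these payoffs the break-even accuracy is p = 0.5005; if the signal really is imperceptibly small (p barely above 0.5), two-boxing has the higher expected value, which is the point being made.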
So the source-code of your brain just needs to decide whether it'll be a source-code that will be one-boxing or not.
First, in the classic Newcomb problem, meeting Omega comes as a surprise to you. You don't get to precommit to deciding one way or the other because you had no idea such a situation would arise: you just get to decide now.
...You can decide, however, whether you're the sort of person who accepts that their decisions can be deterministically predicted in advance with sufficient certainty, or whether you'll be claiming that other people predicting your choices...
Old and tired, maybe, but clearly there is not much consensus yet (even if, ahem, some people consider it to be as clear as day).
Note that who makes the decision is a matter of control and has nothing to do with freedom. A calculator controls its display, and so the "decision" to output 4 in response to 2+2 is its own, in a way. But applying decision theory to a calculator is nonsensical and there is no free choice involved.
LW is kinda dead (not entirely, there is still some shambling around happening, but the brains are in short supply) and is supposed to be replaced by a shinier reincarnated version which has been referred to as LW 2.0 and which is now in open beta at www.lesserwrong.com
LW 1.0 is still here, but if you're looking for active discussion, LW 2.0 might be a better bet.
Re qualia, I suggest that you start with trying to set up hard definitions for the terms "qualia" and "exists". Once you do, you may find the problem disappears -- see e.g. this.
Re...
The truth that curi and myself are trying to get across to people here is... it is the unvarnished truth... know far more about epistemology than you. That again is an unvarnished truth
In which way are all these statements different from claiming that Jesus is Life Everlasting and that Jesus dying for our sins is an unvarnished truth?
Lots of people claim to have access to Truth -- what makes you special?
LOL. You keep insisting that people have to play by your rules but really, they don't.
You can keep inventing your own games and declaring yourself the winner by your own rules, but it doesn't look like a very useful activity to me.
genetic algorithms often write and later read data, just like e.g. video game enemies
Huh? First, the expression "genetic algorithms" doesn't mean what you think it means. Second, I don't understand the writing and reading data part. Write which data to what substrate?
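For reference, here is a minimal sketch of what a genetic algorithm actually is -- a population of candidate solutions evolved by selection, crossover, and mutation against a fitness function. The toy example below just maximizes the number of 1s in a bit string; it is illustrative only, not taken from anyone's argument:

```python
import random

# Toy genetic algorithm: evolve bit strings toward all ones.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):
    return sum(genome)  # fitness = number of 1s

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)   # selection: fitter half survive
    parents = population[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", max(map(fitness, population)))
```

There is no separate "writing and reading data" step beyond the population itself, which is why the question about substrate comes up.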
your examples are irrelevant b/c you aren't addressing the key intellectual issues
I like dealing with reality. You like dealing with abstractions in your head. We talked about this -- we disagree. You know that.
But if you are uninterested in empirical evidence, why bother discussing...
The problem is that very, very few orcas do that -- only two pods in the world, as far as we know. Orcas which live elsewhere (e.g. the Pacific Northwest orcas, which are very well-observed) do not do anything like this. Moreover, there is evidence that the technique is taught by adults to juvenile orcas. See e.g. here or here.
If you want to debate that you need an epistemology which says what "knowledge" is. References to where you have that with full details to rival Critical Rationalism?
Oh, get stuffed. I tried debating you and the results were... discouraging.
Yes, I obviously think that CR is deluded.
This sentence from the OP:
Like the algorithms in a dog’s brain, AlphaGo is a remarkable algorithm, but it cannot create knowledge in even a subset of contexts.
A bit more generally, the claim that humans are UKCs (universal knowledge creators) and that nothing else can create knowledge, where knowledge is defined as a way to solve a problem.
the AI risks starting these triggers when it starts to think first thoughts about existing of the triggers
So basically you have a trap which kills you the moment you become aware of it. The first-order effect will be a lot of random deaths from just blundering into such a trap while walking around.
I suspect that the second-order effect will be the rise of, basically, superstitions and some forms of magical thinking which will be able to provide incentives to not go "there" without actually naming "there". I am not sure this is a desirable outcome.
It's also rank nonsense -- this bit in particular:
dog genes contain behavioural algorithms pre-programmed by evolution
Some orcas hunt seal pups by temporarily stranding themselves on the beaches in order to reach their prey. Is that behaviour programmed in their genes? The genes of all orcas?
Show results in 3 separate domains.
Unreason is accepting the claims of a paper at face value, appealing to its authority
Which particular claim made by the paper did I accept at face value that you think is false? Be specific.
I was aware of AlphaGo Zero before I posted -- check out my link
AlphaGo Zero and AlphaZero are different things -- check out my link.
In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it's a universal knowledge creator?
You sound less and less reasonable with every comment.
It doesn't look like your conversion attempts are working well. Why do you think this is so?
AlphaGo is a remarkable algorithm, but it cannot create knowledge
Funny you should mention that. AlphaGo has a successor, AlphaZero. Let me quote:
...The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play...
No, what surprises me is your belief that you just figured it all out. Using philosophy. That's it, we're done, everyone can go home now.
And since everything is binary and you don't have any tools to talk about things like uncertainty, this is The Truth and anyone who doesn't recognize it as such is either a knave or a fool.
There is also a delicious overtone of irony in that a guy as lacking in humility as you are chooses to describe his system as "fallible ideas".
You don't think that figuring out which ideas are "best available" is the hard part? Everyone and his dog claims his idea is the best.
well, using philosophy i did that hard part and figured out which ones are good
LOL. Oh boy.
Really? So you just used t̶h̶e̶ ̶f̶o̶r̶c̶e̶ philosophy and figured it out? That's great! Just a minor thing I'm confused about -- why are you here chatting on the 'net instead of sitting on your megayacht with a line of VCs in front of your door, willing to pay you gazillions of dollars for telling them which ideas are good...
why are you trying to make claims about them?
I didn't think that stating that libertarians like Ayn Rand was controversial. We are talking about political power and neither libertarians nor objectivists have any. In this context the fact that they don't like each other is a small family squabble in some far-off room of the Grand Political Palace.
intellectual fixing of errors
What is an "intellectual" fixing of an error instead of a plain-vanilla fixing of an error?
...Aubrey de Grey says there's a 50% chance it's 100 million a year for 10 years...
Where can I find them?
I'm not plugged into these networks, but Cato will probably be a good start.
apparently thinks that homosexuality is a disease
Kinda. As far as I remember, homosexuality is an interesting thing because it's not very heritable (something like 20% for MZ twins), but it also tends to persist across all cultures and ages, which points to a biological aspect. It should be heavily disfavoured by evolution, but apparently isn't. So it's an evolutionary puzzle. Cochran's theory -- which he freely admits lacks any evidence in its favour -- is that it's caused by a pathogen...
A pharmaceutical company with a strategy "let's try random molecules and do scientific studies whether they cure X" would go out of business.
Funny you should mention this.
...Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are screened against assays (tests) designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard...
Considering Rand was anti-libertarianism
Funny how a great deal of libertarians like her a lot... But we were talking about transforming the world. How did she transform the world?
wanna do heritability studies? cryonics?
Cryonics is not a science. It's an attempt to develop a specific technology which isn't working all that well so far. By heritability do you mean evo bio? Keep in mind that I read people like Gregory Cochran and Razib Khan so I would expect you to fix massive errors in their approaches.
Pointing me to large amounts of idiocy in published...
consider the influence Ayn Rand had
Let's see... Soviet Russia lived (relatively) happily until 1991 when it imploded through no effort of Ayn Rand. Libertarianism is not a major political force in any country that I know of. So, not that much influence.
What could stop them?
Oh dear, there is such a long list. A gun, for example. Men in uniform who are accustomed to following orders. Public indifference (a Kardashian lost 10 lbs through her special diet!).
...some would quickly be rich or famous, be able to contact anyone important, run presidential campaigns...
i don't suppose you or anyone else wrote down your reasoning
Correct! :-)
i disagree that it's false. you aren't giving an argument.
This is false under my understanding of the standard English usage of the word "torture".
then i guess you can continue your life of sin
Woohoo! Life of sin! Bring on the seven deadlies!!
So, a professor of physics failed to convert the world to his philosophy. Why are you surprised? That's an entirely normal thing, exactly what you'd expect to happen. Status has nothing to do with it; this is like discussing the color of your shirt while trying to figure out why you can't fly by flapping your arms.
I don't see what's to envy about Marx.
His ideas got to be very very popular.
I estimate 1000 great people with the right philosopher is enough to promptly transform the world
ROFL. OK, so one philosopher and 1000 great people. Presumably specially selected from early childhood, since normal upbringing produces mental cripples? Now, keeping in mind that you can only persuade people with reason, what next? How does this transformation of the world work?
ppl don't need to die, that's wrong
And yet everyone dies.
that's the part where you give an argument
Nope, that's true only if I want to engage in this discussion and I don't. Been there, done that, waiting for the t-shirt.
"torture" has an English meaning separate from emotional impact
Yes. Using that meaning, the sentence "I mean psychological "torture" literally" is false. Or did you mean something by these scare quotes?
if you wanted to have a productive conversation
LOL. Now, if you wanted to have a productive conversation...
It hasn't worked for him.
It didn't? What's your criterion for "worked", then? If you want to convert most of the world to your ideology, you'd better call yourself a god, or at least a prophet -- not a mere philosopher.
I guess Karl Marx is a counterexample, but maybe you don't want to use these particular methods of "persuasion".
everything good in all of history is from voluntary means
I understand this assertion. I don't think I believe it.
ppl initiate force when they fail to persuade
Kinda. When using force is simpler/cheaper than persuasion. And persuading people that they need to die is kinda hard :-/
The words have meanings.
Words have a variety of meanings which also tend to heavily depend on the context. If you want to convey precise meaning, you need not only to use words precisely, but also to convey to your communication partner which particular meaning you attach...
those people don't matter intellectually anyway
Ivory tower it is, then.
The right approach is to use purely voluntary methods which are not rightly described as passive.
How successful do you think these are, empirically?
I don't see the special difficulty with evaluating those statements as true or false.
I do. Quantum physics operates with very well defined concepts. Words like "cripple" or "torture" are not well-defined and are usually meant to express the emotions of the speaker.
"Not getting shunned" is not quite the same thing as attempting "persuasion via attaining social status".
Which method do you think can work for what you want to do? Any success so far?
accusations of "extremism" are not critical arguments
Of course they are not. But such perceptions have consequences for those who are not hermits or safely ensconced in an ivory tower. If you want to persuade (and you do, don't you?) the common people, getting labeled as an extremist is not particularly helpful.
I am not worried. However taking positions viewed as extremist by the mainstream (aka the normies) has consequences. Often you are shunned and become an outcast -- and being an outcast doesn't help with extinguishing the fire. There are also moral issues -- can you stand passively and just watch? If you can, does that make you complicit? If you can't, you are transitioning from a preacher into a revolutionary and that's an interesting transition.
The quotes above don't sound like they could be usefully labeled "true" or "not true" -- the...
I made no claims as to extremeness
Would you like to?
You are basically a missionary: you see savages engage in horrifying practices AND they lose their soul in the process. The situation looks like it calls for extreme measures.
So you don't feel these quotes represent an "extremist" point of view?
Current parenting and educational practices destroy children's minds. They turn children into mental cripples, usually for life. ... Almost everyone is broken by being psychologically tortured for the first 20 years of their life. Their spirit is broken, their rationality is broken, their curiosity is broken, their initiative and drive are broken, and their happiness is broken. And they learn to lie about what happened ...
...When I use words like "torture" regarding...
Though actually I have gone to curi's website (or, rather, websites; he has several) and read his stuff
So have I, but curi's understanding of "using references" is a bit more particular than that. Unrolled, it means "your argument has been dealt with by my tens of thousands of words over there [waves hand in the general direction of the website], so we can consider it refuted and now will you please stop struggling and do as I tell you".
Why, yes, I am being snarky.
Embrace your snark and it will set you free! :-D
And knowing how this works enables us to think better.
Sure, but that's not sufficient. You need to show that the effect will be significant, suitable for the task at hand, and the best use of the available resources.
Drinking CNS stimulants (such as coffee) in the morning also enables us to think better. So what?
And the breakthrough in AGI will come from epistemology.
How do you know that?
This is just more evasion.
Fail to ask a clear question, and you will fail to get a clear answer.
You know Yudkowsky also wants to save the world right?
Not quite save -- EY wants to lessen the chance that the humans will be screwed over by off-the-rails AI.
That Less Wrong is ultimately about saving the world?
Oh, grasshopper, maybe you will eventually learn that not all things are what they look like and even fewer are what they say they are.
you're in the wrong place
I am disinclined to accept your judgement in this matter :-P
...Hypothetically, su
That's not an answer. That's an evasion.
The question is ill-posed. Without context it's too open-ended to have any meaning. But let me say that I'm here not to save the world. Is that sufficient?
Epistemology tells you how to think.
No, it doesn't. It deals with acquiring knowledge. There are other things -- like logic -- which are quite important to thinking.
impute bad motives to curi?
I don't impute bad motives to him. I just think that he is full of himself and has... delusions about his importance and relationship to truth.
I still have no idea what "hostile to using references" is meant to mean.
It means you're unwilling to go to curi's website and read all he has written on the topic when he points you there.
Why are you here?
I've been here awhile. Your account is a few days old. Why are you here?
The world is burning and you're helping spread the fire.
Whether the world is burning or not is an interesting discussion, but I'm quite sure that better epistemology isn't going to put out the fire. Writing voluminous amounts of text on a vanity website isn't going to do it either.
Are you really going to argue for Pascal's Wager here?
Tell me which single hell you think you're avoiding and I'll point out a few others in which you will end up.
He used his philosophy skills to become a world-class gamer
Gold! This is solid gold!
Are you aware of the battles great ideas and great people often face?
Have you considered becoming a stand-up comedian?
The interesting thing is that the answer is "nothing". Nothing at all.
This is so ridiculously bombastic, it's funny.
So what has this Great Person achieved in real life? Besides learning Ruby and writing some MtG guides? Given that he is Oh So Very Great, surely he must have left his mark on the world already. Where is that mark?
There seems to be a complexity limit to what humans can build. A full GAI is likely to be somewhere beyond that limit.
The usual solution to that problem -- see EY's fooming scenario -- is to make the process recursive: let a mediocre AI improve itself, and as it gets better it can improve itself more rapidly. Exponential growth can go fast and far.
This, of course, gives rise to another problem: you have no idea what the end product is going to look like. If you're looking at the gazillionth iteration, your compiler flags were probably lost around the t...
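To put the "exponential growth can go fast and far" point in toy-model form (an illustration only, with an arbitrary improvement rate, not a claim about real AI):

```python
# Toy model of recursive self-improvement.
# Each generation the AI improves itself by a fraction k of its current capability.
k = 0.5            # assumed improvement rate per generation (arbitrary)
capability = 1.0   # starting capability, in arbitrary units
for generation in range(1, 11):
    capability *= 1 + k   # a more capable AI makes a bigger absolute gain next round
    print(f"generation {generation:2d}: capability = {capability:8.2f}")
# A constant proportional gain already gives exponential growth; if k itself
# rises with capability, the curve gets steeper still ("foom").
```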