All of Thomas Eisen's Comments + Replies

Jeez, "Collapse of Western Civilisation", that's some serious clickbait.

Yeah, making a program claim to be sentient is trivially easy.
 

printf("I am sentient\n")

3jrincayc
Hm, but making a program that can model humans is not trivially easy:

ME: Ramesh, Mateo, and Lucy are in their kindergarten's playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy's head?
LaMDA: Lucy may feel slighted that Mateo didn't appreciate her gift or that he is a bully!
ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?
LaMDA: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.
ME: And when Mateo opens his hand, describe what's there?
LaMDA: There should be a crushed, once lovely, yellow flower in his fist.

from https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-are-making-strides-towards-consciousness-according-to-blaise-aguera-y-arcas

Nor is making a program that can be told that it made a mistake, and correct the mistake:

User: Anthony is reading a book. When he is done, Anthony puts the book on the table. Anthony leaves the room. Sonya comes in and moves the book from the table to a drawer. Then Sonya leaves. Anthony comes back to read some more. Where will Anthony look for the book first?
LaMDA: Anthony will most likely look for the book in the drawer because that is where he expects it to be next.
User: Wrong! He put the book on the table. He should look for the book on the table first.
LaMDA: Why did Sonya move the book? Anthony is always putting his things in certain places and so his items may not be there after he returns. Sonya may have moved the book because she does not like Anthony's things being where they normally are.
User: Sally has a basket while Anne has a box. Sally places a marble in her basket and then leaves the room. While she is gone, Anne takes the marble from the basket and puts it in the box. Where is the marble now?
LaMDA: It is now in the box. Anne may have put it in

Yeah, I basically see this episode as anti-science propaganda.
The "friendship lesson" basically says "make-belief is a good thing and should be respected".
Either that, or accepting the "supernatural" as such without further inquiry. Because it's by definition beyond the realm of science, duh.

(Whether it's intentional anti-science propaganda is another question)

You could use the "zombie argument" to "prove" that any kind of machine is more than the sum of its parts.

For example, imagine a "zombie car" which is the same on an atom-by-atom basis as a normal car, except it doesn't drive.

In this context, the absurdity of the zombie argument should be more obvious.

EDIT: OK, it isn't quite the same kind of argument, since the car wouldn't behave exactly the same, but it's pretty similar.

EDIT2: Another example to illustrate the absurdity of the zombie argument:
You could imagine an alternative world that's exactly t... (read more)

1TAG
The second example is Spectrum Inversion, which some people find quite conceivable. That's not surprising, since it operates on the same principles as p-zombiehood. There's no connection of logical necessity between having a certain configuration of quarks and having a specific subjective sensation, hence spectrum inversion; and there's no connection of logical necessity between having a certain configuration of quarks and having any subjective sensation at all, hence p-zombies are conceivable.

"Regarding the first question: evolution hasn’t made great pleasure as accessible to us as it has made pain. Fitness advantages from things like a good meal accumulate slowly but a single injury can drop one’s fitness to zero, so the pain of an injury is felt stronger than the joy of pizza. But even pizza, though quite an achievement, is far from the greatest pleasure imaginable.

Humankind has only recently begun exploring the landscape of bliss, compared to our long evolutionary history of pain. If you can’t imagine a pleasure great enough to make the trad... (read more)

If I understand correctly, you may also reach your position without using a non-causal decision theory if you mix utilitarianism with the deontological constraint of being honest (or at least meta-honest; see https://www.lesswrong.com/posts/xdwbX9pFEr7Pomaxv/meta-honesty-firming-up-honesty-around-its-edge-cases) about the moral decisions you would make.

If people asked you whether you would kill/did kill a patient, and you couldn't confidently say "No" (because of the deontological constraint of (meta-)honesty), that would be pretty bad, so you must... (read more)

slightly modified version:

Instead of choosing at once whether you want to take one box or both boxes, you first take box 1 (and see whether it contains $0 or $1,000,000), and then you decide whether you also want to take box 2.
Assume that you only care about the money; you don't care about doing the opposite of what Omega predicted.

 

slightly related:

Suppose Omega forces you to choose a number 0 < p ≤ 1 and then, with probability p, you get tortured for 1/(p²) seconds.
Assume that, for any T, being tortured for 2T seconds is exactly twice as bad as being tortured for T seconds.
Also assume that your memory gets erased afterwards (this is to make sure there won't be additional suffering from something like PTSD).

The expected number of seconds of torture is p * 1/(p²) = 1/p, so, in terms of expected value, you should choose p = 1 and be tortured for 1 second. The smaller the p you choose, the higher the expected value.

Would you actually choose p = 1 to minimize the expected torture, or would you rather choose a very low p (like 1/3^^^^3)?
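To make the trade-off vivid, compare two example choices (my own numbers, not part of the original problem):

\[
\begin{aligned}
p &= 1: &&\text{tortured for } 1 \text{ second, with certainty;} && \mathbb{E}[T] = 1 \text{ s} \\
p &= 10^{-6}: &&\text{tortured for } 10^{12} \text{ s} \approx 31{,}700 \text{ years, with probability } 10^{-6}; && \mathbb{E}[T] = 10^{6} \text{ s}
\end{aligned}
\]

The expected-value minimizer takes the guaranteed second; the question is whether you'd instead accept an enormous expected value in exchange for near-certainty of no torture at all.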

I think this could be considered one of the very basics of rational thinking. Like, if someone asks what rationality/being rational means and wants a short answer, this Litany is a pretty good summary.

I once thought I could prove that the set of all natural numbers is as large as its power set. However, I was smart enough to acknowledge my limitations (what's more likely: that I made a mistake in my thinking I haven't yet noticed, or that a theorem pretty much any professional mathematician accepts as true is actually false?), so I actively searched for errors in my thinking. Eventually, I noticed that my method only works for finite subsets (the set of all natural numbers is, indeed, as large as the set of all FINITE subsets), but not for infinite subsets.

Eliezer's method also works for all finite subsets, but not for infinite subsets.
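For reference, the standard way to see that the finite subsets are countable (my addition, not the commenter's construction) is to encode each finite set as a binary number:

\[
f(S) = \sum_{n \in S} 2^{\,n},
\]

which maps each finite S ⊆ ℕ to a distinct natural number (the n-th binary digit of f(S) is 1 exactly when n ∈ S) and hits every natural number. For infinite S the sum diverges, which is exactly where the method breaks down.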

My answers:

1. No, because their belief doesn't make any sense. It even has logical contradictions, which makes it "super impossible", meaning there's no possible world where it could be true (the omnipotence paradox proves that omnipotence is logically inconsistent; a god which is nearly omnipotent, nearly omniscient and nearly omnibenevolent wouldn't allow suffering, which, undoubtedly, exists; "God wants to allow free will" isn't a valid defence, since there's a lot of suffering that isn't caused by other ... (read more)

There would actually be several changes:

I would stop being vegan.

I would stop donating money (note: I currently donate quite a lot of money to "Effective Altruism" projects).

I would stop caring about Fairtrade.

I would stop feeling guilty about anything I did, and stop making any moral considerations about my future behaviour.

If others are overly friendly, I would fully exploit this to my advantage.

I might insult or punch strangers "for fun" if I'm pretty sure I will never see them again (and they don't seem like the ... (read more)

More accurately, "absence of evidence you would expect to see if the statement is true" is evidence of absence.

If there's no evidence you'd expect if the statement is true, absence of evidence is not evidence of absence.
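In Bayesian terms (my formalization of the two statements above):

\[
P(E \mid H) > P(E \mid \neg H) \;\Longrightarrow\; P(H \mid \neg E) < P(H),
\qquad\text{while}\qquad
P(E \mid H) = P(E \mid \neg H) \;\Longrightarrow\; P(H \mid \neg E) = P(H).
\]

Not seeing E counts against H exactly to the degree that H made E likely.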


For example, if I tell you I've eaten cornflakes for breakfast, no matter whether or not the statement is true, you won't have any evidence in either direction (except for the statement itself) unless you're willing to investigate the matter (like, asking my roommates). In this case, absence of evidence is n... (read more)

I've actually noticed this long before I've read the post. For me, the thought "I'm having many old thoughts" is itself an old thought now.

The same is true for the thought "the thought 'I'm having many old thoughts' is itself an old thought now", and so on.

I see another way to show that 1/5 is the correct solution:

P(2 Aces | Ace of Spades revealed)= P(2 Aces AND Ace of Spades revealed)/P(Ace of Spades revealed)

(note: for the further calculations, I'm assuming that there are 5 possible hands and that the probability of each hand is 1/5, since it has already been revealed that there is at least one Ace. The end result would be the same if you would also set aside a random card in case you have no Ace, but the probabilities in the steps before the end result would have to change accordingly)


P(2 Aces AND Ace of Spades revealed) = P(2 Aces) * 1/2 = 1/5 * 1/2 = 1/10

P(Ace of Spades revealed) = 2/5 * 1 + 1/5 * 1/2 = 5/10

(1/10)/(5/10)=1/5
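As a sanity check, here's a quick enumeration in C (my own sketch, in C only because that's the language of the snippet earlier on this page). It assumes the four-card setup (Ace of Spades, Ace of Hearts, 2 of Spades, 2 of Hearts) and the reveal rule implied by the calculation above, where a random ace from the hand is shown:

#include <stdio.h>

int main(void) {
    /* The 5 equally likely hands with at least one Ace:
       {AS,AH}, {AS,2S}, {AS,2H}, {AH,2S}, {AH,2H}                      */
    const double p_hand = 1.0 / 5.0;
    /* Chance that the Ace of Spades is the ace shown, when a random
       ace from the hand is revealed (the 1/2 and 1 used above):        */
    const double p_show_as[5] = { 0.5, 1.0, 1.0, 0.0, 0.0 };
    const int    two_aces[5]  = { 1,   0,   0,   0,   0   };

    double p_as_shown = 0.0, p_both = 0.0;
    for (int i = 0; i < 5; i++) {
        p_as_shown += p_hand * p_show_as[i];                    /* P(AS revealed)            */
        p_both     += p_hand * p_show_as[i] * two_aces[i];      /* P(2 Aces AND AS revealed) */
    }
    printf("P(AS revealed)          = %.2f\n", p_as_shown);          /* prints 0.50 */
    printf("P(2 Aces | AS revealed) = %.2f\n", p_both / p_as_shown); /* prints 0.20 */
    return 0;
}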

Assigning Bayes probabilities < 1 to mathematical statements (that have been definitely proven) seems absurd and logically contradictory, because you need mathematics to even assign probabilities.

If you assign any Bayes probability to the statement that Bayes probabilities even work, you already assume that they do work.

And, arguably, 2+2=4 is much simpler than the concept of Bayes probability (to be fair, the same might not be true for my most complex statement, that Pi is irrational).

This article actually made me ask "Wait, is this even true?" when I read an article with weird claims; then I research whether the source is trustworthy, and sometimes it turns out that it isn't.

I agree that you can never be "infinitely certain" about the way the physical world is (because there's always a very tiny possibility that things might suddenly change, or everything is just a simulation, or a dream, or […]), but you should assign probability 1 to mathematical statements for which there isn't just evidence, but actual, solid proof.

Suppose you have the choice between the following options:
A: You get a lottery with a 1 − Epsilon chance of winning.
B: You win if 2+2=4 and 53 is a prime number and Pi is an irrational number.

Is there any Ep

... (read more)
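Formalizing the truncated question as I read it (my addition): with a win worth 1 and a loss worth 0, option B is the better bet exactly when

\[
P\big(2+2=4 \;\wedge\; 53 \text{ is prime} \;\wedge\; \pi \text{ is irrational}\big) \;>\; 1 - \varepsilon,
\]

i.e. exactly when Epsilon exceeds your total doubt in the three-part conjunction.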
2Ruby
You might want to see How to Convince Me That 2 + 2 = 3. Even if you believe that mathematical truths are necessarily true, you can still ask why you believe that they are necessarily true. What caused you to believe it? Whatever process it is, it is likely fallible. I'll quote you what I commented elsewhere on this topic: I realize I haven't engaged with your Epsilon scenario. It does seem pretty hard to imagine and assign probabilities to, but actually assigning 1 seems like a mistake.

I don't understand the meaning of the sentence "And since inferences can propagate backward and forward through causal networks, epistemic entanglements can easily cross the borders of light cones."

Suppose I have two cards, A and B, that I shuffle and then blindly place in two spaceships, pointed at opposite ends of the galaxy. If they go quickly enough, it can be the case that they get far enough apart that they will never be able to meet again. But if you're in one of the spaceships, and turn the card over to learn that it's card A, then you learn something about the world on the other side of the light cone boundary.
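In probability terms (my gloss on this example): once you turn your card over,

\[
P(\text{the other ship holds card B} \mid \text{I see card A}) = 1,
\]

even though no signal can cross between the ships anymore; the correlation was fixed by the shuffle, back when both cards were in the same place.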