weightt an

Median Internet Footprint Liver


Before sleeping, I assert that the 10th digit of π equals the number of my eyes. After I fall asleep, seven coins will be flipped. Assume quantum uncertainty affects how the coins land. I survive the night only if the number of my eyes equals the 10th digit of π and/or all seven coins land heads; otherwise I will be killed in my sleep.

Will you wake up with 3 eyes?

Like, your decisions to name some digit are not equally probable. Maybe you are the kind of person who would name 3 only if 10^12 cosmic rays hit you in a precise sequence or whatever, and you name 7 with 99% probability.

AND if you are very unlikely to name the correct digit, you will be unlikely to enter this experiment at all, because you die in the majority of timelines. I.e. at t1 you decide whether to enter. At t2 the experiment happens, or you just waste time doomscrolling. At t3 you look up the digit. Your distribution at t3 is something like 99% copies of you who chickened out.
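The filtering effect can be made concrete with a toy simulation (a sketch under assumed numbers: a 1% chance of having named the correct digit and seven independent fair coins; `run_trials` and its parameters are illustrative, not from the original comment):

```python
import random

def run_trials(n_trials=100_000, p_correct_digit=0.01, n_coins=7):
    """Toy model of the setup: you survive the night iff you named
    the correct digit of pi OR all coins land heads."""
    random.seed(0)  # reproducible
    survived = 0
    for _ in range(n_trials):
        digit_ok = random.random() < p_correct_digit
        all_heads = all(random.random() < 0.5 for _ in range(n_coins))
        if digit_ok or all_heads:
            survived += 1
    return survived / n_trials

# Analytically: P(survive) = p + (1 - p) * 2**-7
#                          = 0.01 + 0.99 / 128 ≈ 0.0177
print(run_trials())
```

Conditioning on waking up then reweights these branches heavily toward "named the right digit", which is the t3-distribution point above.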

Another possibility is the Posthuman Technocapital Singularity: everything goes in the same approximate direction, there are a lot of competing agents but without sharp destabilization or power concentration, and Moloch wins. Probably wins, idk.

https://docs.osmarks.net/hypha/posthuman_technocapital_singularity

I also played the same game but with historical figures. The Schelling point is Albert Einstein by a huge margin: 76% (19 / (19 + 6)) of them say Albert Einstein. The Schelling point figure is Albert Einstein! Schelling! Point! And no one said Schelling!

In the first iteration of the prompt, his name was not mentioned. Then I became more and more obvious in my hints, and in the final iteration, I even bolded his name and said the prompt was the same for the other participant. And it's still Einstein!

https://i.imgur.com/XLkXTsk.png

Which means 2:1 betting odds
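For reference, the mapping between betting odds and credence is simple arithmetic (the helper name below is mine): odds of a:b against an event correspond to a credence of b/(a+b), so 2:1 odds match a credence of 1/3.

```python
from fractions import Fraction

def odds_to_credence(against: int, for_: int) -> Fraction:
    """Convert 'against:for' betting odds into a probability."""
    return Fraction(for_, against + for_)

print(odds_to_credence(2, 1))  # 2:1 against -> credence 1/3
```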

So, she shakes the box contemplatively. There is a mechanical calendar inside. She knows the betting odds of it displaying "Monday" but not the credence. She thinks that's really, really weird.

Well, idk. My opinion here is that you bite a weird bullet, which I'm very ambivalent about. I think the "now" question makes total sense, and you factor it out of your model into some separate parts.

Like, can you add some additional decision problems involving the calendar to the Sleeping Beauty problem? Will it work seamlessly?

Well, now! She looks at the box and thinks: there is definitely a calendar in some state. What state? What would happen if I opened it?

Let's say there is an accurate mechanical calendar in a closed box in the room. She can open it but won't. Should she have no expectation about what state this calendar is in?

How many randomly sampled humans would I rather condemn to torture to save my mother? Idk, more than one, tbh.

A pet that someone purchased only for the joy of torturing it, and not for any other service?

Unvirtuous. This human is disgusting, as they consider it fun to deal a lot of harm to persons in their direct relationships.

Also I really don't like how you jump into "it's all rationalization" with respect to values!

Like, the thing about utilitarian-ish value systems is that they deal poorly with the preferences of other people (they mostly ignore them). Preference-based views deal poorly with the creation and non-creation of new persons.

I can red-team them and find genuinely murderous decision recommendations.

Maybe, instead of anchoring to the first proposed value system, it's better to understand what the values of real-life people actually are? Maybe there is no simple formulation of them; maybe it's a complex thing.

Also, disclaimer: I'm totally for making animals better off! (Including wild animals.) I just don't think it's an inference from some larger moral principle; it's just my aesthetic preference, and it's not that strong. And I'm kinda annoyed at EAs who by "animal welfare" mean handing out band-aids to farm chickens. Like, why? You could just help make lab-grown meat a thing faster; it's literally the only thing that's going to change this.

I propose to sic o1 on them to distill it all into something readable/concise. (I tried to comprehend it and failed / got distracted).

I think some people pointed out in the comments that their model doesn't represent the probability of "what day it is NOW", btw.

I think you present a false dichotomy here: some impartial utilitarian-ish view vs. hardcore moral relativism.

Pets are sometimes called companions. It's as if they provide some service and receive some service in return, all with trust and positive mutual expectations, and that demands some moral consideration/obligation, just like friendship or family. I think a mutualist/contractualist framework accounts for this better. It predicts that such relationships will receive additional moral consideration, and they actually do in practice. And it predicts that wild animals won't, and they don't, in practice. Success?

So, people just have the same attitudes about animals as about any other person, exacerbated by how little status and power animals have. Especially shrimp. Who the fuck cares about shrimp? You can only care about shrimp if you galaxy-brain yourself with some weird ethics system.

I agree that they have no consistent moral framework backing up that attitude, but it's not fair to force them into your own with trickery or frame control.

>Extremely few people actually take the position that torturing animals is fine

Wrong. Most humans would be fine answering that torturing 1 million chickens is an acceptable tradeoff to save 1 human. You just don't torture them for no reason, as that's unvirtuous and icky.
