Comments

Neph · 10y · 10

I've got one. I actually came up with this on my own, but I'm gratified to see that EY has adopted it.

cashback credit cards. these things essentially reduce the cost of all expenditures by 1%.

...but that's not where they get munchkiny. where they get munchkiny is when you basically arbitrage two currencies of equal value.

as a hypothetical example, say you buy $1000 worth of dollar bills for $1000. by using the credit card, it costs you $990, since you get $10 back. you then take it to the bank and deposit it for $1000, making a $10 profit. wash, rinse, repeat.

the catch is, most of them have an annual fee attached, so it's a use-it-enough-or-it's-not-worth-it scenario (note, though, that for most people, if they use it for rent and nothing else, they'll save about the same as the annual fee). also, most of them require good credit to acquire, so if you're a starving college student with loans, kiss that goodbye. also, you cannot directly withdraw cash and get the 1%, so you have to come up with a way to efficiently exchange a purchasable resource for money.
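a rough sketch of that arithmetic, in Python (the 1% rate is the one mentioned above; the annual fee and cycle size are purely illustrative assumptions):

# rough arithmetic for the cashback cycle; all figures below are illustrative
CASHBACK_RATE = 0.01       # the 1% cashback mentioned above
ANNUAL_FEE = 95.00         # hypothetical annual fee; varies by card

def profit_per_cycle(amount):
    """buy `amount` of a cash-equivalent, deposit it back, keep the cashback."""
    return amount * CASHBACK_RATE

def cycles_to_cover_fee(cycle_amount):
    """how many buy-and-deposit cycles it takes to cover the annual fee."""
    return ANNUAL_FEE / profit_per_cycle(cycle_amount)

print(profit_per_cycle(1000))      # 10.0 -> $10 profit per $1000 cycle
print(cycles_to_cover_fee(1000))   # 9.5  -> roughly ten cycles to break even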

Neph · 10y · 40

it definitely worked in at least one happily married case

so did "find god's match for you"

if we're looking at all the successful cases, but none of the unsuccessful ones, of course we're going to get positive results. also, as positive results go, "at least one" success is hardly reassuring.

Neph · 10y · 00
def checkMorals():
    # placeholder: simulate the philosophy student's brain (not a real API)
    simulated_brain = simulate_brain("philosophy student")
    # does the simulated brain find the proposed action offensive?
    if simulated_brain.is_offended():
        return False
    else:
        return True

if checkMorals():
    keep_doing_ai_stuff()   # placeholder for whatever the AI was about to do

there. that's how we tell an AI capable of being an AI and capable of simulating a brain not to take actions which the simulated brain thinks offend against liberty, as implemented in Python.

Neph · 10y · 50

does anyone else find it ironic that we're using fictional evidence (a story about homeopathic writers that don't exist) to debate fictional evidence?

Neph · 11y · 00

I previously made a comment that mistakenly argued against the wrong thing. so to answer the real question: no.

the person who commented on my response said "$50 to the AMF gets someone around an additional year of healthy life."

but here's the thing: there's no reason it couldn't give another person (possibly a new child) an additional year of healthy life.

a life is a life, and $50 is $50, so unless the charity is ridiculously efficient (in which case you should be looking at how to become that efficient yourself), the utility would be the same (when comparing giving to AMF vs. doing the same thing as AMF for someone who may or may not be your child).

however, with the having-a-child option there is one more life (and all the utility therein) than with the charity option; the people the charity would benefit would exist in either case. and since we've just shown that it doesn't really matter whether you donate to AMF or do the same thing as AMF for someone, that puts having a child at greater utility.
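a minimal sketch of that comparison, treating a healthy life-year as one unit of utility (the $50-per-life-year figure is the one quoted above; everything else is an illustrative assumption):

# toy utility comparison; assumes $50 buys about one healthy life-year either way,
# per the AMF figure quoted above; the other numbers are illustrative placeholders
DOLLARS = 50
LIFE_YEARS_PER_DOLLAR = 1 / 50        # same marginal efficiency assumed for both options
UTILITY_OF_AN_ADDED_LIFE = 1.0        # placeholder for whatever an extra existing person is worth

# option A: donate the $50 to AMF; the child is never born
utility_donate = DOLLARS * LIFE_YEARS_PER_DOLLAR

# option B: spend the same $50 on your own child's health;
# the marginal health benefit is the same, but one more life now exists
utility_child = DOLLARS * LIFE_YEARS_PER_DOLLAR + UTILITY_OF_AN_ADDED_LIFE

print(utility_donate, utility_child)   # 1.0 2.0 -> the child option wins by the extra life's utility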

Neph · 11y · 00

(puts on Morpheus glasses) what if I told you... many of this site's members are also members of those sites?

Neph · 11y · 60

I know this may come off as a "no true scotsman" argument, but this is a bit different - bear with me. consider christianity (yes, I'm bringing religion into this, sort of...). in the beginning, we have a single leader preaching a set of morals that is (arguably) correct from a utilitarian standpoint, and calling all who follow that set "christians." by so doing, he created what Influence: Science and Practice would call "the trappings of morality." fast-forward a few hundred years, and we have people who think they can do whatever they like and it'll be morally right, so long as they wear a cross while doing it.

the parallel to the current situation: we set up science - a set of rules that will always result in truth, if followed. by so doing, we created the trappings of rightness. fast-forward to now, and we have a bunch of people who think they can decide whatever they want, and it'll be right, so long as they wear a labcoat while doing it. understand, that's a bit of a metaphor; in truth, these "scientists" (scoff) simply learned the rules of science by rote without really understanding what they mean. to them, reproducible results are just something nice to have as part of the ritual of science, instead of something completely necessary to get the right answer.

...all of this stuff I said, by the way, is said in one of the core sequences, but I'm not sure which. I may reply to myself later with the link to the sequence in question.

Neph · 11y · 00

remember that Bayesian evidence never reaches 100%, which leaves middle ground: upon hearing another rationalist's viewpoint, instead of not shifting (as you suggest) or averaging your estimate with theirs (as AAT suggests), why not adjust your viewpoint based on how likely the other rationalist is to have assessed correctly? i.e., you believe X is 90% likely to be true; the other rationalist believes it's 90% likely to be false. suppose this rationalist is very reliable, say in the neighborhood of 75% accurate: you should adjust your viewpoint to "X is 75% likely to be 10% likely to be true, and 25% likely to be 90% likely to be true," or around 30% likely, assuming I did my math right. now suppose he's not very reliable, say a creationist talking about evolution, at 10%: you should adjust to "X is 10% likely to be 10% likely, and 90% likely to be 90% likely," or 82%. ...of course, this doesn't factor in your own fallibility.
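a quick sketch of that mixture arithmetic (the reliability and probability figures are the ones used in the comment above):

# weight the other person's estimate by how likely they are to have assessed correctly:
# adopt their estimate with probability r, keep your own with probability (1 - r)
def reliability_weighted(p_mine, p_theirs, reliability):
    return reliability * p_theirs + (1 - reliability) * p_mine

# a very reliable interlocutor (75%): 0.75 * 0.10 + 0.25 * 0.90
print(reliability_weighted(0.90, 0.10, 0.75))   # 0.30

# an unreliable one (10%, e.g. the creationist): 0.10 * 0.10 + 0.90 * 0.90
print(reliability_weighted(0.90, 0.10, 0.10))   # 0.82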
