Comment author: steven0461 31 July 2012 06:42:13AM 0 points [-]

But if even a tiny fraction of future observers thinks seriously about the hypothesis despite knowing the solution...

Comment author: Grognor 31 July 2012 06:44:21AM *  1 point [-]

My current guess is that having the knows-the-solution property puts them in a different reference class. But if even a tiny fraction deletes this knowledge...

Comment author: Grognor 30 July 2012 02:07:01AM *  4 points [-]

For a while, I assumed that I would never understand UDT. I kept getting confused trying to understand why an agent wouldn't want or need to act on all available information and stuff. I also assumed that this intuition must simply be wrong because Vladimir Goddamned Nesov and Wei Motherfucking Dai created it or whatever and they are both straight upgrades from Grognor.

Yesterday, I saw an exchange involving Mitchell Porter, Vladimir Nesov, and Dmytry_messaging. The latter insisted that one-boxing in transparent Newcomb's (when the box is empty) was irrational, and I downvoted him because of course I knew he was wrong. Today at work (it is a mindless job), I thought for a while about the reply I would have given private_messaging if I did not consider it morally wrong to reply to trolls. I started thinking things like how he either doesn't understand reflective consistency or doesn't understand why it's important, and how if you two-box then Omega predicted correctly, and I also thought,

"Well sure the box is empty, but you can't condition on that fact or else-"

It hit me like a lightning bolt. That's why it's called updateless! That's why you need to- oh man I get it I actually get it now!

I think this happened because of how much time I've spent thinking about this stuff and also thanks to just recently having finished reading the TDT paper (which I was surprised to find contained almost solely things I already knew).

Comment author: wedrifid 29 July 2012 01:32:39AM *  9 points [-]

It's like religion. If you accept that God and Hell are real, then becoming a fundamentalist Christian and trying as hard as you can to convert as many people as possible is the only ethical option.

Nonsense. That is the most ethical of the options that your brain is willing to provide you when you ask it "what is the best option?" But if someone actually had that belief, a more ethical option would be to murder as many Muslims, Atheists and Buddhists (of childbearing age) as you can. The chance that you will successfully convert any given individual is tiny, and if you allow them to live to breed they will raise children doomed to hell.

An even better option is to kill all males who will not convert and keep all women (Christian and otherwise) pregnant constantly with twins (IVF, fertility drugs). The children are to be taken and raised to be loyal to your faith.

(Or you build an FAI to tile the universe with Christians with the minimum possible lifespan to qualify for heaven.)

Comment author: Grognor 29 July 2012 02:35:34AM 4 points [-]
Comment author: Grognor 25 July 2012 05:25:31AM *  3 points [-]

Both of the studies linked to at the top of this post, on which the entire post is based, have been discredited. Even if they were true, I think it was a stretch to go from those to postulating a generalized verbal overshadowing bias.

With the benefit of hindsight I can say that this post was probably a mistake, which leaves me a bit dumbfounded at its karma score of 61 and endorsement by Newsome. When I scrolled down to the bottom I saw that I had already downvoted it, which made me even more confused.

Comment author: Gaviteros 19 July 2012 07:03:39AM 10 points [-]

Hello LessWrong! (I posted this in the other July 2012 welcome thread as well. :P Though apparently it has too many comments at this point, or something to that effect.)

My name is Ryan and I am a 22 year old technical artist in the video game industry. I recently graduated with honors from the Visual Effects program at Savannah College of Art and Design. For those who don't know much about the industry I am in, my skill set is somewhere between a software programmer, a 3D artist, and a video editor. I write code to create tools that speed up workflows for the 3D things I or others need to do to make a game or cinematic.

Now, I found lesswrong.com through the Harry Potter and the Methods of Rationality podcast. Up until that point I had never heard of Rationalism as a current state of being... so far I greatly resonate with the goals and lessons that have come up in the podcast, and with what I have seen about rationalism. I am excited to learn more.

I wouldn't go so far as to claim the label for myself as of yet, as I don't know enough and I don't particularly like labels for the most part. I also know that I have several biases; I feel like I know the reasons and causes for most of them, but I have not removed them from my decision-making process.

Furthermore I am not an atheist, nor am I a theist. I have chosen to let others figure out and solve the questions of sentient creators through science, and I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything. I just try to leave religion out of most of my determinations.

Anyway! I'm looking forward to reading and discussing more with all of you!

Current soapbox: the educational system's de-emphasis of critical thinking skills.

If you are interested you can check out my artwork and tools at www.ryandowlingsoka.com

Comment author: Grognor 25 July 2012 04:58:26AM *  2 points [-]

I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything.

I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism.

Also, please don't call what we do here, "rationalism". Call it "rationality".

Comment author: Grognor 24 July 2012 10:12:35PM *  7 points [-]

I really wish you would have put a disclaimer on these posts, along the lines of:

One of the assumptions The Art of Strategy makes is that rational agents use causal decision theory. This is not actually true, but I'll be using their incorrect use of "rationality" in order to make you uncomfortable.

Anyway,

Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) [...]

this is the problem with writing out your whole sequence before submitting even the first post. You make the later posts insufficiently responsive to feedback and make up poor excuses for not changing them.

Edit: Why yes, wedrifid, there was. Fixed.

Comment author: fubarobfusco 23 July 2012 10:51:00PM *  14 points [-]

(First, my apologies to Will and Divia for the unpleasantness of this subthread topic.)

The evidence in the human genome suggests that the majority of all the men who made it to sexual maturity apparently died without offspring; their womenfolk rejected them in favor of their tribes' alpha bad boys

The former claim there is evidence for. For the latter claim ("their womenfolk...") there cannot today exist evidence in the human genome, since we don't know what "alpha bad boy" would look like in a genome. The latter claim scarcely rises to the level of "speculation" — I'd call it "drama".

Since we know that war and kidnap-rape are major activities of young men in many societies in situations resembling the ancestral environment (see Pinker 2011), it seems that we should expect the discrepancy to be due to young men killing each other and kidnapping and raping young women. Rape is a lot less effective a reproductive strategy than it once was, thanks to such social innovations as criminal prosecutions and abortions.

(On the other hand, perhaps by "alpha bad boy" you actually meant "murderer, rapist, and slaveholder" whereas I took it as meaning "seducer"?)

Comment author: Grognor 24 July 2012 03:29:04AM 2 points [-]

I initially had the parent upvoted, but I retracted it on learning that the grandparent comment is speaking from experience, and since I have the same experience, it's difficult not to believe.

Comment author: Grognor 24 July 2012 12:57:53AM *  3 points [-]

Could someone please explain to me exactly, precisely, what a utility function is? I have seen it called a perfectly well-defined mathematical object, and not at all vague, but as far as I can tell, no one has ever explained what one is, ever.

The words "positive affine transformation" have been used, but they fly over my head. So the For Dummies version, please.
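For what it's worth, the "positive affine transformation" point can be made concrete in a few lines of Python. This is a sketch, not anything from the thread; the outcomes, probabilities, and numbers are made up purely for illustration. The idea: a utility function is just a map from outcomes to real numbers, an agent prefers whichever lottery has higher expected utility, and rescaling the whole function by u'(x) = a·u(x) + b with a > 0 changes none of those comparisons.

```python
# A utility function is just a map from outcomes to real numbers.
u = {"apple": 1.0, "banana": 3.0, "cherry": 1.5}

# Two lotteries (probability distributions over outcomes).
lottery_a = {"apple": 0.5, "banana": 0.5}
lottery_b = {"cherry": 1.0}

def expected_utility(lottery, util):
    """Expected utility of a lottery under a given utility function."""
    return sum(p * util[outcome] for outcome, p in lottery.items())

# Preferences between lotteries are comparisons of expected utility.
prefers_a = expected_utility(lottery_a, u) > expected_utility(lottery_b, u)

# A positive affine transformation u'(x) = a*u(x) + b, with a > 0,
# rescales every expected utility the same way, so it cannot flip
# any comparison. That's why utility functions are only defined
# "up to positive affine transformation".
a, b = 7.0, -4.0
u2 = {outcome: a * val + b for outcome, val in u.items()}
prefers_a_again = expected_utility(lottery_a, u2) > expected_utility(lottery_b, u2)

assert prefers_a == prefers_a_again  # same preference ordering either way
```

So "utility function" never pins down particular numbers, only an ordering over lotteries: any two functions related by a positive affine transformation describe the same agent.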

Comment author: loup-vaillant 21 July 2012 07:03:12PM 1 point [-]

I do not lie to my readers

Eliezer Yudkowsky.

Comment author: Grognor 23 July 2012 06:50:16AM 1 point [-]

Better yet,

I don't lie.

-Eliezer Yudkowsky

Comment author: John_Maxwell_IV 17 July 2012 05:17:26AM 1 point [-]

If they were capable of doing so they would realize that often the discomfort experienced by the minority fraction of readers does not at all outweigh the investment needed to accommodate them.

Yep, I agree, and specifically acknowledged that possibility. In this case, my current guess is that it's not worthwhile for lukeprog to rework his video, but it would be worthwhile to spend a few minutes thinking about gender if he were to make it again.

I've seen people on the internet use "white knight" to refer to men who take the pro-female position in gender-oriented online arguments. Is this just namecalling, or is there a technical difference between "white knights" and men who favor the pro-female position on collective utility maximization grounds?

Comment author: Grognor 23 July 2012 02:06:28AM 3 points [-]

I believe this term is used solely to countersignal and has no more technical meaning than "guy I don't like who defends females".
