Comment author: negamuhia 11 December 2012 03:29:24PM 0 points [-]

This happens to me sometimes, and I sort of bring myself back to normality by reminding myself of the fact that the things my meat-brain chooses to bring to my attention are out of my control (for a short time) when I'm in panic mode. Other facts I recall: I have a reflectively-inconsistent meat brain, I was raised in a Christian home (I'm atheist now), and just about everything relevant that I can remember from Kahneman (correspondence bias etc.) and other psychology texts. Also, the Sequences.

Annoying aliefs are annoying.

A thing to do would be to condense all that gratitude into the word "awesome". Best said with a wide grin.

Comment author: John_Maxwell_IV 21 November 2012 09:58:04AM 19 points [-]

Hm. It seems like a couple of your examples may involve value drift due to behavioral reinforcement. The behavior of buying expensive stuff gets reinforced when friends act impressed, or the behavior of being a jerk gets reinforced when it gets you laid. If it's an entirely behavioral phenomenon, it seems possible that the "value drift" is only a drift in revealed preferences, not reflective ones.

Schelling fences or similar come to mind as a way to prevent this sort of behavior change in oneself ("be a decent person always", "donate 30% of my income").

I've had a fair amount of success detailing policies like these for myself to follow. My policies typically have a lag time before changes take effect, which I've found to be key: it lets me experiment with what's convenient, workable, and a reasonable compromise, without ditching the entire policy whenever I encounter problems. I don't think the effectiveness of these policies is best explained in terms of game theory, however. The way it feels from the inside is that if I draw up a policy when I'm in a relatively high-willpower state, it's as though I can "lock in" a bunch of future policy-related decisions that I'll later make using minimal willpower.

If people are interested in the details of what I've learned makes for effective policy administration, I could probably write a discussion post about it.

Comment author: negamuhia 26 November 2012 12:28:02PM 3 points [-]

I know it's days later... but I'm interested.

Comment author: negamuhia 13 November 2012 08:59:54AM *  1 point [-]

I realize (and I'm probably not alone in this) that I've been implicitly using this {meta-work, strategy-work, direct-work} process to try and figure out where/how to contribute. Thanks for this guide/analysis.

Comment author: mapnoterritory 02 November 2012 08:48:30AM *  3 points [-]

Since we still don't have a lectures/talks thread, I'm putting it here:

http://fora.tv/conference/the_singularity_summit_2012/buy_programs

The Singularity Summit 2012

Content:

  • Singularity Summit: Opening Remarks with Nathan Labenz
  • Temple Grandin: How Different People Think Differently
  • Singularity Summit: Olah, Deming & Other Thiel Fellows
  • Julia Galef: Rationality and the Future
  • Luke Muehlhauser: The Singularity, Promise and Peril
  • Linda Avey: Personal Genomics
  • Steven Pinker: A History of Violence
  • Ray Kurzweil: How to Create a Mind
  • Q&A: Economist Daniel Kahneman, the Pioneer of Heuristics
  • Melanie Mitchell: AI and the Barrier of Meaning
  • Author Carl Zimmer: Our Viral Future
  • Robin Hanson: Extraordinary Society of Emulated Minds
  • Jaan Tallinn: Why Now? A Quest in Metaphysics
  • John Wilbanks: Your Health, Your Data, Your Choices
  • Stuart Armstrong: How We're Predicting AI
  • Vernor Vinge: Who's Afraid of First Movers?
  • Peter Norvig: Channeling the Flood of Data
Comment author: negamuhia 04 November 2012 03:04:27PM 0 points [-]

I'd love to get these as audio files. I'd even volunteer to transcribe them if that were to happen.

Comment author: negamuhia 04 November 2012 02:57:46PM *  1 point [-]

"AI Will Be Maleficent By Default"

seems like an a priori, predetermined conclusion (bad science, of the "I want this to be true" kind) rather than a research result (a good problem statement for AGI risk research). A better title would rephrase it as a research question:

"Will AI Be Maleficent By Default?"

Comment author: negamuhia 31 October 2012 01:13:32PM *  2 points [-]

famous dinner parties at which the Illuminati congregate.

Upvoted. Your sense of humor is just awesome. Unless this is one humongous Fnord.

Comment author: MichaelAnissimov 20 October 2012 03:29:40AM 4 points [-]

Just so everyone knows, the talks will be going online at Fora.tv over the next week or so.

Comment author: negamuhia 30 October 2012 01:33:05PM 0 points [-]

I've downloaded most Summit talks from archive.org as .ogg audio files, and I'm kinda partial to listening to them podcast-style, i.e. while I do other stuff. So, will there be audio versions of the talks? Or is there a way I can download the videos from fora.tv, do a quick audio rip, then upload the resulting .ogg files to the archive?
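(For anyone who ends up with the video files locally, here's a minimal sketch of the audio-rip step, assuming ffmpeg with libvorbis is installed; the filenames are hypothetical.)

```python
from pathlib import Path

def ogg_rip_command(video_path: str, out_dir: str = ".") -> list[str]:
    """Build an ffmpeg command that drops the video stream and
    re-encodes the audio track as Ogg Vorbis."""
    out = Path(out_dir) / (Path(video_path).stem + ".ogg")
    return [
        "ffmpeg",
        "-i", video_path,        # input: a downloaded talk video
        "-vn",                   # discard the video stream
        "-acodec", "libvorbis",  # encode audio as Vorbis
        "-q:a", "5",             # VBR quality (roughly 160 kbps)
        str(out),
    ]

# To actually run it:
#   import subprocess
#   subprocess.run(ogg_rip_command("pinker_talk.mp4"), check=True)
```

Building the command as a list (rather than a shell string) avoids quoting problems with filenames containing spaces.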

Comment author: RobertLumley 01 October 2012 03:16:03PM 0 points [-]

Fiction Books Thread

Comment author: negamuhia 02 October 2012 02:35:06PM *  2 points [-]

Rudy Rucker's Ware Tetralogy -- I'm this close to starting Freeware. I'm about to finish book 2 (The Golden Apple) of the Illuminatus! Trilogy; in fact, that's what I'm currently reading. :) I just got Rapture of the Nerds by Cory Doctorow and Charlie Stross, though I haven't got around to reading it yet. And I have about 39 fiction and nonfiction books on my current reading list, so... phew.

Comment author: RobertLumley 01 October 2012 03:15:57PM 0 points [-]

Music Thread

Comment author: negamuhia 02 October 2012 02:27:22PM *  0 points [-]

I've been listening to Shpongle a lot lately; the two albums Nothing Lasts and Are You Shpongled? have some pretty gnarfy Muzaks. Also, though this has been a constant in my life for about four years now: Juno Reactor's Gods and Monsters. Beethoven's 9th (my favourite), via Günter Wand. The Social Network's OST is also quite nice. Question: who loves chiptune?

Comment author: negamuhia 12 September 2012 09:40:04AM 0 points [-]

This reminds me of something I read in Richard Dawkins' "The God Delusion" about the "Zeitgeist" of the particular age you find yourself born into. However, I think the "sane" thing to do here would be to conform, since non-conformism doesn't even carry with it the benefit of having the technology -- or even the knowledge -- to save your wife, which is certainly the case in this century. I see what evand says below about:

[the behaviour] is also done to gain the societal safe harbor protection

but this is only valid in a world where you absolutely cannot get any better. I certainly get the sense of "existential despair" this brings.
