Comment author: [deleted] 13 July 2013 09:40:31PM 5 points [-]

I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)

In response to comment by [deleted] on "Stupid" questions thread
Comment author: Turgurth 13 July 2013 11:57:07PM 3 points [-]

Try some exposure therapy for whatever it is you're often afraid of. Can't think of anything you're afraid of? I'd be surprised if you're completely immune to every common phobia.

Comment author: letter7 02 June 2013 02:02:12PM 7 points [-]

Something I have been thinking about constantly of late: your voice has a big impact on how you come across, and it's one of those things that people generally take for granted. It's not just your speech patterns and filler words I'm referring to, but also intonation, fluency, and so on. I would even risk saying it can be as important as your appearance, or more so. (If you stumble every five or ten words, you can't really convey your ideas, can you?)

In this vein, is there a viable alternative for someone who wants to improve his own voice? I have already thought about a voice-acting tutor, but I generally prefer ways to improve that don't require paying one.

Comment author: Turgurth 04 June 2013 05:00:53PM 2 points [-]

Advice from the Less Wrong archives.

Comment author: Turgurth 23 May 2013 02:58:13AM *  0 points [-]

Very interested.

Also, here's a bit of old discussion on the topic I found interesting enough to save.

Comment author: Grif 02 February 2013 01:12:40AM *  24 points [-]

If someone doesn’t value evidence, what evidence are you going to provide that proves they should value evidence? If someone doesn’t value logic, what logical argument would you invoke to prove they should value logic?

--Sam Harris

Comment author: Turgurth 03 February 2013 01:12:28AM 8 points [-]

If you can't appeal to reason to make reason appealing, you appeal to emotion and authority to make reason appealing.

In response to comment by [deleted] on Open Thread, January 16-31, 2013
Comment author: tgb 17 January 2013 09:52:46PM 6 points [-]

I think it would be a good idea to cross post or link to your blog posts in Discussion, at least until people like myself get a feel for whether this blog is something we want to follow on its own. I don't know if there are strong community pressures against making posts that are just links to your own blog, though.

Comment author: Turgurth 18 January 2013 05:43:59AM 1 point [-]

I don't think there are any such community pressures, as long as a summary accompanies the link.

Comment author: Douglas_Knight 16 January 2013 05:24:19AM *  3 points [-]

The featured articles are controlled by the wiki, and thus the history is accessible, if awkward.

Comment author: Turgurth 16 January 2013 06:41:40AM 1 point [-]

Thanks!

Comment author: Turgurth 16 January 2013 03:21:52AM 3 points [-]

I recently noticed "The Fable of the Dragon-Tyrant" under the front page's Featured Articles section, which caused me to realize that there's more to Featured Articles than the Sequences alone. This particular article (an excellent one, by the way) is also not from Less Wrong itself, yet is obviously relevant to it; it's hosted on Nick Bostrom's personal site.

I'm interested in reading high-quality non-Sequences articles (I'm making my way through the Sequences separately using the [SEQ RERUN] feature) relevant to Less Wrong that I might have missed, so is there an archive of Featured Articles? I looked, but was unable to find one.

Comment author: Locke 22 April 2012 02:21:02AM 10 points [-]

What practical things should everyone be doing to extend their lifetimes?

Comment author: Turgurth 23 April 2012 03:00:25AM *  1 point [-]

Michaelcurzi's How to avoid dying in a car crash is relevant. Bentarm's comment on that thread makes an excellent point regarding coronary heart disease.

There is also Eliezer Yudkowsky's You Only Live Twice and Robin Hanson's We Agree: Get Froze on cryonics.

In response to against "AI risk"
Comment author: CarlShulman 11 April 2012 11:46:35PM *  27 points [-]

Speaking only for myself, most of the bullets you listed are forms of AI risk by my lights, and the others don't point to comparably large, comparably neglected areas in my view (a view formed after significant personal efforts to research nuclear winter, biotechnology risk, nanotechnology, asteroids, supervolcanoes, geoengineering/climate risks, and non-sapient robotic weapons). Throwing in all x-risks and the kitchen sink, regardless of magnitude, would be virtuous in a grand overview, but it doesn't seem necessary when trying to create good source materials in a more neglected area.

bio/nano-tech disaster

Not AI risk.

I have studied bio risk (as has Michael Vassar, who has even done some work encouraging the plucking of low-hanging fruit in this area when opportunities arose), and it seems to me that it is both a smaller existential risk than AI and nowhere near as neglected. The experts in this survey, my conversations with others in the field, and their written work all point the same way.

Bio existential risk seems much smaller than bio catastrophic risk (and not terribly high in absolute terms), while AI catastrophic and x-risk seem close in magnitude, and much larger than bio x-risk. Moreover, vastly greater resources go into bio risks, e.g. Bill Gates is interested and taking it up at the Gates Foundation, governments pay attention, and there are more opportunities for learning (early non-extinction bio-threats can mobilize responses to guard against later ones).

This is in part because most folk are about as easily mobilized against catastrophic as existential risks (e.g. Gates thinks that AI x-risk is larger than bio x-risk, but prefers to work on bio rather than AI because he thinks bio catastrophic risk is larger, at least in the medium-term, and more tractable). So if you are especially concerned about x-risk, you should expect bio risk to get more investment than you would put into it (given the opportunity to divert funds to address other x-risks).

Nanotech x-risk would seem to come out of mass-producing weapons that kill the survivors of an all-out war (one that leaves neither side standing): systems that could replicate in the wild and destroy the niche of primitive humans, vast numbers of robotic weapons that would hunt down survivors over time, and the like. The FHI survey gives it a lot of weight, but after reading the work of the Foresight Institute and the Center for Responsible Nanotechnology (among others) over the decades since Drexler's books, I am not very impressed with the magnitude of the x-risk here, or with the existence of distinctive high-leverage ways to improve outcomes in the area; the Foresight Institute continues to operate in any case (not to mention Eric Drexler visiting FHI this year).

Others disagree (Michael Vassar has worked with the CRN, and Eliezer often names molecular nanotechnology as the x-risk he would move to focus on if he knew that AI was impossible), but that's my take.

Malthusian upload scenario

This is AI risk. Brain emulations are artificial intelligence by standard definitions, and are treated as such in articles like Chalmers' "The Singularity: A Philosophical Analysis."

highly destructive war

It's hard to destroy all life with a war that doesn't involve AI or the biotech/nanotech mentioned above. The nuclear winter experts have told me that they think x-risk from a global nuclear war is very unlikely conditional on such a war happening, and such a war doesn't seem that likely in the first place.

bad memes/philosophies spreading among humans or posthumans and overriding our values

There are already massive, massive, massive investments in tug-of-war over politics, norms, and values today. Shaping the conditions or timelines for game-changing technologies looks more promising to me than adding a few more voices to those fights. On the other hand, Eliezer has some hopes for education in rationality and critical thinking growing contagiously to shift some of those balances (not as a primary impact, and I am more skeptical). Posthuman value evolution does seem to sensibly fall under "AI risk," and shaping the development and deployment of technologies for posthumanity seems like a leveraged way to affect that.

upload singleton ossifying into a suboptimal form compared to the kind of superintelligence that our universe could support

AI risk again.

(Are there any doomsday cults that say "doom is probably coming, we're not sure how but here are some likely possibilities"?)

Probably some groups with a prophecy of upcoming doom, looking to everything in the news as a possible manifestation.

Comment author: Turgurth 12 April 2012 05:44:57AM *  1 point [-]

I have a few questions, and I apologize if these are too basic:

1) How concerned is SI with existential risks vs. how concerned is SI with catastrophic risks?

2) If SI is solely concerned with x-risks, do I assume correctly that you also think about how cat. risks can relate to x-risks (certain cat. risks might raise or lower the likelihood of other cat. risks, certain cat. risks might raise or lower the likelihood of certain x-risks, etc.)? It must be hard avoiding the conjunction fallacy! Or is this sort of thing more what the FHI does?

3) Is there much tension in SI thinking between achieving FAI as quickly as possible (to head off other x-risks and cat. risks) vs. achieving FAI as safely as possible (to head off UFAI), or does one of these goals occupy significantly more of your attention and activities?

Edited to add: thanks for responding!

Comment author: beriukay 15 March 2012 05:01:07AM 2 points [-]

That was one addendum I was going to suggest. Perhaps we can add a stipulation about getting a vaccine that hypothetically protects you from the worst of the biological fates. After all, what good is massive knowledge and no tools if we die before we can even find or make some basic anti-biologicals?

Comment author: Turgurth 16 March 2012 02:11:42AM 4 points [-]

One possible alternative would be choosing to appear in the Americas.
