Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: michael_dello 10 October 2015 04:02:29AM 0 points [-]

"Why haven't more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health?"

A very interesting point, and you've inspired me to take such a course. Does anyone have recommendations for a good (and preferably reputable, given our credential-addicted world) course relating to global security and health?

Comment author: Clarity 10 October 2015 01:27:51AM -1 points [-]

"The Buddha then asked all the attendant Bhikkhus to clarify any doubts or questions they had. They had none. According to Buddhist scriptures, he then finally entered Parinirvana. The Buddha's final words are reported to have been:

"All composite things (Saṅkhāra) are perishable. Strive for your own liberation with diligence" (Pali: 'vayadhammā saṅkhārā appamādena sampādethā'). "


Comment author: CellBioGuy 10 October 2015 12:57:20AM 0 points [-]

As amusing as rhythmically injecting me with alcohol and measuring its effect on my rate of reproduction would be, I think I'll pass.

Comment author: l_mir 09 October 2015 10:20:55PM 0 points [-]

It may be because we are evolutionarily wired to be curious about our surroundings so that we can feel 'safe': if something is known, that may mean it is 'safe'; if something isn't known, there may be a 'danger' there. Just a thought.

In response to Causal Universes
Comment author: potato 09 October 2015 05:13:23PM 0 points [-]

Here's my problem. I thought we were looking for a way to categorize meaningful statements. I thought we had agreed that a meaningful statement must be interpretable as, or consistent with, at least one DAG. But now it seems that there are ways the world can be which cannot be interpreted as even one DAG, because they require a directed cycle. So have we now decided that a meaningful sentence must be interpretable as a directed graph, cyclic or acyclic?

In general, if I say that all and only statements that satisfy P are meaningful, then any statement that doesn't satisfy P must be meaningless, and all meaningless statements should be unobservable; therefore a statement like "all and only statements that satisfy P are meaningful" should be unfalsifiable.
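The distinction this comment turns on — whether a set of causal relations admits even one DAG — reduces to checking for a directed cycle. A minimal sketch of that check (a hypothetical helper for illustration, not part of the original discussion), using depth-first search with the usual three-color marking:

```python
# A directed graph is a valid DAG only if it contains no directed cycle.
def has_cycle(graph):
    """graph: dict mapping each node to a list of its successor nodes."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: directed cycle
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

acyclic = {"A": ["B"], "B": ["C"], "C": []}   # A -> B -> C: a DAG
cyclic = {"A": ["B"], "B": ["A"]}             # A -> B -> A: a directed cycle
print(has_cycle(acyclic))  # False
print(has_cycle(cyclic))   # True
```

On this framing, "interpretable as at least one DAG" is exactly the condition that `has_cycle` returns False for some causal graph consistent with the statement.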

Comment author: TeMPOraL 09 October 2015 07:59:58AM 0 points [-]

I find the idea that people don't like being intoxicated suspicious. Experiencing euphoria from intoxication has a lot to do with brain chemistry, and it would be very odd if some humans received this effect and others did not.

Another n=1: I like the way intoxication feels when I'm intoxicated, but over the last couple of months I've gone from wanting to enter that state often to avoiding all alcohol on purpose. What changed was realizing on an emotional level that I have tons of interesting (or necessary) things to do, and alcohol limits that by taking away the evening (to drink) and the next day (I feel cognitively worse until the next afternoon, even if I didn't have a hangover). At some point the prospect of drinking became anxiety-inducing for me.

Comment author: TeMPOraL 09 October 2015 07:55:42AM 0 points [-]

I drink to make parties with friends tolerable because after an hour there is usually an infinite amount of things I'd rather be doing...

Comment author: G0W51 09 October 2015 05:59:31AM 0 points [-]

Some parties may be more likely to accelerate scientific progress than others, and those that do could decrease existential risk by decreasing the time spent in high-risk states, for example the period when there are dangerous nanotechnological weapons but other astronomical objects have not yet been colonized. This probably is not enough to justify voting, but I thought I would just let you know.

Comment author: Draco18s 09 October 2015 05:49:53AM 0 points [-]

I read this article months ago, but only now connected the moral with my own life.

In telling someone about these experiments and linking this article, I realized that I too had set my mind towards doing the impossible and succeeding. Long story short, I was tasked at work with producing an impossible result and was able to succeed after two days (with downsides, but that was the framework I was working under). The net result was that my boss learned I could produce miracles upon request and didn't bother asking how long a task might take, or whether a task was possible, viable, or sensible. He'd just swing by and go "oh hey, I need X by [time]," and I'd have to do it. I couldn't say no, because his philosophy was "bang it out."

Ultimately this had the same toll on my psyche as your AI experiments. Accomplishing the impossible happens when you sit down, shut up, and just do it.

But don't do it too often, succeed or fail, or you'll grind yourself into a paste and be unable to tolerate any more.

I ended up having to quit a job I had enjoyed for a number of years, simply because no one could manage the expectations of the guy in charge. I challenged the sun and won on more than one occasion, but the psychological toll permanently soured my mood and my work relationships. I could not continue: work was no longer fun, and I could not tolerate management. So I quit at the worst possible time — not intentionally, but just because a request came in and I said, "You know what, no. I don't have to do this. I've put up with this long enough; I was going to tough it out, but this is too much. I quit."

Go out, accomplish the impossible.

But manage expectations and only do it when absolutely necessary.

Comment author: TeMPOraL 09 October 2015 05:12:36AM 0 points [-]

You win rationality(1) points for being honest with yourself :).

Comment author: TeMPOraL 09 October 2015 05:04:29AM 0 points [-]

Even in Europe, places where you don't have to ride in traffic or in the door zone are incredibly rare. Bike paths are cool, but as currently implemented they mostly serve to annoy drivers and pedestrians alike, and there is still a default assumption that where there is no bike path, you'll be riding with traffic.

Comment author: TeMPOraL 09 October 2015 05:01:18AM 0 points [-]

tens of thousands of lives per year

Try hundreds of thousands per year from accidents alone, before even counting the health benefits of reduced emissions and smog saving more lives.

Comment author: AspiringRationalist 09 October 2015 12:08:22AM 2 points [-]

When we tried Paranoid Debating at the Boston meetup a few years back, we often had the problem that the deceiver didn't know enough about the question to know which direction to mislead in. I think the game would work better if the deceiver were simply trying to bias the group in a particular direction rather than make the group wrong. I think that's also a closer approximation to real life - plenty of people want to sell you their product and don't know or care which option is best. Not many just want you to buy the wrong product.

Comment author: jimrandomh 08 October 2015 10:53:07PM 0 points [-]

You're right, it definitely needs that. I've added an option where you can get notifications of players joining if you leave the tab open in the background. Hopefully this will increase the fraction of visitors who get to play.

Comment author: jimrandomh 08 October 2015 10:50:19PM 1 point [-]

I've just made a few updates to the online implementation. Specifically:

  • There's an in-game chat.
  • When waiting for games, there's an option to leave the tab in the background and get notifications when players join and when the game starts. So if other people aren't visiting at the exact same time, you have a better chance of getting to play.
  • Private games don't auto-start at 6 players, you can have more if you want.
  • Miscellaneous minor bug fixes.
Comment author: polymathwannabe 08 October 2015 10:48:34PM 2 points [-]

Be welcome, sir.

Comment author: AlexanderRM 08 October 2015 09:24:32PM 0 points [-]

I assume what Will_Pearson meant to say was "would not regret making this wish," which fits with the specification of "I is the entity standing here right now." Basically such that: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built the AI. (And it's supposed to find, out of that set of possible worlds, the one you would most like, or something along those lines.) I'm not sure that would rule out every bad outcome, but I think it probably would. Besides the obvious "other humans have different preferences from the guy building the AI" — maybe the AI is ordered to do a similar thing for each human individually — can anyone think of ways this would go badly?

Comment author: Calliope_Areyon 08 October 2015 08:59:55PM 2 points [-]

Um . . . aside from making an account on this site? Joking! I wrote four new chapters on a story I'm working on; finished the entire Children of the Bard series; aced my English, math, history, and science tests; and convinced my friend of the reality of flaws in everyone's reasoning process. Oya!

Comment author: Lumifer 08 October 2015 07:34:56PM 0 points [-]

Sure, that's fine.
