Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: ChristianKl 24 April 2014 11:36:11AM 0 points [-]

Do you have a recommendation for a resource that explains the basics in a decent manner?

Comment author: Froolow 24 April 2014 11:32:30AM 0 points [-]

One possible mechanism would be a general social shift towards more cryogenics, meaning cryo voters became an important voting bloc. Since most rational cryo-voters can be expected to be more-or-less single issue with respect to cryonics (almost nothing will increase your individual expected utility for a given level of money more than increasing your chance of being revivified), politicians will begin to face great pressure to appease this demographic. You'll see that this is different from the situation you describe for at least three reasons:

  • On those issues where the individual utility gain is greatest, the population is smallest (cures for very rare genetic conditions which are unaffordable to the average person and yet not subsidised by the government). This is probably because it is not in the interests of politicians to use political capital on a very small sub-section of the population.

  • On those issues where individual utility gain is small and populations are large, the individuals concerned are unlikely to be single-issue. For example public health measures undoubtedly raise my lifetime utility, but do they do so more than public education, public art or nebulous concepts like 'freedom'? Hard to say.

  • On those issues where individual utility gain is large and populations are large, those populations are almost inevitably located in areas where US politicians have no incentive to help them. For example, campaigning to end malaria would be both massively important and affect a huge number of people, but those people would not be US voters.

If this social shift occurs, politicians may be incentivised to offer a 'government guarantee' to all frozen corpsicles, in the same way all mortgage lenders are government-backed or banks are unable to go bust in an uncontrolled way (deposits up to a certain value are protected). So it wouldn't so much be a 'not-death' right (because all three groups I describe above would still fail to be protected from death), but I was using it as a shorthand for the slightly more complex scenario I describe here.

I don't know how likely I think this scenario is, but I think if it is going to happen, it will happen before a post-scarcity society. In the interests of being charitable to the cryogenics companies, I think it is fair to point out that this is a mechanism that could greatly improve their clients' chance of being revivified without any technological innovation.

In response to Ergonomics Revisited
Comment author: mare-of-night 24 April 2014 11:29:43AM 0 points [-]

I hadn't thought much about pillows when I was still using the ones my parents gave me when I went off to school, but after buying my own a few times, pillows have come to my attention as an item that's really worthwhile to get right. I don't know much about getting them right beyond getting the correct thickness yet, though. (I think it means I'm doing something wrong if they wear out within 6 months of use.)

The way straps are made seems to make a big difference in how comfortable they are on your shoulders. Or at least, that's the only explanation I can think of for why my small purse hurts my shoulders pretty quickly, but an over-the-shoulder bag with a strap padded like on a backpack is much less troublesome even when I put all the contents of the purse into it and then some.

This is possibly more about convenience than ergonomics, but I've solved my problem of getting tangled in my headphone cord (especially when trying to do chores while listening to an mp3 player). I looked into bluetooth headphones for a while, but didn't buy them because they're a little expensive and I don't know enough about what I like in headphones to choose ones I'd be sure I'd like. I got a Sansa Clip Zip mp3 player instead. It's made to clip to clothing, and it's lightweight enough not to get in the way, so I can keep the mp3 player somewhere near my ears and tie up the cord. (Physical buttons are also nice for being able to pause quickly when someone talks to me.)

Comment author: alternativenickname 24 April 2014 11:22:25AM *  0 points [-]

I have done a few LSD trips. The first one was really weird, the rest were with smaller doses and less intense. They've given me a couple of mildly interesting ideas, but for the most part I don't feel like I've really gotten anything useful out of them. (I keep thinking that given the reputation that the thing has, there has to be some way of getting more out of the experiences, but for a large part it feels like my mind gets temporarily dulled by the thing so that I can't really think or do anything interesting, and then I just get bored and start waiting for the trip to be over.)

Comment author: mare-of-night 24 April 2014 10:59:39AM 1 point [-]

I don't have flat feet myself, so I don't know what the requirements for that are. I've had good luck with Clark's, but I usually only wear a 1-1.5 inch heel. (My work shoes are Clark's bendables from a previous season.) Does this fit your requirements?

Comment author: mare-of-night 24 April 2014 10:49:03AM *  0 points [-]

Seconded. I strongly prefer laceless shoes; I can tell because my laced shoes get worn much less often precisely because of the laces.

Comment author: mare-of-night 24 April 2014 10:45:09AM 0 points [-]

Might work for some people. I wouldn't be as comfortable working in one of those, since you can't sit down and then scoot it closer to the desk with your feet if you can't reach the floor.

Comment author: ChristianKl 24 April 2014 10:33:10AM 0 points [-]

For what kind of timeframe? Do the effects stay the same over time? Are there meaningful side effects?

Comment author: ChristianKl 24 April 2014 10:17:25AM 0 points [-]

Solar energy used to halve in price every 7 years. In the last 7 it more than halved. Battery performance also has a nice exponential improvement curve.

Comment author: ChristianKl 24 April 2014 10:16:04AM 0 points [-]

The only reason the costs per joule in dollars are near each other (true factor of about 1.5-3x the cost in dollars between nuclear and the coal everyone knows and loves, according to the EIA) is that a lot of the true costs of nuclear power plants are not borne in dollars and are instead externalized. Fifty years of waste have been for the most part completely un-dealt-with in the hopes that something will come along

That's quite an unfair comparison. The way we deal with coal waste kills tens of thousands or even hundreds of thousands of people per year. The way we deal with nuclear waste might cost more money, but it doesn't kill as many people. Simply dumping all nuclear waste in the ocean would probably be a safer way of disposing of waste than the way we deal with coal.

Even tunnels created by coal mining can collapse and do damage.

Comment author: Squark 24 April 2014 09:57:51AM *  0 points [-]

...a FAI gains you as much difference as available, minus the opportunity cost of FAI's development...

Exactly. So for building FAI to be a good idea we need to expect its benefits to outweigh the opportunity cost (we can spend the remaining time "partying" rather than developing FAI).

For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of the past human history.

Neat. One way it might work is the FAI running much-faster-than-realtime WBEs so that we gain a huge number of subjective years of life. This works for any inevitable impending disaster.

Comment author: ChristianKl 24 April 2014 09:41:08AM 0 points [-]

The usual way groups of girls deal with this is to call the girl who twirls a lot of guys around her little finger a slut. The punishment isn't physical violence, but it's there.

Comment author: Squark 24 April 2014 09:14:56AM 0 points [-]

The argument falls apart once you use UDT instead of naive anthropic reasoning: http://lesswrong.com/lw/jv4/open_thread_1117_march_2014/aoym

Comment author: Squark 24 April 2014 09:09:32AM 1 point [-]

Not like I have anything against AI and machine learning literature, but can you give examples of misconceptions?

Comment author: RichardKennaway 24 April 2014 08:42:54AM 0 points [-]

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

A "serious" MIRI would operate in absolute secrecy, and the "public" MIRI would never even hint at the existence of such an organisation, which would be thoroughly firewalled from it. Done right, MIRI should look exactly the same whether or not the secret one exists.

Comment author: 4hodmt 24 April 2014 08:40:58AM *  1 point [-]

I find refresh rate extremely important. I stuck with CRTs at 100Hz+ for a very long time after LCDs became popular because only 60Hz LCDs were available. I now use a 120Hz LCD and it's much more enjoyable than 60Hz. Everything feels smoother and more responsive. The improved mouse control is very obvious (this might require increasing the mouse sample rate, I use usbhid mousepoll=2 on Linux). Motion appears much sharper, because the higher refresh rate allows for higher frame rate which reduces sample-and-hold blur (see http://www.blurbusters.com for detailed information on motion quality). The fastest LCDs on the market support 144Hz. I'd like one but I can't really justify the expense right now.
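For reference, the mouse sample rate tweak mentioned above is set through the `usbhid` kernel module's `mousepoll` parameter (the interval in milliseconds between polls); the exact file path below is the conventional modprobe location, so treat it as a sketch and check your distribution:

```shell
# Poll the USB mouse every 2 ms (~500 Hz instead of the default ~125 Hz).
# Option 1: persist the setting as a module option (takes effect after
# the usbhid module is reloaded, or after a reboot):
echo "options usbhid mousepoll=2" | sudo tee /etc/modprobe.d/usbhid.conf

# Option 2: pass it on the kernel command line instead:
#   usbhid.mousepoll=2

# Verify the currently active value:
cat /sys/module/usbhid/parameters/mousepoll
```

Note that some gaming mice negotiate their own polling rate and ignore this parameter.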

However, note that I am unusually sensitive to motion artifacts, e.g. I am bothered by PWM dimming of LED lights well into the kHz range, and I greatly dislike 3:2 pulldown judder. It's possible that some people genuinely don't mind 60Hz LCDs, although I wonder if that's only because they've never used anything faster.

Comment author: gjm 24 April 2014 08:22:40AM 1 point [-]

So now you have the same number of pixels as from that one big monitor, but you either need a fancy mounting mechanism for putting the monitors above one another or else need twice the width on your desk. And you get a big wide thing you probably can't see all of at once, instead of something a more natural shape. And it's divided into four bits which limits the possible shapes and sizes of your windows. And it's more expensive.

Again, for sure you might have good reasons to choose four smaller monitors instead of one really big one. But (1) the one big one has definite advantages and (2) I repeat, I wasn't saying "hey, everyone should get one of these things" but "yes, as it happens such things do exist and here's an example".

Comment author: JoshuaFox 24 April 2014 07:10:55AM 1 point [-]

As a start, you can simply link to http://lesswrong.com/user/Oscar_Cunningham/comments/

Comment author: JoshuaFox 24 April 2014 07:04:03AM *  3 points [-]

Yes!

comment confirming creation of such a user page

I have updated my user page.

Comment author: eeuuah 24 April 2014 06:49:23AM 0 points [-]

Probably not, but my point still stands for most leather shoes and other sneakers.

Comment author: More_Right 24 April 2014 06:34:10AM 0 points [-]

Also, the thresholds for "simple majoritarianism" usually need to be much higher in order to obtain intelligent results. No threshold should be reachable by just three people. Three people could be goons who are being paid to interfere with the LW forum. That then means that if people are uninterested, or those goons are "johnny on the spot" (the one likely characteristic of the real-life agents provocateurs I've encountered), then legitimate karma is lost.

Of course, karma itself has been abused on this site (and all other karma-using sites), in my opinion. I really like the intuitions of Kevin Kelly, since they're highly emergence-optimizing, and often genius when it comes to forum design. :) Too bad too few programmers have implemented his well-spring of ideas!

Comment author: Gunnar_Zarncke 24 April 2014 06:28:00AM 0 points [-]

...but also less established social ties. And less settled long-term investments (though this correlates with the risk part).

Comment author: More_Right 24 April 2014 06:26:26AM *  1 point [-]

Intelligently replying to trolls provides useful "negative intelligence." If someone has a witty counter-communication to a troll, I'd like to read it, the same way George Carlin slows down for auto wrecks. Of course, I'm kind of a procrastinator.

I know: A popup window could appear that asks [minutes spent replying to this comment] x [hourly rate you charge for work] x 0.0167 (i.e. 1/60) = "[$###.##] is the money you lost telling us how to put down a troll. We know faster ways: don't feed them."
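The arithmetic behind that hypothetical popup (the factor is just 1/60, converting an hourly rate to a per-minute rate) might be sketched as:

```python
def cost_of_reply(minutes_spent: float, hourly_rate: float) -> float:
    """Dollars 'lost' composing a reply: minutes times the per-minute rate."""
    return minutes_spent * (hourly_rate / 60)

# A 30-minute reply at a $60/hour rate costs $30 of your time.
print(f"${cost_of_reply(30, 60):.2f} is the money you lost "
      "telling us how to put down a troll.")
```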

Of course, any response to a troll MIGHT mean that a respected member of the community disagrees with the "valueless troll comment" assessment. That's a great characteristic to have: someone who selflessly provides protection against the LW community becoming an insular backwater of inbred thinking.

Our ideas need cross pollination! After all, "Humans are the sex organs of technology." -Kevin Kelly

Comment author: More_Right 24 April 2014 06:12:58AM 0 points [-]

Can anyone "name that troll?" (Rumplestiltskin?)

Comment author: More_Right 24 April 2014 06:08:58AM 0 points [-]

The proposals here exist outside the space of people who will "solve" any problems that they decide are problems. Therefore, they can still follow that advice, and this is simply a discussion area discussing potential problems and their potential solutions. All of which can be ignored.

My earlier comment, to the effect that I'm more happy with LessWrong's forum than I am unhappy with it but that it still falls far short of an ideally-interactive space, should be construed as granting that "doing nothing to improve the forum" is definitely a valid option. "If it ain't broke, don't fix it."

I don't view it as either totally broken, or totally optimal. Others have indicated similar sentiments. Likely, improvements will be made when programmers have spare time, and we have no idea when that will be.

Now, if I was aggressively agitating for a solution to something that hadn't been clearly identified as a problem, that might be a little obnoxious. I hope I didn't come off that way.

Comment author: passive_fist 24 April 2014 05:47:41AM *  2 points [-]

This isn't a question, just a recommendation: I recommend everyone on this site who wants to talk about AI familiarize themselves with AI and machine learning literature, or at least the very basics. And not just stuff that comes out of MIRI. It makes me sad to say that, despite this site's roots, there are a lot of misconceptions in this regard.

Comment author: More_Right 24 April 2014 05:46:36AM *  0 points [-]

Too much information can be ignored, too little information is sometimes annoying. I'd always welcome your reason for explaining your downvote, especially if it seems legitimate to me.

If we were going to get highly technical, a somewhat interesting thing to do would be to allow a double click to differentiate your downvote, and divide it into several "slider bars." People who didn't differentiate their downvotes would be listed as a "general downvote"; those who did would be listed as a "specific reason downvote." A small number of "common reasons for downvoting that don't merit an individualized comment" on LessWrong would be present, plus an "other" box. If you clicked on the light gray "other", it would be replaced with a dropdown selection box, one whose default position you could type into, limited to 140 characters. Other options could be "poorly worded, but likely to be correct," "poorly constructed argument," "well-worded but likely incorrect," "ad hominem attack," "contains logical fallacies," "bad grammar," "bad formatting," "ignores existing body of thought, seems unaware of existing work on the subject," "anti-consensus, likely wrong," and "anti-consensus, grains of truth."

There could also be a "reason for upranking," including polar opposite options that were the opposites of the prior options, so one need only adjust one slider bar for "positive and negative" common reasons. This would allow a + and - value to be associated with comments, to obtain a truer picture of the comment more quickly. "Detailed rankings" (listed next to the general ranking) could give commentators a positive and a negative for various reasons, dividing up two possible points, and adjusting remaining percentages for remaining portions of a point as the slider bar was raised. "General argument is true" could be the positive "up" value, "general argument is false" could be its polar opposite.
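As a minimal sketch of the tallying such a scheme implies (the reason labels and pairings here are invented for illustration, not part of any real LW feature), each vote could carry an optional reason, with opposite reasons sharing one axis so a single slider covers both directions:

```python
from collections import Counter

# Paired reason axes: negative end first, positive end second (invented labels).
REASON_AXES = [
    ("likely incorrect", "likely correct"),
    ("poorly constructed argument", "well-constructed argument"),
    ("bad formatting", "good formatting"),
]

def tally(votes):
    """votes: iterable of (direction, reason) pairs, where direction is
    +1 or -1 and reason is a string or None for an undifferentiated vote.
    Returns (net general score, per-reason counts)."""
    score = 0
    reasons = Counter()
    for direction, reason in votes:
        score += direction
        reasons[reason if reason is not None else "general"] += 1
    return score, reasons

votes = [(-1, "bad formatting"), (-1, None), (+1, "likely correct")]
score, reasons = tally(votes)
# score is -1; one general vote and two reason-specific votes were cast.
```

A reader clicking the score could then be shown both the net number and the per-reason breakdown, which is the "truer picture of the comment" the parent describes.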

It also might be interesting to indicate how long people took to write their comments, if they were written in the edit window, and not copied and pasted. A hastily written comment could be downranked as "sloppily written" unless it was an overall optimal comment.

Then, when people click on the comment ranking numbers, they could see a popup window with all the general up- and downvotes, with many of them providing the specific reasoning behind them. Clicking on a big "X" would close the window.

I also like letting unregistered users vote in a separate "unregistered users" ranking. Additionally, it would be interesting to create a digital currency for the site that can be traded or purchased, in order to create market karma. Anyone who produces original work for LW could be paid corresponding to the importance of the work, according to their per-hour payscale and the number of hours (corresponding to "real world pay" from CFAR, or other cooperating organizations).

A friend of mine made $2M off of an initial small investment in bitcoin, and never fails to rub that in when I talk to him. I'd like it if a bunch of LW people made similar profits off of ideas they almost inherently understand. Additionally, it would be cool to get paid for "intellectual activity" or "actual useful intellectual work" (depending on one's relationship with the site) :)

Comment author: More_Right 24 April 2014 05:31:01AM 0 points [-]

No web discussion forum I know of has filtering capabilities even in the ball park of Usenet, which was available in the 80s. Pitiful.

I strongly share your opinion on this. LW is actually one of the better fora I've come across in terms of filtering, and it still is fairly primitive. (Due to the steady improvement of this forum based on some of the suggestions that I've seen here, I don't want to be too harsh.)

It might be a good idea to increase comment-ranking values for people who turn on anti-kibbitzing. (I'm sure other people have suggested this, so I claim no points for originality.) ...What a great feature!

(Of course, then that option of "stronger karma for enabled anti-kibbitzers" would give an advantage to the malevolent people who want to "game the system," who could turn it on and off, or turn it on on another device, see the information necessary to "send out their political soldiers," and use that to win arguments at a higher-ranking karma. Of course, one might want to reward malevolent players, because they are frequent users of the site, who thus increase the overall activity level, even if they do so dishonestly. They then become "invested players" for when the site is optimized further. Also, robust sites should be able to filter even malevolent players, emphasizing constructive information flow. So, even though I'm a "classical liberal" or "small-L libertarian," this site could theoretically be made stronger if there were a lot of paid government goons on it, purposefully trying to prevent benevolent or "friendly" AGI that might interfere with their plans for continuing domination.)

A good way to defeat this would be to "mine" for "anti-kibbitzing" karma. Another good idea would be to allow users to "turn off karma." Another option would be to allow those with lots of karma to turn off their own karma, and show a ratio of "possible karma" next to "visible karma," as an ongoing vote for what system makes the most sense, from those in a position of power to benefit from the system. This still wouldn't tell you if it was a good system, but everyone exercising the option would indicate that the karma-based system was a bad one.

Also, I think that in a perfect world, Karma in its entirety should be eliminated here. "One man's signal is another man's noise," indeed! If a genius level basement innovator shows up tomorrow and begins commenting here, I'd like him to stick around. (Possibly because I might be one myself, and have noticed that some of the people who most closely agree with certain arguments of mine are here briefly as "very low karma" participants, agree with one or two points I make, and then leave. Also, if I try to encourage them but others vote them down, I'm encouraged to eliminate dissent, in the interest of eliminating "noise." Why not just allow users to automatically minimize anyone who comments on a heavily-downranked already minimized comment? Problem solved.)

LessWrong is at risk of becoming another "unlikeliest cult," to the same extent that Ayn Rand Institute became an "unlikely cult." (ARI did, to some extent, become a cult, and that made it less successful at its intended goal, which was similar to the stated goal of LessWrong. It became too important what Ayn Rand personally thought about an idea, and too unimportant what hierarchical importance there inherently was to the individual ideas themselves. Issues became "settled" once she had an opinion on them. Much the way that "mind-killing" is now used to "shut down" political debate, or debate over the importance of political engagement, and thus cybernetics, itself.)

There are certain subjects that "most humans in general" have an incredibly difficult time discussing, and unthinking agreement with respected members of the community is precisely what makes it "safe" to disregard novel "true" or "valuable" solutions or problem-solving ideas, ...rare as they may admittedly be.

Worse still, any human network is more likely to benefit from solutions outside of its own area of expertise. After all, the experts congregate in the same place, and familiarize themselves with the same incremental pathways toward the solution of their problems. In any complex modern discipline this requires immense knowledge and discipline. But what if there is a more direct but unanticipated solution that can arise from outside of that community? This is frequently the case, as indicated in Kurzweil's quote of Weiner's "Cybernetics" in "How to Create a Mind."

It may be that the rise of a simple algorithm designed by a nanotech pioneer rapidly builds a better brain than AGI innovators can build, and that this brain "slashes the Gordian knot" by out-thinking humans and building better and better brains that ultimately are highly-logical, highly-rational, and highly-benevolent AGI. This constitutes some of the failure of biologists and computer scientists to understand the depth of each others' points in a recent Singularity Summit meeting. http://www.youtube.com/watch?v=kQ2snfsnroM -Dennis Bray on the Complexity of Biological Systems (author of "Wetware," describing computational processes within cells).

Also, if someone can be a "troll" and bother other people with his comments, he's doing you a small favor, because he's showing that there are weaknesses in your commenting system that actually rise to the level of interfering with your mission. If we were all being paid minimum wage to be here, that might represent significant losses. (And shouldn't we put a price on how valuable this time is to us?) The provision of garbled blather as a steady background of "chatter" can be analyzed by itself, and I believe it exists on a fluid scale from "totally useless" to "possibly useful" to "interesting." Also, it indicates a partial value: the willingness to engage. Why would someone be willing to engage a website about learning an interesting subject, but not actually learn it? They might be unintelligent, which then gives you useful information about what people are willing to learn, and what kinds of minds are drawn to the page without the intelligence necessary to comprehend it, but with the willingness to try to interact with it to gain some sort of value. (Often these are annoying religious types who wish to convert you to their religion, who are unfamiliar with the reasons for unbelief. However, occasionally there's someone who has logic and reason on their side, even though they are "unschooled." I'm with Dawkins on this one: A good or bad meme can ride an unrelated "carrier meme" or "vehicle.")

Site "chatter" might normally not be too interesting, and I admit it's sub-optimal next to people who take the site seriously, but it's also a little bit useful, and a little bit interesting, if you're trying to build a network that applies rationality.

For example, there are, no doubt, people who have visited this website who are marketing majors, or who were simply curious about the current state of AGI due to a question about when will a "Terminator" or "skynet"-like scenario be possible, (if not likely). Some of them might have been willing participants in the more mindless busywork of the site, if there had been an avenue for them to pursue in that direction. There are very few such avenues on this "no nonsense" (but also no benevolent mutations) version of the site.

There also doesn't appear to be much of an avenue for people who hold significant differences of opinion that contradict or question the consensus. Such ideas will be downvoted, and likely out of destructive conformity. As such, I agree that it's best to allow users to eliminate or "minimize" their own conceptions of "what constitutes noise" and "what constitutes bias."

Comment author: JoshuaZ 24 April 2014 04:05:49AM 0 points [-]

Yes. But I do think that thinking critically about the assumptions you are making, in particular that you can meaningfully talk about what it means to pick a random individual in a uniform fashion, is worthwhile for understanding a fair bit of probability and related issues which are relevant in broader contexts.

Comment author: Douglas_Knight 24 April 2014 03:50:33AM 0 points [-]

All the things you've heard are consistent and together they answer your final question by denying that there is a discrepancy in choice of theory, just in choice of name. (Not that I'm sure that all the things you've heard are true.)

Comment author: Douglas_Knight 24 April 2014 03:44:33AM 0 points [-]

"the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons."

What is he talking about? Sam Nunn?

Comment author: Douglas_Knight 24 April 2014 03:27:00AM 0 points [-]

I don't think saying "That is not a prisoner's dilemma" is a useful way of communicating "those players don't exist."

Also, the topic at hand is what do people mean by "fair," not whether the situations they do or do not call fair are real situations.

Comment author: knb 24 April 2014 03:19:52AM 0 points [-]

As far as I know there are still nuclear weapons in the post-collapse Soviet Union.

Pretty clear that he meant the "loose nukes" that went unaccounted for in the administrative chaos after the Soviet collapse.

Comment author: Torello 24 April 2014 03:09:54AM 0 points [-]

The sense of fairness evolved to make our mental accounting of debts (that we owe and are owed) more salient by virtue of being a strong emotion, similar to how a strong emotion of lust makes the reproductive instinct so tangible. This comes in handy because humans are highly social and intelligent and engage in positive-sum economic transactions, so long as both sides play fair... according to your adapted sense of what's fair. If you don't have a sharp sense of fairness other people might walk all over you, which is not evolutionarily adaptive. See "The Moral Animal" or "Nonzero" by Robert Wright, or the chapter "Family Values" in Steven Pinker's "How the Mind Works."

This sense of fairness may have been co-opted at other levels, like a religious or political one, but it's quite instinctual. Very young children have a strong sense of fairness before they can reason their way to it, just as they can acquire language before they can explicitly/consciously reason from grammar rules to produce grammatical sentences. It's very ingrained in our mental structure, so I think it would take quite an effort to "wipe the concept."

Comment author: hamnox 24 April 2014 03:06:11AM 0 points [-]

I was diagnosed non-hyperactive ADD as a kid, though I haven't done meds for that since middle school. It's been suggested that it was a misdiagnosis for Asperger's.

Comment author: hamnox 24 April 2014 02:38:49AM 0 points [-]

Less not screwing over my future self, more "Am I the kind of person who gives up in this situation?"

The aim is eventually to be that person who doesn't let silly things stop them. If I can't be that person today, then when can I?

Comment author: CellBioGuy 24 April 2014 02:19:48AM *  0 points [-]

I've seen it. It seemingly ignores the possibility that humanity will not go extinct [EDIT: in the near future, possibly into the tens of megayears] but will also never reach a 'posthuman state' capable of doing arbitrary ancestor simulations.

Comment author: gwern 24 April 2014 02:18:43AM 0 points [-]

Comment author: player_03 24 April 2014 02:03:49AM 0 points [-]

Harry left "a portion of his life" (not an exact quote) in Azkaban, and apparently it will remain there forever. That could be the remnant that Death would fail to destroy.

Anyway, Snape drew attention to the final line in the prophecy. It talked about two different spirits that couldn't exist in the same world, or perhaps two ingredients that cannot exist in the same cauldron. That's not Harry and Voldemort; that's Harry and Death.

I mean, Harry has already sworn to put an end to death. It's how he casts his patronus. He's a lot less sure about killing Voldemort, and would prefer not to, if given the choice.

Comment author: UmamiSalami 24 April 2014 02:00:08AM 2 points [-]

Sorry if this topic has been beaten to death already here. I was wondering if anyone here has seen this paper and has an opinion on it.

The abstract: "This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed."

Quite simple, really, but I found it extremely interesting.

http://people.uncw.edu/guinnc/courses/Spring11/517/Simulation.pdf

Comment author: Vulture 24 April 2014 01:40:16AM 0 points [-]

It worked reasonably well for me.

Comment author: Eugine_Nier 24 April 2014 01:32:48AM 1 point [-]

How would you apply that to Lumifer's second example?

An unattractive girl watches an extremely cute girl get all the guys she wants and twirl them around her little finger. "That's not fair!" she says.

Comment author: IlyaShpitser 24 April 2014 12:16:57AM 1 point [-]

Good questions!

In response to comment by gjm on Ergonomics Revisited
Comment author: Alsadius 24 April 2014 12:12:20AM 0 points [-]

Then get four, for $20 more.

Comment author: Jayson_Virissimo 24 April 2014 12:00:47AM 0 points [-]

When I was a kid, our cats used a similar tactic to escape the laundry room with a closed door. One would sit on the dryer and turn the handle with both paws and the other would push against the door with their head.

Comment author: ChristianKl 23 April 2014 11:56:58PM 2 points [-]

How good is the case for taking Adderall if you struggle with a lot of procrastination and have access to a doctor who will give you a prescription?

Comment author: eeuuah 23 April 2014 11:54:51PM 0 points [-]

I would second that futon

Comment author: CellBioGuy 23 April 2014 11:18:34PM *  0 points [-]

at the caveman level the fossil fuels are pretty much useless

Coal was used as fuel before the Roman empire. It didn't lead to an industrial revolution until someone figured out how to turn it into mechanical energy that could substitute for human labor, rather than serving as just a heat source, in a society where that substitution could be made profitable by a scarcity of labor. Those were the easiest, surface-exposed deposits, yes, but you hardly need any infrastructure at all to extract the energy, and even mechanical energy extraction just needs a boiler and some pistons and valves. The same was true of peat in what is now the Netherlands during the early second millennium.

your imagination with respect to future technology seems severely limited. ... This entirely depends on the technology level.

What does 'technology level' even mean? There are just things people have figured out how to do and things people haven't. And technology is not energy: you cannot simply substitute technology for easy energy. It is not a question of technology level but of the energy gradients that can be fed into technology.

And how are you applying concepts like "energy-dense" to, say, sunlight or geothermal?

Mostly in terms of true costs and capital (not just dollars) needed to access it, combined with how much you can concentrate the energy at the point of extraction infrastructure. For coal or oil you can get fantastic wattages through small devices. For solar you can get high wattages per square meter in direct sunlight, which you don't get on much of the earth's surface for long and you never get for more than a few hours at a time. Incredibly useful, letting you run information technology and some lights at night and modest food refrigeration off a personal footprint, but not providing the constant torrent of cheap energy we have grown accustomed to. Geothermal energy flux is often high in particular areas where it makes great sense (imagine Iceland as a future industrial powerhouse due to all that cheap thermal energy gradient), over most of the earth not so much.
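To make the contrast concrete, here is a rough back-of-the-envelope sketch of the power densities involved. The figures (peak surface insolation, a 20% capacity factor, oil at ~34 MJ/L, a ~0.5 L/s fuel pump) are my own illustrative assumptions, not numbers from the comment:

```python
# Averaged solar power per square meter of ground, very roughly.
SOLAR_PEAK_W_PER_M2 = 1000        # direct sunlight at the surface, approx.
CAPACITY_FACTOR = 0.2             # accounts for night, clouds, sun angle
avg_solar_w_per_m2 = SOLAR_PEAK_W_PER_M2 * CAPACITY_FACTOR  # ~200 W/m^2

# Chemical power flowing through an ordinary gasoline pump nozzle.
OIL_MJ_PER_L = 34                 # energy content of oil, approx.
PUMP_L_PER_S = 0.5                # typical fuel pump flow rate, approx.
oil_power_w = OIL_MJ_PER_L * 1e6 * PUMP_L_PER_S  # ~17 MW through one hose

print(avg_solar_w_per_m2, oil_power_w)
```

The point of the sketch is the gap in scale: a small device moving oil passes megawatts, while a square meter of ground collects a couple hundred watts averaged over the day.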

Sunlight is probably our best bet for large chunks of the future of technological civilization over most of the earth's surface. It is still not dense. It's still damn useful.

Comment author: CellBioGuy 23 April 2014 11:13:49PM *  0 points [-]

The only reason the costs per joule in dollars are near each other (true factor of about 1.5-3x the cost in dollars between nuclear and the coal everyone knows and loves, according to the EIA) is that a lot of the true costs of nuclear power plants are not borne in dollars and are instead externalized. Fifty years of waste have been for the most part completely un-dealt-with in the hopes that something will come along, nuclear power plants are almost literally uninsurable to sufficient levels in the market such that governments have to guarantee them substandard insurance by legal fiat (this is also true of very large hydroelectric dams which are probably also a very bad idea), and power plants that were supposed to be retired long ago have had their lifetimes extended threefold by regulators who don't want to incur the cost of their planned replacements and refurbishments. And the whole thing was rushed forwards in the mid 20th century as a byproduct of the national desire for nuclear weapons, and remarkably little growth has occurred since that driver decreased.

If true, how do you know it's going to remain true in the future?

How do you know it won't? More to the point, it's not a question of technology. It's a question of how much you have to concentrate rare radionuclides in expensive gas centrifuge equipment and how heavily you have to contain the reaction and how long you have to isolate the resultant stuff. Technology does not trump thermodynamics and complexity and fragility.

What happens when we reach a level of technology in which energy production is completely automatic?

What does this mean and why is it relevant?

What about nuclear fusion?

Near as I can tell, all the research on it so far has shown that it is indeed possible without star-style gravitational confinement, very difficult, and completely uneconomic. We have all the materials you need to fuse readily available; if it were easy to do economically, we would have done it after fifty years of work. It should be noted that the average energy output of the sun itself is about 1/3 of a watt per cubic meter - fusion is trying to produce conditions and reactions of a sort you don't even see in the largest stars in the universe. (And don't start talking about helium-3 on the moon; I point to a throwaway line in http://physics.ucsd.edu/do-the-math/2011/10/stranded-resources/ regarding that pipe dream.)
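That "1/3 of a watt per cubic meter" figure is easy to verify from two standard constants (solar luminosity and radius); this is just an arithmetic check, averaging over the whole sun rather than the denser core:

```python
import math

L_SUN = 3.828e26                  # solar luminosity, watts
R_SUN = 6.957e8                   # solar radius, meters

volume_m3 = (4.0 / 3.0) * math.pi * R_SUN**3   # whole-sun volume
power_density = L_SUN / volume_m3              # W per cubic meter, ~0.27

print(power_density)
```

Averaged over its full volume, the sun produces on the order of a quarter of a watt per cubic meter - less power density than a compost heap, which is the comment's point about how extreme terrestrial fusion conditions must be.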

Is it possible I'm wrong? Yes. But literally any future other than a future of rather less (But not zero!) concentrated energy available to humanity requires some deus ex machina to swoop down upon us. Should we really bank on that?

Comment author: NancyLebovitz 23 April 2014 11:04:32PM 0 points [-]
Comment author: Vladimir_Nesov 23 April 2014 11:02:53PM 0 points [-]

Thus we remain a small civilization but survive for a long time.

It's not obvious that having a long time is preferable. For example, optimizing a large amount of resources in a short time might be better than optimizing a small amount of resources for a long time. Whatever's preferable, that's the trade that a FAI might be in a position to facilitate.

Comment author: NancyLebovitz 23 April 2014 11:01:57PM 1 point [-]

I'm going to check out the Scholl's shoes for women.

Meanwhile, if you happen to have lace-up shoes, there are permanent elastic laces. I agree that normal shoe laces add unnecessary work and risk to one's life, though I still think cloth laces are better looking.

Comment author: gjm 23 April 2014 10:56:11PM 1 point [-]

I wasn't saying you (or anyone) should get one, only answering Alsadius's question and indicating that monitors of roughly the kind he described do in fact exist.

(30Hz refresh would be very bad for gaming. If you're using your monitor for software development or data analysis or designing buildings or writing novels, though, it probably doesn't make much difference.)

Comment author: tanagrabeast 23 April 2014 10:48:21PM 1 point [-]

Yes. Go laceless. I only discovered a few years ago that there is such a thing as men's closed-toe shoes that can be appropriate semi-formal workwear yet never need to be tied. I wear something roughly similar to this at work: Amazon and a more casual variation in my free time. Very comfortable, loose-sneaker feel on the inside. An elastic-bound tongue ensures uniform snugness, rather than fluctuating between too tight and too loose. Once broken in, you can slide them on and off without hands, as you might with slippers or flip-flops.

But more importantly than the ergonomics... why waste time tying shoes? Why risk injury tripping over laces, or getting them caught places?

Comment author: Vladimir_Nesov 23 April 2014 10:45:04PM *  1 point [-]

For example, a Neanderthal is a much better optimizer than a fruit fly but both are almost equally powerless against an H-bomb.

There is no reason to expect exact equality, only close similarity. If you optimize, you still prefer something that's a tiny bit better to something that's a tiny bit worse. I'm not claiming that there is a significant difference. I'm claiming that there is some expected difference, all else equal, however tiny, which is all it takes to prefer one decision over another. In this case, a FAI gains you as much difference as available, minus the opportunity cost of FAI's development (if we set aside the difficulty in predicting success of a FAI development project).

(There are other illustrations I didn't give for how the difference may not be "tiny" in some senses of "tiny". For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of the past human history. This is large compared to the value of millions of human lives, tiny compared to the value of uncontested future light cone.)

(I wouldn't give a Neanderthal as a relevant example of an optimizer, as the abstract argument about FAI's value is scrambled by the analogy beyond recognition. The Neanderthal in the example would have to be better than the fly at optimizing fly values (which may be impossible to usefully define for flies), and have enough optimization power to render the difference in bodies relatively morally irrelevant, compared to the consequences. Otherwise, the moral difference between their bodies is a confounder that renders the point of the difference in their optimization power, all else equal, moot, because all else is now significantly not equal.)

Comment author: gjm 23 April 2014 10:34:19PM 0 points [-]

Because it has twice as many pixels as two of those.

(Is that enough reason? Maybe not. But that's the main reason you'd want it, if you did.)

Comment author: Oscar_Cunningham 23 April 2014 10:12:28PM *  3 points [-]

Is there any way I can delete my userpage or set up a redirect so that when I click on my name it takes me to my comments page like it used to?

Comment author: James_Miller 23 April 2014 10:10:44PM 0 points [-]

Protocol Guide for Neurofeedback Clinicians (very expensive but the best); The Neurofeedback Solution: How to Treat Autism, ADHD, Anxiety, Brain Injury, Stroke, PTSD, and More; and Getting Started with Neurofeedback (Norton Professional Books).

Neurofeedback has many different targets. I have used it to become more relaxed and focused. Most of what I learned came from talking to neurofeedback professionals. I strongly suggest you not experiment on yourself, but rather do so under the care of a professional.

Comment author: Squark 23 April 2014 09:58:06PM *  0 points [-]

There is a LW parents mailing list? How do I get in?

EDIT: I guess I found it: https://groups.google.com/forum/#!forum/less-wrong-parents

Comment author: Squark 23 April 2014 09:55:03PM 1 point [-]

A FAI is still capable of optimizing a "hopeless" situation better than humans...

This argument is not terribly convincing by itself. For example, a Neanderthal is a much better optimizer than a fruit fly but both are almost equally powerless against an H-bomb.

...it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that's expensive to take away...

Hmm, what about the following idea. The FAI can threaten to consume a large portion of the free energy in the solar system unless the aliens cooperate. Assuming the 2nd law of thermodynamics is watertight, it will be profitable for them to leave us a significant fraction (1/2?) of that energy. Essentially it's the Ultimatum game. The negotiation can be done acausally, assuming each side has sufficient information about the other.

Thus we remain a small civilization but survive for a long time.

Comment author: Gunnar_Zarncke 23 April 2014 09:51:55PM 3 points [-]

To get a critical mass I propose upvoting any comment confirming creation of such a user page. The voting on this comment should show whether encouraging user page creation is a good idea.

Comment author: ChristianKl 23 April 2014 09:40:37PM *  0 points [-]

If so: consider attending your local DC LessWrong meetup, because we are cool and you are probably cool.

That's an amazing plug for a meetup.

Comment author: shminux 23 April 2014 09:30:46PM *  0 points [-]

She has said that she doesn't want to marry me if she's just my female best friend that I sleep with.

What are her feelings about you? Are you "just" her "male best friend that she sleeps with"? Your post comes across as rather asymmetric.

We are both concerned that I've not really had a relationship other than with her, so there are no points of comparison for me to make.

Aren't you "both concerned" that she had too many relationships and so may decide that you are not for her precisely because she has these "points of comparison"? I suspect that she is the dominant partner in this relationship, possibly because she is more mentally mature, and this is often a warning flag.

She sometimes gets mad at me for things I'm "just supposed to know" to do, not do, say, or not say. I'm not sure if she's right and I'm a jerk.

Do you get mad at her for things she is just supposed to know to do, say or not say?

Anyway. DO NOT GET MARRIED YET until you figure out how to be an equal in this relationship (and if you think that you are, then you are fooling yourself).

Comment author: Gunnar_Zarncke 23 April 2014 09:27:13PM 1 point [-]

My thinkpad has two integrated ones. One for power saving, one for gaming and can really drive 4 screens. The docking station is dumb (but expensive nonetheless).

There are external graphics cards or splitters you could use e.g. matrox triplehead2go.

In response to Ergonomics Revisited
Comment author: ChristianKl 23 April 2014 09:25:26PM *  0 points [-]

How important are characteristics like the refresh rate of monitors? Does anything besides the size really matter?

Comment author: Tenoke 23 April 2014 09:14:33PM *  0 points [-]

Because you can normally only use 2 monitors per video card (I guess yours has the option to run 2 on an integrated and 2 on a dedicated card?) and any one of my external monitors is much better than the 15" laptop screen. If I had the option of running all 3 of them, I'd take it.

At any rate, nowadays I use my laptop purely as a desktop computer (external monitors, keyboard, mouse; almost never using it outside of home), so I am just going to build a desktop (with 2 graphics cards) next time around. With the level of functionality that smartphones and tablets have today, laptops are becoming obsolete for users like me.

Comment author: NancyLebovitz 23 April 2014 09:10:29PM 0 points [-]

If you don't mind, what were the books, and what changes have you noticed in yourself?

Comment author: RomeoStevens 23 April 2014 09:09:40PM 0 points [-]

if you want/are able to wear running shoes all the time the advice doesn't really apply.

Comment author: Schlega 23 April 2014 09:09:00PM 1 point [-]

I'm in the under-qualified but interested camp. I'll plan on coming.

Comment author: polymathwannabe 23 April 2014 09:06:24PM 0 points [-]

I have sometimes found that Vimeo behaves better in Firefox than in Chrome.

Comment author: ChristianKl 23 April 2014 08:49:41PM 1 point [-]

My Straussian reading of Tyler Cowen is that a "serious" MIRI would be assembling and training a team of hacker-assassins to go after potential UFAIs instead of dinking around with decision theory.

If your idea of being serious is to train a team of hacker-assassins, that might indicate that your project is doomed from the start.

parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons.

As far as I know there are still nuclear weapons in the post-collapse Soviet Union.

Comment author: Clamwinds 23 April 2014 08:47:37PM *  1 point [-]

I do not know if this is the best place, but I have lurked here and on OB for roughly a year, and have been a fellow traveler for many more. Specifically, I want to talk to any members who have ADHD about how they go about treating their disorder. On the standard anti-akrasia topics, the narrative is that if you have anxiety, depression, etc., you should treat that first, but there seem to be few members here who have ADHD. Going to other forums to talk about things like which medication is "better" means wading through a lot of bad epistemology, bad conclusions, people faking their disorder, and worse. Do any other members have it and want to talk about it? I was hoping there could be a general discussion thread for people with it, if enough people have it. I've pored through studies and journals, but it is difficult to do alone.

Comment author: James_Miller 23 April 2014 08:45:12PM 0 points [-]

None online. I have read several books on the topic and undergo it myself.

Comment author: Gunnar_Zarncke 23 April 2014 08:43:28PM 0 points [-]

I have a Thinkpad W530 and can connect 4 external monitors :-) But I didn't get around to setting up and using this capability (it also only works with the docking station; otherwise only 2 + internal).

Why do you turn off the internal one? I use a screen layout which works well with one or more external monitors (it does limit placement of the laptop though).

Comment author: mwengler 23 April 2014 08:41:39PM 0 points [-]

Is it safe to say that this problem, this result, has no applicability to any similar problem involving a merely finite number of prisoners, say a mere googol of them?

Comment author: NancyLebovitz 23 April 2014 08:38:23PM 0 points [-]
Comment author: Vladimir_Nesov 23 April 2014 08:37:34PM *  0 points [-]

It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own "FAIs" much older and therefore more powerful.

This needs unpacking of "deal with". A FAI is still capable of optimizing a "hopeless" situation better than humans, so if you focus on optimizing and not satisficing, it doesn't matter if the absolute value of the outcome is much less than without the aliens. Considering this comparison (value with aliens vs. without) is misleading, because it's a part of the problem statement, not of a consequentialist argument that informs some decision within that problem statement. FAI would be preferable simply as long as it delivers more expected value than alternative plans that would use the same resources to do something else.

Apart from that general point, it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that's expensive to take away (or take away without hurting its value) even if the opponent is a superintelligence that spent aeons working on this problem (analogy with modern cryptography, where defense wins against much stronger offense), in which case a FAI would have something to bargain with.

Comment author: Lumifer 23 April 2014 08:33:25PM 1 point [-]

Lastly, I would point out that I speak about political ideas quite freely and without much of an attachment. It might be that you take a point I'm making overly seriously.

Ah. OK then.

Comment author: NancyLebovitz 23 April 2014 08:28:25PM 0 points [-]

Source of information about effectiveness and duration?

Comment author: ChristianKl 23 April 2014 08:23:57PM 2 points [-]

In your list you didn't mention the topic of having children. If you marry someone with the intention of spending the rest of your life together, I think you should be on the same page about having children before you marry.
