Comment author: Gleb_Tsipursky 20 September 2016 01:59:22AM -2 points [-]

Weird works for me, and I actually associate positive value with weirdness. But of course your mileage may vary. Any term that works to indicate distance from an identity label viscerally to one's System 1 will do, as Gram_Stone pointed out.

Comment author: gwern 19 September 2016 06:22:44PM 4 points [-]

Still, my question remains - is there real data out there to support the contention that P(elite career|IQ) has a local max and then decreases for higher IQ?

No. As I point out in my comment there, the evidence is strongly the other way: TIP/SMPY. To the extent that measures like wealth hit diminishing returns or even fall (eg Zagorsky), it has as much to do with personal choices & values as ability: the physicist who could make money on Wall Street but chooses to continue studying particles, the person who chooses to become an influential but poor writer, etc. (There are many coins of the realm, and greenbacks are but one.)

Comment author: James_Miller 16 September 2016 02:40:54AM 3 points [-]

No, I think Adams assigns a higher probability to Trump winning than most people do. I think Adams accepted that this theory about Trump would cost him money.

Comment author: DataPacRat 14 September 2016 05:53:50PM 4 points [-]

not the right place to start

Who says that's where I'm starting? :)

I already have my short-term physical supplies, including water, food, camping gear, and AA-battery-powerable handheld ham radio. I also have a highly-portable solar panel capable of keeping my phone, and the offline copy of Wikipedia I keep on its SD Card, functioning regardless of the power grid; and I have enough battery-backup stuff at home to run my laptop long enough to copy the latest Wikipedia dump (and whatever emergency-survival ebooks I've collected by then) onto that SD card.

Comment author: James_Miller 14 September 2016 03:20:51PM 3 points [-]

I agree with your first paragraph, but Adams has described how his Trump writing has decimated his ability to earn money as a public speaker because people who hire such speakers want to avoid controversy. Adams appearing on the podcast of an obscure college professor was an act of altruism.

Comment author: niceguyanon 14 September 2016 01:32:28PM 4 points [-]

Unless you are wealthy, being NEET is generally not a good thing IMO, because you will feel crappy about being low status and you will lack resources. Not sure what your definition of doing nothing is, but reasonable ones include eating at nice restaurants, expensive video games, gym memberships, courting mates, concerts, clothes, etc. Doing nothing costs a fortune.

Comment author: Furcas 13 September 2016 03:03:27PM *  4 points [-]

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

Comment author: ChristianKl 13 September 2016 10:01:06AM 4 points [-]

I don't see much value in having a recent copy of Wikipedia or Project Gutenberg on my computer. In both cases the availability of the information is secured by other parties. It's more valuable to make sure that I store information that isn't protected by other people.

Comment author: NancyLebovitz 13 September 2016 12:41:12AM 4 points [-]

Relax when trying to remember something instead of making an effort.

Comment author: gjm 12 September 2016 10:45:36PM -1 points [-]

You are repeatedly telling me I've said things I actually haven't, telling me I think things I actually don't, telling me I don't know things I actually do, etc., etc. You have not yet succeeded in communicating any new insights to me; we may of course disagree about why that is.

Bored now. Bye.

Comment author: DataPacRat 12 September 2016 10:09:14PM 4 points [-]

Time to rebuild a library

My 5-terabyte hard drive went poof this morning, and silly me hadn't bought data-recovery insurance. Fortunately, I still have other copies of all my important data, and it'll just take a while to download everything else I'd been collecting.

Which brings up the question: What info do you feel it's important to have offline copies of, gathered from the whole gosh-dang internet? A recent copy of Wikipedia and the Project Gutenberg DVD are the obvious starting places... which other info do you think pays the rent of its storage space?

Comment author: Houshalter 12 September 2016 06:16:50PM 4 points [-]

Unfortunately it might also be an area where epistemic and instrumental rationality clash. In fact, most of the world does not have freedom of speech in the same way the US does - if one advocated HBD in, say, Germany, could one be thrown in prison in the same way people are imprisoned for saying 'seig heil'?

There is a difference between advocating something and merely believing it. But I'm mostly skeptical of the people that put "strongly disagree" on that question. As opposed to "disagree" or "neutral". The fact that it's so correlated with political ideology is more evidence that it's just political bias.

If I lived 200 years ago, I wouldn't go around advocating atheism. But I might have believed it privately, and I would be more skeptical of the openmindedness of people that say they "strongly oppose the evils of atheism".

The study I am thinking of did account for this.

I really don't know. When I researched this it seems like the effects are pretty hard to estimate. Different models give very different results. A recentish study using more modern climate models shows that the effects would be catastrophic and last for multiple years:

https://en.wikipedia.org/wiki/Nuclear_winter#2007_study_on_global_nuclear_war

the products of a nuclear explosion have very short half-lives - the worst would be over within an hour. Not only do we not have enough bombs to contaminate the world, but ground zero would be habitable again after a few months.

Those first few months are the problem though. The crops and livestock die or absorb the radioactive isotopes. The people too if they don't happen to have a fallout shelter handy.

Also the nuclear bombs themselves aren't the only concern. You would have to deal with all the waste left in the cities they destroy. Nuclear power plants would melt down with no one to contain them. Vast amounts of chemical waste would leak from abandoned chemical plants and waste storage. Oil would leak and pollute the oceans with no cleanup.

I don't know how to estimate the damage of this, but it should be at least as bad as, or worse than, major industrial accidents of the past, like Bhopal, Deepwater Horizon, or Chernobyl, except all happening at once and with no one left to organize any kind of response.

while I think a nuclear war between allmost all countries is unlikly, its still a lot more likly then 90% of humanity killed by environmental or political collapse.

I think you are underestimating the secondary effects. I imagine a complete destruction of the global economy. There isn't enough food to go around and lots of countries are starving. This would lead to more war and chaos.

A few thousand years ago the civilizations of the Mediterranean all collapsed almost at once. It's now speculated to be the result of a serious drought and bad weather. The states that couldn't feed their populations got overthrown, and their hungry populations went to war with neighboring countries for food, until nothing of the old orders remained. It was a serious setback for humanity.

If that happened in the modern world, technological civilization might end and never be restarted. The modern world depends on hugely complex infrastructure and tons of different industries and inputs. If we lose that, it would be very difficult to rebuild. We've already extracted most of the easy to get to minerals and fossil fuels. Much farmland has been degraded from overuse and depends on inputs of fertilizer, irrigation systems, and of course modern machinery which would be difficult to replace.

Comment author: Viliam 12 September 2016 04:09:33PM 4 points [-]

As a person who has read 100% of the Sequences, I would also prefer that a shorter version existed. But, as far as I know, it doesn't exist yet. Someone would have to make it. Someone other than Eliezer, because this is not at the top of his priority list.

Would I be losing anything if I didn't need to be convinced, I just want to know the pointers?

You would probably be more likely to forget them. In general, a longer text requires you to spend more time focusing on the idea. If someone converted the Sequences into a 20-page PowerPoint presentation, a week later you probably wouldn't remember anything.

I realize how what I wrote here conflicts with my desire to have a shorter version of the Sequences, and... I don't know. Perhaps the shorter version should use other techniques for easier memorization, e.g. funny pictures.

Comment author: moridinamael 12 September 2016 03:35:48PM 4 points [-]

First, you should probably read the documents we refer to as the Sequences before you try to "correct" us.

Second,

A lot of things have you confused the territory being the map.

For example, that you exist, is a map.

That there is a being there, creature of some kind, it's a map.

That you have a brain.

Every. Single. Word. Is A Map.

We all know this.

What is the territory?

Become silent of all thoughts, without using thoughts to manipulate or lie, neither using thoughts not to manipulate or lie.

You seem to be referring to meditative states. A lot of us do this, for various reasons. It really has little to do with rationality or arationality. Quieting down and dissociating from one's thoughts certainly helps with clear thinking.

You think you are in control, thus the flow of life doesn't flow effortlessly. :)

We mostly don't believe in free will.

But it's fine to let go, and be present in this moment, where there, you are, the territory, which is arational.

There will be no reason for reasoning or understanding, it is arational.

Just because you're in a meditative state of thoughtlessness doesn't mean that you're doing anything beyond engaging with yet another set of maps. You're just engaging with them nonverbally and intuitively.

It is always the case, whether you think about it or not. I can welcome you in to see for yourself, there's a lot of beauty to be had.

Again, lots of us meditate, and we're all about beauty. Not sure where you're getting this perspective.

Please don't be dogmatic. Try and see for yourself the possible truth which is right before your eyes, the possible truth that you do not exist, that you, and the possibility that everything else is a fiction. The fiction of the mind.

But you will still be to function, to be able to go to AI conferences and talk about the latest improvements, or talk decision theory or whatever else you have going on in your life. Because the belief that you will lose these things, by becoming more aware, is a trick of the ego. It's highly improbable.

So go ahead, and see for yourself. Likely though you need to work on yourself, there's nothing which is more important than the machine which does not come with an instruction manual. That is you. What you think is you. What I mean is the practical you.

I see buried in here a sales pitch for engaging in some kind of meditative or mindfulness practice. I admit that the foundational documents of Less Wrong don't explicitly advocate for taking up meditation, but it's a popular community topic.

Comment author: gjm 12 September 2016 12:48:35PM -1 points [-]

It looks like the analysis didn't suppress responses that gave something other than 50 as an answer to the question about a coin flip. It probably should.
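For concreteness, here is a minimal sketch of the suggested filtering step; the file name and the column name for the fair-coin control question are assumptions for illustration, not the survey's actual schema.

```python
import pandas as pd

survey = pd.read_csv("survey_responses.csv")  # hypothetical file name
# Keep only respondents who answered 50 to the fair-coin control question.
valid = survey[survey["calibration_coin_flip"] == 50]
print(f"Dropped {len(survey) - len(valid)} respondents who failed the control question")
```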

Comment author: Soothsilver 12 September 2016 12:09:43PM 4 points [-]

Being around here has made me think that I know everything interesting about the world and suppressed my excitement and joy from many minor things I could do. I also feel like my sense of wonder diminished. As I write this, I am a little unhappy, and in a period of depression, but I had similar feelings, if less intense, even before this period.

I was wondering whether you have any advice on how to restore this; or even better, how to "forget" as much rationality and transhumanism as possible (if not actually forgetting, then at least "to think and feel as I did before I read the Sequences")?

Comment author: Luke_A_Somers 11 September 2016 05:44:42PM 4 points [-]

Is there a thread for the calibration question analysis? I have more questions and comments about that than about this.

Comment author: skeptical_lurker 11 September 2016 01:42:04PM 3 points [-]

This might make some sense if DNNs were being used to further our understanding of theoretical physics, but afaik they're not. They're being used to classify cat pics. Since when do you use polynomial Hamiltonians to recognise cats?

These properties mean that neural networks do not need to approximate an infinitude of possible mathematical functions but only a tiny subset of the simplest ones

No finite DNN can approximate sin(x) over the entire real numbers, unless you cheat by having a sin(x) activation function.
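One illustrative (not rigorous) way to see this is to fit a small ReLU network to sin(x) on a bounded interval and then evaluate it far outside that interval: the fitted function is piecewise linear with finitely many pieces, so it cannot keep oscillating. The network size and library here are arbitrary choices for the sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

x_train = np.linspace(-2 * np.pi, 2 * np.pi, 2000).reshape(-1, 1)
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=5000, random_state=0)
net.fit(x_train, y_train)

# Far outside the training range the piecewise-linear fit bears no
# resemblance to sin(x); it just extrapolates linearly.
print(net.predict(np.array([[100.0], [1000.0]])))
print(np.sin([100.0, 1000.0]))
```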

Comment author: Manfred 10 September 2016 11:48:43PM 4 points [-]

I'd blame the MIT press release organ for being clickbait, but the paper isn't much better. It's almost entirely flash with very little substance. This is not to say there's no math - the math just doesn't much apply to the real world. For example, the idea that deep neural networks work well because they recreate the hierarchical generative process for the data is a common misconception.

And then from this starting point you want to start speculating?

Comment author: John_Maxwell_IV 10 September 2016 01:46:48PM 4 points [-]

Thanks for the analysis!

The median amount donated to bugs rights charities is listed as $157.5. That implies that half of survey respondents donated >$150 to bugs rights charities. Obviously this is kind of implausible. I assume the real number who donated to bugs rights charities is 4 people, since the donations sum to $1083.0 and the average amount donated is $270.75. This also goes for the other donation-related questions--just something to keep in mind.
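The arithmetic behind that inference, using the figures quoted above:

```python
total_donated = 1083.0
mean_donation = 270.75
print(total_donated / mean_donation)  # 4.0, i.e. four nonzero donations
```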

Comment author: morganism 05 September 2016 11:27:23PM 3 points [-]

Academic Publishing without Journals

By setting up the journals on a Bitcoin-type blockchain, you could reward reviewers and citations. SciCred!

just a stub to think about

https://hack.ether.camp/#/idea/academic-publishing-without-journals

Comment author: fubarobfusco 05 September 2016 05:49:02PM 2 points [-]

It's not as if LW has a problem of too much material these days.

Comment author: Houshalter 05 September 2016 09:15:44AM 4 points [-]

I wrote a thing that turned out to be too long for a comment: The Doomsday Argument is even Worse than Thought

Comment author: buybuydandavis 04 September 2016 08:03:25PM 4 points [-]

“Why does anything exist at all?”

I lose no sleep over this. I think people who do are just confused by language.

I'd say that if you examine your concept of "why", you find it presupposes existence.

Comment author: Elo 02 September 2016 07:17:32AM -2 points [-]

Tried listening.

3 minutes: most scientists are wrong.

doubt the rest is worth it.

Comment author: Dagon 30 August 2016 02:01:02PM 4 points [-]

You can also point out the contradiction that they don't seem to be in a hurry to take the obvious first step of killing themselves, proving that they see at least one human life as a net positive. Then talk about everyone else they don't want to kill or prevent from being born.

Be aware, though, that this isn't truth-seeking. It's debate for the fun of it.

Comment author: gwern 28 August 2016 08:02:10PM 4 points [-]

Comment author: WalterL 26 August 2016 04:40:35PM 1 point [-]

Aw come on guys. Negative karma for literally pointing out a news site? What does that even mean?

Comment author: philh 26 August 2016 11:51:54AM 3 points [-]

I feel it's important to note that he was talking about writing styles, not philosophy.

Comment author: Elo 25 August 2016 11:00:45PM -2 points [-]

think like machines rather than humans

01101000 01100001 01101000 01100001 01101000 01100001 01101000 01100001
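For anyone who doesn't read binary, a one-liner to decode it (assuming standard 8-bit ASCII):

```python
bits = ("01101000 01100001 01101000 01100001 "
        "01101000 01100001 01101000 01100001")
print(bytes(int(b, 2) for b in bits.split()).decode("ascii"))  # "hahahaha"
```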

Comment author: ThisSpaceAvailable 21 August 2016 01:31:02AM *  4 points [-]

I suppose this might be better place to ask than trying to resurrect a previous thread:

What kind of statistics can Signal offer on prior cohorts? E.g. percentage with jobs, percentage with jobs in data science field, percentage with incomes over $100k, median income of graduates, mean income of graduates, mean income of employed graduates, etc.? And how do the different cohorts compare? (Those are just examples; I don't necessarily expect to get those exact answers, but it would be good to have some data and have it be presented in a manner that is at least partially resistant to cherry picking/massaging, etc.) Basically, what sort of evidence E does Signal have to offer, such that I should update towards it being effective, given both E, and "E has been selected by Signal, and Signal has an interest in choosing E to be as flattering rather than as informative as possible" are true?

Also, the last I heard, there was a deposit requirement. What's the refund policy on that?

Comment author: gwern 19 August 2016 08:40:42PM 4 points [-]

You would, at the very least, be in violation of several acts regarding approval of GMOs: https://www.loc.gov/law/help/restrictions-on-gmos/usa.php https://en.wikipedia.org/wiki/Regulation_of_the_release_of_genetically_modified_organisms#United_States Specifically, you'd be violating FDA requirements by releasing '“new animal drugs” (NADs)' without approval. Depending on whether mosquitoes are considered plant pests, it looks like you'd also be violating Department of Agriculture laws. I assume you'd probably also be violating a number of EPA laws but didn't see anything specifically about that.

Comment author: James_Miller 19 August 2016 03:32:33PM 4 points [-]

For you I suggest something that also advances your career, so that you can devote more time to the project. If the answer isn't clear, I suggest talking to your professors and asking what they suggest. Another approach is to become a literal superhero: assemble a group of scientists who on their own could eradicate mosquitoes, and just do it. Don't wait for official approval.

Comment author: gjm 17 August 2016 11:37:16PM -2 points [-]

I thought I remembered seeing it linked some years back from a friend's blog. The friend in question has moved his blog a couple of times, though, and after looking through all the atheism-related stuff in its current incarnation I didn't find the link. It's also always possible, of course, that I'm misremembering.

Comment author: ChristianKl 12 August 2016 08:15:57PM 4 points [-]

I thought that to most LW'ers the weak version of "Calories in, Calories out" was uncontroversial.

EY likes to say that "mass in, mass out" works even better for predicting changes in weight.

Comment author: bbleeker 12 August 2016 06:45:59PM 3 points [-]

More like 1/100,000, and then when they thaw you, you'll be brain-damaged and have to live in an institution forever. They don't really know how to do this yet. How far along are they now? Have they frozen and thawed a mouse yet, and did it behave the same as before? I won't let them freeze me any earlier than that, because there's essentially no chance I'll even be able to walk and talk, let alone be someone present-me would recognize as 'me'.

Comment author: The_Jaded_One 11 August 2016 09:27:01PM 4 points [-]

This is more something you would do for a laugh than something that is intended as a serious recruitment strategy. There is a disclaimer at the top of the post.

Cryo has bad signalling value - signals weird + selfish. It's hard to overcome this but I am open to suggestions.

Comment author: Soothsilver 10 August 2016 05:17:21PM 4 points [-]

We're consequentialists here, so I get all the credit for it even if it wasn't much effort, right?

^^

Comment author: jimrandomh 09 August 2016 07:34:06PM 4 points [-]

You have noticed things happening that don't match your model of how you think the world (and nutrition in particular) should work. Rather than defy the data, maybe you could come up with a different model that better explains the observations?

Comment author: Lumifer 09 August 2016 04:50:17PM 4 points [-]

What are "allowable" variables and what makes one "allowable"?

I'm aiming for something like "once you know income (and other allowable variables) then race should not affect the decision beyond that".

That's the same thing: if S (say, race) does not provide any useful information after controlling for X (say, income) then your classifier is going to "naturally" ignore it. If it doesn't, there is still useful information in S even after you took X into account.

This is all basic statistics, I still don't understand why there's a need to make certain variables (like race) special.
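A toy sketch of that point, with synthetic data in which the outcome is driven entirely by X: the fitted coefficient on S comes out near zero, whereas a large coefficient on S would mean S still carries information after controlling for X. All names and numbers here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=n)                        # stand-in for income
S = (X + rng.normal(size=n) > 0).astype(int)  # correlated sensitive attribute
y = (X + rng.normal(size=n) > 0).astype(int)  # outcome depends on X only

model = LogisticRegression().fit(np.column_stack([X, S]), y)
print(model.coef_)  # the weight on S should be near zero in this setup
```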

Comment author: James_Miller 08 August 2016 01:46:11AM *  4 points [-]

True if gene drive is like antibiotics, but is it? Every day we wait, 1,200 people die of malaria; that delay is a price worth paying if, but only if, you get some significant benefit from waiting. Another big "unknown unknown" is what other viruses mosquitoes will put in us if we don't quickly eliminate them.

Comment author: James_Miller 07 August 2016 06:06:16PM 4 points [-]

Yes, this does reduce the benefit of getting Trump to support mosquito eradication.

Comment author: Dagon 05 August 2016 06:02:26PM 2 points [-]

I think there's a fundamental goal conflict between "fairness" and precision. If the socially-unpopular feature is in fact predictive, then you either explicitly want a less-predictive algorithm, or you end up using other features that correlate with S strongly enough that you might as well just use S.

If you want to ensure a given distribution of S independent of classification, then include that in your prediction goals: have your cost function include a homogeneity penalty. Note that you're now pretty seriously tipping the scales against what you previously thought your classifier was predicting. It's better and simpler to design and test the classifier in a straightforward way, but not to use it as the sole decision criterion.

Redlining (or more generally, deciding who gets credit) is a great example for this. If you want accurate risk assessment, you must take into account data (income, savings, industry/job stability, other kinds of debt, etc.) that correlates with ethnic averages. The problem is not that the risk classifiers are wrong, the problem is that correct risk assessments lead to unpleasant loan distributions. And the sane solution is to explicitly subsidize the risks you want to encourage for social reasons, not to lie about the risk by throwing away data.
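A minimal sketch of the "homogeneity penalty" idea: take the usual log-loss and add a term penalizing the gap in average predicted rates between the groups defined by S. The penalty form and the weight `lam` are illustrative choices, not a recommendation.

```python
import numpy as np

def penalized_loss(y_true, p_pred, S, lam=1.0):
    """Log-loss plus a demographic-parity style homogeneity penalty."""
    eps = 1e-12
    log_loss = -np.mean(y_true * np.log(p_pred + eps)
                        + (1 - y_true) * np.log(1 - p_pred + eps))
    # Penalize differences in the average predicted rate across groups of S.
    gap = abs(p_pred[S == 1].mean() - p_pred[S == 0].mean())
    return log_loss + lam * gap
```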

Comment author: Panorama 03 August 2016 10:37:36AM 4 points [-]

Medical benefits of dental floss unproven

The federal government has recommended flossing since 1979, first in a surgeon general's report and later in the Dietary Guidelines for Americans issued every five years. The guidelines must be based on scientific evidence, under the law.

Last year, the Associated Press asked the departments of Health and Human Services and Agriculture for their evidence, and followed up with written requests under the Freedom of Information Act.

When the federal government issued its latest dietary guidelines this year, the flossing recommendation had been removed, without notice. In a letter to the AP, the government acknowledged the effectiveness of flossing had never been researched, as required.

The AP looked at the most rigorous research conducted over the past decade, focusing on 25 studies that generally compared the use of a toothbrush with the combination of toothbrushes and floss. The findings? The evidence for flossing is "weak, very unreliable," of "very low" quality, and carries "a moderate to large potential for bias."

Comment author: ChristianKl 03 August 2016 10:31:06AM 4 points [-]

Last week I had a discussion with a person who believed that because a science fiction film said that dolphins use 30% of their brain, dolphins indeed use 30% of their brain and therefore more than humans with their 10%.

It felt a bit painful, but it seems like the epistemic hygiene of some people in our society is very poor. Various producers of TV shows might have more responsibility for not making facts up than they believe they have.

Comment author: Lumifer 02 August 2016 05:04:07PM 4 points [-]

There are other ways to prevent global warming. Plan C is creating artificial nuclear winter by volcanic explossion or starting large scale forest fires with nukes.

Goes straight into the "Shit LW people say" bucket.

Comment author: gjm 02 August 2016 10:54:12AM -1 points [-]

If the code is available in a form that enables people to build it, that seems likely to reduce sales considerably whatever the licence. (In any case, I don't think CC-ness of the licence is the relevant feature.)

If the source code is available then nagging, begging and crippling are easily removed. (Unless the crippling is a matter of omission and the uncrippling bits are paid for -- but that's just one variety of freemium.)

Your first suggestion, a good plugin API, seems like the way to go. moridinamael, what advantages do you see to open source over a plugin API?

Other possible options:

  • Divide the app into two parts. One is open-source and is the part that would be extended by plugins. One is closed-source and has most of the secret sauce in it. Someone buying the app gets the binaries for both parts and the source for the extensible part. Of course this is only any good if you can find a way to split the app up that doesn't kill its efficiency or break its architecture.
    • The open-source extensible part might be minimal (just enough to support plugins -- this ends up looking a lot like the "plugin API" option, I think) or maximal (so that the only closed-source part is an "engine" that does some clever thing you are hoping other people can't duplicate) or in between.
  • Have part of the app run not on the user's computer or mobile device but on servers under your control. Charge for access to those servers.
  • Just make it open source and do something entirely different to satisfy your capitalist rent-seeking exploitative desires :-).
Comment author: PhilGoetz 01 August 2016 07:15:57PM *  2 points [-]

From the International Craniofacial Institute's web page on cleft palate.

What they say:

Statistics reassure us that having a child with a cleft does not mean you’ll have other children with the same condition. In fact, your chances only increase by 2 to 5 percent compared to couples with no cleft-affected children.

What they mean:

The chances that your next child will have cleft palate increases from 0.15% to about 4%. Your odds ratio multiplier is 25.
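The arithmetic, assuming a baseline rate of roughly 0.15% and a recurrence rate of roughly 4%; the exact multiplier depends on the baseline you assume, so anywhere in the mid-to-high twenties is consistent with the quoted statistics.

```python
base, recurrence = 0.0015, 0.04
print(recurrence / base)                                      # relative risk ~ 27
print((recurrence / (1 - recurrence)) / (base / (1 - base)))  # odds ratio   ~ 28
```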

Comment author: SquirrelInHell 01 August 2016 10:44:30AM 4 points [-]

Let me give some feedback about your writing style, which I find consistently cryptic. You tend to describe your thoughts starting in the middle and giving the context later, or skipping it altogether. E.g. the first sentence reads

I find myself more and more interested in how the concept of "systematized winning" can be applied to a large group of people who have one thing in common, and that not even time, but - in my own very personal case - ...

Until this point, a context like "biology research" etc. does not appear anywhere, and a "large group of people who have one thing in common" could be all people who like ice cream. It is of course possible to decipher what you mean, but by writing in reverse order you make it unnecessarily hard.

~~~

Possibly, a part of the problems you are describing could be solved by storing all the raw data that is collected during research, not just conclusions. In some cases, the amount of data might pose technological problems, but humanity's capacity to store information cheaply is increasing very quickly. So we can just let the future generations analyse the data by themselves, if they care to do so.

Comment author: TheAncientGeek 01 August 2016 10:43:39AM 4 points [-]

Potential sentience has got to count, or it would be OK to kill sleeping people.

Comment author: James_Miller 31 July 2016 08:11:16PM 4 points [-]

Marriage used to be a "public binding precommitment" before no-fault divorce.

Comment author: NancyLebovitz 29 July 2016 05:33:23PM 4 points [-]

Rationalists still have too much trust in scientific studies, especially psychological studies.

Comment author: Elo 28 July 2016 11:23:47PM -2 points [-]

Rationalists don't even lift bro.

Many do.

http://thefutureprimaeval.net/why-we-even-lift/

Comment author: MrMind 28 October 2016 07:46:08AM 3 points [-]

I won't be able to create a new Open Thread on Monday (I will be at our national version of Comic-Con). Can someone east of the US create it?

Community service is good karma. Literally.

Comment author: TheOtherDave 27 October 2016 12:53:30AM 3 points [-]

This comment taken out of context kind of delighted me.

Comment author: NancyLebovitz 26 October 2016 02:01:23AM 3 points [-]

Can someone here come up with any sort of realistic value system a foreign civilisation might have that would result in it not destroying the human race, or at least permanently stunting our continued development, should they become aware of us?

Not being bored. Living systems (and presumably more so for living systems that include intelligence) show more complex behavior than dead systems.

Comment author: Viliam 25 October 2016 10:26:00PM *  3 points [-]

The part about healthcare is USA-specific, but the relationship between total hours and total pay is nonlinear at other places, too.

In Slovakia, the healthcare is set up so that everyone pays a fixed fraction of their income, and then everyone receives exactly the same healthcare regardless of how much they paid. So it shouldn't have any impact on hourly rate.

Yet, it is difficult to find part-time work on the market. When I tried it, I had to work for 50% of my previous salary just to reduce the work to 4 days a week, and the employer still believed they were doing me a favor. (After a few weeks I decided that getting 50% of the money for 80% of the time is not a smart deal, so I quit.)

I believe the problem is signalling. Almost everyone is okay with working full-time; especially men. (Women can use having small kids as an excuse for a part-time job, but that also dramatically reduces their hourly rate, which is an important part of the pay gap.) If you are a man unwilling to work full-time, it makes you weird.

So it's not like the employer literally needs you there 5 days a week. It's simply a decision not to hire a weirdo when there are non-weird candidates available. If you differ from the majority by not being willing to work 5 days a week, 8 hours a day, who knows what else is weird about you? Why take the unnecessary risk? Also, well-paid employees are supposed to pretend they love their job, and by asking for a part-time job you show too clearly that you actually care about something else more.

Thus, I sometimes had jobs where I was able to spend up to 50% of my working time just browsing websites from the company computer. But no comparably well paid option where I could officially work 4 days a week, or 6 hours a day, and then simply go home.

(I was also trying to get home office, so that instead of browsing the web I could do something useful. But the companies where the employees spend much time online are usually on some level aware of what is happening, so they don't allow home office. As long as everyone must stay in the building the whole day, the management can keep pretending that people are actually working.)

I believe that if, for example, 50% of people working in some profession demanded part-time work, this problem would mostly disappear. Then wanting to work part-time would simply be normal. But that's a coordination problem, and I don't even know how many people would actually be interested in working part-time if it were a legitimate option (with the same hourly rate).

Comment author: Lumifer 25 October 2016 03:06:41PM 2 points [-]

The issue is the standard "The AI neither loves you nor hates you, but you're made out of atoms...". The Europeans did not desire to wipe out Native Americans, they just wanted land and no annoying people who kept on shooting arrows at them.

Comment author: Lumifer 25 October 2016 02:39:08PM 2 points [-]

Because it would not fit into our values to consider exterminating them as the primary choice.

Did you ask the Native Americans whether they hold a similar opinion?

Comment author: Val 25 October 2016 02:10:22PM 3 points [-]

If we developed practical interstellar travel and went to a star system with an intelligent species somewhat below our technological level, our first choice would probably not be annihilating them. Why? Because it would not fit into our values to consider exterminating them as the primary choice. And how did we develop values like this? I guess at least in part it's because we evolved and built our civilizations among plenty of species of animals, some of which we hunted for food (and not all of them to extinction; even for those which did go extinct, wiping them out was not our goal), some of which we domesticated, and plenty of which we left alone. We also learned that other species besides us have a role in the natural cycle, and it was never in our interest to wipe out other species (except in rare circumstances, when they were a pest or a dangerous disease vector).

Unless the extraterrestrial species are the only macroscopic life-form on their planet, it's likely they evolved among other species and did not exterminate them all. This might lead to them having cultural values about preserving biodiversity and not exterminating species unless really necessary.

Comment author: siIver 25 October 2016 08:22:00AM 3 points [-]

Should probably have been posted in the open thread (not meant as a reproach)

Comment author: Houshalter 25 October 2016 07:20:41AM *  3 points [-]

The premise this article starts with is wrong. The argument goes that AIs can't take over the world, because they can't predict things much better than humans can. Or, conversely, that they will be able to take over because they can predict much better than humans.

Well so what if they can predict the future better? That's certainly one possible advantage of AI, but it's far from the only one. My greatest fear/hope of AI is that it will be able to design technology much better than humans. Humans didn't evolve to be engineers or computer programmers. It's really just an accident we are capable of it. Humans have such a hard time designing complex systems, keeping track of so many different things in our head, etc. Already these jobs are restricted to unusually intelligent people.

I think there are many possible optimizations of the mind for these kinds of tasks. There are rare humans who are very good at these tasks, showing that human brains aren't anywhere near the peak. An AI that is optimized for them will be able to design technologies we can't even dream of. We could theoretically make nanotechnology today, but there are so many interacting parts and complexities that humans are just unable to manage it. The internet runs on so much buggy software that it could probably be pwned in a weekend by a sufficiently powerful programming AI.

And the same is perhaps true of designing better AI algorithms: an AI optimized for AI research would be much better at it than humans.

Comment author: gwern 24 October 2016 11:10:41PM 3 points [-]

Along the lines of my earlier GCTA, I've written a Wikipedia article on genetic correlations.

Comment author: Mac 24 October 2016 06:49:44PM *  3 points [-]

what's the most annoying part of your life/job?

Pain. Moderate but constant pain from old sports injuries makes me: spend money on pain meds and counter irritants, work longer hours because the pain is distracting and reduces my productivity, limit physical activity and travel, deviate from an optimal exercise routine, fall into a black hole of grumpiness occasionally.

how much would you pay for a solution?

If by "solution" you mean an easy, one-time, guaranteed fix: $10,000

Comment author: Lumifer 21 October 2016 02:54:55PM 2 points [-]

You are literally asking me to solve the FAI problem right here and now.

No, I'm asking you to specify it. My point is that you can't build X if you can't even recognize X.

You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want.

Learning what humans want is pretty easy. However it's an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.

From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.

Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative. Since I don't expect to find myself in a privileged position, I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.

Later you say that CEV will average values. I don't have average values.

If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.

I see no evidence to believe this is true and lots of evidence to believe this is false.

You are essentially saying that religious people are idiots, and that if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.

Comment author: gwern 21 October 2016 01:50:46AM *  3 points [-]

I don't find it convincing. Even though it's long, I don't recognize any of the examples as instances of 'Ra', and I can't think of any examples of 'Ra' in my own experience. The name 'Ra' is also not that great: unlike some of the other reifications going around, like Yvain's 'Moloch', which at least have some intuitive connection with their concept, 'Ra' seems pretty much arbitrary.

EDIT: Obormot and saturn2 on IRC note that 'Ra' seems in her telling to slightly overlap with the whole complacent-elite meritocracy going on in the Ivy League & Wall Street, of the Twilight of the Elites type.

Comment author: Houshalter 21 October 2016 12:40:33AM 3 points [-]

most AI safety researchers have not done any research into the topic of (practical) AI research, so their opinions are irrelevant. How is this statement any different?

Because that statement is simply false. Researchers do deal with real-world problems and datasets. There is a huge overlap between research and practice. There is little or no overlap between AI risk/safety research and current machine learning research. The only connection I can think of is that people familiar with reinforcement learning might have a better understanding of AI motivation.

Really? There's a lot of frequent posters here that don't hold the Bostrom extremist view. skeptical_lurker and TheAncientGeek come to mind.

I didn't say there wasn't dissent. I said it wasn't an outlier view, and seems to be the majority opinion.

But if this site really has an orthodoxy, then it has no remaining purpose to me. Goodbye.

Look I'm sorry if I came across as overly hostile. I certainly welcome any debate and discussion on this issue. If you have anything to say feel free to say it. But your above comment didn't really add anything. There was no argument, just an appeal to authority, and calling GP "extremist" for something that's a common view on this site. At the very least, read some of the previous discussions first. You don't need to read everything, but there is a list of posts here.

Comment author: James_Miller 20 October 2016 04:00:23PM 3 points [-]

Megyn Kelly walked by me once. If she had handed me a knife and asked me to remove my own heart and give it to her, part of my brain would have felt obligated to comply.

Comment author: siIver 20 October 2016 01:41:10AM *  3 points [-]

This may be a naive and over-simplified stance, so educate me if I'm being ignorant--

but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.

Feel free to just provide a link if the argument has been discussed before.

Comment author: Manfred 20 October 2016 01:04:15AM *  3 points [-]

Depends on information. If people retain memories, so that each person-moment follows from a previous one, then knowing only that I suddenly find myself in a room means I'm probably in room A. If people are memory-wiped at some interval, then this increases the probability I should assign to being in room B - probability of being in a specific room, given that your state of information is that you suddenly find yourself in a room, is proportional to the number of times "I have suddenly found myself in a room" is somebody's state of information.

The above is in fact true. So here's a fun puzzler for you: why is the following false?

"If you tell me the exact time, then my room must more likely be B, because there are 1000 times more people in room B at that time. Since this holds for all times you could tell me, it is always true that my room is probably B, so I'm probably in room B."

Hint: Assuming that room B residents "live" 1,000,000 times longer than room A residents, how does their probability of being in room B look throughout their life, assuming they retain their memories?
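A small simulation of the person-moment counting in the hint. The occupancy and lifetime numbers are assumptions chosen to match the 1000x and 1,000,000x figures above, not the original post's exact setup.

```python
# At any moment: 1 occupant in room A, 1000 in room B (assumed).
# A residents stay 1 time step, B residents stay 1,000,000 steps (assumed).
T = 10**6                              # total simulated time steps
occ_A, occ_B = 1, 1000
life_A, life_B = 1, 10**6

# Memories retained: one "I suddenly find myself in a room" event per resident.
events_A = occ_A * T // life_A         # 1,000,000 distinct A residents
events_B = occ_B * T // life_B         # 1,000 distinct B residents
print(events_B / (events_A + events_B))   # ~0.001 -> probably room A

# Memory wiped every step: every occupied person-step is such an event.
print(occ_B * T / ((occ_A + occ_B) * T))  # ~0.999 -> probably room B
```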

Comment author: turchin 19 October 2016 11:02:27AM 3 points [-]

The page http://lesswrong.com/r/discussion/new/ returns error for me for 12 hours, but other pages are fine. Is it only my glitch?

error text: "You have encountered an error in the code that runs Less Wrong. The site maintainers have been informed and will get to it as soon as they can. In the unlikely event that you've bumped into this error before and think that no-one is paying attention, please report the error and how to reproduce it on http://code.google.com/p/lesswrong/issues/list"

If the error is localised you might still find awesome Less Wrong content in the Main article area or in the Discussion area.

Comment author: ChristianKl 18 October 2016 08:56:25PM 3 points [-]

What empirical evidence have you observed to back your belief that this technique is valuable?

Comment author: Lumifer 18 October 2016 07:57:18PM 3 points [-]

I don't have a most important value.

Comment author: username2 17 October 2016 05:02:08PM *  3 points [-]

This is deserving of a much longer answer which I have not had the time to write and probably won't any time soon, I'm sorry to say. But in short summary, human drives and morals are more behaviorist than utilitarian. The utility function approximation is just that, an approximation.

Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side effect of the materials used. Now, in some cases things are intended to fail early for safety reasons, e.g. to have the shovel break before your bones will. But even then this isn't some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is more a secondary consideration to prevent undesirable side effects in some failure modes.

Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose / utility function of the shovel? Pedantically, a little bit, if you accept the breaking point as a designed-in safety consideration. But it doesn't enlighten us about the hole-digging nature at all.

Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is that I'd rather have dust in no one's eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn't tell us much about our underlying desires, because there isn't some consistent mathematical utility function underlying our responses. At best it just reveals how we've been wired by genetics, upbringing, and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.

Comment author: James_Miller 17 October 2016 01:00:05PM *  3 points [-]

Yes, I agree. It shows children are trying to guess the teacher's password and are not doing math. Interestingly, when I asked my son this question he said you couldn't find the answer. When I asked how he knew that, he said he has seen other math problems where you don't have enough information to solve them.

Comment author: SithLord13 17 October 2016 12:51:29PM 3 points [-]

I think the issue here might be slightly different than posed. I think the real issue is that children instinctively assume they're running on corrupted hardware. In all their prior experience with math, they've had solvable problems. They've had problems they couldn't solve, and then been shown it was a mistake on their part. Without good cause, why would they suddenly assume all their priors are wrong, rather than just that they're failing to grasp it? Given their priors and information, it's rational to expect that they missed something.

Comment author: gworley 16 October 2016 12:41:31AM 3 points [-]

Medium makes it a little hard to find the RSS feeds, but it's at:

https://medium.com/feed/map-and-territory

Comment author: CronoDAS 15 October 2016 09:49:23PM 3 points [-]

Is there an RSS feed for new posts?

Comment author: WhySpace 15 October 2016 06:42:05PM 3 points [-]

If the majority of minds with moral weight are the result of an intelligent mind's decision, then the link between complexity and frequency may be weak. Pain is a strong motivator for some things, even if it's bad at motivating creativity, so perhaps there would still be an incentive to create more pain. This is extremely speculative though.

The bigger worry would be that forces like Moloch and evolution may favor pain. Wild animals appear to have much more pain in their lives than pleasure. Even if the carrot were a more effective motivator than the stick for some task, if pain were simpler and more robust, evolution would still favor it.

This would be especially important for things like Boltzmann brains. It seems unlikely to me that things like trees or insects can suffer, but if they can, we'd have a very hard time relating to minds so different from our own. With so little evidence, the choice of a good prior is crucial, so it would be useful to have a prior for the predominance of suffering over happiness.

Comment author: scarcegreengrass 13 October 2016 11:57:11AM 2 points [-]

Oh, this is much more complete, thanks.

Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x risk.
