If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
A full half (20/40) of the posts currently under discussion are meetup threads.
Can we please segregate these threads to another forum tab (in the vein of the Main/Discussion split)?
Edit: And only 5 or so of them actually have any comments in them.
I might as well point out my solution: I've set the date of the Austin meetup to be six years from now, and I edit the date each week. It stays on the map, it stays on the sidebar (so I remember to edit the date; if this were automatic, it could stay correct), and it stays out of Discussion.
Recent work shows that it is possible to use acoustic data to break public-key encryption systems. Essentially, if one can get the target to decrypt specific chosen ciphertexts, then the sounds the CPU makes during decryption can reveal information about the key. The attack was successfully demonstrated against 4096-bit RSA encryption. While some versions of the attack require high-quality microphones, some versions apparently succeeded using just mobile phones.
Aside from the general-interest issues, this is one more example of how a supposedly boxed AI might be able to send detailed information to the outside. In particular, one can leak surprisingly high bandwidth, even accidentally, through acoustic channels.
For personal devices the attacker may have access to the microphone inside the device via flash/java/javascript/an app, etc.
That might have been true a few years ago, but they point out that it's not as true anymore. For example, they suggest one practical application of this technique might be to put your own server in a colocation facility, stick a microphone in it, and slurp up as many keys as you can. They were also able to get a version of the technique to work from 4 meters away, which is far enough that this becomes somewhat different from having direct physical access. They also point out that laser microphones could be used with this method.
Yesterday I noticed a mistake in my reasoning that seems to be due to a cognitive bias, and I wonder how widespread or studied it is, or if it has a name - I can't think of an obvious candidate.
I was leaving work, and I entered the parking elevator in the lobby and pressed the button for floor -4. Three people entered after me - call them A, B and C - but because I hadn't yet turned around to face the door, as elevator etiquette requires, I didn't see which one of them pressed which button. As I turned around and the doors started to close, I saw that -2 and -3 were lit in addition to my -4. So, three floors and four people, means two people will come out on one of the floors, and I wondered which one it'll be.
The elevator stopped at floor -2. A and B got out. Well, I thought, so C is headed for -3, and I for -4 alone. As the doors were closing, B rushed back and squeezed through them. I realized she hadn't wanted -2 at all and had stepped out of the elevator absent-mindedly. I wondered which floor she did want. The elevator went down to -3. The doors opened and B got out... and then something weird happened: C didn't. I was surprised. Something wasn't right in my idle deductions. I figured it o...
Reproduced for convenience...
On G+, John Baez wrote about the MIRI workshop he's currently attending, in particular about Löb's Theorem.
Timothy Gowers asked:
Is it possible to give just the merest of hints about what the theorem might have to do with AI?
Qiaochu Yuan, a past MIRI workshop participant, gave a concise answer:
...Suppose you want to design an AI which in turn designs its (smarter) descendants. You'd like to have some guarantee that not only the AI but its descendants will do what you want them to do; call that goal G. As a toy model, suppose the AI works by storing and proving first-order statements about a model of the environment, then performing an action A as soon as it can prove that action A accomplishes goal G. This action criterion should apply to any action the AI takes, including the production of its descendants. So it would be nice if the AI could prove that if its descendants prove that action A leads to goal G, then action A in fact leads to goal G.
The problem is that if the AI and its descendants all believe the same amount of mathematics, say PA, then by Löb's theorem this implies that the AI can already prove that action A leads to goal G. So it must
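For reference, here is a minimal sketch of the schematic form of Löb's theorem that this argument leans on (standard notation, not anything specific to the workshop write-up; □P abbreviates the arithmetized claim "PA proves P"):

```latex
% Löb's theorem, schematically: for any sentence P,
%   if PA proves (□P → P), then PA proves P.
\[
  \mathrm{PA} \vdash \big( \Box P \rightarrow P \big)
  \quad\Longrightarrow\quad
  \mathrm{PA} \vdash P .
\]
% In the toy model, take P = "action A accomplishes goal G". If the parent
% AI could prove "whenever my descendant proves P, P is in fact true"
% (i.e. □P → P, since the descendant also reasons in PA), Löb's theorem
% says the parent could already prove P outright, without ever consulting
% the descendant.
```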
As there was some interest in Soylent some time ago, I'm curious what people who have some knowledge of dietary science think of its safety and efficacy given that the recipe appears to be finalized. I don't know much about this area, so it's difficult for me to sort out the numerous opinions being thrown around concerning the product.
ETA: Bonus points for probabilities or general confidence levels attached to key statements.
Given that dogfood and catfood work as far as mono-diets go
They mostly seem to, but if they cause a drop in energy or cognitive capability because of some nutrient-balance problem, the animals won't become visibly ill and humans are unlikely to notice. A persistent brain fog from eating a poor diet, on the other hand, would be quite bad for humans.
Perhaps eating isn't a major pleasure of life for everyone.
I'm imagining an analogous argument about exercise. Someone formulates (or claims to, anyway) a technique combining drugs and yoga that provides, in a sweatless ten minutes per week, equivalent health benefits to an hour of normal exercise per day. Some folks are horrified by the idea — they enjoy their workout, or their bicycle commute, or swimming laps; and they can't imagine that anyone would want to give up the euphoria of extended physical exertion in exchange for a bland ten-minute session.
To me, that seems like a failure of imagination. People don't all enjoy the same "pleasures of life". Some people like physical exercise; others hate it. Some people like tasty food; others don't care about it. Some people like sex; others simply lack any desire for it; still others experience the urge but find it annoying. And so on.
I'm not ready for my current employer to know about this, so I've created a throwaway account to ask about it.
A week ago I interviewed with Google, and I just got the feedback: they're very happy and want to move forward. They've sent me an email asking for various details, including my current salary.
Now it seems to me very much as if I don't want to tell them my current salary - I suspect I'd do much better if they worked out what they felt I was worth to them and offered me that, rather than taking my current salary and adding a bit extra. The Internet is full of advice that you shouldn't tell a prospective employer your current salary when they ask. But I'm suspicious of this advice - it seems like the sort of thing they would say whether it was true or not. What's your guess - in real life, how offputting is it for an employer if a candidate refuses to disclose that kind of detail when you ask for it as part of your process? How likely are Google to be put off by it?
I work at Google. When I was interviewing, I was in the exact same position of suspecting I shouldn't tell them my salary (which I knew was below market rate at the time). I read the same advice you did and had the same reservations about it. Here's what happened: I tried to withhold my salary information. The HR person said she had to have it for the process to move forward and asked me not to worry about it. I tried to insist. She said she totally understood where I was coming from, but the system didn't allow her flexibility on this point. I told her my salary, truthfully. I received an offer which was substantially greater than my salary and seemingly uncorrelated with it.
My optimistic reading of the situation is that Google's offer is mostly based on the approximate market salary for the role, adjusted perhaps by how well you did at the interviews, your seniority, etc. (these are my guesses; I don't have any internal info on how offers are calculated by HR). Your current salary is needed for bookkeeping, statistics, or maybe in case it's higher than what Google is prepared to offer and they want to decide whether it's worth it to up the offer a little bit. That's my theory, but keep in mind that it's just a bunch of guesses, and also that it's a big company and policies may differ across countries and offices.
I think it is worth mentioning that "the system won't allow for flexibility on this" is just about the oldest negotiation tactic in the book. (Along with, "let me check with my boss on that...")
In reality, there is zero reason Google, or any employer, should need to know your current or past salary information apart from that information's ability to work as a negotiation tactic in their favor.
Google has something you want (a job that pays $) and you have something they want (the skill to make them $). Sharing your salary this early in the process tips the negotiation scales (overwhelmingly) in their favor.
That said, Google is negotiating from a place of immense strength. They can choose from nearly anyone they want, while there is only one Google...
...so, if Google wants to know your salary, tell them your salary. And enjoy your career at one of the coolest companies around. You win. :)
Sidestepping the question:
Interview with other companies (Microsoft, Facebook, etc.) and get other offers. When the competition is other prospective employers, your old salary won't much matter.
The rationale behind salary negotiation is best laid out in patio11's "Salary Negotiation: Make More Money, Be More Valued" (that article is well worth the read).
In real life, the sort of places where employers take offense at your not disclosing your current salary (or, more generally, at salary negotiation, that is, they'd hire someone else if he's available more cheaply) are not the places you want to work: if they're selecting for lower salaries, all your future coworkers are going to be, well, cheap.
This is anecdotally not true for Google; they can afford truckloads if they really want to have you on board, so this is much more likely to come from standardized processes. Also note that in Google's case, decisions are delegated to a board of stakeholders, so there isn't really one person who can be put off by the salary question (and they probably handle the hire/no-hire decision entirely separately from the salary negotiation).
How bad an idea would it be to just let my employer know what's going on?
Extremely bad. People have been fired or denied promotion because of this. Don't even tell any of your colleagues.
I am not discussing the legal aspects of this, but you will probably be perceived as not worth investing in over the long term. Imagine that your interview fails and you decide to stay. Your current employer is not going to trust you with anything important anymore, because they will be expecting you to leave soon anyway.
Okay, this may sound irrational, because you are not your employer's slave, and technically you are (as anyone else is) free to leave sooner or later. But people still make estimates. It is in your best interest to pretend to be a loyal and motivated employee, until the day you are 100% ready to leave.
It feels deceitful
This is part of human nature; it's what we have evolved to do. Even your dislike of deceit is part of the deceit mechanism. If you unilaterally decide to stop playing the game, it most likely means you lose.
There is probably an article by Robin Hanson about how LinkedIn helps us to get in contact with new job offers while maintaining plausible deniability, which is what makes it so popular, but I can't find the link now.
Something's brewing in my brain lately, and I don't know what. I know that it centers around:
- There were probably people born during the Crimean War/US Civil War/Boxer Rebellion who then died of a heart attack in a skyscraper, in a passenger plane crash, or caught up in, say, WWII.
- Accurate descriptions of people from a decade or two ago tend to seem tasteless (casual homophobia). Accurate descriptions of people several decades ago seem awful and bizarre (hitting your wife, blatant racism). Accurate descriptions of people from centuries ago seem alien in their flat-out implausible awfulness (royalty shitting on the floor at Versailles, the Albigensian Crusade, etc...).
- We seem no less shocked now by social changes and technological developments, and no less convinced that everything major under the sun has been done and only tweaks and refinements remain, than people of past eras were.
I guess what I'm saying is that the Singularity seems a lot more likely, given the factual record, than it otherwise might have seemed, but that we won't realize we're going through it until it's well underway, because our perception of such things will also wind up speeding up for most of it.
"Cthulhu always swims left" isn't an observation that society will settle on the left's preferences on every single issue, but that the general trend is leftward movement. If you interpreted it the first way, the fall of the Soviet Union and the move away from planned economies should be a far more important counterexample.
Before continuing, I should define how I'm using left and right. I think they are real in the sense that they are the coalitions that tend to form under current socioeconomic conditions, when, due to the adversarial nature of politics, very complicated preferences get compressed into as few dimensions (one) as possible. Principal component analysis makes for a nice metaphor here.
Back to Cthulhu. As someone whose preferences can be described as right-wing, I would be quite happy to return to 1950s levels of state intervention, welfare, and relative economic equality in exchange for that period's social capital and cultural norms, controlling for technological progress, obviously. Some on the far right of mainstream conservatism might accept the same trade. This isn't to say I would find it a perfect fit, not by a long shot, but it would be a net improvement. I believe most We...
The "who would prefer to return 50 years back?" argument is interesting, but I think the meaning of "winning" has to be defined more precisely. Imagine that 50 years ago I was (by whatever metric) ten times as powerful as you, but today I am only three times as powerful than you. Would you describe this situation as your victory?
In some sense, yes, it is an improvement of your relative power. In other sense, no, I am still more powerful. You may be hopeful about the future, because the first derivative seems to work for you. On the other hand, maybe the second derivative works for me; and generally, predicting the future is a tricky business.
But it is interesting to think about how the time dimension relates to politics. I was thinking that maybe it's the other way round; that "the right" is the side which self-identifies with the past, so in some sense it is losing by definition -- if your goal is to be "more like yesterday than like today", then tautologically today is worse by this metric than yesterday was. And there is a strong element of returning to the past in some right-wing movements.
But then I realized that some le...
There are also a lot of norms about avoiding physical contact with other people. A therapist is supposed to work on the mind, and that doesn't mean just hugging a person for a minute. I can imagine a society in which casual touch between people is a lot more intimate than it is nowadays, and in which behavior between males that a conservative American would label as homosexual would be default social behavior between friends.
This futuristic society of casual male intimacy was known as the 19th century.
In it, as in the Russia of the 1950s and the modern Middle East, you could observe men dancing together, holding hands, cuddling, sleeping together and kissing.
I want my family to be around in the far future, but they aren't interested. Is that selfish? I'm not sure what I should do, or if I should even do anything.
I hope MIRI is thinking about how to stop Johnny Depp.
There's been some prior discussion here about the problem of uncertainty about mathematical statements. Since most standard priors (e.g. Solomonoff) assume that one can do a large amount of unbounded arithmetic, it is difficult to assign confidence to, say, 53 being prime, as it is to reason about open mathematical problems (e.g. how does one estimate how likely it is that the Riemann hypothesis is provable in ZFC?). The problem of bounded rationality here seems serious.
I've run across something that may be related, and at minimum seems hard t...
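Tangentially, one standard way to make "bounded confidence that 53 is prime" concrete is a randomized primality test. Here is a minimal sketch (my own illustration, not something from the posts above): the Miller-Rabin test, whose per-round error bound gives a number you can treat, loosely, as a confidence level obtained with a fixed computational budget.

```python
# Minimal sketch: each random Miller-Rabin round that n passes cuts the
# chance that a composite n slips through by at least a factor of 4, so
# after k passing rounds the probability that a composite would have
# survived is at most 4**-k (an error bound, not a true posterior).

import random

def miller_rabin_round(n: int, a: int) -> bool:
    """One Miller-Rabin round: True if n passes for the witness a."""
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def prob_composite_bound(n: int, rounds: int = 10) -> float:
    """Crude upper bound on P(n is composite) after `rounds` passing tests."""
    if n < 4:
        return 0.0 if n in (2, 3) else 1.0
    if n % 2 == 0:
        return 1.0
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if not miller_rabin_round(n, a):
            return 1.0          # a witness was found: definitely composite
    return 4.0 ** -rounds       # bound on the chance a composite survived

print(prob_composite_bound(53))  # ~1e-6: very confident 53 is prime
```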
Eliezer said in his Intelligence Explosion Microeconomics that Google is maybe the candidate with the most potential to start the FOOM scenario.
I've gotten the impression that Google doesn't really take this Friendliness business seriously. But beyond that, what is Google's stance towards it? Somewhere on the scale of "what useless daydreaming", "an interesting idea, but we're not willing to do anything about it", "we may allocate some minor resources to it at some point in the future", or something else?
http://lesswrong.com/lw/4rx/agi_and_friendly_ai_in_the_dominant_ai_textbook/
This book's second author is Peter Norvig, director of research at Google.
Are there solid examples of people getting utility from Lesswrong? As opposed to utility they could get from other self-help resources?
Are there solid examples of people getting utility from Lesswrong?
The Less Wrong community is responsible for me learning how to relate openly to my own emotions, meeting dozens of amazing friends, building a career that's more fun and fulfilling than I had ever imagined, and learning how to overcome my chronic bouts of depression in a matter of days instead of years.
As opposed to utility they could get from other self-help resources?
Who knows? I'm an experiment with a sample size of one, and there's no control group. In the actual world, other things didn't actually work for me, and this did. But people who aren't me sometimes get similar things from other sources. It's possible that without Less Wrong, I might still have run across the right resources and the right community at the right moment, and something else could have been equally good. Or maybe not, and I'd still be purposeless and alone, not noticing my ennui and confusion because I'd forgotten what it was like to feel anything else.
I did self-help before I joined Lesswrong, and had almost no results. I'd partially credit Lesswrong with changing me in ways that led me to switch my major from graphic design to biology, in an effort to help people through research. I've also gotten involved in effective altruism in my community, starting the local THINK club for my college, which is donating money to various (effective) charities. I have a lovely group of friends from the Lesswrong study hall who have been tremendously supportive and fun to be around. There are a number of other small things, like learning about melatonin, which fixed my insomnia... etc., but those are more a result of being around people who are knowledgeable about such things, not necessarily lesswrong-people.
In short, yes, it is helpful.
A while back, I posted in an open thread about my new organisation of LW core posts into an introductory list. One of the commenters mentioned the usefulness of having videos at the start and suggested linking to them somehow from the welcome page.
Can I ask who runs the welcome page, and whether we can discuss here whether this is a good idea, and how perhaps to implement it?
What's so great about rationality anyway? I care a lot about life and would find it a pity if it went extinct, but I don't care so much about rationality, and specifically I don't really see why having the human-style half-assed implementation of it around is considered a good idea.
Following up on my post in the last open thread, I'm reading Understanding Uncertainty, which I think is excellent.
I would like to ask for help with one thing, however.
The book is in lay terms, and tries to be as non-technical as possible, so I've not been able to find an answer to my question online that hasn't assumed my having more knowledge than I do.
Can anyone give me a real-life example of a series of results where the assumption of exchangeability holds and it isn't a Bernoulli series?
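(Not from the book, but one textbook example that may be the kind of thing you're after is a Pólya urn: after each draw you put the ball back along with one extra ball of the same colour. Every sequence with the same number of red draws has the same probability, so the draws are exchangeable, but they are not i.i.d. Bernoulli, because each draw changes the composition of the urn. A minimal simulation sketch, with starting counts chosen just for illustration:)

```python
# Minimal sketch of a Pólya urn: start with 1 red and 1 blue ball; after
# each draw, return the ball plus one extra ball of the same colour.
# The colour sequence is exchangeable but not i.i.d. Bernoulli:
# early draws shift the probabilities of later ones.

import random

def polya_urn(n_draws: int, red: int = 1, blue: int = 1) -> list[str]:
    draws = []
    for _ in range(n_draws):
        colour = "red" if random.random() < red / (red + blue) else "blue"
        draws.append(colour)
        if colour == "red":
            red += 1
        else:
            blue += 1
    return draws

print(polya_urn(10))
```

With one red and one blue ball to start, P(red then blue) = 1/2 x 1/3 = 1/6 = P(blue then red), so order doesn't matter (exchangeability), yet P(red on the second draw | red on the first) = 2/3 ≠ 1/2, so the draws are not independent.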
A lot of things modern "conservatives" consider traditional are recent innovations barely a few decades or a century old. Chesterton's fence doesn't apply to them.
"The mathematician’s patterns, like the painter’s or the poet’s, must be beautiful; the ideas, like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics." - G. H. Hardy, A Mathematician's Apology (1941)
Just heard this quoted on The Infinite Monkey Cage.
Does anyone have any recommended "didactic fiction"? Here are a couple of examples:
1) Lauren Ipsum (http://www.amazon.com/Lauren-Ipsum-Carlos-Bueno/dp/1461178185)
2) HPMoR
There's a thread in the rationalist fiction subreddit for brainstorming rationalist story ideas which might interest people here.
Is LW the largest and most established online forum for discussion of AI? If so, then we should consider that LW, or at least EY's ideas about AI, may have spread further among the people who matter, like AI researchers, than we tend to assume.
I say this because I come across a lot of comments lamenting that the world's AI researchers aren't more aware of Friendliness at the level it is discussed here. I might also just be projecting what I think the sentiment here is; in that case, just ignore this comment. Thoughts?
Edit: spelling
This futuristic society of casual male intimacy was known as the 19th century.
In it, as in the Russia of the 1950s and the modern Middle East, you could observe men dancing together, holding hands, cuddling, sleeping together and kissing.
More generally, ISTM that displays of affection between heterosexual men correlate negatively with homophobia within each society but positively across societies. (That's because the higher your prior probability for X is, the more evidence I need to provide to convince you that not-X.)
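To spell out the parenthetical, here is the standard odds form of Bayes' theorem with a small worked example (a minimal sketch of my own, not from the original comment):

```latex
% Odds form of Bayes' theorem: evidence E multiplies the prior odds on X
% by the likelihood ratio.
\[
  \frac{P(X \mid E)}{P(\neg X \mid E)}
  \;=\;
  \frac{P(X)}{P(\neg X)} \cdot \frac{P(E \mid X)}{P(E \mid \neg X)}
\]
% Worked example: to drag P(X) from 0.99 (odds 99:1) below 0.5 (odds 1:1),
% the evidence must be at least 99 times likelier under not-X than under X;
% starting from P(X) = 0.6 (odds 3:2), a factor of 1.5 already suffices.
```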