Comment author: linkhyrule5 24 February 2015 08:11:12PM 3 points [-]

... Huh.

... Is it just me, or is Harry Potter now in the same room as the Elder Wand and the Philosopher's Stone?

... Well, there's a great big Dark Lord in the way, but.

Comment author: Gabriel 24 February 2015 08:21:04PM 0 points [-]

My reading was that he threw the objects aside, not through the mirror, so they got sealed together with Dumbledore's pocket mirrorverse (a mirrorverse... he must have been hiding a goatee under that fake beard!)

Comment author: Dorikka 16 January 2015 04:19:13AM 3 points [-]

Damn those time travelers, always forgetting the current date. >.>

Comment author: Gabriel 16 January 2015 10:47:11PM *  6 points [-]

Elon Musk donates $10M and it only takes a month from that point to invent an AI capable of time-travel. Truly, money makes the world go round.

Comment author: Gabriel 18 November 2014 06:38:58AM 4 points [-]

This year's edition of Stoic Week begins next Monday. It's a seven-day introduction to the basic ideas of Stoicism, combined with an attempt to measure its effects.

Comment author: Gvaerg 02 September 2014 02:50:02PM 13 points [-]

A nearby store has this sign that kinda reminds me of What the Tortoise Said to Achilles:

Products marked with <a drawing of a rectangle containing the words "This product can be heated at your request"> can be heated at your request!

I'm definitely not making this up. I showed it today to my girlfriend, who was speechless upon exiting the store.

Comment author: Gabriel 02 September 2014 03:22:28PM *  12 points [-]

You should recurse one level deeper and put a sign outside the store saying "Products marked <a drawing of a rectangle containing the words "This product can be heated at your request"> purchased in stores marked with <a drawing of a sign saying "Products marked with <a drawing of a rectangle containing the words "This product can be heated at your request"> can be heated at your request!"> can be heated at your request!"
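For stores nested arbitrarily deep, the signage generalizes to a short recursion (a throwaway sketch; the function name and wording are just my rendering of the signs above):

```python
def sign(n):
    """Return the nth-level heating-notice sign text.

    Level 0 is the label on the product itself; each further level is a
    sign referring to products marked with the previous level's sign.
    """
    base = "This product can be heated at your request"
    if n == 0:
        return base
    return f"Products marked with <{sign(n - 1)}> can be heated at your request!"

print(sign(0))
print(sign(2))
```

Level 2 is roughly the sign proposed above for outside the store.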

Comment author: iarwain1 21 May 2014 02:42:24PM *  2 points [-]

What's the best way to learn programming from a fundamentals-first perspective? I've taken / am taking a few introductory programming courses, but I keep feeling like I've got all sorts of gaps in my understanding of what's going on. The professors keep throwing out new ideas and functions and tools and terms without thoroughly explaining how and why they work like that. If someone has a question, the approach is often "go google it or look in the help file". But my preferred learning style is to go back to the basics and carefully work my way up, so that I thoroughly understand what's going on at each step along the way.

Comment author: Gabriel 22 May 2014 01:07:01PM 0 points [-]

Could you give a couple examples of specific things that you'd like to understand?

Without that, a classic that might match what you're interested in is Structure and Interpretation of Computer Programs. It starts as an introduction to general programming concepts and ends as an introduction to writing interpreters.
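To give a flavor of where the book ends up: the kind of interpreter SICP builds toward can be caricatured in a few lines (drastically simplified, and in Python rather than the book's Scheme; the real thing is a full metacircular evaluator):

```python
def evaluate(expr):
    """Evaluate a tiny prefix-notation expression language.

    Numbers evaluate to themselves; lists are (operator, *operands),
    with operands evaluated recursively.
    """
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    vals = [evaluate(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown operator {op!r}")

print(evaluate(["+", 1, ["*", 2, 3]]))  # 7
```

The book's point is that once you can write this, the "magic" of languages and tools stops feeling like magic, which sounds like what you're after.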

Comment author: Gabriel 30 April 2014 08:58:33PM *  6 points [-]

We would especially like suggestions which are plausible given technology that normal scientists would expect in the next 15 years. So limited involvement of advanced nanotechnology and quantum computers would be appreciated.

I think a more precise description of what your hypothetical AI can do would be useful. Just saying to exclude "magic" isn't very specific, and there might not be wide agreement on what counts as "magic". Nanotechnology definitely does. I believe fast economic domination by cracking the stock market does too, and some people have proposed that. I think even exploiting software and hardware bugs everywhere to gain total computing dominance should be excluded.

One way to define constraints would be to limit the AI to things that humans have been known to do but allow it to do them with superhuman efficiency. Something like:

  • Assume the AI has any skill that has ever been possessed by a human being.
  • It can execute it without making mistakes, getting tired or demotivated.
  • It can perform an arbitrarily high number of activities simultaneously. To keep with the "no magic" rule, each activity needs to be something a human could plausibly do. So the AI can act like 10000 genius physicists, each solving a different theoretical problem and writing a paper about it, but it can't be a super-physicist who formulates the theory of everything and gains superpowers by exploiting layers of exotic physical law heretofore unknown to humanity. We should also probably require the AI to acquire additional computing power before it ramps up its multitasking too high.
  • It can open doors and knows no fear.

Some things such a hypothetical AI could do:

Earning money on the internet: I think it's possible nowadays to register an account on an online freelancing site, talk with clients, do work for them and receive money through electronic money transfer services without ever leaving your home. The only problem would be the need to show your face and your voice to the clients. Faking a real-time video feed probably falls under "things that humans can in principle do".

Moving money around: A crucial limitation is the availability of money management services that don't require signing anything physical before you can start using them. I suspect that quite a lot can be done, but that's only a guess. The possibilities should also increase in the future, though on the other hand, more regulations could be established, making it more difficult. Bitcoin succeeding on a massive scale would make this a non-issue.

Getting more computing power: This sounds like a problem that's already solved. If you can earn money online and move it around then you can rent cloud computing resources. This will become easier and cheaper with time.

Acquiring some amount of control over physical reality: One way is robots. The AI, by its very existence, is a solution to the problem of robot control. If it can build a robot capable of making some useful movements, then it should also be able to make it perform those movements. This is good once the AI has tools, raw materials, energy and a safe place to work on building even more robots, but I don't know if current robotics technology would allow it to pass for human, even a really weird one who wears a trench coat all the time, when trying to buy those things.

Another way is recruiting helpers. The problem is that the constraint of making the AI only do human-possible things doesn't really work to prevent postulating "magic" in this area. The AI should profile somewhat gullible people on the internet, give them money, have them join a secret society/cult of its devising and make them fanatically devoted to itself through manipulation and threats, gradually growing the organization and expanding its operations and playing members against each other so that no one ever realizes who's the real boss. This all sounds doable in principle and it sounds like every specific action needed to be taken is something that plenty of people know how to do, but as a whole it comes across as a different version of "solve nanotech and then eat the world".

Comment author: Gabriel 19 January 2014 05:02:47PM 0 points [-]

The second and third sections were great. But is the idea of 'terminal goal hacking' actually controversial? Without the fancy lingo, it says that it's okay to learn how to genuinely enjoy new activities and turn that skill toward activities that don't seem all that fun now but are useful in the long term. This seems like a common idea in discourse about motivation. I'd be surprised if most people here didn't already agree with it.

This made the first section boring to me and I was about to conclude that it's yet another post restating obvious things in needlessly complicated terms and walk away. Fortunately, I kept on reading and got to the fun parts, but it was close.

Comment author: gjm 18 December 2013 09:15:05AM 1 point [-]

Yes, anchoring, obviously.

The mechanism that seems most important to me doesn't really involve any sort of cognitive bias much. It goes like this. You are on (say) $50k/year. You are good enough that you'd be good value at $150k/year, but you'd be willing to move if offered $60k/year, if that were all you could get. You apply for a new job and have to disclose your current salary to every prospective employer. So you get offers in (say) the $60k-80k range, because everyone knows, or at least guesses, that that's enough to tempt you and that no one else will be offering much more. You might get a lot more if you successfully start a bidding war, but otherwise you're going to end up paid way less than you could be.

Note that everyone in this scenario acts rationally, arguably at least. Your prospective employer offers you (say) $75k. This would be irrational if you'd turn that down but accept a higher offer. But actually you'll take it. This would be irrational if you could get more elsewhere. But actually you can't because no one else will offer you much more than your current salary.

(You could try telling them that you have a strict policy of not taking offers that are way below what you think you're worth, in the hope that it'll stop them thinking you'd accept an offer of $75k. But you might not like the inference they'd draw from that and your current salary.)

Obvious note: Of course people care about lots of other things besides money, your value to one employer isn't the same as your value to another, etc. This has only limited effect on the considerations above.
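The mechanism above can be caricatured in a few lines (the premium and margin figures are made up purely for illustration, not claims about real employers):

```python
def anchored_offer(current_salary, candidate_value, premium=0.25):
    """An employer who knows your current salary offers a modest premium
    over it, capped at what you're actually worth to them."""
    return min(candidate_value, current_salary * (1 + premium))

def blind_offer(candidate_value, margin=0.2):
    """An employer who must estimate from your value alone offers closer
    to it, keeping only a margin for themselves."""
    return candidate_value * (1 - margin)

current, value = 50_000, 150_000
print(anchored_offer(current, value))  # 62500.0 -- lands in the $60-80k range
print(blind_offer(value))              # 120000.0 -- far closer to true value
```

The gap between the two offers is the cost of disclosure in this toy model, even with everyone behaving rationally.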

Comment author: Gabriel 18 December 2013 06:11:33PM *  0 points [-]

Well, assuming your example numbers, if my work would bring $150k+$x/year and the company didn't hire me because I refused to take $60k/year, instead demanding, say, $120k/year (over twice the current salary, how greedy), then they just let $30k+$something/year walk out the door. Would they really do that (assuming rational behavior blah blah)?

I don't see how they would benefit from playing the game of salary-negotiation chicken to the bitter end. A reputation for not offering market salaries to people with unfortunate salary history? That actually sounds like it could be harmful.

Comment author: DisclosureQuestion 17 December 2013 08:49:34PM 9 points [-]

I'm not ready for my current employer to know about this, so I've created a throwaway account to ask about it.

A week ago I interviewed with Google, and I just got the feedback: they're very happy and want to move forward. They've sent me an email asking for various details, including my current salary.

Now it seems to me very much as if I don't want to tell them my current salary - I suspect I'd do much better if they worked out what they felt I was worth to them and offered me that, rather than taking my current salary and adding a bit extra. The Internet is full of advice that you shouldn't tell a prospective employer your current salary when they ask. But I'm suspicious of this advice - it seems like the sort of thing they would say whether it was true or not. What's your guess - in real life, how offputting is it for an employer if a candidate refuses to disclose that kind of detail when you ask for it as part of your process? How likely are Google to be put off by it?

Comment author: Gabriel 17 December 2013 10:08:12PM 1 point [-]

I don't feel qualified to answer your question, though if I were to make a guess, I wouldn't expect them to be put off by refusal. Assuming Google behaves at least somewhat rationally, they should at this point have an estimate of your value as an employee and it doesn't seem like your current salary would provide much additional information on that.

So, the question is, to what extent Google behaves rationally. This ties to something that I always wonder whenever I read salary negotiation advice. What is the specific mechanism by which disclosing current salary can hurt you? Yes, anchoring, obviously. But who does it? Is the danger that the potential employer isn't behaving rationally after all and will anchor to the current salary, lowering the upper bound on what they're willing to offer? Or is the danger primarily that anchoring will undermine your confidence and willingness to demand more (and if you felt sufficiently entitled, it wouldn't hurt you at all)?

Comment author: ChrisHallquist 13 December 2013 06:32:05AM -1 points [-]

Rot13 for being partly based on an author's note Eliezer has recommended people not read:

Abg fher ubj gb gnxr guvf hcqngr vagb nppbhag jura svthevat cebonovyvgl Urezvbar ernyyl qbrf pbzr onpx nf na "nyvpbea cevaprff." Ba gur bar unaq, ersrerapr gb "nyvpbea" frrzf gb or frggvat gung hc. Ba gur bgure unaq, V'z univat n uneqre gvzr cnefvat "havpbea ubea cevaprff" guna "jvatrq havpbea cevaprff." Ba gur guveq unaq, guvf.

Comment author: Gabriel 13 December 2013 08:32:44AM 0 points [-]

Be, lbh xabj, guvf. Abg fher vs gebyyvat.
