Comment author: philh 26 March 2014 02:33:04PM 2 points [-]

After learning the constant rule, you find a ruleset that doesn't seem too awful, and then don't learn it.

Hell is trying to abstain from pattern matching.

Comment author: JacekLach 28 March 2014 09:07:22PM *  1 point [-]

Reminds me of talesofmu. Your strategy looks like trying to play the GM, and is likely to get you punished :)

Comment author: ChristianKl 04 February 2014 09:44:29PM 1 point [-]

Taking up a hobby costs a lot of time.

For me I don't see any reason to prefer archery over a martial art. The martial art does provide a bunch of secondary benefits.

Comment author: JacekLach 06 February 2014 01:01:40PM 1 point [-]

For me I don't see any reason to prefer archery over a martial art.

And there might not be any reason for you, but other people might be uncomfortable with hitting other people, concerned about their hands (it's much easier to break a finger or twist a wrist doing martial arts than archery, I imagine), looking for a relaxing rather than exciting hobby, etc.

Comment author: lmm 03 February 2014 07:06:39PM 0 points [-]

Seriously, citation needed; all the claims I've seen are that cycling is dramatically safer.

Comment author: JacekLach 06 February 2014 11:51:02AM *  4 points [-]

Morgan et al. (2010) (http://www.biomedcentral.com/1471-2458/10/699/) estimate 11.1 cyclist deaths per 100000 cyclist-km in London.

Wikipedia (https://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate) estimates 8.5 road fatalities per 1 BILLION vehicle-km.

http://www.theguardian.com/news/datablog/2012/sep/28/road-deaths-great-britain-data claims 125 motorcyclists died in road accidents for every billion miles travelled - the highest rate for all road users but also a year-on-year fall of 11%.

At 41 deaths per billion miles, the mortality rate for pedestrians was just above that of cyclists (35), with the former a year-on-year rise of 10% and the latter a fall of 6%.

Car occupants had by far the lowest mortality rate at four deaths per billion miles travelled.
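Since the Guardian figures above are per billion miles while the Wikipedia figure is per billion vehicle-km, a quick conversion puts them on a common basis (a rough sketch only; note the Wikipedia 8.5 is an all-vehicle average, not a per-mode rate):

```python
# Convert the Guardian's per-billion-mile fatality rates to per-billion-km,
# so they can be set against the Wikipedia per-vehicle-km figure.
KM_PER_MILE = 1.609344

deaths_per_billion_miles = {
    "motorcyclist": 125,
    "pedestrian": 41,
    "cyclist": 35,
    "car occupant": 4,
}

for mode, per_mile in deaths_per_billion_miles.items():
    per_km = per_mile / KM_PER_MILE
    print(f"{mode}: {per_km:.1f} deaths per billion km travelled")
```

Even after conversion, car occupants come out well under the 8.5 all-vehicle average, and cyclists well above it.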

Comment author: [deleted] 25 January 2014 11:37:47AM *  0 points [-]

You don't enjoy company of most members-of-your-preferred-sex, but are hopeful that there are people out there that you could spend your life with. The problem is that finding them is painful, because you have to spend time with people whose company you won't enjoy during the search.

You seem to be assuming a model where one can only meet a potential mate selected at random from the population, and one would need to spend a lot of time with her before being allowed to rule her out.

In response to comment by [deleted] on Dark Arts of Rationality
Comment author: JacekLach 25 January 2014 02:34:57PM 3 points [-]

Hm, somewhat, yes. What do you believe?

I mean it's not purely at random, of course, but surely you need to go out and meet a lot of people.

Comment author: memoridem 23 January 2014 07:43:02PM 1 point [-]

Isn't there anything you already know but wouldn't like to forget? SRS is for preserving memories you already value, not necessarily for learning new things. There are probably a lot of things that wouldn't even cross your mind to google if they were erased by time. Googling can also waste time compared to storing memories if you have to do it often enough (roughly 5 minutes over your lifetime per fact).
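As a back-of-envelope check on that "roughly 5 minutes per fact" figure, here is a rough sketch; every number in it (lookup time, lookup count, review time, review count) is an assumption for illustration, not something taken from the comment:

```python
# Compare the lifetime cost of googling a fact repeatedly against the
# lifetime cost of keeping it in an SRS deck. All inputs are made-up
# illustrative estimates.
lookup_seconds = 20      # one search plus scanning the results
lookups_per_fact = 15    # times you'd need the fact over a lifetime
google_cost_min = lookup_seconds * lookups_per_fact / 60

review_seconds = 8       # one flashcard review
reviews_per_fact = 30    # lifetime reviews under typical spacing
srs_cost_min = review_seconds * reviews_per_fact / 60

print(f"googling: ~{google_cost_min:.0f} min/fact, SRS: ~{srs_cost_min:.0f} min/fact")
```

Under these (very debatable) assumptions the two costs are of the same order, so the case for SRS really rests on how often you'd actually need each fact.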

What other skills work nicely with spaced repetition?

In my experience anything you can write into brief flashcards. Some simple facts can work as handles for broader concepts once you've learned them. You could even record triggers for episodic memories that are important to you.

Comment author: JacekLach 23 January 2014 08:47:33PM *  0 points [-]

Isn't there anything you already know but wouldn't like to forget?

Yeah, that's pretty much the problem: not really. I.e. there is stuff I know that would be inconvenient to forget, because I use this knowledge every day. But since I already use it every day, SR seems unnecessary.

Things I don't use every day are not essential - the cost of looking them up is minuscule since it happens rarely.

I suppose a plausible use case would be birth dates of family members, if I didn't have google calendar to remind me when needed.

Edit: another use case that comes to mind would be names. I'm pretty bad with names (though I've recently begun to suspect that I'm probably no worse at remembering names than anyone else; I just fail to pay attention when people introduce themselves). But asking to take someone's picture 'so that I can put it on a flashcard' seems awkward. Facebook to the rescue, I guess?

(though I don't really meet that many people, so again - possibly not worth the effort in maintaining such a system)

Comment author: Nornagest 23 January 2014 08:09:53PM *  0 points [-]

I don't know what you work on, but many fields include bodies of loosely connected facts that you could in principle look up, but which you'd be much more efficient if you just memorized. In programming this might mean functions in a particular library that you're working with (the C++ STL, for example). In chemistry, it might be organic reactions. The signs of medical conditions might be another example, or identities related to a particular branch of mathematics.

SRS would be well suited to maintaining any of these bodies of knowledge.

Comment author: JacekLach 23 January 2014 08:45:09PM 0 points [-]

I'm a software dev.

In programming this might mean functions in a particular library that you're working with (the C++ STL, for example)

Right. I guess I somewhat do 'spaced repetition' here already, just by the fact that every time I interact with a particular library I'm reminded of its functions. But that is incidental - I don't really care about remembering libraries that I don't use, and those that I use regularly I don't need SR to maintain.

I suppose medical conditions look more plausible as a use case - you really need to remember a large set of facts, each of which is actually used very rarely. But that still doesn't seem useful to me personally - I can think of no dataset that'd be worth the effort.

I guess I should just assume I'm an outlier there, and simply keep SR in mind in case I ever find myself needing it.

Comment author: [deleted] 18 January 2014 07:12:45AM *  4 points [-]

If you don't enjoy the company of members-of-your-preferred-sex, what do you want a relationship for (that you couldn't also get from one-night stands or even prostitutes), anyway?

(Possibility of having children in the future?)

In response to comment by [deleted] on Dark Arts of Rationality
Comment author: JacekLach 23 January 2014 08:13:06PM 7 points [-]

You don't enjoy company of most members-of-your-preferred-sex, but are hopeful that there are people out there that you could spend your life with. The problem is that finding them is painful, because you have to spend time with people whose company you won't enjoy during the search.

By hacking yourself to enjoy their company you make the search actually pleasant. Though hopefully your final criteria do not change.

In response to 2013 Survey Results
Comment author: MondSemmel 19 January 2014 12:59:07PM *  12 points [-]

Thanks for taking the time to conduct and then analyze this survey!

What surprised me:

  • Average IQ seemed insane to me. Thanks for dealing extensively with that objection.
  • Time online per week seems plausible from personal experience, but I didn't expect the average to be so high.
  • The overconfidence data hurts, but as someone pointed out in the comments, it's hard to ask a question which isn't misunderstood.

What disappointed me:

  • Even I was disappointed by the correlations between P(significant man-made global warming) vs. e.g. taxation/feminism/etc. Most other correlations were between values, but this one was between one's values and an empirical question. Truly Blue/Green. On the topic of politics in general, see below.
  • People, use spaced repetition! It's been studied academically and been shown to work brilliantly; it's really easy to incorporate in your daily life in comparison to most other LW material etc... Well, I'm comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities.

And a comment at the end:

"We are doing terribly at avoiding Blue/Green politics, people."

Given that LW explicitly tries to exclude politics from discussion (and for reasons I find compelling), what makes you expect differently?

Incorporating LW debiasing techniques into daily life will necessarily be significantly harder than just reading the Sequences, and even those have only been read by a relatively small proportion of posters...

Comment author: JacekLach 23 January 2014 07:12:00PM 0 points [-]

People, use spaced repetition! It's been studied academically and been shown to work brilliantly; it's really easy to incorporate in your daily life in comparison to most other LW material etc... Well, I'm comparatively disappointed with these numbers, though I assume they are still far higher than in most other communities

I'm one of the people who have never used spaced repetition, though I've heard of it. I don't doubt it works, but what do you actually need to remember nowadays? I'd probably use it if I was learning a new language (which I don't really plan to do anytime soon)... What other skills work nicely with spaced repetition?

I just don't feel the need to remember things when I have google / wikipedia on my phone.

Comment author: Emile 13 January 2014 12:52:45PM *  13 points [-]

This situation reminds me of the Ultimatum game.

Say there are three candidates: Alice, Bob and Chandrakant. Everybody knows Chandrakant doesn't have a chance. Between Alice and Bob, you prefer Alice, but would like her to support the ban on transgenic oysters. So, along with a small vocal minority, you threaten to vote for Chandrakant if she doesn't support the ban.

This plays out like this:

Round 1, Alice decides whether or not to add the ban on transgenic oysters to her platform.

Round 2, you decide whether to vote for her or Chandrakant.

Alice's decision in round 1 depends on how seriously she takes your threat to vote for Chandrakant in round 2. If you're the kind of person who is known for "not wanting to throw his vote away", then you'll end up still voting for her and eating transgenic oysters. If you (and people like you) are known for being stubborn as a mule, she may change her platform.

(note that none of this requires that you actually prefer Chandrakant to Alice - voting for a third-party guy can also be strategic voting!)
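The backward induction above can be sketched as a toy model; the payoff numbers (`vote_value`, `ban_cost`) are made up purely for illustration:

```python
# Toy backward-induction sketch of the two-round game described above.
# The voter prefers Alice-with-ban > Alice-without-ban > Chandrakant.

def voter_choice(ban_adopted, stubborn):
    """Round 2: a stubborn voter carries out the threat; a non-stubborn
    one 'doesn't want to throw his vote away' and backs Alice anyway."""
    return "alice" if ban_adopted or not stubborn else "chandrakant"

def alice_adds_ban(voter_is_stubborn, vote_value=1.0, ban_cost=0.5):
    """Round 1: Alice, reasoning backwards from round 2, adds the ban only
    if she would otherwise lose the vote and it's worth the platform change."""
    loses_vote = voter_choice(False, voter_is_stubborn) != "alice"
    return loses_vote and vote_value > ban_cost

for stubborn in (False, True):
    print(f"voter stubborn={stubborn}: Alice adds ban -> {alice_adds_ban(stubborn)}")
```

The point the model makes concrete: Alice's round-1 choice turns entirely on her belief about the voter's stubbornness, not on the voter's actual preference between her and Chandrakant.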

Comment author: JacekLach 16 January 2014 11:29:51PM 0 points [-]

Wouldn't a better threat be to switch to Bob anyway? (in which case Alice effectively loses two votes instead of one, since Bob gains the vote she loses)

Comment author: somervta 16 January 2014 04:43:04AM 1 point [-]

No, there's no particular reason to think an FAI would be better at learning than an UFAI analogue, at least not as far as I can see.

However, one of the problems that needs to be solved for FAI (stable self-modification) could certainly make an FAI's rate of self-improvement faster than a comparative AI which has not solved that problem. There are other questions that need to be answered there (does the AI realize that modifications will go wrong and therefore not self-modify? If it's smart enough to notice the problem, won't its first step be to solve it?), and I may be off base here.

I'm not sure it's that useful to talk about an FAI vs an analogous UFAI, though. If an FAI is built, there will be many significant differences between the resulting intelligence and the one that would have been built if the FAI was not, simply due to the different designers. In terms of functioning, the different design choices, even those not relevant to FAI (if that's even meaningful - FAI may well need to be so fully integrated that all the aspects are made with it in mind), may be radically different depending on the designer and are likely to have most of the effect you're talking about.

In other words, we don't know shit about what the first AGI might look like, and we certainly don't know enough to do detailed separate counterfactuals.

Comment author: JacekLach 16 January 2014 03:40:01PM 1 point [-]

No, there's no particular reason to think an FAI would be better at learning than an UFAI analogue, at least not as far as I can see.

I believe you have this backwards - the OP is asking whether a FAI would be worse at learning than an UFAI, because of additional constraints on its improvement. If so:

then a non Friendly AI would eventually (possibly quite quickly) become smarter than any FAI built.

Of course one of the first actions of a FAI would be to prevent any UFAI from being built at all.
