Comment author: patrickmclaren 29 August 2013 03:18:51PM 0 points [-]

I'm kind of confused. Did we really mean odds or primes? If we told the robot that this statement was true for the N integers, shouldn't we have said it correctly? If we did mean primes, then we could at least have been honest and said '2, 3, 5, 7'.

Comment author: Kawoomba 25 August 2013 08:15:34PM *  3 points [-]

The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of 'not bothering to think about it any more'.

The kind of model which postulates that "a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human" would not likely stop at "... at least be structured like that human for, like, 9 orders of magnitude down from a human's size, to the level that a human can see through an electron microscope; that's enough, after that it doesn't matter (much / at all)". Wouldn't that be kind of arbitrary and make for an ugly model?

Instead, if structural correspondence allowed for significant additional confidence that the em's professions of being conscious were true, wouldn't such a model just not stop, demanding "turtles all the way down"?

I guess I'm not sure what some structural fidelity can contribute compared to "just" overall functional equivalence (and I find those models too contrived which place consciousness somewhere beyond functional equivalence, but still in the upper echelons of the substructures, conveniently not too far from the surface level).

IOW, the big (viable) alternative to functional equivalence, namely structural equivalence (which includes functional equivalence), would likely not stop just a few levels down.

Comment author: patrickmclaren 28 August 2013 11:09:19AM *  0 points [-]

The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of 'not bothering to think about it any more'.

The kind of model which postulates that "a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human" would not likely stop at "... at least be structured like that human for, like, 9 orders of magnitude down from a human's size, to the level that a human can see through an electron microscope; that's enough, after that it doesn't matter (much / at all)". Wouldn't that be kind of arbitrary and make for an ugly model?

Given that an isomorphism requires checking that the relationship is one-to-one in both directions, i.e. human -> em and em -> human, I see little reason to worry about recursing to the absolute bottom.

Suppose that it turns out that, in some sense, ems are little-endian whilst humans are big-endian, yet all other differences are negligible. Does that throw the isomorphism out the window? Of course not.

Comment author: Lumifer 28 August 2013 01:14:24AM 5 points [-]

The average physician makes far less over his lifetime than he could applying the same horsepower and hours worked to, say, finance. It's a fairly straightforward back-of-the-envelope calculation.

That doesn't seem obvious to me. Can I see that calculation? I suspect you're comparing the absolute top-of-the-line financial career trajectory (which is very very hard to achieve) with a typical doctor path.

Comment author: patrickmclaren 28 August 2013 01:20:50AM *  1 point [-]

In Australia, a Medicare-funded physician makes anywhere between 100k and 150k [1], whereas the avg. finance position pays 88k [2]. So you're right.

[1] http://www.health.qld.gov.au/hrpolicies/wage_rates/documents/hpeb2-wage-rates.pdf

[2] http://content.mycareer.com.au/salary-centre/financial-services

Sorry that these are Australian wages. I don't care about U.S. wages.
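
For what it's worth, here is a minimal back-of-the-envelope sketch using the figures cited in [1] and [2]; the 35-year career length and the use of the range midpoint are my own rough assumptions, not data from those sources:

```python
# Rough lifetime-earnings comparison using the AUD figures cited in [1] and [2].
# The 35-year career length and the range midpoint are illustrative assumptions.
YEARS = 35

physician_salary = (100_000 + 150_000) / 2   # midpoint of the Medicare-funded range [1]
finance_salary = 88_000                      # average finance position [2]

physician_lifetime = physician_salary * YEARS
finance_lifetime = finance_salary * YEARS

print(f"Physician: ~{physician_lifetime:,.0f} AUD over {YEARS} years")
print(f"Finance:   ~{finance_lifetime:,.0f} AUD over {YEARS} years")
print(f"Difference: ~{physician_lifetime - finance_lifetime:,.0f} AUD")
```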

Comment author: patrickmclaren 28 August 2013 12:38:44AM *  1 point [-]

@D_Malik Quantitative finance is bursting at the seams; take a look at the latest trends in MFE programs and Wilmott's CQF. Although it is fun :-)

The engineering programs you listed, coupled with an MBA, will equal bigger bucks than simply engineering on its own, in my opinion. If you're lucky (rather unlucky, from other people's perspective, hah), you'll be able to join the ranks of the superhuman species of "all pay, no work" Suits.

Also, suppose you do a PhD. In your case, given your interest in altruism, don't simply "do a PhD". Use the opportunity for your own purposes. There have been many theses that have positively affected humanity; see "Tate's Thesis", or Shannon's thesis on Boolean algebra. I know these examples are old; they came to mind simply because of my field. Look up some Systems Biology theses for more recent examples.

Also, think beyond right now. For example, what are the reasons behind wanting to improve leadership skills? Do you want to use your potential leadership skills to influence others to adopt your pov w/ respect to altruism? How are you going to get into such a position? (I.e., most people with successful and useful TED talks are not simply good speakers; they have something concrete happening as well.)

IMO most of these things will be minutely beneficial; however, you'll also likely burn out in the process. Find the most important prerequisites to your success; they'll probably take up most of your time.

Comment author: patrickmclaren 22 January 2013 04:38:21PM 1 point [-]

What exactly is the successor of a set?

Comment author: Qiaochu_Yuan 18 January 2013 07:54:31PM 16 points [-]

I would suggest that this is a useful thing to do on an individual level (to adjust for scope insensitivity and so forth) but a terrible thing to do on a group level (because it's mind-killing). Smells too much like the Yellow Peril for my taste.

The Anthropomorphization Cannon is a powerful weapon, and if it were to fall into the wrong hands...

Comment author: patrickmclaren 19 January 2013 06:34:41AM 1 point [-]

I feel that this position could be equally argued if the scopes were switched, given the following motivation.

...if we mentally anthropomorphised certain risks, then we'd be more likely to give them the attention they deserved. -- OP

For example, here's a harmless :-) play on your comment, all the while keeping the above maximization criteria in mind.

I would suggest that this is a useful thing to do on a group level (because it's mind-killing; take Yellow Peril for example) but a terrible thing to do on an individual level (to adjust for scope insensitivity and so forth).

Comment author: patrickmclaren 19 January 2013 06:02:47AM *  4 points [-]

Vassar's essay may benefit from a thorough rewrite, in my opinion. Certain sentences seem to make desperate attempts at describing the intension of his personal views. For example, the following lines required several rereads.

Some of those programs allocate attention to things that can be understood fairly rigorously, like a cart, a plow, or a sword. Other programs allocate attention to more complicated things, such as the long-term alliances and reproductive opportunities within a tribe. The former programs might involve situational awareness and detailed planning, while the latter programs might operate via subtle and tacit pattern detection and automatic obedience to crude heuristics.

Although, it is easy to see how one develops such a style of exposition, spending most waking hours trawling through research.

However, more to the point, the conclusion that I came to was that Vassar was advocating educational reform, moving towards something similar to the Montessori approach, and for what it's worth, I wholeheartedly agree.

Would you agree about my reading of his short essay?

Mildly. The essay seems suggestive of a 10th point, which I described above. However, the truth lies with the original author, not me.

How solid do you think his argument is?

8/10. The most striking segment of his argument, in my opinion, is the following line.

However, with their attention placed on esteem, their concrete reasoning underdeveloped and their school curriculum poorly absorbed, such leaders aren't well positioned to create value.

Comment author: [deleted] 16 January 2013 07:12:13PM *  1 point [-]

Will it be a repository of links sorted by an SR algorithm, or does it offer some way of processing the information into flashcards?

I can see this working well with article summaries, e.g. in conjunction with tldr.io.

Comment author: patrickmclaren 16 January 2013 08:15:09PM *  0 points [-]

Currently it is just a repository of links sorted by an SR algorithm. However, I'll consider pinging tldr.io for summaries; thanks for the reference.

I'm wary of implementing the flashcard behavior, as it allows users to cherry-pick information, and possibly exclude more important information on a page, thereby bypassing the utility of learning the material.

Personally, flashcard usage seems to reinforce some sort of reflexive response to queries, rather than encouraging one to turn fields of knowledge into well-trodden gardens, as a neuroprosthetic should. I'm not sure whether this happens to the majority of users or not; more research is needed.

Comment author: patrickmclaren 16 January 2013 06:32:00PM *  3 points [-]

Since I've often found myself in similar situations, I decided to start developing a spaced repetition web application, called memoread, for importing information and links straight from the browser.

Ideally there'll also be Chrome and Firefox extensions, plus an Android interface of some sort. Currently, you can either add links directly to memoread, or through a bookmarklet.

You can check it out at http://damp-wave-1655.herokuapp.com/ . I'm planning on releasing the source on GitHub soon, once I create a separate repo for deployment specifics.

Keep in mind, the app should be considered PRE-alpha, with no guarantees of any functionality whatsoever, hence it being located on some obscure Heroku subdomain, not a domain of its own.

EDIT: Also, although in most spaced repetition software you can select a difficulty level of 1-5, this is not yet available, as I have not had the time to implement the changes on the UI side.
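
For anyone curious about the scheduling side, here is a minimal SM-2-style sketch of how such a rating could drive the review interval; the function, constants, and defaults below are illustrative assumptions, not memoread's actual code:

```python
# Minimal SM-2-style interval sketch (illustrative only; not memoread's code).
# quality: 0-5 self-rating of how well the link's content was recalled.

def next_review(interval_days: float, easiness: float, quality: int):
    """Return (new_interval_days, new_easiness) after one review."""
    if quality < 3:
        # Poor recall: restart the item with a short interval.
        return 1.0, easiness
    # Standard SM-2 easiness update, clamped at 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        new_interval = 6.0
    else:
        new_interval = interval_days * easiness
    return new_interval, easiness

# Example: an item reviewed with quality 4, starting from the usual defaults.
print(next_review(1.0, 2.5, 4))   # -> (6.0, 2.5)
```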

Comment author: MrMind 14 January 2013 05:44:13PM *  7 points [-]

On a lighter side, this study reinforces (by a small quantity, due to all the caveats outlined in the comments) my idea that women are as promiscuous as men, but they are culturally forced to lie about that: not really big news.

On a more interesting side, the "fake lie detector" is another one of the techniques used to circumvent lies that occur even in anonymous surveys. The first that I heard of, anyway, was employed in a survey regarding illegal owning/hunting/farming of something in some parts of Africa (yes, I've lost almost all the details: can someone point me to the original article?). It consisted of telling people that for some answers they did not need to answer truthfully; instead, they had to secretly throw a die and report the answer that came up. Apparently this, instead of randomizing the answers, gave the 'farmer' an excuse to tell the truth (yes, I really need to dig up the source).
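
For reference, that dice trick is a variant of the randomized response technique; here is a minimal simulation sketch showing how the population rate can still be recovered from the aggregate answers (the specific forced-answer probabilities are my own illustrative assumptions, not necessarily the design used in that survey):

```python
import random

# Randomized response sketch: each respondent secretly rolls a die.
# On a 1 they must answer "yes", on a 2 they must answer "no", and on 3-6
# they answer truthfully. No single answer is incriminating, yet the
# population rate can still be estimated from the aggregate.

def estimate_true_rate(true_rate: float, n: int) -> float:
    yes = 0
    for _ in range(n):
        roll = random.randint(1, 6)
        if roll == 1:
            yes += 1                              # forced "yes"
        elif roll == 2:
            pass                                  # forced "no"
        else:
            yes += random.random() < true_rate    # truthful answer
    observed = yes / n
    # E[observed] = 1/6 + (4/6) * true_rate, so invert that relation:
    return (observed - 1/6) / (4/6)

print(estimate_true_rate(true_rate=0.30, n=100_000))  # roughly 0.30
```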

ETA: see Alicorn's comment for the exact reference.

Comment author: patrickmclaren 14 January 2013 10:29:57PM *  2 points [-]

On a lighter side, this study reinforces (by a small quantity, due to all the caveats outlined in the comments) my idea that women are as promiscuous as men, but they are culturally forced to lie about that: not really big news.

Keep in mind that this study only reflects upon individuals born between 1978 and 1985. Based on the recent increase in entertainment promoting promiscuous behavior (e.g. the American Pie series, EuroTrip, <insert recent teenage sex adventure movie here>), I expect that current attitudes (of 18-25 y/o's) would differ, even from those in 2003.
