Comment author: CellBioGuy 12 April 2015 11:58:44PM 18 points [-]

Ooh ooh I have one:

That Thiel's real reason for saying such things is pure self-promotion.

Comment author: pangel 13 April 2015 11:39:17PM *  0 points [-]

Straussian thinking seems like a deep well full of status moves!

  • Level 0 - Laugh at the conspiracy-like idea. Shows you are in the pack.
  • Level 1 - As Strauss does, explain it / present instances of it. Shows you are the guru.
  • Level 2 - Like Thiel, hint at it while playing the Straussian game. Shows you are an initiate.
  • Level 3 - Criticize it for failing too often (bad thinking attractor, ideas that are hard to check and deploy usual rationality tools on). Shows you see through the phyg's distortion field.
Comment author: pangel 13 April 2015 11:19:38PM *  4 points [-]

You probably already agreed with "Ghosts in the Machine" before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear not to do what it's supposed to if "supposed" is taken to mean the programmer's intent.

These statements don't ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You're right, we understand (program + parameters learned from dataset) even less than (program). So while the outside view might say: "current machine learning techniques are very powerful, so they are likely to be used for FAI," that piece of inside view says: "actually, they aren't. Or at least they shouldn't." ("learn" has a precise operational meaning here, so this is unrelated to whether an FAI should "learn" in some other sense of the word).

Again, the fact that a development has been successful or promising in some field doesn't mean it will be as successful for FAI, so imitation of the human brain isn't necessarily good here. Reasoning by analogy and thinking about evolution are also unlikely to help; nature may have given us "goals", but they are not goals in the same sense as: "The goal of this function is to add 2 to its input," or "The goal of this program is to play chess well," or "The goal of this FAI is to maximize human utility."

Comment author: Morendil 08 March 2015 01:57:27PM 14 points [-]

I've just run my first half-marathon, coming in with an official time of 2h0m44s, close enough to my 2h objective that I'll call it a win.

Also this month, I reached a first milestone in writing video games using FRP (Functional Reactive Programming) in the Elm language, coding a proto-game that reproduces the basic gameplay of "The Company of Myself".

Comment author: pangel 08 March 2015 02:16:32PM 4 points [-]

Congratulations!

Comment author: VincentYu 21 September 2013 02:22:53AM 2 points [-]
Comment author: pangel 21 September 2013 11:11:30AM 0 points [-]

Thank you!

Comment author: TheOtherDave 18 September 2013 10:59:54PM 18 points [-]

I haven't read the HN comments, nor do I intend to, but it doesn't seem particularly mind-boggling to me that many people are more concerned by the size of the advantage-gulf between them and more powerful humans than they are by the absolute level of advantage they enjoy.

After all, for most of our lives more powerful humans have been the biggest threat we have to worry about, and the magnitude of the threat they pose has been proportional to the size of that advantage-gulf.

I'm not saying it's a rational response given the specifics of this situation, merely that it's an understandable habit of thought.

Comment author: pangel 18 September 2013 11:48:08PM *  12 points [-]

I have met people who explicitly say they prefer a lower gap between them and the better-offs over a better absolute level for themselves. IIRC they were more concerned about 'fairness' than about what the powerful might do to them. They also believed that most would agree with them (I believe the opposite).

Comment author: pangel 18 September 2013 12:19:17PM 0 points [-]

Gentzen’s Cut Elimination Theorem for Non-Logicians

Knowledge and Value, Tulane Studies in Philosophy Volume 21, 1972, pp 115-126

Comment author: Dabor 26 August 2013 04:41:58PM 1 point [-]

I've gone through a change much like this over the past couple of years, although not with explicit effort. I would tend to get easily annoyed by coming across inconsequential stupidity or spite somewhere on the internet (not directed at me), and then be disappointed in myself for letting something like that hang on my thoughts for a few hours.

Switching to a model in which I'm responsible for my own reaction to other people does wonders for self-control and saves some needless frustration.

I can only think of one person (that I know personally) whom I treat as possessing as much agency as I expect of myself, and that results in offering and expecting full honesty. If I view somebody as at all agenty, I generally wouldn't try to spare their feelings or in any way emotionally manipulate them for my own benefit. I don't find that to be a sustainable way to act with strangers: I can't take the time to model why somebody flinging a poorly written insult over a meaningless topic that I happened to skim over is doing so, and I'd gain nothing (and very probably be wrong) in assuming they have a good reason.

As was mentioned with assigning non-agents negligible moral value, it does lead to higher standards, but those standards extend to oneself, potentially to one's benefit. Once you make a distinction of what the acts of a non-agent look like, you start more consistently trying to justify everything you say or do yourself. Reminds me a bit of "Would an idiot do that? And if they would, I do not do that thing."

I can still rather easily choose to view people as agents and assign moral value in any context where I have to make a decision, so I don't think having a significantly reduced moral value for others is to my detriment: it just removes the pressure to find a justification for their actions.

This will constitute my first comment on Less Wrong, so thank you for the interesting topic, and please inform me of any errors or inconveniences in my writing style.

Comment author: pangel 28 August 2013 09:39:40PM 1 point [-]

Being in a situation somewhat similar to yours, I've been worrying that my lowered expectations about others' level of agency (with elevated expectations as to what constitutes a "good" level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me; for instance, I'd generally be more prone to take initiative if I saw trust in my peers' eyes.

Comment author: Yvain 08 August 2013 07:51:27AM *  39 points [-]

None of these are incorporated in molecular biology books and publications that I can find. But the answer was still there: visualize what I read. But not just visualize like the little diagrams of cellular interactions books usually give you – like stupid, over-the-top, Hollywood-status visualization. I had to make it dramatic. I had to mentally reconstruct the biology of a cell in massive, fast, and explosive terms.

I'm having the same problem with molecular biology right now, and I agree with the track you're taking. The issue seems to be the large amount of structure totally devoid of any semantic cues. For example, a typical textbook paragraph might read:

JS-154 is one of five metabolic products of netamine; however, the enzyme that produces it is unknown. It is manufactured in cells in the far rostral region of the cerebrum, but after binding with a leukocynoid it takes a role in maintaining the blood-brain barrier - in particular guiding the movements of lipid molecules.

I find I can read paragraphs like this five or six times, write them on flashcards, enter them into Anki, and my brain still refuses to understand or remember them after weeks of trying.

On the other hand, my brain easily remembers vastly more complicated structures when they're loaded with human-accessible meaning. For example, just by casually reading the Game of Thrones series, I know an extremely intricate web of genealogies, alliances, locations, journeys, battlesites, et cetera. Byte for byte, an average Game of Thrones reader/viewer probably has as much Game of Thrones information as a neuroscience Ph.D. has molecular biology information, but getting the neuroscience info is still a thousand times harder.

Which is interesting, because it seems like it should be possible to exploit isomorphisms between the two areas. For example, the hideous unmemorizable paragraph above is structurally identical to (very minor spoilers):

Jon Snow is one of five children of Ned Stark; however, his mother is unknown. He was born in a castle in the far northern regions of Westeros, but after binding with a white wolf companion he took a role in maintaining the Wall - in particular serving as mentor to his obese friend Samwell.

This makes me wonder if it would be possible to produce a story as enjoyable as Game of Thrones which was actually isomorphic to the most important pathways in molecular biology. So that you could pick up a moderately engaging fantasy book - it wouldn't have to be perfect - read through it in a day or two, and then it ends with "By the way, guess what, you now know everything ever discovered about carbohydrate metabolism". And then there's a little glossary in the back with translations about as complicated as "Jon Snow = JS-154" or "the Wall = the blood-brain barrier". I don't think this could replace a traditional textbook, but it could sure as heck supplement it.

This would be very hard to do correctly, but I'd love to see someone try, so much so that it's on my list of things to attempt myself if I ever get an unexpectedly large amount of free time.

Comment author: pangel 08 August 2013 09:52:09AM 7 points [-]

There is an animated series for children, aimed at explaining the human body, which personifies bacteria, viruses, etc. Anyone interested in pursuing your idea may want to pick up techniques from the show:

Wikipedia article: http://en.wikipedia.org/wiki/Once_Upon_a_Time..._Life

Example: http://www.youtube.com/watch?v=LIyvrcHnriE&t=1m11s

Comment author: pangel 18 July 2013 05:52:52AM 2 points [-]

So MoR might be a meta-fantasy of the wizarding world as The Sword of Good is a meta-fantasy of the muggle world. Or at least, MoR!Harry might make the same impression on a wizard reading one fic as Hirou does on a muggle reading the other.

Although my instinct is still that Harry fails at the end.

Comment author: William_Quixote 08 July 2013 01:19:50PM *  28 points [-]

You're right, Harry's mood is some evidence for his having the body. And from his behavior I think it's clear where it is:

"The gem upon your ring," Dumbledore said. "It is no longer a clear diamond. It is brown, the color of Hermione Granger's eyes, and the color of her hair."

A sudden tension filled the room.

"That's my father's rock," Harry said. "Transfigured the same as before. I just did it to remember Hermione -"

"I must be sure. Take off that ring, Harry, and place it upon my desk."

Slowly, Harry did so, removing the gem and setting the ring off to the other side of the desk. Dumbledore pointed his wand at the gem and -

From this, and from his putting the ring as far away as possible, I'm pretty sure the body is the ring and the rock sits on it to fool the magic detector. Someone called it in the comments on the last chapter; when I get a chance to check, I'll edit their name in so they get the appropriate Bayes points.

Comment author: pangel 08 July 2013 01:40:31PM 3 points [-]

Or Harry transfigured Hermione's body into a rock and then the rock into a brown diamond. Unless the story explicitly disallows double transfigurations and I missed it.
