Here's an argument I found that "hyperplastic agents" (i.e., strong AI) cannot make use of Schelling fences: http://www.slideshare.net/DavidRoden/hyperapocalypse-rev

East Asian - mostly agreed. I think WEIRDness is the biggest factor. WEIRD thought emphasizes precision and context-independent formalization. I am pretty deracinated myself, but my thinking style is low-precision, tolerant of apparent paradoxes and context-sensitive. The difference is much like the analytic-continental divide in Western philosophy. I recommend Richard Nisbett's book The Geography of Thought, which contrasts WEIRD thought with East Asian thought.

37 Ways Words Can Be Wrong (and LW as a whole) is important because of how brittle WEIRD concepts can be. (I have some crackpot ideas about maps and territories inspired by Jean Baudrillard. He's French, of course...)

The concept of adolescence:

Although the first use of the word “adolescence” appeared in the 15th century and came from the Latin word “adolescere,” which meant “to grow up or to grow into maturity” (Lerner & Steinberg, 2009, p.1), it wasn’t until 1904 that the first president of the American Psychological Association, G. Stanley Hall, was credited with discovering adolescence (Henig, 2010, p. 4). In his study entitled "Adolescence," he described this new developmental phase that came about due to social changes at the turn of the 20th century. Because of the influence of Child Labor Laws and universal education, youth had newfound time in their teenage years when the responsibilities of adulthood were not forced upon them as quickly as in the past. http://www.massculturalcouncil.org/services/BYAEP_History.asp

With the trend towards an expectation of college education, we will need an extended concept to include the early twenties.

Edit: "Emerging adulthood is a phase of the life span between adolescence and full-fledged adulthood, proposed by Jeffrey Arnett in a 2000 article in the American Psychologist."

USA resident here, I submitted my sample in April 2013 and have not received data. The status page indicates they are still sequencing my genome. I emailed them twice to inquire on the timeframe for completion to no avail.

Looks promising, but requiring the graph to be acyclic makes it difficult to model processes where feedback is involved. A workaround would be to treat each time step of a process as a distinct event: A(0)->B(1) (event A at time 0 affects event B at time 1), B(0)->A(1), A(0)->A(1), B(0)->B(1), and in general A(t)->B(t+1). But this gets unwieldy very quickly.
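To make the workaround concrete, here's a minimal sketch (my own illustration, not from the linked post) that unrolls a two-variable feedback loop A<->B into time-indexed nodes. Since every edge goes from time t to t+1, the unrolled graph is acyclic by construction, which the topological sort confirms:

```python
from graphlib import TopologicalSorter

def unroll(variables, edges, T):
    """Unroll a feedback process into a DAG of (variable, time) nodes.

    edges: pairs (src, dst) meaning src at time t affects dst at time t+1.
    Returns a dict mapping each node to its set of predecessor nodes.
    """
    graph = {(v, t): set() for t in range(T) for v in variables}
    for t in range(T - 1):
        for src, dst in edges:
            graph[(dst, t + 1)].add((src, t))
    return graph

# A and B each affect the other (and themselves) at the next time step.
dag = unroll(["A", "B"], [("A", "B"), ("B", "A"), ("A", "A"), ("B", "B")], T=3)

# static_order raises CycleError if the graph were cyclic; here it never is.
order = list(TopologicalSorter(dag).static_order())
```

The unwieldiness the comment mentions is visible even here: two variables over three time steps already give six nodes and eight edges, and the node count grows linearly with the horizon T.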

I don't see how phenomenological bridges solve the epistemological problem, instead of just pushing the problem one step further away. Where in the bridge hypothesis is it encoded that one end of the bridge has a "self", in a way that leads to different behavior?

Let me give an example with AIXI, which creates something that is almost a phenomenological bridge but remains Cartesian. Imagine that an AIXI finds a magnifying glass. It holds the magnifying glass near its camera, and at the correct focal distance, everything in {world − magnifying glass} looks the same, except upside down. Through experimentation and observation, it realizes that gravity hasn't flipped, it's still on the ground, the lights are still 15 feet above it, etc. It will conclude that the magnifying glass filters visual input from the rest of the world by flipping the Y axis. Thus AIXI has a hypothesis about the relation of the magnifying glass to the world.

Phenomenological bridge hypotheses say there is something like this magnifying glass, except embedded in...where? What's the difference between reading glasses and retinas? I can have 1 "visual filter hypothesis", 2 visual filter hypotheses, n visual filter hypotheses. What's the distinction between internal filters and world filters? Do I have x internal filters and {n − x} external filters? What would that mean?

Somebody outside of LW asked how to quantify prior knowledge about a thing. While googling, I came across a mathematical definition of surprise: "the distance between the posterior and prior distributions of beliefs over models". So, high prior knowledge would lead to low expected surprise upon seeing new data. I didn't see this formalization used on LW or the wiki; perhaps it is of interest.
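As a sketch of that definition (my own toy illustration, using KL divergence as the "distance" and a discrete set of candidate models; the comment only names the idea), surprise is how far Bayesian updating moves you from prior to posterior:

```python
import math

def kl(p, q):
    """KL divergence D(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def posterior(prior, likelihoods):
    """Bayes' rule over a discrete set of models."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

prior = [0.5, 0.5]        # two candidate models, no prior knowledge
likelihoods = [0.9, 0.1]  # probability each model assigns to the new data
post = posterior(prior, likelihoods)
surprise = kl(post, prior)  # large: uninformed prior moved a lot

# With high prior knowledge (a concentrated prior near the truth),
# the same data barely moves the distribution, so surprise is small.
informed = [0.95, 0.05]
small_surprise = kl(posterior(informed, likelihoods), informed)
```

This matches the intuition in the comment: the more prior knowledge you have, the less any given observation shifts your beliefs, so your expected surprise is low.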

Speaking of the LW wiki, how fundamental is it to LW compared to the sequences, discussion threads, Main articles, hpmor, etc?

Is it fair to call the CD data a map in this case? (Perhaps that's your point.) The relationship is closer to interface-implementation than map-territory. Reductionism still stands, in that the higher abstraction is a reduction of the lower. (Whereas a map is a compression of the territory, an interface is a construction on top of it). Correct lisp should be implementation-agnostic, but it is not implementation-free.

Emotions, like any sensory input, can serve as a source of information to be rationally inspected and used to form beliefs about the external world. It is only when emotions interfere with the process of interpreting information that they become detrimental to rationality.