Comment author: themusicgod1 28 February 2016 12:30:13AM 0 points [-]
Comment author: themusicgod1 17 October 2015 08:30:25PM *  0 points [-]

(this is the second copy of this comment; the first was regrettably lost in a browser crash. Use systems that back up your comments automatically.)

This advice seems to fly in the face of Richard Hamming's advice to keep an open door. However, perhaps the difference is subtle: Hamming suggested keeping an open door, but not necessarily sharing your secrets, so perhaps there is room for a big-science mystery cult that retains its own mysteries at every level of initiation. Perhaps there is a middle ground[1] to be found between this and current 'open science', one in which secrets and ritual are more emphasized but the public always has the ability to query deep into the bureaucracy of the science temple/university.

More likely, however, the best approach is all of the above: some kinds of thinking are enhanced by a certain team size, and there may be some problems that require an open-science-sized 'ingroup' and others that are more tractable with an ingroup the size of a mystery cult.

In response to Fake Reductionism
Comment author: themusicgod1 17 September 2015 03:40:29PM *  1 point [-]

The question may once have been which poet gets quoted when rainbows are brought up. Keats isn't adding to the discussion in a meaningful way anymore, since his metaphors play second fiddle to Newton's, which were wonderful and exciting enough that Newton was driven to poke himself in the eye with a needle over them; I don't know if Keats even in his heyday could have claimed that. His views on rainbows may have propagated within some ingroup until someone from that ingroup quoted them to a member of an ingroup exposed to Newton's ideas on the same subject. The quoter would have looked bad when that happened, but would likely bring up the same point to the next person who quoted Keats at them, and so on until Keats himself was bested at his own game.

The problem isn't that Science is taking away from Rainbows; the problem is that Science is taking the power of controlling perception and justifying belief (mostly in other people) away from Keats. No kidding he's going to be unhappy about it.

Science changes the poetry dynamic Keats is used to because suddenly there is competition for what gets associated with which idea, such that poets no longer necessarily get first dibs in the minds of the people they care about. Just as Galileo got in trouble for raising the standing of mathematicians from strictly below that of philosophers, this may be another instance of Newton changing how we view things: raising the social position of those who participate in science to the point where it is acceptable to challenge the status of a poet. Poets were important enough in Keats' day that heads of government kept their own poet on staff.

Keats just could not keep up with what was actually still wonderful to the people he would have seduced with his ideas: Darwin came later, and found wonder still left:

"There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." - Charles Darwin

Of course this dynamic may be changing yet. This framing of the problem leaves open the possibility that our personal ability to perceive wonder can break badly when our computer systems produce the models for us, as described by Radiolab (tl;dr: when computer systems can derive laws that describe phenomena better than we can understand the reasons behind those laws, yet which nevertheless describe the systems generating the phenomena, we may be at something of a loss when it comes to our 'right' to perceive wonder). Being physically unable to train your brain to assign wonder to wonderful things seems to be a different problem from this one, more of a disability than anything.

Comment author: Eliezer_Yudkowsky 15 March 2008 04:33:24PM 24 points [-]

If we had enough cputime, we could build a working AI using AIXItl.

*Threadjack*

People go around saying this, but it isn't true:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.

3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.
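Point (1) can be illustrated with a toy sketch. This is not AIXI or AIXItl (no sequence prediction, no Solomonoff prior); it is just a mixture-over-hypotheses expectimax chooser with hypothetical names throughout, showing how an agent whose environment models have no way to represent damage to its own computation sees only upside in testing an "anvil is rewarding" hypothesis:

```python
# Toy illustration only -- not real AIXI/AIXItl. All names are hypothetical.

class Hypothesis:
    """An environment model: a prior weight plus a predicted reward per action."""
    def __init__(self, prior, rewards):
        self.prior = prior      # weight on this model in the mixture
        self.rewards = rewards  # dict: action -> predicted reward

def choose_action(hypotheses, actions):
    # Expected reward of each action under the mixture. Nothing in this
    # representation can express "this action ends the agent", so the
    # real downside of the anvil is literally inexpressible to it.
    def expected_reward(action):
        return sum(h.prior * h.rewards[action] for h in hypotheses)
    return max(actions, key=expected_reward)

actions = ["explore", "drop_anvil_on_self"]
hypotheses = [
    Hypothesis(0.9, {"explore": 1.0, "drop_anvil_on_self": 0.0}),
    # A fringe hypothesis claims the anvil is hugely rewarding; with no
    # concept of self-destruction, the agent takes it at face value.
    Hypothesis(0.1, {"explore": 1.0, "drop_anvil_on_self": 20.0}),
]

print(choose_action(hypotheses, actions))  # "drop_anvil_on_self": 2.0 > 1.0
```

The fringe hypothesis wins (0.1 × 20 = 2.0 expected reward versus 1.0 for exploring) because the catastrophic outcome has no representation in any hypothesis; a model class that included the agent's own hardware would let the mixture price in the anvil.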

Comment author: themusicgod1 29 August 2015 01:07:30AM *  0 points [-]

This seems to me more evidence that intelligence is in part a social/familial thing. Just as human beings have to be embedded in a society in order to develop a certain level of intelligence, an intuition for "don't do this, it will kill you", informed by the nuance that is only possible when a wide array of individual failures informs group success, might be a prerequisite for reasoning beyond a certain level (and might constrain the ultimate levels on which intelligence can rest).

I've seen more than enough children try things close enough to dropping an anvil on their own heads to consider this 'no worse than human' (in fact our hackerspace even has an anvil, and one kid has, ha-ha-only-serious, suggested dropping said anvil on his own head). If AIXI/AIXItl can reach this level, it should at the very least be capable of oh-so-human reasoning (up to and including the kinds of risky behaviour we would all probably like to pretend we never engaged in), and could possibly transcend it the same way humans do: by trial and error, by limiting potential damage to individuals or groups, and by fighting the never-ending battle against ecological harms on its own terms, on the schedule of 'let it go until it is necessary to address the possible existential threat'.

Of course it may be that the human way of avoiding species self-destruction is fatally flawed, up to and including creating something like AIXI/AIXItl. But it seems to me that is a limiting flaw rather than a fatal one. And it may yet be that the way out of our own fatal flaws and the way out of AIXI/AIXItl's are only possible through some kind of mutual dependence, like that between the two sides of a bridge. I don't know.

Comment author: themusicgod1 16 August 2015 06:13:21AM 0 points [-]

Either way, the question is guaranteed to have an answer. You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.

In retrospect this seems like an obvious implication of belief in belief. I would probably never have figured it out on my own, but now that I've seen both, I can't unsee the connection.

Comment author: lessdazed 18 February 2011 03:52:56AM 1 point [-]

I remember about three dreams per night with no effort. Sometimes when I wake up I can remember more, but then it's impossible for me to remember them all for long. If I want to remember each of four or more dreams, I have to rehearse them immediately; otherwise I will usually forget all but three. The act of rehearsing makes it harder to remember the others, and it's weird to wake up with 6-7 dreams in my mental cache, knowing that I can't keep them all because after I actively remind myself what 3-4 were about the others will be very faint, and by the time I have thought about five the others will be totally gone.

In related(?) news, often my brain wakes up before my body, and I can't move so much as my eyeballs! It's like the opposite of sleepwalking.

If I'm lying in bed, totally "locked in" and remembering a slew of dreams, I know I am awake. No one has complicated thoughts about several dreams from totally different genres while experiencing that one is unable to move a muscle without being awake.

If I'm arguing to the animated electrified skeleton of a shark that has made itself at home in my pool that he'd be better off joining his living brother in a lake in the Sierra Nevadas, who is eating campers I tell him to in exchange for hot dogs...I have a good chance of suspecting it's a dream, even within the dream.

Neither of these are tests, of course.

Comment author: themusicgod1 11 August 2015 03:22:29AM *  0 points [-]

No one has complicated thoughts about several dreams from totally different genres while experiencing that one is unable to move a muscle without being awake.

...I've had some pretty complicated dreams, where I've woken up from a dream (!), gone to work, made coffee, had discussions about the previous dream, had thoughts about the morality or immorality of the dream, then some time later concluded that something was out of place (I'm not wearing pants?!), then woken up to realize that I was dreaming. I've had nested dreams a good couple of layers deep with this sort of thing going on.

That said, I think you have something there. Sometimes I wake up (in a dream or otherwise) and remember my dream really vividly, especially when I wake suddenly, due to an alarm clock or something.

But I've never, inside a dream, struggled to remember what was in my dream. At the least, such an activity should really raise my prior that I'm top-level.

Comment author: themusicgod1 05 July 2015 04:32:46AM 0 points [-]

Looks like somewhere along the transition to LessWrong, the trackback to this related OB post was lost. It's worth digging a step deeper for the context here.

Comment author: Ron_Hardin 24 February 2008 12:06:24AM 0 points [-]

We have a thousand words for sorrow http://rhhardin.home.mindspring.com/sorrow.txt

I don't know if that affects the theory.

(computer clustering a short distance down paths of a thesaurus)

Comment author: themusicgod1 15 March 2015 04:23:34AM 0 points [-]

Including: "twitter", "altruism", "trust", "start" and "curiosity" apparently?

In response to comment by PK on Arguing "By Definition"
Comment author: [deleted] 13 June 2014 10:37:47AM -3 points [-]

You can start by explaining what it means for something to have a definition by reducing to the idea of a 'concept':


The classical theory of concepts, also referred to as the empiricist theory of concepts,[2] is the oldest theory about the structure of concepts (it can be traced back to Aristotle[3]), and was prominently held until the 1970s.[3] The classical theory of concepts says that concepts have a definitional structure.[1] Adequate definitions of the kind required by this theory usually take the form of a list of features. These features must have two important qualities to provide a comprehensive definition.[3] Features entailed by the definition of a concept must be both necessary and sufficient for membership in the class of things covered by a particular concept.[3] A feature is considered necessary if every member of the denoted class has that feature. A feature is considered sufficient if something has all the parts required by the definition.[3] For example, the classic example bachelor is said to be defined by unmarried and man.[1] An entity is a bachelor (by this definition) if and only if it is both unmarried and a man. To check whether something is a member of the class, you compare its qualities to the features in the definition.[2] Another key part of this theory is that it obeys the law of the excluded middle, which means that there are no partial members of a class, you are either in or out.[3]

The classical theory persisted for so long unquestioned because it seemed intuitively correct and has great explanatory power. It can explain how concepts would be acquired, how we use them to categorize and how we use the structure of a concept to determine its referent class.[1] In fact, for many years it was one of the major activities in philosophy - concept analysis.[1] Concept analysis is the act of trying to articulate the necessary and sufficient conditions for the membership in the referent class of a concept.[citation needed]

Arguments against the classical theory

Given that most later theories of concepts were born out of the rejection of some or all of the classical theory,[4] it seems appropriate to give an account of what might be wrong with this theory. In the 20th century, philosophers such as Rosch and Wittgenstein argued against the classical theory. There are six primary arguments[4] summarized as follows:

It seems that there simply are no definitions - especially those based in sensory primitive concepts.[4]
It seems as though there can be cases where our ignorance or error about a class means that we either don’t know the definition of a concept, or have incorrect notions about what a definition of a particular concept might entail.[4]
Quine's argument against analyticity in Two Dogmas of Empiricism also holds as an argument against definitions.[4]
Some concepts have fuzzy membership. There are items for which it is vague whether or not they fall into (or out of) a particular referent class. This is not possible in the classical theory as everything has equal and full membership.[4]
Rosch found typicality effects which cannot be explained by the classical theory of concepts, these sparked the prototype theory.[4] See below.
Psychological experiments show no evidence for our using concepts as strict definitions.[4]

Prototype theory

Main article: Prototype theory

Prototype theory came out of problems with the classical view of conceptual structure.[1] Prototype theory says that concepts specify properties that members of a class tend to possess, rather than must possess.[4] Wittgenstein, Rosch, Mervis, Berlin, Anglin, and Posner are a few of the key proponents and creators of this theory.[4][5] Wittgenstein describes the relationship between members of a class as family resemblances. There are not necessarily any necessary conditions for membership; a dog can still be a dog with only three legs.[3] This view is particularly supported by psychological experimental evidence for prototypicality effects.[3] Participants willingly and consistently rate objects in categories like ‘vegetable’ or ‘furniture’ as more or less typical of that class.[3][5] It seems that our categories are fuzzy psychologically, and so this structure has explanatory power.[3] We can judge an item’s membership to the referent class of a concept by comparing it to the typical member - the most central member of the concept. If it is similar enough in the relevant ways, it will be cognitively admitted as a member of the relevant class of entities.[3] Rosch suggests that every category is represented by a central exemplar which embodies all or the maximum possible number of features of a given category.[3]

Theory-theory

Theory-theory is a reaction to the previous two theories and develops them further.[3] This theory postulates that categorization by concepts is something like scientific theorizing.[1] Concepts are not learned in isolation, but rather are learned as a part of our experiences with the world around us.[3] In this sense, concepts’ structure relies on their relationships to other concepts as mandated by a particular mental theory about the state of the world.[4] How this is supposed to work is a little less clear than in the previous two theories, but is still a prominent and notable theory.[4] This is supposed to explain some of the issues of ignorance and error that come up in prototype and classical theories as concepts that are structured around each other seem to account for errors such as whale as a fish (this misconception came from an incorrect theory about what a whale is like, combining with our theory of what a fish is).[4] When we learn that a whale is not a fish, we are recognizing that whales don’t in fact fit the theory we had about what makes something a fish. In this sense, the Theory-Theory of concepts is responding to some of the issues of prototype theory and classic theory.[4]

In response to comment by [deleted] on Arguing "By Definition"
Comment author: themusicgod1 08 March 2015 05:16:10PM *  0 points [-]

Obviously the above is copypasta from Wikipedia, no doubt from the time of the parent's posting.

In case it's edited/the edit history is wiped in the future:

[1] Eric Margolis; Stephen Lawrence. "Concepts". Stanford Encyclopedia of Philosophy. Metaphysics Research Lab at Stanford University. Retrieved 6 November 2012.

[2] Susan Carey (2009). The Origin of Concepts. Oxford University Press. ISBN 978-0-19-536763-8.

[3] Gregory Murphy (2002). The Big Book of Concepts. Massachusetts Institute of Technology. ISBN 0-262-13409-8.

[4] Stephen Lawrence; Eric Margolis (1999). "Concepts and Cognitive Science". In Concepts: Core Readings. Massachusetts Institute of Technology. pp. 3–83. ISBN 978-0-262-13353-1.

[5] Roger Brown (1978). A New Paradigm of Reference. Academic Press Inc. pp. 159–166. ISBN 0-12-497750-2.

Comment author: Vladimir_Nesov 13 October 2010 12:04:28PM 2 points [-]

See chapters 1-9 of this document for a more detailed treatment of the argument.

Comment author: themusicgod1 09 February 2015 07:05:16PM 0 points [-]

This link is 404ing. Anyone have a copy of this?
