
Comment author: themusicgod1 07 January 2017 08:26:57PM 0 points [-]

Our children will look back at the fact that we were STILL ARGUING about this in the early 21st-century, and correctly deduce that we were nuts.

We're still arguing about whether the world is flat, whether the zodiac should be used to predict near-term fate, and whether we should be building stockpiles of nuclear weapons. There are billions of people left to connect to the internet, and most extant human languages still have no written form. Basic literacy and mathematics are still things much of the world struggles with. This is going to go on for a while: the future will not be surprised that the finer details after the 20th decimal place were being debated when we can't even agree on whether intelligent design is the best approach to cell biology.

Comment author: themusicgod1 17 October 2015 08:30:25PM *  0 points [-]

(This is the second copy of this comment; the first was regrettably lost in a browser crash. Use systems that back up your comments automatically.)

This advice seems to fly in the face of Richard Hamming's advice to keep an open door. Perhaps the difference is subtle, though: Hamming suggested keeping an open door, not necessarily sharing your secrets, so there may be room for a big-science mystery cult that retains its own mysteries at every level of initiation. Perhaps there is a middle ground[1] to be found between this and current 'open science', one in which secrets and ritual are more emphasized, but where the public always has the ability to query deep into the bureaucracy of the science temple/university.

More likely, however, the best approach is all of the above: some kinds of thinking are enhanced by a particular team size, so there may be problems that require an open-science-sized 'ingroup' and others that are more tractable with an ingroup the size of a mystery cult.

In response to Fake Reductionism
Comment author: themusicgod1 17 September 2015 03:40:29PM *  1 point [-]

The question may once have been which poet gets quoted when rainbows are brought up. Keats isn't adding to the discussion in a meaningful way anymore, since his metaphors play second fiddle to Newton's, which were wonderful and exciting enough that Newton was driven to poking himself in the eye with a needle over them; I don't know whether Keats, even in his heyday, could have claimed that. It may be that his views on rainbows propagated within some ingroup until someone from that ingroup quoted them to someone with exposure to Newton's ideas on the same subject. They would have looked bad when that happened, but they would likely bring up the same comparison to the next person who quoted Keats at them, and so on, until Keats himself was bested at his own game.

The problem isn't that Science is taking away from Rainbows, the problem is that Science is taking the power of controlling perception and justifying belief (mostly in other people) from Keats. No kidding he's going to be unhappy about it.

Science changes the poetry dynamic Keats is used to, because suddenly there is competition for what gets associated with which idea, in such a way that poets no longer necessarily get first dibs in the minds of the people they care about. Much as Galileo got in trouble for raising the standing of mathematicians from strictly below that of philosophers, this may be another instance of Newton changing how we view things: raising the social position of those who practice science to the point where it is acceptable to challenge the status of a poet. Poets were important enough in Keats' day that heads of government kept their own poet on staff.

Keats just could not keep up with what was actually still wonderful to the people he would have seduced with his ideas: Darwin came later, and found wonder still left:

"There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved. " - Charles Darwin

Of course this dynamic may be changing yet again. This framing of the problem leaves open the possibility that our personal ability to perceive wonder can break down when our computer systems produce the models for us, as described by Radiolab (tl;dr: when computer systems can derive laws that describe phenomena better than we can understand the reasons behind those laws, yet which nevertheless describe the systems generating those phenomena, we may be at something of a loss when it comes to our 'right' to perceive wonder). Being unable to physically train your brain to assign wonder to wonderful things seems to be a different problem from this one, more of a disability than anything.

Comment author: Eliezer_Yudkowsky 15 March 2008 04:33:24PM 24 points [-]

If we had enough cputime, we could build a working AI using AIXItl.

*Threadjack*

People go around saying this, but it isn't true:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.

3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.
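(An aside for readers who haven't met the definition: the dualism behind points 1 and 2 is visible in the standard formulation. Roughly following Hutter's notation, and with the caveat that the equation below is an editorial gloss rather than part of the original thread, AIXI picks its action at cycle k by expectimax over a class of environment programs q run on a universal machine U, weighted by their length \ell(q):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           \left[ r_k + \cdots + r_m \right]
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

Every hypothesis q is an external program that emits observations and rewards in response to the agent's actions; nothing in the model class can represent "this action alters the machine computing the arg max", which seems to be the anvil problem of point 1, and the time- and length-bounded model class of AIXItl is what point 2 exploits: a universe containing programs as large as AIXItl itself contains things AIXItl cannot model.)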

Comment author: themusicgod1 29 August 2015 01:07:30AM *  0 points [-]

This seems to me like more evidence that intelligence is in part a social/familial thing. Just as human beings have to be embedded in a society in order to develop a certain level of intelligence, a certain level of intuition for "don't do this, it will kill you", informed by the nuance that is only possible when a wide array of individual failures informs group success (or otherwise), might be a prerequisite for higher-level reasoning beyond a certain point, and might constrain the ultimate levels upon which intelligence can rest.

I've seen more than enough children try to do things similar enough to dropping an anvil on their head to consider this 'no worse than human' (in fact our hackerspace even has an anvil, and one kid has even, ha-ha-only-serious, suggested dropping said anvil on his own head). If AIXI/AIXItl can reach this level, at the very least it should be capable of oh-so-human reasoning (up to and including the kinds of risky behaviour we would all probably like to pretend we never engaged in), and it could possibly transcend it in the same way humans do: by trial and error, by limiting potential damage to individuals or groups, and by fighting the never-ending battle against ecological harms on its own terms, on the time schedule of 'let it go until it is necessary to address the possible existential threat'.

Of course it may be that the human way of avoiding species self-destruction is fatally flawed, including but not limited to creating something like AIXI/AIXItl. But it seems to me that is a limiting, rather than a fatal, flaw. And it may yet be that the way out of our own fatal flaws and the way out of AIXI/AIXItl's fatal flaws are only possible through some kind of mutual dependence, like the mutual dependence of two sides of a bridge. I don't know.

Comment author: themusicgod1 16 August 2015 06:13:21AM 0 points [-]

Either way, the question is guaranteed to have an answer. You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.

In retrospect this seems like an obvious implication of belief in belief. I would have probably never figured it out on my own, but now that I've seen both, I can't unsee the connection.

Comment author: lessdazed 18 February 2011 03:52:56AM 1 point [-]

I remember about three dreams per night with no effort. Sometimes when I wake up I can remember more, but then it's impossible for me to remember them all for long. If I want to remember each of four or more dreams, I have to rehearse them immediately, otherwise I will usually forget all but three. The act of rehearsing makes it harder to remember the others, and it's weird to wake up with 6-7 dreams in my mental cache, knowing that I can't keep them all because after I actively remind myself what 3-4 were about the others will be very faint and by the time I have thought about five the others will be totally gone.

In related(?) news, often my brain wakes up before my body, and I can't move so much as my eyeballs! It's like the opposite of sleepwalking.

If I'm lying in bed, totally "locked in" and remembering a slew of dreams, I know I am awake. No one has complicated thoughts about several dreams from totally different genres while experiencing that one is unable to move a muscle without being awake.

If I'm arguing to the animated electrified skeleton of a shark that has made itself at home in my pool that he'd be better off joining his living brother in a lake in the Sierra Nevadas, who is eating campers I tell him to in exchange for hot dogs...I have a good chance of suspecting it's a dream, even within the dream.

Neither of these are tests, of course.

Comment author: themusicgod1 11 August 2015 03:22:29AM *  0 points [-]

No one has complicated thoughts about several dreams from totally different genres while experiencing that one is unable to move a muscle without being awake.

...I've had some pretty complicated dreams where I've woken up from a dream (!), gone to work, made coffee, had discussions about the previous dream, had thoughts about the morality or immorality of that dream, then some time later concluded that something was out of place (I'm not wearing pants?!), and then woken up to realize that I was dreaming. I've had nested dreams a good couple of layers deep with this sort of thing going on.

That said, I think you have something there. Sometimes I wake up (in a dream or otherwise) and remember my dream really vividly, especially when I wake suddenly, due to an alarm clock or something.

But I've never had a dream in which I struggled to remember what was in another dream. At the least, such an activity should really raise my prior that I'm at the top level.

Comment author: themusicgod1 05 July 2015 04:32:46AM 0 points [-]

Looks like somewhere along the transition to LessWrong, the trackback to this related OB post was lost. It's worth digging a step deeper for the context here.

Comment author: Ron_Hardin 24 February 2008 12:06:24AM 0 points [-]

We have a thousand words for sorrow http://rhhardin.home.mindspring.com/sorrow.txt

I don't know if that affects the theory.

(computer clustering a short distance down paths of a thesaurus)
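(For concreteness, here is a minimal sketch of what "clustering a short distance down paths of a thesaurus" might look like; the data structure and function names are assumptions for illustration, not Hardin's actual script. The idea is a breadth-first walk over synonym links, keeping every word reachable from a seed within a few hops:

    # Minimal sketch: collect every word reachable from a seed word within
    # max_hops synonym-link steps. `thesaurus` is assumed to be a dict
    # mapping each word to a list of its synonyms.
    from collections import deque

    def words_within(thesaurus, seed, max_hops=2):
        seen = {seed}
        frontier = deque([(seed, 0)])
        while frontier:
            word, dist = frontier.popleft()
            if dist == max_hops:
                continue
            for syn in thesaurus.get(word, []):
                if syn not in seen:
                    seen.add(syn)
                    frontier.append((syn, dist + 1))
        return sorted(seen)

    # Toy fragment of a thesaurus:
    toy = {
        "sorrow": ["grief", "woe", "regret"],
        "grief": ["sorrow", "anguish", "heartache"],
        "regret": ["sorrow", "remorse"],
    }
    print(words_within(toy, "sorrow"))
    # ['anguish', 'grief', 'heartache', 'regret', 'remorse', 'sorrow', 'woe']

Run against a full thesaurus with a couple of hops, a list like sorrow.txt presumably falls out quickly.)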

Comment author: themusicgod1 15 March 2015 04:23:34AM 0 points [-]

Including: "twitter", "altruism", "trust", "start" and "curiosity" apparently?

Comment author: Vladimir_Nesov 13 October 2010 12:04:28PM 2 points [-]

See chapters 1-9 of this document for a more detailed treatment of the argument.

Comment author: themusicgod1 09 February 2015 07:05:16PM 0 points [-]

This link is 404ing. Anyone have a copy of this?
