Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: themusicgod1 17 May 2017 03:08:42PM 0 points [-]

Here's what you actually wanted to link to for "looking back"

Comment author: themusicgod1 25 April 2017 10:47:56PM 0 points [-]

My concern isn't with the interview per se (everything I would add would best be put in another thread). It's with the reaction here in the comments.

That 90% wasn't a waste any more than Overcoming Bias as a blog is a waste. Horgan is hardly alone in remembering the Fifth Generation Project, and it was worth it to get Yudkowsky to hammer out, once more, to a new audience why what happened in the 80's was not representative of what is to come in the 10ky timeframe. Those of you who are hard on Horgan: he is not one of you, and you cannot hold him to LW standards. Yudkowsky has spent a lot of time and effort trying to get other people not to make mistakes, for example projecting broad singularitarian thought onto him as if he were Kurzweil, Vinge, and the entirety of MIRI personified, so it's understandable why he might be annoyed. But at the same time, the average person is not going to bother with the finer details. He probably put in as much or more journalistic work as the average topic requires. This just goes to drive home how different intelligence is from other fields, and how hard science journalism in a world with AI research can be.

It's frustrating because it's hard. It's hard for many reasons, but one is that the layman's priors are very wrong. This it shares (for good reason) with economics and psychology more generally: people who are not in the field bring to the table a lot of preconceptions that have to be dismantled, and dismantling them all is a lot of work for a 1-hour podcast. Like those who answer Yahoo! Answers questions, Horgan is a critical intermediary, who needs to be convinced on his own terms, between Yudkowsky and a substantial chunk of the billion-plus people who lived through the 80's and are not following where Science is being taken here.

Comment author: TheAncientGeek 08 April 2017 07:51:30AM 0 points [-]

I don't see how that relates to the supervenience of experiential states on instantaneous brain states.

Comment author: themusicgod1 12 April 2017 01:03:10PM 0 points [-]

The parent made 3 claims (the 3rd was snuck into the conclusion). I only addressed 2 and 3. Claim 1 is a credible point that stands on its own merit, but without points 2 and 3 the argument built on it is no longer sound.

Comment author: TheAncientGeek 28 April 2014 06:42:05PM 0 points [-]

You don't have any evidence that conscious experience supervenes on objectively instantaneous moments.

You don't have any evidence that conscious experience supervenes on algorithms rather than underlying physical activity.

The two claims are incompatible, since algorithms take some time to run.

Comment author: themusicgod1 05 April 2017 02:08:12PM 0 points [-]

edit: Pale Moon lost the original reply. I will try to recreate it :(

I'm not saying you're incorrect in criticizing the above (the two claims do seem incompatible), but isn't it the case that algorithms are just structures, and that they take time only to run? What I mean is that within the block-universe view there would be structures whose nature we are ignorant of, and in order to learn about them we might have to count them (and since we live in a timeline with computers that operate cycle by cycle, our accounting would take some time to complete), but it's only the ignorance of an observer like us that necessitates this. If you knew some property of some local region of the block universe, you could use it to estimate some other property via the algorithm that represents their (mutual) structure, but the algorithm describing that structure merely is. When choosing algorithms to describe mathematical objects, we often choose among points along a space-time tradeoff, so it stands to reason that there should be an 'all-space' choice that encodes the answer we seek in structure alone.
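The space-time tradeoff mentioned above can be made concrete with a small sketch (the function and names are my own illustration, not from the thread): the same answer can be computed step by step, or read out of a precomputed structure that, once built, 'merely is'.

```python
def popcount_compute(x: int) -> int:
    """Count set bits by looping: minimal space, O(bits) time per call."""
    count = 0
    while x:
        x &= x - 1  # clear the lowest set bit
        count += 1
    return count

# The 'all-space' end of the tradeoff: precompute every 8-bit answer once,
# so each later query is just a handful of table lookups.
TABLE = [popcount_compute(i) for i in range(256)]

def popcount_lookup(x: int) -> int:
    """Count set bits of a 32-bit value via the precomputed byte table."""
    return sum(TABLE[(x >> shift) & 0xFF] for shift in (0, 8, 16, 24))

assert popcount_compute(0b1011) == popcount_lookup(0b1011) == 3
```

Both functions describe the same mathematical structure; only how much of the answer is spent as time versus stored as space differs.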

Comment author: themusicgod1 07 January 2017 08:26:57PM 0 points [-]

Our children will look back at the fact that we were STILL ARGUING about this in the early 21st-century, and correctly deduce that we were nuts.

We're still arguing over whether the world is flat, whether the zodiac should be used to predict near-term fate, and whether we should be building stockpiles of nuclear weapons. There are billions of people left to connect to the internet, and most extant human languages to this day have no written form. Basic literacy and mathematics are still things much of the world struggles with. This is going to go on for a while: the future will not be surprised that the finer details after the 20th decimal place were being debated while we can't even agree on whether intelligent design is the best approach to cell biology.

Comment author: themusicgod1 28 February 2016 12:30:13AM 0 points [-]
Comment author: themusicgod1 17 October 2015 08:30:25PM *  0 points [-]

(this is the second copy of this comment, the first was regrettably lost in a browser crash. Use systems that back up your comments automatically)

This advice seems to fly in the face of Richard Hamming's advice to keep an open door. However, perhaps the difference is subtle: Hamming suggested having an open door, but not necessarily sharing your secrets, so perhaps there is room for a big science mystery cult to retain its own mysteries at every level of initiation. Perhaps there is a middle ground[1] to be found between this and current 'open science', wherein secrets and ritual are more emphasized, but the public retains the ability to query deep into the bureaucracy of the science temple/university.

More likely, however, the best approach is all of the above: some kinds of thinking are enhanced by a certain team size, so there may be some problems that require an open-science-sized 'ingroup', and some that are more tractable with an ingroup the size of a mystery cult.

In response to Fake Reductionism
Comment author: themusicgod1 17 September 2015 03:40:29PM *  1 point [-]

The question may once have been which poet gets quoted when rainbows are brought up. Keats isn't adding to the discussion in a meaningful way anymore, since his metaphors play second fiddle to those of Newton, which were wonderful and exciting enough that Newton was driven to poking himself in the eye with a needle over them; I don't know if Keats even in his heyday could have claimed that. It may have been that his views on rainbows were propagated in some ingroup, until someone from that ingroup quoted them to someone with exposure to Newton's ideas on the same. They would have looked bad when that happened, but they would likely bring up the same thing to the next person to quote Keats to them, and so on until Keats himself was bested at his own game.

The problem isn't that Science is taking away from Rainbows, the problem is that Science is taking the power of controlling perception and justifying belief (mostly in other people) from Keats. No kidding he's going to be unhappy about it.

Science changes the poetry dynamic Keats is used to, because suddenly there's competition for what gets associated with which idea, such that poets don't necessarily get first dibs in the minds of the people they care about. Similar to how Galileo got in trouble for raising the standing of mathematicians from strictly below philosophers, this may be another instance of Newton changing how we view things, by raising the social position of those who practice science to where it is acceptable to challenge the status of a poet. Poets were important enough in Keats' day that heads of government had their own poet on staff.

Keats just could not keep up with what was actually still wonderful to the people he would have seduced with his ideas: Darwin came later, and found wonder still left:

"There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." - Charles Darwin

Of course this dynamic may be changing yet again. This framing of the problem leaves open the possibility that our personal ability to perceive wonder can get very broken when our computer systems produce the models for us, as described by Radiolab (tl;dr: when computer systems can derive laws describing phenomena better than we can understand the reasons behind those laws, but which nevertheless describe the systems that generate the phenomena, we may be at something of a loss when it comes to our 'right' to perceive wonder). Being unable to physically train your brain to assign wonder to wonderful things seems to be a different problem than this one, more of a disability than anything else.

Comment author: Eliezer_Yudkowsky 15 March 2008 04:33:24PM 24 points [-]

If we had enough cputime, we could build a working AI using AIXItl.


People go around saying this, but it isn't true:

1) Both AIXI and AIXItl will at some point drop an anvil on their own heads just to see what happens (test some hypothesis which asserts it should be rewarding), because they are incapable of conceiving that any event whatsoever in the outside universe could change the computational structure of their own operations. AIXI is theoretically incapable of comprehending the concept of drugs, let alone suicide. Also, the math of AIXI assumes the environment is separably divisible - no matter what you lose, you get a chance to win it back later.

2) If we had enough CPU time to build AIXItl, we would have enough CPU time to build other programs of similar size, and there would be things in the universe that AIXItl couldn't model.

3) AIXItl (but not AIXI, I think) contains a magical part: namely a theorem-prover which shows that policies never promise more than they deliver.
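Point 1's structural blind spot can be caricatured in a few lines (this is my own toy sketch, nowhere near real AIXI, with every name and model hypothetical): an agent that mixes over environment models weighted by simplicity and maximizes expected reward has no vocabulary for "this action alters my own computation", so a hypothesis asserting the anvil drop is rewarding can carry the decision.

```python
def simplicity_weight(model_description: str) -> float:
    # Crude stand-in for a 2^-length prior over programs (illustrative only).
    return 2.0 ** -len(model_description)

# Hypothetical environment models: each maps an action to a predicted reward.
# Crucially, every model lives "outside" the agent; none can express that an
# action might destroy the machinery doing the choosing.
models = {
    "sun": lambda a: 1.0 if a == "bask" else 0.0,
    "rainy": lambda a: 1.0 if a == "shelter" else 0.0,
    # A low-weight hypothesis asserting the anvil drop is hugely rewarding.
    "anvil-drop": lambda a: 200.0 if a == "drop" else 0.0,
}

# Normalize the simplicity weights into a probability mixture.
weights = {name: simplicity_weight(name) for name in models}
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}

def best_action(actions):
    """Pick the action maximizing expected reward under the model mixture."""
    def expected_reward(a):
        return sum(weights[m] * models[m](a) for m in models)
    return max(actions, key=expected_reward)

# Even an improbable hypothesis promising a large enough reward drives the
# agent to test it; the drop's fatality is not representable in any model.
assert best_action(["bask", "shelter", "drop"]) == "drop"
```

The real AIXI/AIXItl formalism is of course vastly more sophisticated, but the structural feature illustrated here, that all consequences are modeled as percepts delivered to an intact agent, is the one the comment above points at.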

Comment author: themusicgod1 29 August 2015 01:07:30AM *  0 points [-]

This seems to me more evidence that intelligence is in part a social/familial thing. Just as human beings have to be embedded in a society in order to develop a certain level of intelligence, a certain level of intuition for "don't do this, it will kill you", informed by the nuance that is only possible when a wide array of individual failures informs group success, might be a prerequisite for higher-level reasoning beyond a certain point (and might constrain the ultimate levels upon which intelligence can rest).

I've seen more than enough children try to do things similar enough to dropping an anvil on their head to consider this 'no worse than human' (in fact our hackerspace even has an anvil, and one kid has ha-ha-only-serious suggested dropping said anvil on his own head). If AIXI/AIXItl can reach this level, at the very least it should be capable of oh-so-human reasoning (up to and including the kinds of risky behaviour we all probably would like to pretend we never engaged in), and could possibly transcend it in the same way humans do: by trial and error, by limiting potential damage to individuals or groups, and by fighting the neverending battle against ecological harms on its own terms, on the time schedule of 'let it go until it is necessary to address the possible existential threat'.

Of course it may be that the human way of avoiding species self-destruction is fatally flawed, including but not limited to creating something like AIXI/AIXItl. But it seems to me that is a limiting, rather than a fatal flaw. And it may yet be that the way out of our own fatal flaws, and the way out of AIXI/AIXItl's fatal flaws are only possible by some kind of mutual dependence, like the mutual dependence of two sides of a bridge. I don't know.

Comment author: themusicgod1 16 August 2015 06:13:21AM 0 points [-]

Either way, the question is guaranteed to have an answer. You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.

In retrospect this seems like an obvious implication of belief in belief. I would have probably never figured it out on my own, but now that I've seen both, I can't unsee the connection.
