Comment author: Kawoomba 09 December 2016 05:05:40PM 0 points [-]

The catch-22 I would expect with CFAR's efforts is that anyone buying their services is already demonstrating a willingness to actually improve his/her rationality/epistemology, and is looking for effective tools to do so.

The bottleneck, however, is probably not the unavailability of such tools, but rather the introspection (or lack thereof) that produces a desire to actually pursue change, rather than to simply virtue-signal the typical "I always try to learn from my mistakes and improve my thinking".

The latter mindset is the one most urgently in need of actual improvement, but its bearers won't flock to CFAR unless it has gained acceptance as an institution you can virtue-signal with (which can confer status). While some universities manage to walk that line (providing status affirmation while actually conferring knowledge), CFAR's mode of operation would optimally entail "virtue-signalling ML students in on one side", "rationality-improved ML students out on the other side", which is a hard sell, since signalling an improvement in rationality will always be cheaper than the real thing (the difference being quite non-obvious to the uninitiated).

What remains is helping those who have already taken that most important step of effective self-reflection and are looking for further improvement. A laudable service to the community, but probably far from changing general attitudes in the field.

Taking off the black hat, I don't have a solution to this perceived conundrum.

Comment author: Lumifer 19 April 2016 06:05:16PM 5 points [-]

Thanks for the feedback about other InIn participants.

Sigh. They are not InIn participants. They are people you pay to "manage" social media and they are bungling the job in a rather spectacular fashion.

Call them off.

Comment author: Kawoomba 19 April 2016 08:24:45PM 3 points [-]

The scarier thought is how often we're manipulated that way when people don't bungle their jobs. The few heuristics we use to identify such mischief are trivially misled: for example, a shill can establish plausibility by posting on inconsequential other topics (on LW that at least incurs a measurable cognitive footprint, which is not the case on, say, Reddit), and then there's always Poe's law to consider. Shills man, shills everywhere!

As the dictum goes, just cuz you're paranoid ...

Reminds me of Ernest Hemingway's apparent paranoid delusions of being under FBI surveillance ... only it eventually turned out he actually was. Well, at least if my family keep playing their roles well enough, from a functional black-box perspective the distinction may not matter that much anyway. I wonder how they got the children to be such good actors, though. Mind chip implants?

As an aside, it's kind of curious that Prof. Tsipursky does his, let's say "social engineering", under his real name.

Anyways, good entertainment. Though on this forum, it's more of a guilty pleasure (drama is but a weed in our garth of rationality).

Comment author: Kawoomba 14 March 2016 07:54:24PM 2 points [-]

Disclaimer: Only spent 20 minutes on this, so it might be incomplete, or you may already have addressed some of the following points:

At first glance, John Lowe authored two PubMed-listed papers on the topic.

The first of these appeared in a journal with no peer review (Med. Hypotheses), which has also published material on, e.g., AIDS denialism. From his paper: "We propose that molecular biological methods can provide confirmatory or contradictory evidence of a genetic basis of euthyroid FS [Fibromyalgia Syndrome]." That's it. A hypothesis is proposed, no experimental evidence is provided, and the paper ends.

The second paper was published in a somewhat controversial low-impact journal (at least peer-reviewed). However, this apparently one and only peer-reviewed publication of his actually contradicts the expected results, so Lowe pulls off a somewhat convoluted move to save his hypothesis:

"TSH, FT3, or FT4 did not correlate with RMR [Resting Metabolic Rate] values. For two reasons, however, ITHR [Inadequate Thyroid Hormone Regulation] cannot be ruled out as the mechanism of FM [Fibromyalgia] patients’ lower RMRs: (1) TSH, FT3 , and FT4 levels have not been shown to reliably correlate with RMR values, and (2) these tests evaluate only pituitary-thyroid axis function and cannot rule out central HO and PRTH."

Yea ...

In addition, lots of crank signs: Lowe's review from 2008, along with his other writings, is "published" in a made-up "journal" which still lists him (from beyond the grave, apparently) as the editor-in-chief.

No peer review, pretending to be an actual journal, a plethora of commercial sites citing him and his research ... honi soit qui mal y pense!

Comment author: CellBioGuy 09 March 2016 08:19:41AM *  5 points [-]

The AlphaGo system won the first game. I'm not a Go player, but the commentary I've seen suggests it was quite close until the very end.

Hypothesis 1: The cluster plays to maximize the odds of a win, not the magnitude of a win, and is exploiting a class of close wins that humans have a hard time with. Expect a sweep of narrow wins.

Hypothesis 2: The cluster and the champion are indeed evenly matched. Expect wins and losses. May imply that the game saturates at high levels of analysis, and that there is no such thing as a 'superhuman' go player because the best humans hit the point of diminishing returns.

*EDIT: evidence accumulating in favor of #1.

*EDIT2: final results suggest something between the two.

Comment author: Kawoomba 09 March 2016 10:09:06AM 0 points [-]

I wonder if / how that win will affect estimates on the advent of AGI within the AI community.

Comment author: OrphanWilde 09 February 2016 08:47:09PM 1 point [-]

I didn't argue at all there. I pointed out that your position changed in anticipation of an objection you expected me to raise, to forestall the objection from having merit.

The argument, you see, is already over. You played your part, I played mine, and the audience is looking for a new show, the conclusion for this one already having played out in the background.

Comment author: Kawoomba 11 February 2016 02:57:45PM 0 points [-]

You got me there!

Comment author: Lumifer 12 November 2015 10:33:02PM 1 point [-]

Please don't spam the same comment to different threads.

Comment author: Kawoomba 15 November 2015 08:56:01PM 1 point [-]

Please don't spam the same comment to different threads.

Comment author: Lumifer 05 October 2015 03:16:32PM 4 points [-]

That seems extremely dangerous.

LOL. Word inflation strikes again with a force of a million atomic bombs! X-)

Are you really arguing for keeping ideologically incorrect people barefoot and pregnant, lest they harm themselves with any tools they might acquire?

Comment author: Kawoomba 05 October 2015 04:04:21PM 1 point [-]

Hey! Hey. He. Careful there, apropos word inflation. It strikes with a force of no more than one thousand atom bombs.

Are you really arguing for keeping ideologically incorrect people barefoot and pregnant, lest they harm themselves with any tools they might acquire?

Sounds as good a reason as any!

maybe we should shut down LW

I'm not sure how much it counts, but I bet Chief Ramsay would've shut it down long ago. Betting is good, I've learned.

Comment author: Kawoomba 04 October 2015 05:18:59PM 1 point [-]

As seen in the first episode of the series Caprica, quoth Zoe Graystone:

"(...) the information being held in our heads is available in other databases. People leave more than footprints as they travel through life; medical scans, dna profiles, psych evaluations, school records, emails, recording, video, audio, cat scans, genetic typing, synaptic records, security cameras, test results, shopping records, talent shows, ball games, traffic tickets, restaurant bills, phone records, music lists, movie tickets, tv shows... even prescriptions for birth control."

I, for one, think that the meme-mix defining our identity could in itself capture (predict) our behavior in large part, forgoing biographical minutiae. Bonesaw in Worm didn't need precise memories to recreate the Slaughterhouse Nine clones.

Many think we can zoom out from atoms to a connectome; why not zoom out from a connectome to the memes it implements?

Comment author: [deleted] 04 October 2015 03:17:40AM 0 points [-]

Go on and elaborate, but unless you can show some very thorough technical considerations, I just don't see how you're able to claim a mind has low Kolmogorov complexity.

Comment author: Kawoomba 04 October 2015 08:55:29AM *  1 point [-]

"Mind" is a high level concept, on a base level it is just a subset of specific physical structures. The precise arrangement of water molecules in a waterfall, over time, matches if not dwarves the KC of a mind.

That is, if you wanted to recreate precisely this or that waterfall as it precisely happened (with the orientation of each water molecule preserved with high fidelity), the strict computational complexity would be way higher than for a comparatively more ordered and static mind.
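
To put rough numbers on that claim, here is a back-of-envelope sketch of my own; every constant below is an order-of-magnitude assumption for illustration, not a figure from the thread:

```python
import math

# Crude upper bounds on description length: the exact microstate of a
# waterfall (every molecule's position and orientation) vs. a synapse-level
# description of a brain. All constants are rough, assumed figures.

AVOGADRO = 6.022e23
MOLAR_MASS_WATER_G = 18.0

# A small waterfall: ~1 cubic metre (~10^6 g) of water per second, for one hour.
molecules = 1e6 * 3600 / MOLAR_MASS_WATER_G * AVOGADRO
waterfall_bits = molecules * 100          # assume ~100 bits per molecule

# A brain at the synapse level: ~10^15 synapses is a commonly cited rough figure.
brain_bits = 1e15 * 100                   # assume ~100 bits per synapse

print(f"waterfall microstate ~ 10^{round(math.log10(waterfall_bits))} bits")
print(f"brain, synapse level ~ 10^{round(math.log10(brain_bits))} bits")
```

Both figures are loose upper bounds rather than Kolmogorov complexities proper, but the gap of roughly seventeen orders of magnitude is the point.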

The data doesn't care what importance you ascribe to it. It's not as if, say, "power" automatically comes with "hard to describe computationally". On the contrary, allowing a function to make arbitrary code changes is easier to implement than defining precise power limitations (see the difficulty of constraining an AI's utility function).

Then there's the sheer number of mind-phenomena: are you suggesting that adding one necessarily increases complexity? In fact, removing one can increase it as well: if I were to describe a reality in which ceteris is paribus, except that your mind is not actually a mind, then by removing a mind I would have increased overall complexity. That's not even taking into account that there are plenty of mind-templates around already (implicitly, since KC, even though uncomputable, is optimal), and that for complexity considerations, adding another instance of a template doesn't necessarily add much (I'm aware that adding even a few bits already comes with a steep penalty; this comment isn't meant to be exhaustive). See also the alphabet example further on.
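
For concreteness on that "steep penalty" (my own illustration; the specific bit counts are arbitrary), under a Solomonoff-style prior each extra bit of minimal description length halves a hypothesis's weight:

```python
# Each additional bit of minimal description length multiplies the prior
# weight by 1/2, so even a handful of extra bits is a steep penalty.
for extra_bits in (1, 10, 30):
    print(extra_bits, "extra bits -> prior weight shrinks by a factor of", 2 ** extra_bits)
```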

Then there's the illusion that our universe is of low complexity just because the physical laws governing the transition between time-steps are simple. That is mistaken. If we just take the laws and start with a big bang that is not informationally specified in precise detail, we get a multiverse host of possible universes, with our universe nowhere near the beginning of the output, which runs counter to what KC demands. You may say "I don't care, as long as our universe is somewhere in the output, that's fine". But then I propose an even simpler theory of everything: output a long enough sequence of the digits of Pi, and you eventually get our universe somewhere down the line as well. So our universe's actual complexity is enormous, down to the atoms in a stone on a hill on some moon somewhere in the next galaxy. There is a clear trade-off between explanatory power and conciseness. I used to link an old Hutter lecture on that latter topic a few years ago; I can dig it out if you'd like. (ETA: see, for example, the paragraph labeled "A" on page 6 in this paper of his.)

The old argument that |"universe + mind"| > |"universe"| is simplistic and ill-applied. Unlike with probabilities, the sequence ABCDABCDABCDABCD can be less complex than ABCDABCDABCDABC.
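
A minimal sketch of that last point (my own example; the strings and the toy "programs" are just stand-ins for description length):

```python
# Two toy "programs" (Python expressions) and the strings they produce.
# The longer, fully repetitive string has the shorter description.
prog_full      = '"ABCD" * 4'            # produces ABCDABCDABCDABCD (16 chars)
prog_truncated = '"ABCD" * 3 + "ABC"'    # produces ABCDABCDABCDABC (15 chars)

print(len(eval(prog_full)), len(prog_full))            # 16 10
print(len(eval(prog_truncated)), len(prog_truncated))  # 15 18
```

Adding characters to the output made the description shorter, which is exactly the possibility the |"universe + mind"| > |"universe"| argument ignores.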

The list goes on, if you want to focus on some aspect of it we can go into greater depth on that. Bottom line is, if there's a slam dunk case, I don't see it.

Comment author: Inyuki 28 September 2015 05:20:36PM *  0 points [-]

I did try to look. My browser said "Secure Connection Failed".

Ha :) Is that because we use a self-signed SSL cert? Try again. We'll upgrade the cert later.

So, all of hyper-equity can be controlled by 1,000 - 10,000 people?

No, as many people as there are problems (Goals). Potentially infinite.

Comment author: Kawoomba 28 September 2015 08:16:48PM 2 points [-]

If you're looking for gullible recruits, you've come to the wrong place.

Don't lease the Ferrari just yet.
