Comment author: JonahSinick 30 June 2015 05:22:04AM 2 points [-]

I'm sure you're aware that the word "cult" is a strong claim that requires a lot of evidence, but I'd also issue a friendly warning that to me at least it immediately set off my "crank" alarm bells.

Thanks, yeah, people have been telling me that I need to be more careful in how I frame things. :-)

Do you have evidence of legitimate mathematical results or research being hidden/withdrawn from journals or publicly derided, or is it more of an old boy's club that's hard for outsiders to participate in and that plays petty politics to the damage of the science?

The latter, but note that that's not necessarily less damaging than active suppression would be.

Or maybe most social behavior is too cult-like. If so, perhaps don't single out mathematics.

Yes, this is what I believe. The math community is just unusually salient to me, but I should phrase things more carefully.

I question the direction of causation. Historically many great mathematicians have been mentally and socially atypical and ended up not making much sense with their later writings. Either mathematics has always had an institutional problem or mathematicians have always had an incidence of mental difficulties (or a combination of both; but I would expect one to dominate).

Most of the people who I have in mind did have preexisting difficulties. I meant something like "relative to a counterfactual where academia was serving its intended function." People of very high intellectual curiosity sometimes approach academia believing that it will be an oasis, only to find that it is not, and that the structures in place are in fact hostile to them.

This is not what the government should be supporting with taxpayer dollars.

Especially in Thurston's On Proof and Progress in Mathematics I can appreciate the problem of trying to grok specialized areas of mathematics.

What are your own interests?

Comment author: Pentashagon 01 July 2015 06:08:37AM 1 point [-]

The latter, but note that that's not necessarily less damaging than active suppression would be.

I suppose there's one scant anecdote for estimating this: cryptography research seemed to lag a decade or two behind actively suppressed/hidden government research. Granted, there was also less public interest in cryptography until the 80s or 90s, but it seems that suppression can only delay publication, not prevent it.

The real risk of both suppression and exclusion seems to be in permanently discouraging mathematicians who would otherwise make great breakthroughs, since merely affecting the timing of publication/discovery doesn't seem as damaging.

This is not what the government should be supporting with taxpayer dollars.

I think I would be surprised if Basic Income were a less effective strategy than targeted government research funding.

What are your own interests?

Everything from logic and axiomatic foundations of mathematics to practical use of advanced theorems for computer science. What attracted me to Metamath was the idea that if I encountered a paper that was totally unintelligible to me (say Perelman's proof of the Poincaré conjecture or Wiles' proof of Fermat's Last Theorem) I could backtrack through sound definitions to concepts I already knew, and then build my understanding up from those definitions. Even just having a cross-reference of related definitions between various fields would be helpful. I take it that model theory is the place to look for such a cross-reference, and so that is probably the next thing I plan to study.
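To make the backtracking idea concrete, here's a rough sketch in Python. The definition names and the toy dependency graph are made up for illustration (this is not Metamath's actual database format); it just walks from an unfamiliar concept down to concepts one already knows and emits a study order:

```python
# Minimal sketch: walk a (hypothetical) dependency graph of definitions from an
# unfamiliar concept back to concepts already known, then emit a study order.
# The graph and names here are invented; Metamath's real database is far larger
# and encoded differently.

DEFINITIONS = {
    "modular form": ["holomorphic function", "group action"],
    "holomorphic function": ["complex derivative"],
    "group action": ["group", "function"],
    "complex derivative": ["limit", "complex number"],
    "group": ["set", "binary operation"],
}

def study_order(target, known, defs=DEFINITIONS):
    """Return definitions to learn, prerequisites first, stopping at known concepts."""
    order, seen = [], set(known)

    def visit(concept):
        if concept in seen:
            return
        seen.add(concept)
        for prereq in defs.get(concept, []):
            visit(prereq)
        order.append(concept)

    visit(target)
    return order

print(study_order("modular form",
                  known={"set", "function", "limit", "complex number",
                         "binary operation"}))
# -> ['complex derivative', 'holomorphic function', 'group', 'group action', 'modular form']
```

In a real formal database the graph is enormous, but the principle is the same: prerequisites first, stop at what you already know.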

Practically, I realize that I don't have enough time or patience or mental ability to slog through formal definitions all day, and so it would be nice to have something even better. A universal mathematical educator, so to speak. Although I worry that without a strong formal understanding I will miss important results/insights. So my other interest is building the kind of agent that can identify which formal insights are useful or important, which sort of naturally leads to an interest in AI and decision theory.

Comment author: jacob_cannell 30 June 2015 06:35:20PM 0 points [-]

So would we have high-frequency-trading bots outside (or inside) of MRIs shorting the insurance policy value of people just diagnosed with cancer?

In short - yes - you want that information propagated through the market rapidly. It is the equivalent of credit assignment in learning systems. The market will learn to predict the outcome of the MRI - to the degree such a thing is possible.

Also, keep in mind that the insurance policy the patient holds is just one contract, and there could be layers of other financial contracts/bets in play that are not tied to the policy itself (but are correlated with it).

We've known how to cut health care costs and make it more efficient for centuries; let the weak/sick die and then take their stuff and give it to healthy/strong people.

The proposal for using a computational market to solve health research has nothing whatsoever to do with wealth distribution. It obviously requires a government to protect the market mechanisms and enforce the rules, and is compatible with any amount of government subsidies or wealth redistribution. You seem to be conflating market mechanisms with political stances.

I struggle to comprehend a free market that could simultaneously benefit all individual humans' health while being driven by a profit motive.

In theory a market can be used to solve any computational problem, provided one finds the right rules - this is the domain of computational mechanism design, an important branch of game theory.
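To give a concrete flavor of the mechanism, here is a toy sketch of Hanson's logarithmic market scoring rule (LMSR) for a binary question such as "does this scan show cancer?". This is just one standard rule from that literature, not a specific proposal for the health market described above, and all of the numbers (liquidity parameter, trade size) are made up:

```python
import math

# Sketch of a logarithmic market scoring rule (LMSR) market maker for a binary
# outcome such as "this scan shows cancer".  A trader who believes the current
# price is wrong buys shares and moves the price toward their belief; that
# price movement is the credit-assignment step described above.  All numbers
# are illustrative, not a real insurance contract.

class LMSRMarket:
    def __init__(self, liquidity=100.0):
        self.b = liquidity          # higher b = deeper market, prices move more slowly
        self.q_yes = 0.0            # outstanding YES shares
        self.q_no = 0.0             # outstanding NO shares

    def _cost(self, q_yes, q_no):
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        """Current market probability of the YES outcome."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Buy YES shares; returns the amount the market maker charges the trader."""
        before = self._cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self._cost(self.q_yes, self.q_no) - before

market = LMSRMarket(liquidity=100.0)
print(f"prior P(cancer)     = {market.price_yes():.2f}")   # 0.50
cost = market.buy_yes(120)      # a trader who has seen the MRI result buys YES
print(f"trader paid {cost:.1f} for 120 YES shares")
print(f"posterior P(cancer) = {market.price_yes():.2f}")   # ~0.77, moved toward the information
```

Each YES share pays out 1 if the outcome resolves YES, so the informed trader profits, and everyone else immediately sees a price that reflects the new information.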

Comment author: Pentashagon 01 July 2015 05:25:05AM 0 points [-]

You seem to be conflating market mechanisms with political stances.

That is possible, but the existing market has operated under the reins of many a political stance and has basically obeyed the same general rules of economics, regardless of the political rules that people have tried to impose on it.

In theory a market can be used to solve any computational problem, provided one finds the right rules - this is the domain of computational mechanism design, an important branch of game theory.

The rules seem to be the weakest point of the system because they parallel the restrictions that political stances have caused to be placed on existing markets. If a computational market is coupled to the external world then it is probably possible to money-pump it against the spirit of the rules.

One way that a computational market could be unintentionally (and probably unavoidably) coupled to the external market is via status and signalling. Just as gold farmers in online games can sell virtual items to people with dollars, entities within the computational market could sell reputation or other results for real money in the external market. The U.S. FDA is an example of a rudimentary research market with rules that try to encourage the development of affordable, effective drugs, yet pharmaceutical companies spend their money on advertising and patent wars instead of research. When the results of the computational market have economic effects in the wider market, there will almost always be ways of gaming the system to win in the real world at the expense of optimizing the computation. In the worst case, the rule-makers themselves are subverted.

I am interested in concrete proposals to avoid those issues, but to me the problem sounds a lot like the longstanding problem of market regulation. How, specifically, will computational mechanism design succeed where years of social/economic/political trial and error have failed? I'm not particularly worried about coming up with game rules in which rational economic agents would solve a hard problem; I'm worried about embedding those game rules in a functioning micro-economy subject to interference from the outside world.

Comment author: jacob_cannell 24 June 2015 09:51:03PM *  1 point [-]

The current insurance system is not a computational prediction market. You cannot buy/sell mispriced securities, and thus you cannot profit from novel predictive information, and thus there is little to no incentive for insurance companies to fund research to solve health.

Comment author: Pentashagon 30 June 2015 05:58:08AM 0 points [-]

So would we have high-frequency-trading bots outside (or inside) of MRIs shorting the insurance policy value of people just diagnosed with cancer?

tl;dr: If the market does not already have an efficient mechanism for maximizing expected individual health (over all individuals who will ever live) then I take that as evidence that a complex derivative structure set up to purportedly achieve that goal more efficiently would instead be vulnerable to money-pumping.

Or to put a finer point on it: does the current market reward fixing and improving struggling companies, or quickly driving them out of business and cutting them up for transplant into other companies?

Even further: does the current market value and strive to improve industries (companies aggregated by some measurement) that perform weakly relative to other industries? Or does the market tend to favor the growth of strong industries at the expense of the individual businesses making up the weak industries?

We've known how to cut health care costs and make it more efficient for centuries; let the weak/sick die and then take their stuff and give it to healthy/strong people.

I struggle to comprehend a free market that could simultaneously benefit all individual humans' health while being driven by a profit motive. The free market has had centuries to come up with a way to reduce risk to individual businesses for the benefit of its shareholders; something that is highly desired by shareholders, who in fact make up the market. At best, investors can balance risk across companies and industries and hedge with complex financial instruments. Companies, however, buy insurance against uncertain outcomes that might reduce their value in the market. Sole proprietors are advised to form limited liability interests in their own companies purely to offset the *personal* financial risk. I can outlive Pentashagon LLC, but I cannot treat my physical body as an investment vehicle to be abandoned when it under-performs.

Comment author: JonahSinick 27 June 2015 01:33:50AM 2 points [-]

I'm sympathetic to everything you say.

In my experience there's an issue of Less Wrongers being unusually emotionally damaged (e.g. relative to academics) and this gives rise to a lot of problems in the community. But I don't think that the emotional damage primarily comes from the weird stuff that you see on Less Wrong. What one sees is them having borne the brunt of the phenomenon that I described here disproportionately relative to other smart people, often because they're unusually creative and have been marginalized by conformist norms.

Quite frankly, I find the norms in academia very creepy: I've seen a lot of people develop serious mental health problems in connection with their experiences in academia. It's hard to see it from the inside: I was disturbed by what I saw, but I didn't realize that math academia is actually functioning as a cult, based on retrospective impressions, and in fact by implicit consensus of the best mathematicians in the world (I can give references if you'd like).

Comment author: Pentashagon 30 June 2015 04:03:17AM 4 points [-]

I was disturbed by what I saw, but I didn't realize that math academia is actually functioning as a cult

I'm sure you're aware that the word "cult" is a strong claim that requires a lot of evidence, but I'd also issue a friendly warning that to me at least it immediately set off my "crank" alarm bells. I've seen too many Usenet posters who are sure they have a P=/!=NP proof, or a proof that set theory is false, etc., who ultimately claim that no one will listen to them because "the mathematical elite" are a cult. A cult generally engages in active suppression, often defamation, and not simply exclusion. Do you have evidence of legitimate mathematical results or research being hidden/withdrawn from journals or publicly derided, or is it more of an old boy's club that's hard for outsiders to participate in and that plays petty politics to the damage of the science?

Grothendieck's problems look to be political and interpersonal. Perelman's also. I think it's one thing to claim that mathematical institutions are no more rational than any other politicized body, and quite another to claim that it's a cult. Or maybe most social behavior is too cult-like. If so, perhaps don't single out mathematics.

I've seen a lot of people develop serious mental health problems in connection with their experiences in academia.

I question the direction of causation. Historically many great mathematicians have been mentally and socially atypical and ended up not making much sense with their later writings. Either mathematics has always had an institutional problem or mathematicians have always had an incidence of mental difficulties (or a combination of both; but I would expect one to dominate).

Especially in Thurston's On Proof and Progress in Mathematics I can appreciate the problem of trying to grok specialized areas of mathematics. The terminology and symbology are opaque to the uninitiated. It reminds me of section 1 of the Metamath Book, which expresses similar unhappiness with the state of knowledge between specialist fields of mathematics and the general difficulty of learning mathematics. I had hoped that Metamath would become more popular and tie various subfields together through unifying theories and definitions, but as far as I can tell it languishes as a hobbyist project for a few dedicated mathematicians.

Comment author: JonahSinick 29 June 2015 07:30:37AM *  0 points [-]

I'll be writing more about this later.

The most scary thing to me is that the most mathematically talented students are often turned off by what they see in math classes, even at the undergraduate and graduate levels. Math serves as a backbone for the sciences, so this may be badly undercutting scientific innovation at a societal level.

I honestly think that it would be an improvement on the status quo to stop teaching math classes entirely. Thurston characterized his early math education as follows:

I hated much of what was taught as mathematics in my early schooling, and I often received poor grades. I now view many of these early lessons as anti-math: they actively tried to discourage independent thought. One was supposed to follow an established pattern with mechanical precision, put answers inside boxes, and "show your work," that is, reject mental insights and alternative approaches.

I think that this characterizes math classes even at the graduate level, only at a higher level of abstraction. The classes essentially never offer students exposure to free-form mathematical exploration, which is what it takes to make major scientific discoveries with significant quantitative components.

Comment author: Pentashagon 30 June 2015 03:24:42AM 2 points [-]

I distinctly remember having points taken off of a physics midterm because I didn't show my work. I think I dropped the exam in the waste basket on the way out of the auditorium.

I've always assumed that the problem is threefold: generating a formal proof is NP-hard, getting the right answer via shortcuts can include cheating, and the faculty's time is limited. Professors/graders do not have the capacity to rigorously demonstrate to themselves that the steps a student has written down actually pinpoint the unique answer. Without access to the student's mind, graders are unable to determine whether students cheat; a student's being able to memorize and/or reproduce the exact steps of a calculation significantly decreases the likelihood of cheating. Even if graders could do one or both of the previous for a single student, they are not 30x or 100x as smart as their students, making it impractical to repeat the process for every student.
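A toy illustration of the asymmetry behind the first point (a hypothetical sketch, not a claim about how graders actually work): checking a proposed answer against the constraints is cheap, while finding one without being shown the steps can require brute-force search.

```python
from itertools import product

# Toy illustration of the check-vs-search asymmetry behind "show your work":
# verifying a proposed assignment against a CNF formula takes one pass, while
# finding an assignment by brute force takes up to 2**num_vars attempts.
# The formula is made up: (x1 or not x2) and (x2 or x3) and (not x1 or not x3),
# encoded as clauses of (variable, sign) pairs.

FORMULA = [[(1, True), (2, False)],
           [(2, True), (3, True)],
           [(1, False), (3, False)]]

def check(assignment, formula=FORMULA):
    """Cheap verification: every clause must contain at least one satisfied literal."""
    return all(any(assignment[var] == sign for var, sign in clause) for clause in formula)

def search(num_vars=3, formula=FORMULA):
    """Expensive in general: enumerate all 2**num_vars assignments."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(assignment, formula):
            return assignment
    return None

print(search())                               # finds e.g. {1: False, 2: False, 3: True}
print(check({1: True, 2: True, 3: False}))    # checking a given answer is fast: True
```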

That said, I had some very good mathematics teachers in higher level courses who could force students to think, and one in particular who could encourage/demand novelty from students simply by asking them to solve problems that they hadn't yet learned to solve. I didn't realize the power of the latter approach until later (and at the time everyone complained about exams with a median score well under 50%), but his classes were always my favorite.

Comment author: RichardKennaway 09 June 2015 12:46:20PM 1 point [-]

I don't know if I would put it this way, just that if you cannot predict someone's or something's behavior with any degree of certainty, they seem more agenty to you.

The weather does not seem at all agenty to me. (People in former times have so regarded it; but we are not talking about former times.)

Comment author: Pentashagon 12 June 2015 07:55:43AM 1 point [-]

We have probabilistic models of the weather: ensemble forecasts. They're fairly accurate. You can plan a picnic using them. You cannot use probabilistic models to predict the conversation at the picnic (beyond that it will be about "the weather", "the food", etc.).

What I mean by computable probability distribution is that it's tractable to build a probabilistic simulation that gives useful predictions. An uncomputable probability distribution is intractable to build such a simulation for. Knightian Uncertainty is a good name for the state of not being able to model something, but not a very quantitative one (and arguably I haven't really quantified what makes a probabilistic model "useful" either).
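Here's a minimal sketch of what I mean by "tractable to build a probabilistic simulation": run an ensemble of slightly perturbed simulations and read a probability off the ensemble. The dynamics below are a made-up bounded random walk, not a real atmospheric model, and every parameter is invented; the point is only that the distribution can be approximated cheaply by sampling.

```python
import random

# Toy "ensemble forecast": the dynamics are a made-up bounded random walk, not
# a real atmospheric model.  The point is only that a useful probability, e.g.
# of rain at picnic time, can be approximated cheaply by running many slightly
# perturbed simulations -- a "computable" distribution in the sense above.

def simulate_humidity(initial, hours, rng):
    """Evolve a toy humidity value for a few hours with random perturbations."""
    h = initial
    for _ in range(hours):
        h += rng.gauss(0.0, 0.05)      # unmodeled small-scale dynamics
        h = min(max(h, 0.0), 1.0)      # keep it in [0, 1]
    return h

def chance_of_rain(initial=0.6, hours=6, members=1000, threshold=0.8, seed=0):
    """Fraction of ensemble members that end up above the 'rain' threshold."""
    rng = random.Random(seed)
    rainy = 0
    for _ in range(members):
        perturbed = initial + rng.gauss(0.0, 0.02)   # uncertainty in the initial observation
        if simulate_humidity(perturbed, hours, rng) >= threshold:
            rainy += 1
    return rainy / members

print(f"P(rain at picnic time) ~= {chance_of_rain():.2f}")
```

No analogous cheap sampler exists for the picnic conversation, which is the distinction I'm gesturing at.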

I think the computability of probability distributions is probably the right way to classify relative agency, but we also tend to recognize agency through goal detection. We think actions are "purposeful" because they correspond to actions we're familiar with in our own goal-seeking behavior: searching, exploring, manipulating, energy-conserving motion, etc. We may even fail to recognize agency in systems that use actions we aren't familiar with or whose goals are alien (e.g. are trees agents? I'd argue yes, but most people don't treat them like agents compared to, say, weeds). The weather's "goal" is to reach thermodynamic equilibrium using tornadoes and other gusts of wind as its actions. It would be exceedingly efficient at that if it weren't for the pesky sun. The sun's goal is to expand, shed some mass, then cool and shrink into its own final thermodynamic equilibrium. It will Win unless other agents interfere or a particularly unlikely collision with another star happens.

Before modern science no one would have imagined those were the actual goals of the sun and the wind and so the periodic, meaningful-seeming actions suggested agency toward an unknown goal. After physics the goals and actions were so predictable that agency was lost.

Comment author: Pentashagon 08 June 2015 10:59:20PM *  0 points [-]

So agentiness is having an uncomputable probability distribution?

Comment author: Eitan_Zohar 29 May 2015 11:59:11AM *  0 points [-]

What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness.

Also, another thought- it may take an AI to solve philosophy and the nature of the universe, but it may not be far beyond the capacity of the human brain to understand it.

I appreciate the long response.

Comment author: Pentashagon 30 May 2015 01:45:12AM 0 points [-]

What's wrong with hive minds? As long as my 'soul' survives, I wouldn't mind being part of some gigantic consciousness.

A hive mind can quickly lose a lot of old human values if the minds continue past the death of individual bodies. Additionally, values like privacy and self-reliance would be difficult to maintain. Also, things we take for granted, like being able to surprise friends with gifts or having interesting discussions while getting to know another person, would probably disappear. A hive mind might be great if it was formed from all your best friends, but joining a hive mind with all of humanity? Maybe after everyone is your best friend...

Comment author: Pentashagon 30 May 2015 01:42:02AM 2 points [-]

You are a walking biological weapon; try to sterilize yourself and your clothes as much as possible first, and quarantine yourself until any viruses novel to the 13th century are gone. Try to avoid getting smallpox and any other prevalent ancient disease you're not immune to.

Have you tried flying into a third world nation today and dragging them out of backwardness and poverty? What would make it easier in the 13th century?

If you can get past those hurdles, the obvious benefits are mathematics (Arabic numerals, algebra, calculus) and standardized measures (bonus points if you can reconstruct the metric system fairly accurately), optics, physics, chemistry, metallurgy, electricity, and biology. For physics specifically, the ability to do statics for construction, ballistics for cannons, and thermodynamics for engines and other machines (and lubrication and hydraulics are important too). High-carbon steel for machine tools, the assembly line, and interchangeable parts. Steel-reinforced concrete would be nice, but not a necessity. Rubber. High-quality glass for optics, necessary for microscopes for biology to progress past "We don't believe tiny organisms make us sick". The scientific method (probably goes without saying) to keep things moving instead of turning back into alchemy and bloodletting.

Electricity and magnetism eventually; batteries won't cut it for industrial-scale use of electricity (electrolysis, lighting for longer working hours, arc furnaces for better smelting), so building workable generators that can be connected to steam engines is vital.

Other people have mentioned medicine, which is pretty important from an ethical perspective, but it would be difficult to reverse centuries of bad practice. Basic antibiotics and sterilization are probably the best you'd be able to do, but without the pharmaceutical industry there's a lot of stuff you can't do. If you know how to make ether, at least get anesthesia started.

Comment author: Eitan_Zohar 25 May 2015 09:44:57AM *  4 points [-]

I recently read this essay and had a panic attack. I assume that this is not the mainstream of transhumanist thought, so if a rebuttal exists it would save me a lot of time and grief.

Comment author: Pentashagon 29 May 2015 06:40:18AM 1 point [-]

I find myself conflicted about this. I want to preserve my human condition, and I want to give it up. It's familiar, but it's trying. I want the best of both worlds; the ability to challenge myself against real hardships and succeed, but also the ability to avoid the greatest hardships that I can't overcome on my own. The paradox is that solving the actual hardships like aging and death will require sufficient power to make enjoyable hardships (solving puzzles, playing sports and other games, achieving orgasm, etc.) trivial.

I think that one viable approach is to essentially live vicariously through our offspring. I find it enjoyable watching children solve problems that are difficult for them but are now trivial for me, and I think that the desire to teach skills and to appreciate the success of (for lack of a better word) less advanced people learning how to solve the same problems that I've solved could provide a very long sequence of Fun in the universe. Pre-singularity humans already essentially do this. Grandparents still enjoy life despite having solved virtually all of the trivial problems (and facing imminent big problems), and I think I'd be fine being an eternal grandparent to new humans or other forms of life. I can't extrapolate that beyond the singularity, but it makes sense that if we intend to preserve our current values we will need someone to be in the situation where those values still matter, and if we can't experience those situations ourselves then the offspring we care about are a good substitute. The morality of creating children for this purpose may be an issue.

Another solution is a walled garden run by FAI that preserves the trivial problems humans like solving but solves the big problems itself. This has a stronger possibility for value drift, and I think people would value life a bit less if they knew it was ultimately a video game.

It's also possible that upon reflection we'll realize that our current values also let us care about hive-minds in the same way we care about our friends and family now. We would be different, alien to our present selves, but with the ability to trace our values back to our present state and see that at no point did we sacrifice them for expediency or abandon them for their triviality. This seems like the least probable solution simply because our values are not special; they arose in our ancestral environment because they worked. That we enjoy them is an accident, and that they could fully encompass the post-singularity world seems a bit miraculous.

As a kid I always wondered about this in the context of religious heaven. What could a bunch of former humans possibly do for eternity that wouldn't become terribly boring or involve complete loss of humanity? I could never answer that question, so perhaps it's an {AI,god}-hard problem to coherently extrapolate human values.
