
Comment author: Benquo 07 December 2016 08:21:47AM 0 points [-]

If there is opportunity for more research, shouldn't the answer be for GiveWell to expand or be duplicated?

If you believe in constant or increasing returns to scale and largely overlapping values, sure. It's not obvious to me why we shouldn't expect GiveWell's highly centralized model to outperform a model more like academia, with little centralization but lots of criticism.

Comment author: Benquo 07 December 2016 08:18:25AM 0 points [-]

There are a few ways you might expect to be able to do better:

  • There's effectively a size floor on the things GiveWell can afford to look at, because of the amount of money they want to move and limited staff time.
  • GiveWell recommendations are tailored for a set of preferences that may not be the same as yours, e.g. a preference for high levels of confidence and easy-to-explain evidence such as RCTs, even at the expense of EV.
  • Some pieces of information are easier for you to learn than to communicate or for others to verify. For instance, it might make a lot of sense for you to trust a friend you've known since childhood a lot, for another friend to trust them a little based on your say-so, but for strangers on the internet to trust them not at all based on your say-so.

Aceso Under Glass's post about Tostan is a good example of novel research that has some overlap with the second and third considerations.

I agree that in general you should contribute your research to the commons.

Comment author: Kyre 07 December 2016 04:45:34AM 1 point [-]

Doing theoretical research that ignores practicalities sometimes turns out to be valuable in practice. It can open a door to something you assumed to be impossible, or save a lot of wasted effort on a plan that turns out to have an impossible sub-problem.

A concrete example of the first category might be something like quantum error correcting codes. Prior to that theoretical work, a lot of people thought that quantum computers were not worth pursuing because noise and decoherence would be an insurmountable problem. Quantum fault tolerance theorems did nothing to help solve the very tough practical problems of building a quantum computer, but they did show people that it might be worth pursuing - and here we are 20 years later closing in on practical quantum computers.

I think source code based decision theory might have something of this flavour. It doesn't address all those practical issues such as how one machine comes to trust that another machine's source code is what it says. That might indeed scupper the whole thing. But it does clarify where the theoretical boundaries of the problem are.

You might have thought "well, two machines could co-operate if they had identical source code, but that's too restrictive to be practical". But it turns out that you don't need identical source code if you have the source code and can prove things about it. Then you might have thought "ok, but those proofs will never work because of non-termination and self-reference" ... and it turns out that that is wrong too.

Theoretical work like this could inform you about what you could hope to achieve if you could solve the practical issues; and conversely what problems are going to come up that you are absolutely going to have to solve.

Comment author: plethora 07 December 2016 04:39:09AM 0 points [-]

1) I'm fairly intelligent, completely unskilled (aside from writing, which I have some experience in, but not the sort that I could realistically put on a resume, especially where I live), and I don't like programming. What skills should I develop for a rewarding career?

2) On a related note, the best hypothetical sales pitch for EA would be that it can provide enough career help (presumably via some combination of statistically-informed directional advice and networking, mostly the latter) to more than make up for the 10% pledge. I don't know how or whether this could be demonstrated, but do EA people think this is worth pursuing, or is their strategy still to use 99% of their members for publicity to attract the odd multi-millionaire?

Comment author: Douglas_Knight 07 December 2016 03:52:08AM *  0 points [-]

--corrupting-- teaching crucial thinking skills to the youth!

I think that would be better as

--corrupting the youth-- teaching crucial thinking skills!

because it is compatible with the parse tree.

Comment author: James_Miller 07 December 2016 03:51:33AM 1 point [-]

Thanks for the positive feedback on my interviews.

Comment author: Douglas_Knight 07 December 2016 03:49:07AM 1 point [-]

Thus, individual small donors with time to do research should expect to be able to find highly cost-effective giving opportunities that GiveWell missed.

This does not seem at all plausible to me. If your research is good, shouldn't you contribute it to the commons? And if you can harness the commons, you should care about the room for funding. GiveWell does what it does for a reason and if you compete with it, you'll become similar. If there is opportunity for more research, shouldn't the answer be for GiveWell to expand or be duplicated? And you do give examples where the answer seems to be to duplicate it, but shift its methods and goals. But none of them seem to have to do with the size of the donor, or the ratio of research to funding.

If there is a proliferation of researchers, they won't have track-records to harness large funding. But how do the researchers themselves know that their research is better than that of the alternative researchers?

Later you reveal that what you were talking about is local improvements. Why not say that up front, if that is what you mean? And I think that the relevant difference is not in the quote, but the local knowledge that you are unable to communicate to another evaluator.

Comment author: entirelyuseless 07 December 2016 03:27:40AM 0 points [-]

Thanks. Although I am unlikely to change this particular post, you might be right about it being better to be more explicit in presenting the parts of the text. I'll think about it.

Comment author: entirelyuseless 07 December 2016 03:24:22AM 0 points [-]

Faith in an idea is a different sense of the word. You could say that you have faith in the idea of atheism in the way some people might say they have faith in the idea of progress. And the "faith" part of that might mean that it is resistant to something that seems contrary, e.g. someone might say that he has faith in progress despite wars and wickedness that seem regressive, and someone might say he has faith in the idea of atheism despite the communist atheists who did very evil things.

But that is very different from religious faith, and anyway it is not opposite to being supported by reason -- someone who has faith in progress, or in atheism in that way, could still say that they have reasons for their belief.

I don't think many people think faith means something that is unsupported by evidence. Rather, they mean they have a commitment to it. And maybe the commitment goes beyond the evidence, but it doesn't mean there is no evidence at all.

I agree with you that when you see people calling atheism a religion or saying that atheists have faith, they usually mean to admit that religion or faith has something bad about it. The idea is that "sure, these things have something bad about them, but no one avoids that bad thing anyway, so it is better to be honest about it." And that has some truth and some falsity: it is probably true that no one completely avoids the bad things that are found in religion, but that doesn't make everything equal, because you can avoid those things to some extent.

Comment author: NatashaRostova 07 December 2016 03:17:11AM 1 point [-]

The only downside is it tends to be correlated with an identity that people reject off hand. I know lots of alt-right/paleo-con sites use hatefacts, and sometimes play fast and loose with the term.

PS: Huge fan of your interview series. I've listened to them all!

Comment author: btrettel 07 December 2016 01:29:35AM 0 points [-]

I've had a good experience with MyFitnessPal with respect to speed, and find the features sufficient for my purposes. I manually enter my exercise data, so I cannot comment on automatic exercise tracking.

I found FitDay to be annoyingly slow, but I used the site for years before MyFitnessPal.

Comment author: plethora 07 December 2016 12:12:56AM 0 points [-]

I have a very low bar for 'interesting discussion', since the alternative for what to do with my spare time when there's nothing going on IRL is playing video games that I don't particularly like. But it's been months since I've seen anything that meets it.

It seems like internet people think insight demands originality. This isn't true. If you look at popular long-form 'insight' writers, even Yudkowsky (especially Yudkowsky), most of what they do is find earlier books and file the serial numbers off. It could be a lot easier for us to generate interesting discussion if we read more books and wrote about them, like this.

Comment author: biker19 06 December 2016 11:09:42PM 0 points [-]

Hi, everyone! This is my first request here.

I am looking for the proceedings of the past IHPVA scientific symposiums. Unfortunately, they are not available in digital format. I'd be extremely grateful if anyone could scan these for me. (Also requested in /r/Scholar.)

Comment author: morganism 06 December 2016 10:45:12PM 1 point [-]

"There is one more difference between you and the average user that’s even more damaging to your ability to predict what will be a good user interface: skills in using computers, the Internet, and technology in general. Anybody who’s on a web-design team or other user experience project is a veritable supergeek compared with the average population. This not just true for the developers. Even the less-technical team members are only “less-technical” in comparison with the engineers. They still have much stronger technical skills than most normal people."

Paper with stats:

http://www.oecd-ilibrary.org/education/skills-matter_9789264258051-en

Comment author: RomeoStevens 06 December 2016 10:36:41PM 0 points [-]

That is a very different beast from deliberate practice of Feldenkrais, for example.

Comment author: morganism 06 December 2016 09:59:14PM 0 points [-]

The result of keeping the Carrier plant in Indiana open is a $16 million investment to drive down the cost of production, so as to reduce the cost gap with operating in Mexico. (I think there was also $9k per employee per year pledged by the state, plus retraining and extended benefits.)

What does that mean? Automation. What does that mean? Fewer jobs, Hayes acknowledged.

From the transcript (emphasis added):

GREG HAYES: Right. Well, and again, if you think about what we talked about last week, we're going to make a $16 million investment in that factory in Indianapolis to automate to drive the cost down so that we can continue to be competitive. Now is it as cheap as moving to Mexico with lower cost of labor? No. But we will make that plant competitive just because we'll make the capital investments there.

JIM CRAMER: Right.

GREG HAYES: But what that ultimately means is there will be fewer jobs.

The general theme here is something we've been writing about a lot at Business Insider. Yes, low-skilled jobs are being lost to other countries, but they're also being lost to technology.

http://www.businessinsider.com/united-tech-ceo-says-trump-deal-will-lead-to-more-automation-fewer-jobs-2016-12

Comment author: Douglas_Knight 06 December 2016 08:50:10PM 0 points [-]

To sum up: invent a single sentence to summarize your opponent's position so that you condemn them as naive. For example, what you did to Phil Goetz.

Comment author: Daniel_Burfoot 06 December 2016 06:53:37PM 0 points [-]

It is a webapp with a vanilla Java/Javascript/SQLite stack. Using SQLite instead of a full DB engine makes things a lot simpler, and is appropriate for a single-user/small team use case.

Comment author: Lumifer 06 December 2016 06:37:18PM 0 points [-]

You can certainly train animals to override their instincts and then measure how well that goes. I don't know how much it would tell you about "rationality", though...

Comment author: Lumifer 06 December 2016 06:33:58PM 0 points [-]

What's your hardware/OS platform?

Comment author: ChristianKl 06 December 2016 05:59:57PM 0 points [-]

I don't know German, but it sounds like the thing you mean by "Bildung" is something like "self-development".

As far as I understand, especially in the US, the term self-development is bound up with the American dream. It's about developing capabilities to turn the dream into reality.

On the other hand, "Bildung" can be for its own sake and also includes art and literature that have no practical use.

Comment author: ChristianKl 06 December 2016 05:56:57PM 0 points [-]

It's probably not the best example, but I stayed with the original example.

Comment author: ChristianKl 06 December 2016 05:56:20PM 1 point [-]

If "sophisticated" in this usage just means "complex", I'm not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.

I haven't argued that A is just better than B.

I try to root out beliefs that follow that general form

Yes, and I see that as a flaw that's the result of thinking of everything in Bayesian terms.

"the heart pumps blood" is a testable factual statement, and a basic observation, which semantically carries all the same useful information

When the lungs expand that process also leads to pumping of blood. Most processes that change the pressure somewhere in the body automatically pump blood as a result. The fact that the function of the heart is to pump blood has more meaning than just that it pumps blood.

Comment author: ChristianKl 06 December 2016 05:55:24PM 0 points [-]

LessWrong isn't simply a discussion board. It's a blog/discussion board hybrid. Various posts do get read long after they are written.

Comment author: ChristianKl 06 December 2016 05:54:30PM 1 point [-]

A lot of belief boils down to trust.

You can believe that certain religious beliefs about miracles are true because you consider the people who told you that they are true to be trustworthy authorities.

We usually believe that scientific papers are accurate because we trust the authors and the scientific community not to forge the results.

When Carson speaks about the pyramids storing grain, he can't defend that belief with an appeal to authority.

Comment author: sarahconstantin 06 December 2016 05:42:24PM 0 points [-]

I've been frustrated with available self-tracking tools. (Food trackers are slooooow and interface poorly with exercise trackers; I have never yet found a mood tracker that allows you to look at statistics; various other things like that.)

Comment author: sarahconstantin 06 December 2016 05:40:02PM 1 point [-]

I wonder if there's any way to measure rationality in animals.

Bear with me for a second. The Cognitive Reflection Test is a measure of how well you can avoid the intuitive-but-wrong answer and instead make the more mentally laborious calculation. The Stroop test is also a measure of how well you can avoid making impulsive mistakes and instead force yourself to focus only on what matters. As I recall, the "restrain your impulses and focus your thinking" skill is a fairly "biological" one -- it's consistently associated with activity in particular parts of the brain, influenced by drugs, and impaired in conditions like ADHD.

Could we design -- or have we already designed -- a variant of this made out of mazes that rats could run through?

I might look into this more carefully myself, but does anyone know off the top of their heads?

Comment author: scarcegreengrass 06 December 2016 05:23:53PM 0 points [-]

This seems to me to be like a counterpart of 'keep your identity small'. It's healthy to keep the identity inside your brain small, and it can be healthy to keep the identity you present to your audience small too.

Comment author: scarcegreengrass 06 December 2016 05:20:39PM 1 point [-]

Especially for the most stupid claim of a prolific writer, i.e. a blogger.

Comment author: James_Miller 06 December 2016 05:10:30PM 4 points [-]

It goes well for me when I use the term because it starts a conversation about what the term means, and most people agree in the abstract that you shouldn't dislike someone for believing in facts.

Comment author: Dagon 06 December 2016 04:34:57PM 0 points [-]

You're right in noticing that public belief signaling is somewhat in conflict with private truth-seeking. Like it or not, we evaluate people's competency based on their stated beliefs, which tend to cluster. Combine this with the fact that deviations from common clusters carry more information than membership in those clusters, and you get today's world.

I think you point out the reasons that it can be correct to judge people more harshly for weirder beliefs (as in less common, not as in less plausible). Someone claiming a common belief might be doing so just to pander to the masses, while someone claiming a weird belief probably actually believes it deeply.

Comment author: Lumifer 06 December 2016 03:44:35PM *  0 points [-]

tl;dr: Social pressure is a thing.

Holding non-mainstream views has always been dangerous (regardless of their truth value). Nowadays, perhaps less so than was typical historically.

Comment author: Lumifer 06 December 2016 03:43:06PM 4 points [-]

can be a good way to disarm potential critics

I don't know why they would be disarmed by that. You're just inviting them to call you evil because you're so full of hate X-/

Comment author: Lumifer 06 December 2016 03:39:41PM 1 point [-]

A true sophisticate would apply sophistry everywhere but modulate it to make it appear that she possesses σοφία where she needs to show it and that she is a simpleton where it suits her :-P

Comment author: Lumifer 06 December 2016 03:35:51PM 0 points [-]

But you usually already have an intuitive idea of what they are

The point is that different classes of attackers have very different capabilities. Consider e.g. a crude threat model which posits five classes:

  • Script kiddies randomly trawling the 'net for open vulnerabilities
  • Competent hackers specifically targeting you
  • As above, but with access to your physical location
  • People armed with subpoenas (e.g. lawyers or cops)
  • Black-ops department of a large nation-state

A typical business might then say "We're going to defend against 1-3 and we will not even try to defend against 4-5. We want to be sure 1 gets absolutely nowhere, and we will try to make life very difficult for 3 (but no guarantees)". That sounds like a reasonable starting point to me.

Comment author: Daniel_Burfoot 06 December 2016 03:03:52PM 1 point [-]

I have made a lot of progress in the last two years by developing a suite of simple productivity/life-logging tools for myself. The tools are technologically simple, but because they are customized specifically to me, they are quite useful and useable. Unlike a lot of similar one-size-fits-all tools, my tools have all and only the features I want. The suite includes:

  • TODO list (with a priority calculator that increases the priority of older tasks, so I don't get bogged down)
  • Chore logging (remind me when it's time to get a haircut, clean my bathtub, etc)
  • Finance analysis (download CSV files from my bank account, categorize them semi-automatically, aggregate them)
  • Pomodoro system
  • Activity Log (keep track of what I did each day)
  • Junk Food/Alcohol consumption tracker

I recommend that programming-savvy people try out building their own productivity tools. But I also wanted to poll people about whether they would pay me to develop some of these tools for them:
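For instance, here is a minimal sketch (my own guess at one way to do it, not the actual implementation; the function and parameter names are hypothetical) of the kind of aging-priority calculation the TODO item above describes, in Python:

    # Toy priority calculator: older tasks get a small daily bonus so they
    # eventually float to the top instead of getting buried.
    import datetime

    def effective_priority(base_priority, created, age_boost_per_day=0.1):
        """Base priority plus a bonus for each day the task has been waiting."""
        age_days = (datetime.date.today() - created).days
        return base_priority + age_boost_per_day * age_days

    tasks = [
        {"name": "renew passport", "priority": 3.0, "created": datetime.date(2016, 11, 1)},
        {"name": "clean bathtub",  "priority": 5.0, "created": datetime.date(2016, 12, 4)},
    ]

    # Sort so neglected tasks keep rising until you do them or explicitly drop them.
    for task in sorted(tasks,
                       key=lambda t: effective_priority(t["priority"], t["created"]),
                       reverse=True):
        print(task["name"])

The exact boost schedule matters less than the fact that an old task's priority keeps rising rather than staying frozen at its original value.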


Comment author: James_Miller 06 December 2016 02:59:26PM 4 points [-]

Conservatives have the term "hate facts" for true statements that the left considers it hateful for anyone to believe to be true. Calling something a "hate fact" yourself can be a good way to disarm potential critics who would otherwise make the mistake of assuming what you are saying is wrong because it's distasteful.

Comment author: turchin 06 December 2016 02:19:41PM 2 points [-]

I think we should also look not at the belief itself, but at the way it is presented: for example, whether a person knows that his belief is atypical and that publicly claiming it could damage his reputation.

For example, if one says: "I give 1 percent probability to the very unusual idea that the pyramids were built for X, because I read Y", it signals well about his intelligence.

Another thing is that if we search a person's entire internet history for the most stupid claim he ever made, we will be biased toward underestimating his intelligence.

Comment author: niceguyanon 06 December 2016 02:05:36PM 1 point [-]

I'm guilty of over-updating towards stupid/crazy whenever someone has a cranky belief. I was on board with the bullying of Ben Carson, but in hindsight the man is a neurosurgeon; I'm pretty sure he's smarter than me.

Comment author: chaosmage 06 December 2016 01:52:05PM 0 points [-]

This is quite helpful, thanks!

Comment author: TheAncientGeek 06 December 2016 12:49:30PM 0 points [-]

It's not helpful to define the morally good as the "good, period", without an explanation of "good, period". You are defining a more precise term using a less precise one, which isn't the way to go.

In response to comment by ig0r on Finding slices of joy
Comment author: Kaj_Sotala 06 December 2016 12:45:37PM 0 points [-]

Yeah, I've done Vipassana which I'm pretty sure has made the practice a lot easier.

Comment author: Kaj_Sotala 06 December 2016 12:44:56PM 1 point [-]

But... yeah, even this article could be considered "yet another epiphany", unless people will actually use it in their lives. And we have no evidence that someone actually used it; only that many people liked seeing it.

I wonder how much would it take to bring this to more productive level; to actually make people use the stuff.

Real-life workshops and study groups. :-)

I'm not even kidding here. This is basically the reason for why you didn't get a writeup of this from CFAR earlier: actually teaching the stuff in person is so much more effective in getting people to use it than just explaining it online is.

Comment author: ingive 06 December 2016 12:22:56PM 1 point [-]

[link] https://universe.openai.com/

Comment author: TheAncientGeek 06 December 2016 12:19:02PM 0 points [-]

It is an objective fact that certain people have won elections and that others have not, for example, even if it doesn't change them physically.

No, it's intersubjective. Winning and elections aren't in the laws of physics. You can't infer objective from not-subjective.

In this sense, it is true that every meaningful distinction is based on something objective, since otherwise you would not be able to make the distinction in the first place

You need to be more granular about that. It is true that you can't recognise novel members of an open-ended category (cats and dogs) except by objective features, and you can't do that because you can't memorise all the members of such a set. But you can memorise all the members of the set of Senators. So objectivity is not a universal rule.

Comment author: RainbowSpacedancer 06 December 2016 11:02:47AM 0 points [-]

I've been using a backlog, though I've never seen Forster's system, and have found it useful. I'm glad to see it made explicit. I also think you are right on the money in trying to run a middle path between the rigidity of a set daily task list and the lack of priority in GTD's massive list of next actions. There's a lot of insight here, thank you for sharing. My one criticism would be to add some order to it to enhance readability; it's a wall of text right now.

Comment author: RowanE 06 December 2016 10:23:14AM 0 points [-]

"Oh, that's nice."

They wouldn't exactly be accepting the belief as equally valid; religious people already accept that people of other religions have a different faith than they do, and on at least some level they usually have to disagree with "other religions are just as valid as my own" to even call themselves believers of a particular religion, but it gets you to the point of agreeing to disagree.

Comment author: sen 06 December 2016 09:23:08AM *  0 points [-]

Or is it that a true sophisticate would consider where and where not to apply sophistry?

Comment author: sen 06 December 2016 09:11:27AM *  0 points [-]

Information on the discussion board is front-facing for some time, then basically dies. Yes, you can use the search to find it again, but that becomes less reliable as discussion of TAPs increases. It's also antithetical to the whole idea behind TAP.

The wiki is better suited for acting as a repository of information.

Comment author: ProofOfLogic 06 December 2016 08:48:21AM 0 points [-]

Seems there's no way to edit the link, so I have to delete.

Comment author: MrMind 06 December 2016 08:27:21AM 0 points [-]

For example, in Nature's article about AlphaGo: page 485, picture a: it says that a reinforcement-learning rollout network is used to produce another round of data, which is then used to train the value network.
Page 486, second column, the third formula: the valuations of the two networks (the fast rollout policy and the value network) for the current position are averaged to give a final score for every possible next move, and then the most valuable move is chosen.
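As I read it, the combination step looks roughly like the following sketch (simplified from the paper, not AlphaGo's actual code; leaf_value, pick_move, and the default mixing weight are my own illustrative names and numbers):

    # Blend the value network's estimate of a position with the outcome of a fast
    # rollout from it, then pick the move whose resulting position scores highest.

    def leaf_value(value_net_estimate, rollout_outcome, mixing_weight=0.5):
        """(1 - lambda) * value_net_estimate + lambda * rollout_outcome."""
        return (1 - mixing_weight) * value_net_estimate + mixing_weight * rollout_outcome

    def pick_move(candidates):
        """candidates: iterable of (move, value_net_estimate, rollout_outcome)."""
        return max(candidates, key=lambda c: leaf_value(c[1], c[2]))[0]

    # Made-up numbers: the value network prefers Q16, but the rollout lost from there.
    print(pick_move([("D4", 0.62, 1.0), ("Q16", 0.70, -1.0)]))  # -> D4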

Comment author: ChristianKl 06 December 2016 07:42:01AM 1 point [-]

Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.

David Chapman brings up the example of an algorithm that he wrote to solve a previously unsolved AI problem, one that worked without probability but with logic.

In biology, people who build knowledge bases find it useful to allow storing knowledge like "The function of the heart is to pump blood". If I'm having a discussion on Wikidata with another person about whether X is a subclass or an instance of Y, probability matters little.

Comment author: sen 06 December 2016 07:12:37AM *  0 points [-]

I don't understand what point you're making with the computer, as we seem to be in complete agreement there. Nothing about the notion of ideals and definitions suggests that computers can't have them or their equivalent. It's obvious enough that computers can represent them, as you demonstrated with your example of natural numbers. It's obvious enough that neurons and synapses can encode these things, and that they can fire in patterned ways based on them because... well, that's what neurons do, and neurons seem to be doing the bulk of the heavy lifting as far as thinking goes.

Where we disagree is in saying that all concepts that our neurons recognize are equivalent and that they should be reasoned about in the same way. There are clearly some notions that we recognize as being valid only after seeing sufficient evidence. For these notions, I think bayesian reasoning is perfectly well-suited. There are also clearly notions we recognize as being valid for which no evidence is required. For these, I think we need something else. For these notions, only usefulness is required, and sometimes not even that. Bayesian reasoning cannot deal with this second kind because their acceptability has nothing to do with evidence.

You argue that this second kind is irrelevant because these things exist solely in people's minds. The problem is that the same concepts recur again and again in many people's minds. I think I would agree with you if we only ever had to deal with a physical world in which people's minds did not matter all that much, but that's not the world we live in. If you want to be able to reliably convey your ideas to others, if you want to understand how people think at a more fundamental level, if you want your models to be useful to someone other than yourself, if you want to develop ideas that people will recognize as valid, if you want to generalize ideas that other people have, if you want your thoughts to be integrated with those of a community for mutual benefit, then you cannot ignore these abstract patterns, because these abstract patterns constitute such a vast amount of how people think.

It also, incidentally, has a tremendous impact on how your own brain thinks and the kinds of patterns your brain lets you consciously recognize. If you want to do better generalizing your own ideas in reliable and useful ways, then you need to understand how they work.

For what it's worth, I do think there are physically-grounded reasons for why this is so.

Comment author: sen 06 December 2016 06:19:54AM *  1 point [-]

"Group" is a generalization of "symmetry" in the common sense.

I can explain group theory pretty simply, but I'm going to suggest something else. Start with category theory. It is doable, and it will give you the magical ability of understanding many math pages on Wikipedia, or at least the hope of being able to understand them. I cannot overstate how large an advantage this gives you when trying to understand mathematical concepts. Also, I don't believe starting with group theory will give you any advantage when trying to understand category theory, and you're going to want to understand category theory if you're interested in reasoning.

When I was getting started with category theory, I went back and forth between several pages (Category Theory, Functor, Universal Property, Universal Object, Limits, Adjoint Functors, Monomorphism, Epimorphism). Here are some of the insights that made things click for me:

  • An "object" in category theory corresponds to a set in set theory. If you're a programmer, it's easier to think of a single categorical object as a collection (class) of OOP objects. It's also valid and occasionally useful to think of a single categorical object as a single OOP object (e.g., a collection of fields).
  • A "morphism" in category theory corresponds to a function in set theory. If you think of a categorical object as a collection of OOP objects, then a morphism takes as input a single OOP object at a time.
  • It's perfectly valid for a diagram to contain the same categorical object twice. Diagrams only show relations, and it's perfectly valid for an OOP object to be related to another OOP object of the same class. When looking at commutative diagrams that seem to contain the same categorical object twice, think of them as distinct categorical objects.
  • Diagrams don't only show relationships between OOP objects. They can also show relationships between categorical objects. For example, a diagram might state that there is a bijection between two categorical objects.
  • You're not always going to have a natural transformation between two functors of the same category.
  • When trying to understand universal properties, the following mapping is useful (look at the diagrams on Wikipedia): A is the Platonic Form of Y, U is a fire that projects only some subset of the aspects of being like A.
  • The duality between categorical objects and OOP objects is critical to understanding the difference between any diagram and its dual (reversed-morphisms). Recognizing this makes it much easier to understand limits and colimits.

Once you understand these things, you'll have the basic language down to understand group theory without much difficulty.
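If it helps, here is a minimal Python sketch of the objects-as-classes analogy above (my own toy example; the classes and functions are purely illustrative): a categorical object modelled as a class, morphisms as plain functions acting on one instance at a time, and composition as ordinary function composition.

    class Celsius:
        def __init__(self, degrees): self.degrees = degrees

    class Fahrenheit:
        def __init__(self, degrees): self.degrees = degrees

    def c_to_f(c):                      # a morphism Celsius -> Fahrenheit
        return Fahrenheit(c.degrees * 9 / 5 + 32)

    def f_to_c(f):                      # a morphism Fahrenheit -> Celsius
        return Celsius((f.degrees - 32) * 5 / 9)

    def compose(g, f):                  # composition of morphisms
        return lambda x: g(f(x))

    # The two morphisms compose to the identity in both directions, i.e. they
    # witness a bijection between the two objects, which is the kind of
    # relationship a commutative diagram can assert.
    assert abs(compose(f_to_c, c_to_f)(Celsius(20.0)).degrees - 20.0) < 1e-9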

Comment author: NatashaRostova 06 December 2016 05:44:22AM *  0 points [-]

I'm going to risk going down a meaningless rabbit hole here of semantic nothingness --

But I still disagree with your distinction, although I do appreciate the point you're making. I view, and think the correct way to view, the human brain as simply a special case of any other computer. You're correct that we have, as a collective species, proven and defined these abstract patterns. Yet even all these patterns are based on observations and rules of reasoning between our mind and the empirical reality. We can use our neurons to generate more sequences in a pattern, but the idea of an infinite set of numbers is only an abstraction or an appeal to something that could exist.

Similarly, a silicon computer can hold functions and mappings, but can never create an array of all numbers. They reduce down to electrical on-off switches, no matter how complex the functions are.

There is also no rule that says natural numbers or any category can't change tomorrow. Or that right outside of the farthest information set in the horizon of space available to humans, the laws of gravitation and mathematics all shift by 0.1. It is sort of nonsensical, but it's part of the view that the only difference between things that feel real and inherently distinguishable is our perception of how certain they are to continue based on prior information.

In my experience talking about this with people before, it's not the type of thing people change their mind on (not implying your view is necessarily wrong). It's a view of reality that we develop pretty foundationally, but I figured I'd write out my thoughts anyway for fun. It's also sort of a self-indulgent argument about how we perceive reality. But, hey, it's late and I'm relaxing.

Comment author: sen 06 December 2016 05:31:20AM 1 point [-]

The distinction between "ideal" and "definition" is fuzzy the way I'm using it, so you can think of them as the same thing for simplicity.

Symmetry is an example of an ideal. It's not a thing you directly observe. You can observe a symmetry, but there are infinitely many kinds of symmetries, and you have some general notion of symmetry that unifies all of them, including ones you've never seen. You can construct a symmetry that you've never seen, and you can do it algorithmically based on your idea of what symmetries are given a bit of time to think about the problem. You can even construct symmetries that, at first glance, would not look like a symmetry to someone else, and you can convince that someone else that what you've constructed is a symmetry.

The set of natural numbers is an example of something that's defined, not observed. Each natural number is defined sequentially, starting from 1.

Addition is an example of something that's defined, not observed. The general notion of a bottle is an ideal.

In terms of philosophy, an ideal is the Platonic Form of a thing. In terms of category theory, an ideal is an initial or terminal object. In terms of category theory, a definition is a commutative diagram.

I didn't say these things weren't influenced by past observations and correlations. I said past observations and correlations were irrelevant for distinguishing them. Meaning, for example, you can distinguish between more natural numbers than your past experiences should allow.

Comment author: James_Miller 06 December 2016 04:58:28AM 0 points [-]

Sorry, just the normal things: reading to him (when he was younger) and using words he doesn't understand to get him to ask what the words mean.

Comment author: Raemon 06 December 2016 04:55:03AM 0 points [-]

Could you expound? Your implicit argument is not obvious.

Comment author: chron 06 December 2016 04:08:08AM 0 points [-]

Links?

Comment author: entirelyuseless 06 December 2016 03:48:57AM 0 points [-]

Objective differences doesn't have to mean physical differences of the thing at the time. It is an objective fact that certain people have won elections and that others have not, for example, even if it doesn't change them physically.

In this sense, it is true that every meaningful distinction is based on something objective, since otherwise you would not be able to make the distinction in the first place. You make the distinction by noticing that some fact is true in one case which isn't true in the other. Or even if you are wrong, then you think that something is true in one case and not in the other, which means that it is an objective fact that you think the thing in one case and not in the other.

Comment author: entirelyuseless 06 December 2016 03:44:01AM 0 points [-]

Saying that a thing is "hedonistically good to do" means that it is good to some extent. It does not tell us whether it is good to do, period. If it is good to do, period, it is morally good. If there are other considerations more important than the pleasure, it won't be good to do, period, and so will be morally wrong.

Comment author: gucciCharles 06 December 2016 02:26:15AM 0 points [-]

lol, no.

Comment author: oooo 06 December 2016 02:20:20AM 0 points [-]

Thank you for these exercise samples. I didn't realize that I was running through a less powerful flavour of these exercises until this post. Do you by chance have any examples of exercises that you've both worked on to increase your child's verbal and linguistic capabilities?

Comment author: NatashaRostova 06 December 2016 02:18:18AM 0 points [-]

Many beliefs are too vague for such a test to exist. It doesn't make sense to put a probability on "The function of the heart is to pump blood". That belief doesn't have a specific prediction. You could create different predictions based on the belief, and those predictions would likely have different probabilities.

Words are an imperfect information-transfer system that humans have evolved. To interact with reality we have to use highly imperfect information-terms and tie them together with correlated observations. It seems like you are arguing that the human brain is often dealing with too much uncertainty and information loss to tractably apply a probabilistic framework that requires clearer distinctions/classifications.

Which is fair, sort of, but the point still stands that a sufficiently complex computer (human brain or otherwise) that is dealing with less information loss would still find Bayesian methods useful.

Again, this is sort of trivial, because all it's saying is that 'past information is probabilistically useful to the future.' I think the fact that modern machine learning algos are able to implement Bayesian learning parameters should lead us to the conclusion that Bayesian reasoning is often intractable, but in its purest form it's simply the way to interpret reality.
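To make the "purest form" point concrete, here is a toy example of a single Bayesian update once a vague belief has been turned into a specific prediction (the numbers are made up purely for illustration):

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)

    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
        return p_evidence_if_true * prior / p_evidence

    # Belief: "this patient's heart is functioning normally."
    # Derived prediction: "a pulse will be detectable at the wrist."
    print(posterior(prior=0.95, p_evidence_if_true=0.99, p_evidence_if_false=0.20))
    # ~0.99 after observing the pulse

Whether such an update is tractable for messy real-world beliefs is exactly the point in dispute; the sketch only shows what the "pure" calculation looks like when the prediction is pinned down.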

In response to Epistemic Effort
Comment author: Douglas_Knight 06 December 2016 01:23:48AM 0 points [-]

First you figure out the function of "epistemic status."

letting you know how seriously to take an individual post

No.

Comment author: btrettel 06 December 2016 12:27:39AM 0 points [-]

I want to show that fighting aging is underestimated from an effective altruism point of view. I would name it the second most effective way to prevent suffering, after x-risk prevention.

I'd be very interested in seeing this.

Comment author: btrettel 06 December 2016 12:26:30AM 1 point [-]

I think life extension should be discussed more here.

Many rationalists disappoint me with respect to life extension. Too many of them seem to recognize that physical conditioning is important, yet very few seem to do the right things. Most rationalists who understand that physical conditioning is important think they should do something, but that something tends to be almost exclusively lifting weights with little to no cardiovascular exercise. (I consider walking to barely qualify as cardiovascular exercise, by the way.) I think both are important, but if you could only do one, I'd pick cardio because it's much easier to improve your cardiovascular capacity that way. (Cardiovascular capacity/VO2max correlates well with longevity, as discussed here.) I'm not alone in the belief that cardio is much more important; similar things have been said for a long time. I'd recommend Ken Cooper's first book for more on this perspective.

The inability of rationalists to do cardiovascular exercise regularly probably stems from problems similar to those that cause cryocrastination. I'd like to see more on actually implementing cardiovascular exercise routines. I have some notes on this which could help. Off the top of my head I can remember that there's evidence morning runners tend to maintain the habit better and that there's evidence that exercising in a group helps with compliance. I personally find Beeminder to help a little bit, but not much.

Comment author: morganism 06 December 2016 12:24:33AM 0 points [-]
Comment author: ig0r 06 December 2016 12:21:23AM 0 points [-]

This is derivative of meditative insight practice. You may be interested in looking into Vipassana practice. With some time spent building concentration skills, this kind of sensation-noticing practice is far more powerful (think LSD-like power) and can be extended to get you a lot more than just joy.

Comment author: Dagon 06 December 2016 12:16:44AM 0 points [-]

These words wouldn't normally mix. I'd expect Descriptive vs Prescriptive or Positive vs Normative. Also, the link goes nowhere.

Comment author: btrettel 06 December 2016 12:08:51AM *  2 points [-]

A big gap I see is memory. Having read a few books on learning and memory, I think what's been posted on LessWrong has been fragmented and incomplete, and we're in need of a good summary/review of the entire literature. There's a lot of confusion on the subject here too, e.g., this article seems to think spaced repetition and mnemonics are mutually exclusive techniques, but they're not at all. When I used Anki I frequently used mnemonics as well. The article seems to be an argument against bad flash cards, not spaced repetition in general. Probably over a year ago I did start writing a sequence on memory enhancement, but it is a low-priority task for me and I do not anticipate completing it any time soon.

Comment author: morganism 05 December 2016 11:48:07PM 0 points [-]

and a special issue on regenerative medicine

Special Focus Issue: Regenerative medicine: past, present and future - Foreword

http://www.futuremedicine.com/toc/rme/11/8

Comment author: morganism 05 December 2016 11:43:51PM 1 point [-]

"... report an experimental molecule that inhibits kidney function in mosquitoes and thus might provide a new way to control the deadliest animal on Earth."

"What our compounds do is stop urine production, so they swell up and can't volume regulate, and in some cases they just pop," he said.

"By targeting blood feeding female mosquitoes, we predict that there will be less selective pressure for the emergence of resistant mutations," Denton said.

The investigators show VU041 to be effective when applied topically, which indicates that it potentially could be adapted as a sprayed insecticide. They also show that it doesn't harm honeybees.

https://www.eurekalert.org/pub_releases/2016-12/vumc-ei120516.php

Comment author: morganism 05 December 2016 11:24:03PM 2 points [-]

There are some interesting new papers showing RNA splicing and gene replication errors causing a lot of age-related and degenerative disease, and the Tufts study shows a newly isolated protein that affects aging if present in high enough concentration.

Uncovering a 'smoking gun' in age-related disease

"called splicing factor 1 (SFA-1) -- a factor also present in humans. In a series of experiments, the researchers demonstrate that this factor plays a key role in pathways related to aging. Remarkably, when SFA-1 is present at abnormally high levels, it is sufficient on its own to extend lifespan."

https://www.eurekalert.org/pub_releases/2016-12/htcs-ua120116.php

and

"The error occurs as copies of three-letter sequences of DNA--known as CAG and CTG triplets--expand and repeat themselves hundreds or even thousands of times, disrupting normal gene sequences." Genetic analyses in baker's yeast now reveal that these large-scale expansions are controlled by genes that have been implicated in a process for repairing DNA breaks, leading the researchers to surmise that the expansions occur while breaks are being healed."

https://www.eurekalert.org/pub_releases/2016-12/tu-tru120216.php

Comment author: Wei_Dai 05 December 2016 10:36:32PM 1 point [-]

You don't need to formalize all the capabilities of attackers, but you need to have at least some idea of what they are.

But you usually already have an intuitive idea of what they are. Writing down even an informal list of attackers' capabilities at the start of your analysis may just make it harder for you to subsequently think of attacks that use capabilities outside of that list. To be clear, I'm not saying never write down a threat model, just that you might want to brainstorm about possible attacks first, without having a more or less formal threat model potentially constrain your thinking.

Comment author: DataPacRat 05 December 2016 09:35:57PM 0 points [-]

if you have that goal, then you would try anything sensible-sounding and any combination of anything sensible until something works.

I have had that goal for some time. I have tried the sensible-sounding things, in various combinations. They didn't work. So I've been shifting my focus from "trying to keep depressive bouts from happening" to "managing my life on the assumption I'm going to keep getting depressive bouts". I've hit enough such management tricks that even with my bout last week interrupting, I'm about 60,000 words into writing a novel, including 1600 words yesterday; I could be doing better, sure, but I could be doing a lot /worse/, too.

Comment author: sixes_and_sevens 05 December 2016 09:24:45PM 2 points [-]

The links are a new feature since I was last here, and I can't say I'm overwhelmed by them, tbh.

Comment author: moridinamael 05 December 2016 09:02:02PM *  0 points [-]

If "sophisticated" in this usage just means "complex", I'm not sure that I can get behind the idea that complex theories or policies are just better than simple ones in any meaningful way.

There may be a tendency for more complex and complicated positions to end up being better, because complexity is a signal that somebody spent a lot of time and effort on something, but Timecube is a pretty complex theory and I don't count that as being a plus.

Complexity or "sophistication" can cut the other way just as easily, as somebody adds spandrels to a model to cover up its fundamental insufficiency.

At the same time it's useful to have beliefs like "The function of the heart is to pump blood".

I don't know. I try to root out beliefs that follow that general form and replace them, e.g. "the heart pumps blood" is a testable factual statement, and a basic observation, which semantically carries all the same useful information without relying on the word "function" which implies some kind of designed intent.

Comment author: ChristianKl 05 December 2016 08:20:05PM 0 points [-]

There are frequently arguments that presume that tribalism is universal in a sense that it isn't.

Comment author: NatashaRostova 05 December 2016 08:16:36PM 3 points [-]

I'm not new to this internet sphere, but new to LW. One thing I suggest is that users spend less time wondering what would get people back, and more time posting interesting links. Interesting links are somewhat rare; there are lots of lame blogs and annoying quasi-philosophy discussions. Lots of the philosophy posted here is very cringeworthy.
