Comment author: Dorikka 10 August 2015 01:28:59PM 2 points [-]

What would you like to learn about?

Comment author: snarles 14 August 2015 04:27:56PM 2 points [-]

Sociology, political science and international politics, economics (graduate level), psychology, psychiatry, medicine.

Comment author: Dorikka 10 August 2015 01:30:13PM 2 points [-]

What topics might you be able to teach others about?

Comment author: snarles 14 August 2015 04:06:52AM 2 points [-]

Undergraduate mathematics, Statistics, Machine Learning, Intro to Apache Spark, Intro to Cloud Computing with Amazon

Comment author: [deleted] 08 August 2015 09:18:48PM *  13 points [-]

Ok, I'm about to sound hypercritical and rambly. What I'm actually doing is running through my mental models for "how businesses work" and trying to come up with possible failure points and solutions. I think you've jumped ahead to the "how can we create this thing" stage without first running through the "how can we make this a viable, sustainable system" part.


Incentives: One thing you might run into is the typical problem with bartering systems - you have to find two people who each have exactly the knowledge the other wants, that's worth the same to each of them, and that takes the same amount of time to teach - otherwise, the trade won't be worth it for one of them.

One way you could fix this is to have a "time currency" - if you tutor someone for one hour, you then have one hour of credit that you can use to "pay" another tutor.

The problem then becomes the quality of the tutors - there's no quality checking or pay-for-performance in the system, so there's no incentive to improve your tutoring. One way to get around that would be a rating system tied to the "hours pay": for instance, five stars multiplies your hours by 1.25, four stars by 1, three stars by .75, two stars by .5, and one star by .25. This would incentivize people to actually be good tutors so they get "paid" more.
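A minimal sketch of that payout rule, using exactly the multipliers proposed above (the function name and signature are just for illustration):

```python
def hours_credited(hours_taught: float, star_rating: int) -> float:
    """Convert tutoring time into 'time currency', scaled by the
    tutee's star rating so better-rated tutors earn credit faster."""
    multipliers = {5: 1.25, 4: 1.0, 3: 0.75, 2: 0.5, 1: 0.25}
    if star_rating not in multipliers:
        raise ValueError("rating must be an integer from 1 to 5")
    return hours_taught * multipliers[star_rating]

# A five-star hour is worth 1.25 hours of future tutoring;
# a one-star hour is worth only 0.25.
print(hours_credited(2.0, 5))  # 2.5
print(hours_credited(2.0, 1))  # 0.5
```

The key design choice is that the rating scales the currency itself rather than gating access, so even a weak tutor can still earn credit, just more slowly.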


Network Effect: This is a network, so the typical chicken-and-egg problem occurs - it only works if you have a ton of people who can tutor in a variety of fields, but you're not going to attract that ton of people if there aren't already tutors there who want to tutor them. There are a few ways to deal with this, and I think you should use all of these strategies if you want this to work:

  1. You can preseed the network. Get a bunch of your friends who believe in the project to agree to put in x hours of time per week, so that you start with a good number of tutors and then draw other people in to what already seems to be a vibrant community (this is how reddit did it).
  2. You can create a prelaunch campaign and email list so that on launch, you have a bunch of people all coming at the same time, instead of a steady trickle of people who come, see that no one else is active, and leave (most network-effect startups try to do this).
  3. You can add some extra value to the system that works solo - e.g. a self-study curriculum on top of the tutoring system. This means that the initial people stay for the self-study curriculum, then as the network grows, they start to see the extra value from the tutoring system (this is how Tumblr did it).
  4. You can start with a tiny niche, such as only people who want to learn and teach math related to rationality. This means you have to cover a much smaller base of what people want to learn and teach, and then you can expand slowly based on "peripheral interests" that you see a lot of your network has (this is how Facebook did it).

Double-Sided Market: This is a two-sided market, so you run into a problem similar to the network effect. You need tutors to draw in tutees - but you need tutees to draw in tutors. Now, this is a weird double-sided market in that you're expecting your buyers to also be your sellers. This might seem to solve the whole problem, but in my mind it actually makes it harder: you're assuming there will be an exact symmetry between how much people are willing to tutor others and how much they want to be tutored, but I would expect a vast difference between the two. This is probably the biggest "what-if" in the scenario: if I'm right, and people are not willing (or able) to tutor someone for an hour to get an hour of tutoring, the idea becomes unworkable.


MVP: To me, going through the whole model, that assumption of symmetry between being tutored and tutoring is the riskiest assumption, and the riskiest assumption should always be what you test in your MVP (minimum viable product). The simplest way to test this would be to go on craigslist and put up an ad that says "Looking to be tutored in XYZ, willing to trade for tutoring in A, B, or C subject."

If you get say, 5 legitimately interested responses, that's enough to contact those people, talk to them about your idea, and see if they'll tutor someone. If they will, you can go on to the next step and start creating your launch list.


People: The first failure point in any startup is usually not coming up with a good idea - it's executing on it. The tone of your post reads like "here's a cool idea I have" rather than "here's something I'm going to do." But my model of this is that if you don't make it happen, it won't get done. No one else is going to execute this for you.

In response to comment by [deleted] on Peer-to-peer "knowledge exchanges"
Comment author: snarles 09 August 2015 01:58:09AM 0 points [-]

Thanks--this is a great analysis. It sounds like you would be much more convinced if even a few people already agreed to tutor each other--we can try this as a first step.

Comment author: Username 08 August 2015 07:13:34PM 0 points [-]

Great idea! But I am crap at tutoring, any knowledge exchange would be very unequal.

Comment author: snarles 09 August 2015 01:55:45AM 0 points [-]

That's OK, you can get better. And you can use any medium which suits you. It could be as simple as assigning problems and reading, then giving feedback.

Peer-to-peer "knowledge exchanges"

13 snarles 08 August 2015 03:33PM

I wonder if anyone has thought about setting up an online community dedicated to peer-to-peer tutoring.  The idea is that if I want to learn differential geometry and know Python programming, and you want to learn Python programming and know differential geometry, then we can agree to tutor each other online.  The features of the community would be to support peer-to-peer tutoring by:

  • Facilitating matchups between compatible tutors
  • Allowing for more than two people to participate in a tutoring arrangement
  • Providing reputation-based incentives to honor tutoring agreements and put effort into tutoring
  • Allowing other members to "sit in" on tutoring sessions, if they are made public
  • Allowing the option to record tutoring sessions
  • Providing members with access to such recorded sessions and "course materials"
  • Providing a forum to arrange other events

With such functions, the community would have some overlap with other online learning platforms, but the focus of the community would be to provide free, quality personalized teaching.
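As a rough illustration of the first feature above, a minimal matcher could pair members whose "can teach" and "want to learn" lists are mutually compatible. The member names and data structure here are hypothetical, just to show the shape of the problem:

```python
from itertools import combinations

# Hypothetical member profiles: name -> (subjects they can teach, subjects they want)
members = {
    "alice": ({"Differential Geometry"}, {"Python programming"}),
    "bob":   ({"Python programming"}, {"Differential Geometry"}),
    "carol": ({"Statistics"}, {"Machine Learning"}),
}

def find_matches(members):
    """Yield pairs where each person can teach something the other wants."""
    for a, b in combinations(members, 2):
        a_teach, a_want = members[a]
        b_teach, b_want = members[b]
        if a_teach & b_want and b_teach & a_want:
            yield (a, b)

print(list(find_matches(members)))  # [('alice', 'bob')]
```

Carol matches no one here, which is exactly the bartering problem raised in the comments: exact two-way matches are rare, and a time-currency or group arrangement relaxes that constraint.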

The LessWrong community could build the first version of this peer tutoring system.  It has people with broad interests, high intellectual standards, and many engineers who could help develop some of the infrastructure.  The first iteration of the community would be small, and many of the above features (e.g. a reputation system and tools for facilitating matchups) would not be needed.  The first problems we would need to solve are:
  • Where should we host the community? (e.g. Google groups?)
  • What are some basic ground rules to ensure the integrity of the community and ensure safety?
  • Where can we provide a place for people to list which subjects they want to learn and which subjects they can teach?
  • Which software should we use for tutoring?
  • How can people publicize their tutoring schedule in case others want to "sit in"?
  • How can people record their tutoring sessions if they wish, and how can they make these available?
  • How should the community be administered?  Who should be put in charge of organizing the development of the community?
  • How should we recruit new members?


In response to On stopping rules
Comment author: IlyaShpitser 03 August 2015 02:30:51PM *  6 points [-]

Was the professor in question Jamie?

Did you read Jamie's and Larry's counterexample where they construct a case where the propensity score is known exactly but the treatment/baseline/outcome model is too complex to bother w/ likelihood methods?

https://normaldeviate.wordpress.com/2012/08/28/robins-and-wasserman-respond-to-a-nobel-prize-winner/

Couldn't we extend this to longitudinal settings and just say MSMs are better than the parametric g-formula if the models for the latter are too complex? Would this not render the strong likelihood principle false? If you don't think causal inference problems are in the "right magisterium" for the likelihood principle, just consider missing data problems instead (same issues arise, in fact their counterexample is phrased as missing data).

Comment author: snarles 05 August 2015 04:40:29PM *  0 points [-]

This is an interesting counterexample, and I agree with Larry that using priors which depend on pi(x) is really no Bayesian solution at all. But if this example is really so problematic for Bayesian inference, can one give an explicit example of some function theta(x) for which no reasonable Bayesian prior is consistent? I would guess that only extremely pathological and unrealistic examples of theta(x) would cause trouble for Bayesians. What I notice about many of these "Bayesian non-consistency" examples is that they require consistency over very large function classes; hence they shouldn't really scare a subjective Bayesian who knows that any function you might encounter in the real world would be much better behaved.

In terms of practicality, it's certainly inconvenient to have to compute a non-parametric posterior just to do inference on a single real parameter phi. To me, the two practical problems of actually specifying priors and actually computing the posterior remain the only real weaknesses of the subjective Bayesian approach (or the likelihood principle more generally).

PS: Perhaps it's worth discussing this example as its own thread.

Comment author: Wes_W 28 July 2015 10:04:21PM *  0 points [-]

So instead of every civ filling its galaxy, we get every civ building one in every galaxy. For this to not result in an Engine on every star, you still have to fine-tune the argument such that new civs are somehow very rare.

There are some hypotheticals where the details are largely irrelevant, and you can back up and say "there are many possibilities of this form, so the unlikeliness of my easy-to-present example isn't the point". "Alien civs exist, but prefer to spread out a lot" does not appear to be such a solution. As such, the requirement for fine-tuning and multiple kinds of exotic physics seem to me like sufficiently burdensome details that this makes a bad candidate.

Comment author: snarles 29 July 2015 04:40:16PM *  1 point [-]

EDIT: Edited my response to be more instructive.

On some level it's fine to make the kinds of qualitative arguments you are making. However, to assess whether a given hypothesis is really robust to parameters like the ubiquity of civilizations, colonization speed, and alien psychology, you have to start formulating models and actually quantify the size of the parameter space that would result in a particular prediction. A while ago I wrote a tutorial on how to do this:

http://lesswrong.com/lw/5q7/colonization_models_a_tutorial_on_computational/

which covers the basics, but to incorporate alien psychology you would have to formulate the relevant game-theoretic models as well.

The pitfall of the kinds of qualitative arguments you are making is that you risk confusing the fact that "I found a particular region of the parameter space where your theory doesn't work" with the conclusion that "your theory only works in a small region of the parameter space." It is true that under certain conditions regarding the ubiquity of civilizations, colonization speed, and alien diplomatic strategy, Catastrophe Engines end up being built on every star. However, you go on to claim that in most of the parameter space such an outcome occurs, and that the Fermi Paradox is only observed in a small exceptional part of the parameter space. Given my experience with this kind of modeling, I predict that Catastrophe Engines actually are robust to all but the most implausible assumptions about the ubiquity of intelligent life, colonization speed, and alien psychology, but you obviously don't need to take my word on it. On the other hand, you'd have to come up with some quantitative models to convince me of the validity of your criticisms. In any case, continuing to argue on a purely philosophical level won't serve to resolve our disagreement.
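To make "quantify the size of the parameter space" concrete, here is a toy sketch of the approach (this is not the model from the linked tutorial; the parameter ranges and the visibility rule are entirely invented for illustration): sample parameters at random and measure what fraction of the space yields a given prediction.

```python
import random

random.seed(0)

def sky_looks_empty(civ_rate, colonization_speed, spacing):
    """Invented stand-in for a real colonization model: the sky looks
    empty if structures are spaced widely relative to how often
    civilizations arise and how fast they spread."""
    return spacing > civ_rate * colonization_speed

def fraction_consistent_with_fermi(n_samples=10_000):
    """Monte Carlo sweep over a (made-up) unit-cube parameter space."""
    hits = 0
    for _ in range(n_samples):
        civ_rate = random.uniform(0.0, 1.0)
        colonization_speed = random.uniform(0.0, 1.0)
        spacing = random.uniform(0.0, 1.0)
        if sky_looks_empty(civ_rate, colonization_speed, spacing):
            hits += 1
    return hits / n_samples

frac = fraction_consistent_with_fermi()
print(f"{frac:.2f} of the sampled parameter space predicts an empty-looking sky")
```

The point is the methodology, not the numbers: a single counterexample region corresponds to one sample, while the claim being disputed is about the measure of the whole space.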

Comment author: Wes_W 28 July 2015 06:19:25PM *  0 points [-]

The second civilization would just go ahead and build them anyways, since doing so maximizes their own utility function.

Then why isn't there an Engine on every star?

Comment author: snarles 28 July 2015 08:52:14PM 0 points [-]

The second civ would still avoid building them too close to each other. This is all clear if you do the analysis.

Comment author: Dagon 27 July 2015 03:26:33PM *  1 point [-]

Unpack #1 a bit.

Are you looking for information about situations where an individual's decisions should include predicted decisions by others (which will in turn take into account the individual's decisions)? The [Game Theory Sequence](http://lesswrong.com/lw/dbe/introduction_to_game_theory_sequence_guide/) is a good starting point.

Or are you looking for cases where the "individual" is literally not the decision-making unit? I don't have any good LessWrong links, but both [Public Choice Theory](http://lesswrong.com/lw/2hv/public_choice_and_the_altruists_burden/) and the idea of sub-personal decision modules come up occasionally.

Both topics fit into the overall framework of classical decision theory (naive or not, you decide) and expected value.

Items 2-4 don't contradict classical decision theory, but fall somewhat outside of it. Decision theory generally looks at instrumental rationality - how best to get what one wants - rather than questions of what to want.

Comment author: snarles 28 July 2015 01:42:03PM 1 point [-]

Thanks for the references.

I am interested in answering questions of "what to want." Not only is it important for individual decision-making, but there are also many interesting ethical questions. If a person's utility function can be changed through experience, is it ethical to steer it in a direction that would benefit you? Take the example of religion: suppose you could convince an individual to convert to a religion, and then further convince them to actively reject new information that would endanger their faith. Is this ethical? (My opinion is that it depends on your own motivations. If you actually believed in the religion, then you might be convinced that you are benefiting others by converting them. If you did not actually believe in the religion, then you are being manipulative.)

Comment author: Stingray 27 July 2015 06:57:54PM *  1 point [-]

Acknowledges that in a group context, actions have a utility in of themselves (signalling) separate from the utility of the resulting scenarios.

Why do people even signal anything? To get something for themselves from others. Why would signaling be outside the scope of consequentialism?

Comment author: snarles 28 July 2015 01:35:11PM 0 points [-]

Ordinarily, yes, but you could imagine scenarios where agents have the option to erase their own memories or essentially commit group suicide. (I don't believe these kinds of scenarios are implausibly extreme - they could come up in transhuman contexts.) In this case nobody even remembers which action you chose, so there is no extrinsic motivation for signalling.
