Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: adamzerner 18 July 2017 08:05:44AM 1 point [-]

Maybe LW could evolve into something new and awesome with better (forum) software?

Comment author: gilch 09 July 2017 02:23:00AM 2 points [-]

Video chat probably isn't good enough by itself for many topics. For programming, screen-sharing software would be helpful. For mathematics, some kind of online whiteboard would help. Is there anything else we need? Do any of you know of good resources? Free options that don't require registration are preferable.

Comment author: adamzerner 09 July 2017 09:28:09PM 0 points [-]

Good points, I agree. Screen sharing is possible via Hangouts, Skype and talky.io. I'm not sure what the best online whiteboard software is, but screen sharing + using some sort of notepad type thing should work.

Comment author: sen 02 July 2017 08:38:56AM 0 points [-]

A question for people asking for machine learning tutors: have you tried just reading through OpenAI blog posts and running the code examples they embed or link? Or going through the TensorFlow tutorials?
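For reference, the introductory tutorials of that era typically start with fitting a linear model to data by gradient descent. Here is a dependency-free sketch of that exercise in plain Python standing in for TensorFlow, so the names and details are illustrative rather than the tutorial's actual code:

```python
# Sketch of the kind of exercise an introductory ML tutorial walks
# through: fit y = w*x + b to noisy data by gradient descent.
# Plain-Python stand-in for the framework version; illustrative only.
import random

random.seed(0)
# Synthetic data generated from y = 3x + 2 plus a little noise.
data = [(x, 3.0 * x + 2.0 + random.gauss(0, 0.1))
        for x in [i / 10 for i in range(100)]]

w, b = 0.0, 0.0   # parameters to learn
lr = 0.01         # learning rate

for step in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 1), round(b, 1))  # should recover roughly 3.0 and 2.0
```

The framework versions add automatic differentiation and GPU support, but the core loop is the same, which is part of why "just run the tutorials" is a reasonable starting suggestion.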

Comment author: adamzerner 02 July 2017 10:35:54AM 0 points [-]

Your use of the word "just" gives me the impression that you think it's easy not to get lost. In my experience with other fields, it is easy to get lost, and I would assume the same is true of machine learning.

Comment author: Lumifer 10 June 2017 07:33:13PM *  3 points [-]

Counterpoint: do you understand the magnitude of how bad it would be if there was a fire and you ended up getting seriously injured or dying?

You continue to live in the apartment building which already had two fires and which has a malfunctioning alarm system.

Comment author: adamzerner 11 June 2017 04:59:12PM 0 points [-]

I don't. I'm not scope sensitive. The alarm system is working fine; it's just that it gets triggered by people cooking (I think). I'm eager to move out ASAP, though.

In response to Scope Insensitivity
Comment author: adamzerner 10 June 2017 12:14:33AM *  1 point [-]

Real-world example: the fire alarm in my apartment building goes off at night about once every two weeks. Many people decide to stay in their rooms instead of evacuating the building. They don't seem to understand the magnitude of how bad it would be if there were a fire and they ended up seriously injured or dead. (There have been two real fires so far; the chance of a real fire is not trivial.)

Comment author: John_Maxwell_IV 30 May 2017 04:20:10AM *  0 points [-]

My first thought was, "what if that project got lots of traction, and you didn't have the technical skills to continue iterating fast enough?"

How many startups do you know of that failed this way?

Twitter and reddit both survived despite major performance issues. And social media has quite low profit per HTTP request served relative to other web businesses. So I'd expect scaling to be much less of an issue if your startup is in almost any other area. I also suspect that this kind of scaling failure is becoming rarer and rarer as the skills, resources, and technology to scale up a website get commoditized.

it may be worth it for that person to take some time building up a reasonable foundation of networking, databases, operating systems, algorithms and whatnot before taking the "learn by doing" approach

Apparently that kind of "foundational" knowledge gets forgotten after people graduate from college because it doesn't really get used: http://blog.triplebyte.com/bootcamps-vs-college I think a lot of this stuff functions partially as a way to signal intelligence. Although I'll grant that CS degrees are not as bad in this regard as most degrees are.

A compromise approach: If you are running into a tricky issue with your database, give yourself time to study enough about databases so you have a decent mental model of what's going on under the hood and can fix the problem. ("Just-in-time learning")
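A minimal illustration of what "just-in-time learning" can look like for databases: rather than studying internals upfront, use the database's own EXPLAIN facility to see what a slow query is actually doing. This sketch uses sqlite3 from the Python standard library; a real setup would use its production database's EXPLAIN instead, and the table and index names here are made up for the example:

```python
# Diagnose a slow lookup by inspecting the query plan before and
# after adding an index, using SQLite's EXPLAIN QUERY PLAN.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

def query_plan(sql):
    """Return SQLite's description of how it would execute the query."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)  # last column is the detail text

# Without an index, looking up by email scans the whole table.
before = query_plan("SELECT id FROM users WHERE email = 'user500@example.com'")

# Add an index and check the plan again.
conn.execute("CREATE INDEX idx_users_email ON users (email)")
after = query_plan("SELECT id FROM users WHERE email = 'user500@example.com'")

print(before)  # e.g. a full table SCAN
print(after)   # e.g. a SEARCH using the new index
```

The point isn't this particular fix; it's that the act of interpreting the plan forces you to build exactly the mental model of indexes and scans that the problem requires, and no more.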

If you're worried about not knowing that a particular CS subfield is relevant to a problem your startup is facing, you could try Steve Yegge's breadth-first learning philosophy: http://steve-yegge.blogspot.in/2006/03/math-for-programmers.html Not all CS subfields are worth learning for this reason though. I think this justification doesn't really work for networking, databases, or operating systems. It might work for algorithms or artificial intelligence. However, I think it's a pretty weak justification overall. I still think diving in is a superior path.

If you have an inherent desire to learn more CS for some reason, you could deliberately pick a project that will require you to pick up some CS knowledge along the way. This will also look better on your resume (since it signals intelligence). It also creates a bit of a barrier to entry for competitors.

Comment author: adamzerner 31 May 2017 12:40:30AM 0 points [-]

As for your points on learning by doing, I'm not sure what to think, but I appreciate and value them. I'm someone who tends towards textbooks and classes, but I've been slowly turning towards learning by doing. Both in theory, and in practice (ie. my life). At some point I plan on thinking about this question more thoroughly, and posting on LW about it. As for the question posed in this post, it's on the condition that learning by doing is not the most effective method (which may or may not be a correct premise, at least in my eyes).

Comment author: adamzerner 30 May 2017 07:46:23PM *  0 points [-]

How many startups do you know of that failed this way?

I don't necessarily mean scaling; I mean iterating on the product as well, because even with lots of traction, you still need to iterate. I base this on advice from YC and the like about the importance of continuously talking to customers and iterating, though I'm not personally aware of enough data on startups to draw the conclusion strongly myself. The advice of YC may be wrong, and I may be misinterpreting it. What do you think?

Comment author: John_Maxwell_IV 27 May 2017 06:51:25AM 5 points [-]

How about choosing a project that's too small & inconsequential to be called a "startup" (Paul Graham has indicated that he thinks these can be some of the best startup ideas anyway) and use it as a test case to improve your programming skills? (The advantage of choosing something small & inconsequential is that you can get a quick, small win in order to build a success spiral for later attempts at big startup projects. Bonus points if your quick, small win could lead to a series of slower, bigger wins, e.g. in the case where your small & inconsequential project gets you some sort of audience or gives you some firsthand knowledge of how a particular industry works.)

Comment author: adamzerner 29 May 2017 08:19:09PM *  0 points [-]

My first thought was, "what if that project got lots of traction, and you didn't have the technical skills to continue iterating fast enough?". But I suppose that's a pretty good problem to have - you may find that you in fact can iterate fast enough, you may find it worthwhile to find and work with someone who could help you, and you'll probably learn a lot about users/startups/projects.

The downside I see is that working on a project may not be the best way to develop technical skills. For example, consider a front-end developer with almost no CS background - it may be worth it for that person to take some time building up a reasonable foundation of networking, databases, operating systems, algorithms and whatnot before taking the "learn by doing" approach. In the scenario where "learn by doing" is in fact the most effective way to develop technical skills, then it does seem to be a win-win situation. But in the scenario where it isn't, I think the downside of slower learning needs to be balanced against the upsides of a) potential success, b) learning about startups/users, and c) potential confidence stemming from a success spiral in the case where the smaller project is successful. I'm not sure what to think about this balance.

Comment author: RomeoStevens 03 May 2017 06:19:01PM *  12 points [-]

Having spent years thinking about this, and having had the opportunity to talk with open-minded, intelligent, successful people in social groups, extended family, etc., I concluded that explicit discussion of the value of inquiring into values and methods (scope sensitivity and epistemological rigor being two of the major threads of what applied rationality looks like) works incredibly rarely, and only when there is strong existing interest.

Taking ideas seriously and trusting your own reasoning methods as a filter is a dangerous, high-variance move that most people are correct to shy away from. My retrospective impression is that LW (on average) attracted people who were or are underperforming relative to g (this applies to myself). When you are losing, you increase variance. When you are winning, you decrease it.

I eventually realized that what I was really communicating to people's system 1 was something like "Hey, you know those methods of judgment like proxy measures of legitimacy and mimesis that have granted you a life you like and that you want to remain stable? Those are bullshit, throw them away and start using these new methods of judgment advocated by a bunch of people who aren't leading lives resembling the one you are optimizing for."

This has not resulted in many sales. It is unrealistic to expect to convert a significant fraction of the tribe to shamanism.

Comment author: adamzerner 03 May 2017 07:51:53PM 1 point [-]

As for the comment that it's difficult to get people interested, that seems very true to me, and it's good to have the data from your extensive experience with this.

A separate question is how we can best attempt to get people to be interested. You commented on the failure you experienced with the "throw your techniques away, these ones are better" approach. That seems like a good point. I sense that my message takes that approach too strongly and could be improved.

I'm interested in hearing about anything you've found to be particularly effective.

Comment author: eternal_neophyte 03 May 2017 05:58:54PM 0 points [-]

Hate to have to say this, but directly addressing a concern is a form of social confirmation that the concern deserves to be addressed, and thus that it's based in something real. Imagine a Scientologist offering to explain to you why Scientology isn't a cult.

Of the people I know of who are outright hostile to LW, it's mostly because of basilisks and polyamory and other things that make LW both an easy and a fun target for derision. And we can't exactly say that those things don't exist.

Comment author: adamzerner 03 May 2017 06:29:36PM *  0 points [-]

Hate to have to say this, but directly addressing a concern is a form of social confirmation that the concern deserves to be addressed, and thus that it's based in something real.

I could see some people responding that way. But I could see others responding with, "oh, ok - that makes sense". Or maybe, "hm, I can't tell whether this is legit - let me look into it further". There are lots of citations and references in the LessWrong writings, so it's hard to argue with the fact that it's heavily based on existing science.

Still, there is the risk of some people just responding with, "Jeez, this guy is getting defensive already. I'm skeptical. This LessWrong stuff is not for me." I see that directly addressing a concern can signal bad things and cause this reaction, but for whatever reason, my brain is producing a feeling that this sort of reaction will be the minority in this context (in other contexts, I could see the pattern being more harmful). I'm starting to feel less confident in that, though. I have to be careful not to Typical Mind here. I have an issue with Typical Minding too much, and know I need to look out for it.

The good thing is that user research could totally answer this question. Maybe that'd be a good activity for a meet-up group or something. Maybe I'll give it a go.
