All of casebash's Comments + Replies

I don't think there is anything stopping you from trying to create a test LW2 account to see if you will be locked out.

Have you seen the notifications up the top right? Does that do what you want?

How haven't they caught up to 90s-era newsreaders?

What are the plans for the Wiki? If the plan is to keep it the same, why doesn't Lesser Wrong have a link to it yet?

2Habryka
The plan is to keep the wiki but to not integrate it particularly much into the site. Old links will continue working, but it won't be something that's prominently linked from the site anymore. It probably makes sense to rework the wiki as well and then integrate it into the site more properly, but until then we are probably going to deemphasize the wiki but otherwise leave it as is.

I agree that people should not be able to upvote or downvote an article without having clicked through to it.

I also find the comments hard to parse because the separation is less explicit than on either Reddit or here.

It does not seem to be working.

0Habryka
Hmm, is there anything in particular that is not working? We fixed a few bugs over the last few hours, but the page should have been functional since 4PM.

Are there many communities that do that apart from MetaFilter?

1moridinamael
You mean communities that require a fee? I'm specifically thinking of SomethingAwful, which has a bad reputation but is actually an excellent utility if you visit only the subforums and avoid the general discussion and politics sections of the site.

Firstly, well done on all your hard work! I'm very excited to see how this will work out.

Secondly, I know that this might be best after the vote, but don't forget to take advantage of community support.

I'm sure that if you set up a Kickstarter or similar, people would donate to it, now that you've proven your ability to deliver.

I also believe that, given how many programmers we have here, many people will want to make contributions to the codebase. My understanding was that this wasn't really happening before: a) Because the old code base was extremel... (read more)

2namespace
I'm going to write a top level post at some point (hopefully soon) but in the meantime I'd like to suggest the content in the original post and comments be combined into a wiki. There's a lot of information here about LW 2.0 which I wasn't previously aware of and which significantly boosted my confidence in the project.
0ChristianKl
I think the default way people tell a stool from a table is that a stool is something on which you sit and a table is something on which you don't sit but put stuff. It's not about whether the surface is flat or the number of legs. You seem to argue that identity should be defined by such "natural" or objective qualities without really making a case for it.

Yes, they don't appear in the map, but when you see a mountain you think, "Hmm... this really needs to go in the map."

0Dagon
There are LOTS of maps which don't include mountains.
0ChristianKl
"Go in" is something different than "represented by". It's worthwhile to be conscious of the abstraction.

I think it is important to note that there are probably some ways in which this is adaptive. We nerds probably spend far too much time thinking and trying to be consistent when it offers us very little benefit. It's also better socially to be more flexible - people don't like people who follow the rules too strictly, as they are more likely to dob them in. It also makes it much easier to appear sincere while still coming up with an excuse for avoiding your prior commitments.

Interesting post - I'll probably look more into some of these resources at some point. I suppose I'd be curious to know which concepts you really need to read the book for and which ones can be understood more quickly, because reading through all of these books would be a very big project.

1ChristianKl
In many of the cases I don't think reading the books is even sufficient for understanding. Understanding those concepts in a way that matters for actual behavior takes a lot of work.

"I'm assuming you mean "new to you" ideas, not actually novel concepts for humanity as a whole. Both are rare, the latter almost vanishingly so. A lot of things we consider "new ideas" for ourselves are actually "new salience of an existing idea" or "change in relative weighting of previous ideas"." - well that was kind of the point. That if we want to help people coming up with new ideas is somewhat overrated vs. recommending existing resources or adapting existing ideas.

Hopefully the new LW has an option to completely delete a thread.

0Viliam
Post a comment in the latest Open Thread (hopefully some moderator is reading it) with a link to the offending comment. (Yeah, having a "report spam" button would be more convenient.)
0Lumifer
Just yell "Bucket & mop to thread X!" really loudly.

I guess what I was saying is that insofar as you require knowledge, what you tend to need is usually a recommendation to read an existing resource or an adaptation of ideas in an existing resource, as opposed to new ideas. The balance of knowledge vs. practise is somewhat outside the scope of this article.

In particular, I wrote: "I'm not saying that this will immediately solve your problem - you will still need to put in the hard yards of experiment and practise - just that lack of knowledge will no longer be the limiting factor."

I wrote a post on a similar idea recently - self-conscious ideologies (http://lesswrong.com/r/discussion/lw/p6s/selfconscious_ideology/) - but I think you did a much better job of explaining the concept. I'm really glad that you did this because I consider it to be very important!

0turchin
I have to add that there is (informally) an even smaller purple team, which thinks that climate change could happen sooner and in a more violent form, like runaway global warming. The idea has similarities with the idea of self-improving AI, as in both cases an unstoppable process with positive feedback will result in human extinction in the 21st century.

What did you do re: Captain Awkward advice?

1Elo
Looked at a few scenarios and tried to come up with advice, then compared our collective advice to the Captain's advice.

Yeah, I have a lot of difficulty understanding Lou's essays as well. Nonetheless, there appear to be enough interesting ideas there that I will probably reread them at some point. I suspect that attempting to write a summary of his point as I go might help clarify things.

"'Rationality gives us a better understanding of the world, except when it does not"

I provided this as an exaggerated example of how aiming for absolute truth can mean that you produce an ideology that is hard to explain. More realistically, someone would write something along the lines of "rationality gives us a better understanding of the world, except in cases a), b), c)...", but if there are enough of these cases and these cases are complex enough, then in practise people round it off to "X is true, except when it is not", ie. they do... (read more)

Can you add any more detail on what precisely Continental Rationalism is? Or, even better, if you have time, it's probably worth writing up a post on this.

Additionally, how come you posted here instead of on the Effective Altruism forum: http://effective-altruism.com/?

0Onemorenickname
I initially needed an editor I was used to in order to link a post to someone on the EA Discord Server. I thought I might as well do it on LW to gather input from LWians.

If you want casual feedback, probably the best location currently is: https://www.facebook.com/groups/eahangout/.

I definitely think it would be useful, the problem is that building such a platform would probably take significant effort.

There are a huge number of "ideas" startups out there. I would suggest taking a look at them for inspiration.

0Onemorenickname
I'm not thinking about building a product from scratch, more about coordinating a Discord server, a Reddit board, and a Google doc, for instance. The website linked by @lifelonglearner is particularly good, even though it will be deleted by July.

I think the reason why cousin_it's comment is upvoted so much is that a lot of people (including me) weren't really aware of S-risks or how bad they could be. It's one thing to just make a throwaway line that S-risks could be worse, but it's another thing entirely to put together a convincing argument.

Similar ideas have appeared in other articles, but they've framed it in terms of energy-efficiency while defining weird words such as computronium or invoking the two-envelopes problem, which makes it much less clear. I don't think I saw the links for either of those artic... (read more)

2Lukas_Gloor
Interesting! I'm only confident about endorsing this conclusion conditional on having values where reducing suffering matters a great deal more than promoting happiness. So we wrote the "Reducing risks of astronomical suffering" article in a deliberately 'balanced' way, pointing out the different perspectives. This is why it didn't come away making any very strong claims. I don't find the energy-efficiency point convincing at all, but for those who do, x-risks are likely (though not with very high confidence) still more important, mainly because more futures will be optimized for good outcomes rather than bad outcomes, and this is where most of the value is likely to come from. The "pit" around the FAI-peak is in expectation extremely bad compared to anything that exists currently, but most of it is just accidental suffering that is still comparatively unoptimized. So in the end, whether s-risks or x-risks are more important to work on on the margin depends on how suffering-focused or not someone's values are. Having said that, I totally agree that more people should be concerned about s-risks and it's concerning that the article (and the one on suffering-focused AI safety) didn't manage to convey this point well.
0[comment deleted]

Thanks for writing this post. Actually, one thing that I really liked about CFAR is that they gave a general introduction at the start of the workshop about how to approach personal development. This meant that everyone could approach the following lectures with an appropriate mindset of how they were supposed to be understood. I like how this post uses the same strategy.

0[anonymous]
Heh. This post was actually originally titled "Opening Session" in a blatant ripoff of their model, but I changed the name last minute :P.

Part of the problem at the moment is that the community doesn't have a clear direction like it did when Eliezer was in charge. There was talk about starting an organisation in charge of spreading rationality before, but this never actually seems to have happened. I am optimistic about the new site that is being worked on though. Even though content is king and I don't know how much any of the new features will help us increase the amount of content, I think that the psychological effect of having a new site will be massive.

5sapphire
It's unclear whether the community can or should have a "leader" again. A lot of the community no longer sufficiently agrees with Eliezer. The only person enough people would consent to follow is Scott 'The Rightful Caliph' Alexander. And Scott doesn't want the job. I think the community can flourish despite remaining decentralized. But it's admittedly trickier.

I probably don't have time to be involved in this, but just commenting to note my approval of this project and appreciation for anyone who chooses to contribute. One major advantage of this project is that any amount of effort here will provide value - it isn't like a spaceship that isn't useful half-built.

The fact that an agent has chosen to offer the bet, as opposed to the universe, is important in this scenario. If they are trying to make money off you, then the way to do that is to offer an unbalanced bet in the expectation that you will take the wrong side. For example, you might think you have inside information, but they know it is actually unreliable.

The problem is that you have to always play when they want, whilst the other person only has to sometimes play.

So I'm not sure if this works.

Partial analysis:

Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is considered to be a perfectly rational agent who can utilise their available information to calculate odds optimally or at least as well as Cameron, so this suggests David has some quite significant information.

Now, Cameron might have his own information that he suspects David does not have, and Cameron knows that David has no way of knowing that he has this information. Taking this info into account, and the fact that... (read more)

Thanks for posting this. I've always been skeptical of the idea that you should offer two sided bets, but I never broke it down in detail. Honestly, that is such an obvious counter-example in retrospect.

That said, "must either accept the bet or update their beliefs so the bet becomes unprofitable" does not work. The offering agent has an incentive to only ever offer bets that benefit them since only one side of the bet is available for betting.

I'm not certain (without much more consideration), but Oscar_Cunningham's solution of always taking one half of a two-sided bet sounds more plausible.

0casebash
Partial analysis: Suppose David is willing to stake 100:1 odds against Trump winning the presidency (before the election). Assume that David is considered to be a perfectly rational agent who can utilise their available information to calculate odds optimally, or at least as well as Cameron, so this suggests David has some quite significant information. Now, Cameron might have his own information that he suspects David does not have, and Cameron knows that David has no way of knowing that he has this information. Taking this info into account, and the fact that David offered to stake 100:1 odds, Cameron might calculate 80:1 when his information is incorporated. So this would suggest that Cameron should take the bet, as the odds are better than David thinks. Except, perhaps David suspected that Cameron had some inside info and actually thinks the true odds are 200:1 - he only offered 100:1 to fool Cameron into thinking it was better than it was - meaning that the bet is actually bad for Cameron despite his inside info. Hmm... I still can't get my head around this problem.
0cousin_it
Right, and with two-sided bets there's no incentive to offer them at all. One-sided bets do get offered sometimes, so you get a chance for free information (if the other agent is more informed than you) or free money (if you think they might be less informed).
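To make the odds arithmetic in the analysis above concrete, here is a minimal sketch of the taker's expected value at quoted odds. The function and the numbers are illustrative assumptions, not anything stated in the thread:

```python
# Sketch: expected value (per unit staked) of taking a bet quoted at
# `odds`:1 against an event, given your own probability estimate p.
# Numbers below are illustrative assumptions, not from the thread.

def ev_of_taking(odds: float, p: float) -> float:
    """Taker stakes 1 unit, wins `odds` units if the event happens, loses 1 otherwise."""
    return p * odds - (1 - p)

# David quotes 100:1 against. If Cameron's inside information puts the
# true odds at 80:1 against (p = 1/81), taking the bet looks profitable:
print(ev_of_taking(100, 1 / 81))   # ~ +0.247

# But if David's honest estimate was 200:1 (p = 1/201) and the 100:1
# quote was bait, Cameron's information may not be enough to save him:
print(ev_of_taking(100, 1 / 201))  # ~ -0.498

# Break-even: taking odds:1 is profitable iff p > 1/(odds + 1);
# here that is p > 1/101, roughly 0.0099.
```

The asymmetry the thread is circling is visible here: both sides agree on the payoff formula, so the disagreement is entirely about p, and whoever controls which side of the bet is offered can exploit that.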

What is Esalen?

3ChristianKl
The Wikipedia page is https://en.wikipedia.org/wiki/Esalen_Institute In his "Cargo Cult Science" speech, Feynman describes the place by saying:

What's Goodhart's Demon?

2Duncan Sabien (Deactivated)
A riff on Goodhart's Law, which is that any measure which becomes a target ceases to be a good measure, or more broadly the dynamic behind "teaching to the test."

The biggest challenge with getting projects done within the Less Wrong community will always be that people have incredibly different ideas of what should be done. Everyone has their own ideas; few people want to join in other people's ideas. I'll definitely be interested to see how things turn out after 3 months.

I like the idea of spreading popularity around when justified, ie. high status people pointing out when someone has a particular set of knowledge that people may not know that they could benefit from or giving them credit for interesting ideas. These seem important for a strong community and additionally provide benefits to the rest of the community by allowing them to take advantage of each other's skills.

"Seems fraught with philosophical gobbledygook and circular reasoning to specify what about "because the teacher said so" it is that isn't as "mathematical" as "because you're summing ones and tens separately"."

"Because you're summing ones and tens separately" isn't really a complete gears level explanation, but a pointer or hint to one. In particular, if you are trying to explain the phenomenon formally, you would begin by defining a "One's and ten's representation" of a number n as a tuple (a,b)... (read more)

I'm still confused about what Gear-ness is. I know it is pointing to something, but it isn't clear whether it is pointing to a single thing, or a combination of things. (I've actually been to a CFAR workshop, but I didn't really get it there either).

Is gear-ness:

a) The extent to which a model allows you to predict a singular outcome given a particular situation? (Ideal situation - fully deterministic like Newtonian physics)

b) The extent to which your model includes each specific step in the causation? (I put my foot on the accelerator -> car goes faster... (read more)

"I'm still confused about what Gear-ness is."

Honestly, so am I. I think there's work yet to be done in making the idea of Gears become more Gears-like. I think it has quite a few, but I don't have a super precise definition that feels to me like it captures the property exactly.

I thought of this when Eliezer sent me a draft of a chapter from a book he was working on. In short (and possibly misrepresenting what he said since it's been a long time since I've read it), he was arguing about how there's a certain way of seeing what's true that made him immune ... (read more)

6Raemon
I think this is part of it but the main metaphor is more like "your model has no hand-wavy-ness. There are clear reasons that the parts connect to each other, that you can understand as clearly as you can understand how gears connect to each other."

Out of:

1) "Hey, sorry to interrupt but this sounds like a tangent, maybe we can come back to this later during the followup conversation?"

and:

2) "Hey, just wanted to make sure some others got a chance to share their thoughts."

I would suggest that number 1) is better, as 2) suggests that they are selfishly dominating the conversation.

0ChristianKl
In our local LW space I think (1) would feel strange. To me, it seems like a sentence that people would say in other communities. I could only imagine saying something like that if I would be truly interested in revisiting the topic later.
5Raemon
I think it depends on whether they're a repeat offender and you want to make it clear why they need to give other people a turn.

You used the word umbrella, and if I were going with a slightly less catchy but more accurate summary, I would write, "Akrasia is an umbrella term". I think the word is still useful, but only if you remember this. The first step in solving an Akrasia problem is to notice that the problem falls within the Akrasia umbrella; the second step is to then figure out where it falls within that umbrella.

2[anonymous]
This is accurate. Thanks for a revised summary!

Because the whole point of these funds is that they have the opportunity to invest in newer and riskier ventures. On the other hand, GiveWell tries to look for interventions with a strong evidence base.

They expect GiveWell to update its recommendations, but they don't necessarily expect GiveWell to evaluate just how wrong a past recommendation was. Not yet anyway, but maybe this post will change this.

0ChristianKl
That still leaves the question of why you think people expect funds to report on the success of their investments but don't expect it from GiveWell.

A major proportion of the clients will be EAs

Because people expect this from funds.

2ChristianKl
You think people don't expect it from GiveWell?

To what extent is it expected that EAs will be the primary donors to these funds?

If you want to outsource your donation decisions, it makes sense to outsource to someone with similar values. That is, someone who at least has the same goals as you. For EAs, this is EAs.

No, because the fund managers will report on the success or failure of their investments. If the funds don't perform, then their donations will fall.

460Benquo

It's been a year. I looked at the fund pages and the only track record info I found was lists of grants made and dollar amounts:

Global Health and Development Fund

Animal Welfare Fund

Long-Term Future Fund

Effective Altruism Community Fund

I emailed CEA asking whether there was any track record info, and was directed to the same pages. I expect that this will change no one's mind on anything whatsoever. I regret doing the research to write this comment.

9Peter Wildeford
Why do you think this? The outside view suggests this won't happen -- disclosing success and failure is uncommon in the non-profit space.
3ChristianKl
Why are the fund managers going to report on the success of their investments when an organisation like GiveWell doesn't do this (as per the example in the OP)?

Wanting a board seat does not mean assuming that you know better than the current managers - only that you have distinct and worthwhile views that will add to the discussion that takes place in board meetings. This may be true even if you know worse than the current managers.

2Benquo
Is the idea that someone might think that current managers are wrongly failing to listen to them, but if forced to listen, would accept good ideas and reject bad ones? That seems plausible, though the more irrational you think the current managers are in the relevant ways, the more you should expect your influence to be through control rather than contributing to the discourse. Overall this seems like a decent alternative hypothesis.

All I ever covered in university was taking the Schrödinger equation as given, and then quantum physics did whatever that equation said.

Infinite sums/sequences are a particular interest of mine. I would love to know how these sums appear in string theory - what's the best introduction/way into this? You said these sums appear all over physics. Where do they appear?

2shev
Well, Numberphile says they appear all over physics. That's not actually true. They appear in like two places in physics, both deep inside QFT, mentioned here. QFT uses a concept called renormalization to drop infinities all over the place, but it's quite sketchy and will probably not appear in whatever final form physics takes when humanity figures it all out. It's advanced stuff and not, imo, worth trying to understand as a layperson (unless you already know quantum mechanics in which case knock yourself out).
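For context, the Numberphile result under discussion is presumably the divergent sum 1 + 2 + 3 + ..., which is assigned the value -1/12 by zeta-function regularisation rather than by ordinary convergence. A minimal statement:

```latex
% For Re(s) > 1 the series converges and defines the Riemann zeta function:
\[
  \zeta(s) = \sum_{n=1}^{\infty} n^{-s}
\]
% Analytic continuation extends \zeta to s = -1, where
\[
  \zeta(-1) = -\tfrac{1}{12}
\]
% Writing 1 + 2 + 3 + \dots = -1/12 abbreviates this continuation; the
% series itself diverges. The regularised value is what appears in QFT
% (e.g. the Casimir effect) and in fixing the critical dimension of
% bosonic string theory.
```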

"This may also be somewhat pedantic, but in something like quantum physics, because of this gap in knowledge, it'd be very obvious who the professor was to an audience that doesn't know quantum physics, even if it wasn't made explicitely clear beforehand." - I met one guy who was pretty convincing about confabulating quantum physics to some people, even though it was obvious to me he was just stringing random words together. Not that I know even the basics of quantum physics. He could actually speak really fluently and confidently - just everything was a bunch of non-sequitors/new age mysticism. I can imagine a professor not very good at public speaking who would seem less convincing.
