
Comment author: ArisKatsaris 01 February 2018 01:26:25PM 0 points [-]

Meta Thread

Comment author: LessWrong 01 February 2018 10:23:07AM 2 points [-]

When exactly did that happen? I wasn't around when the site was "highly active" (which I assume was when EY was writing the Sequences posts), but do we have any statistics on that? I could build a small scraper and graph comment counts over time, but somebody with access to the database could do it much better.

I don't remember ever seeing statistics on that.
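For what it's worth, here's roughly what such a scraper could look like (a sketch only; the URL and the assumption that timestamps sit in `<time datetime="...">` elements are guesses about the old LW markup, not verified):

```python
# Minimal sketch of the scraper idea: tally comment timestamps per month.
import collections

import requests
from bs4 import BeautifulSoup

def count_comments_by_month(url):
    """Fetch one page and bucket any <time datetime="..."> stamps by month."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    counts = collections.Counter()
    for tag in soup.find_all("time"):
        stamp = tag.get("datetime", "")  # e.g. "2018-02-01T13:26:25"
        if len(stamp) >= 7:
            counts[stamp[:7]] += 1       # bucket by "YYYY-MM"
    return counts

for month, n in sorted(count_comments_by_month(
        "http://lesswrong.com/r/discussion/new/").items()):
    print(month, n)
```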

Comment author: LessWrong 01 February 2018 10:02:24AM *  4 points [-]

Confession thread. I've been in love with LessWrong for about 5 years (my first post was this, found via 4chan. Maybe it isn't exceptional, but it has always had a place in my heart. In fact, it gave me the courage to get my first job when I was scared of being outside) and I've never admitted it. Now that the site is about to go away, I can finally confess, even though I've been a horrible student. Take that, LW2: you'll never be as awesome.

NEWSFLASH: HPMOR chapter 123 released: Something to protect: Less Wrong.

Comment author: diegocaleiro 01 February 2018 02:50:20AM 1 point [-]

Eric Weinstein argues strongly that returns are no longer at 20th-century levels; he says they are now vector fields, not scalars. I concur (not that I matter).

Comment author: LessWrong 30 January 2018 09:52:22PM *  2 points [-]

Excellent job. You got bonus points for writing it in Lisp. I assume you've read SICP?

Comment author: chowfan 30 January 2018 08:44:30PM *  0 points [-]

If the moderators had enough voting power or stake, it would be a pure prediction market: the decision could simply be the final voting result. Maybe some mechanism can be designed to ensure that early voters get some profit if their vote matches the final result. And if an incident happens and support for the other side increases substantially because of it, the final result can also be reversed.
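To make that concrete, here's one possible payout rule (my construction, not necessarily what chowfan has in mind): voters who match the final result split the losing stakes, weighted so that earlier voters earn more.

```python
def settle(votes, final):
    """votes: list of (voter, choice, stake), in chronological order.
    Winners get their stake back plus a share of the losing stakes,
    weighted by how early they voted."""
    winners = [(i, v, s) for i, (v, c, s) in enumerate(votes) if c == final]
    losers_pot = sum(s for _, c, s in votes if c != final)
    weights = [(len(votes) - i) * s for i, _, s in winners]  # earlier -> larger
    total_w = sum(weights) or 1
    return {v: s + losers_pot * w / total_w
            for (_, v, s), w in zip(winners, weights)}

votes = [("alice", "remove", 10), ("bob", "keep", 10), ("carol", "remove", 5)]
print(settle(votes, "remove"))
# {'alice': 18.57..., 'carol': 6.42...}: alice voted earliest, so she
# captures most of bob's losing stake.
```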

Comment author: username2 30 January 2018 04:55:25PM 2 points [-]

I have become very used to the interface here and the various ways it can be manipulated, so I prefer it greatly even if this is just due to inertia. Glad to see more than 6 names on the Last 30 Days list. But it's clear that this is a dead zone and I've become resigned to the idea that this will soon be gone.

I do enjoy what's going on at LW2, even though it's still in open beta, a bit broken in a few areas, and cluttered by too many specific requests and follow-ups about personal preferences for site look and feel, plus moderator chat that really feels like it could be kept behind closed doors -- I hope this is just a feature of the beta that will be ironed out. And I fully applaud the approach to trolls (so far, anyway).

Overall it's fun to see people jockeying for the position of Next Great Poster Who Will Lead Us From Darkness, especially those who aren't trying to copy Previous Heroes. Some fall terribly short, but it's interesting to see the variety of voices. It does not seem to be heading toward an obvious local minimum, something I worried about in the early days of LW 2.0. Maybe a few local minima, but that's fine with me.

Comment author: username2 30 January 2018 04:48:58PM 3 points [-]

Thank you very much. I read LW primarily for the discussions that are spurred by posts/articles, and the comments are effectively impossible for me to read with the standard interface. After a quick glance/browse I'm very encouraged about trying Greater Wrong as my regular reading mode.

Comment author: cousin_it 30 January 2018 01:01:08PM *  1 point [-]

Thank you for doing this!

Comment author: cousin_it 29 January 2018 11:34:12PM *  5 points [-]

I just spent the last few days rereading old posts and comments, reminiscing about how much fun we had as a fandom. Then we tried to turn into a goal-based community, but it turns out social connections arise from having a common focus of emotion, not from hard work.

Comment author: bogus 29 January 2018 08:37:35PM *  1 point [-]

Thanks for adding this, then! Personally, I'm just waiting to create an account/log in there until the 'final' LW importation goes through. (Users who were late setting the e-mail addresses on their accounts here did not have them imported to LW2 initially, which can lead to all sorts of problems. A new import from LW's updated user list could fix this - or maybe it can't, but then there's no loss in just creating a new account!)

It would be nice to have more than just a single page of 'new' content, since, as it is, it can be hard even to check out all the recent posts from the past few days. It's great that the archive is available, though. (Similarly, it would be great if we could access more of a user's posting and commenting history directly from their user page. On LW and LW2 you can see everything a user has posted to the site simply by browsing from their userpage, and many LW users rely on this feature as a de facto 'index' of what they've contributed here.)

Comment author: saturn 29 January 2018 07:08:02PM *  5 points [-]

Hi, I'm the one who created Greater Wrong. I'm intending to announce it more widely once it doesn't have so many conspicuously missing features, but it's something I'm working on in my spare time so progress is somewhat gradual. You can, however, already log in and post comments. You can use your existing LW 2.0 username/password or create a new one. Let me know if you have any problems.

Comment author: ChristianKl 29 January 2018 04:12:29PM 2 points [-]

It seems like a personal message sent to me on the new website got lost.

At the beginning the site was too slow, but now it's a lot better.

Notifications of replies to your posts don't work yet, and I think that's the last thing that has to be done to make the new website a clear improvement over the existing version.

Comment author: bogus 29 January 2018 03:44:07PM 6 points [-]

There is an alternative interface to the new site at Greater Wrong. It has a few problems (namely, it's hard to access archived content; all you get is a day-by-day listing of posts), but compared to Lesser Wrong it's at least usable. LW2 should support it officially in addition to the Lesser Wrong website, and perhaps even add features like logging in and posting content through it.

Comment author: Thomas 29 January 2018 09:08:39AM 0 points [-]

I tend to agree. I don't know whether it's just habit or something else, like a conservative streak in myself and many others, but that doesn't really matter.

The new site isn't that much better. It should be substantially better than this one to make for a smooth transition.

Comment author: LessWrong 28 January 2018 10:49:55PM *  13 points [-]

Old site love thread.

Just curious how many people like, and possibly even prefer, the old site.

I'd also like to know if anyone else has had terrible experiences with site redesigns. For some reason, they always end up terrible. Likelihood of bias: 60%.

Comment author: Elo 28 January 2018 01:51:35AM 0 points [-]

Welcome! You might like to hang out on the soon-to-be-merged new site - http://www.lesserwrong.com

This site is inactive.

Comment author: bidbid 27 January 2018 09:07:59PM 0 points [-]

Hello! I'm just a guy who found this site by chance. I have a "system" I base my decision-making on, and while I haven't been able to find "problems" in my way of thinking, I'm sure there must be some, so I wanted to write it down for you to dissect. Probably a lot of it is stuff you've heard already, but oh well :D

Comment author: cousin_it 27 January 2018 09:53:37AM *  0 points [-]

Yeah, makes sense that "enlightenment" would be a physiological state, not a mental state. Probably many other states are like that too. I've noticed that when I want to make my mind behave a certain way (e.g. focused, sociable, or creative), a quick physical warm-up (like a few push-ups or squat jumps) works much better than trying to change tracks purely mentally.

Comment author: chowfan 27 January 2018 08:26:18AM *  0 points [-]

Hi Wei. Do you have any comments on Ethereum, ICOs (Initial Coin Offerings), and the hard forks of Bitcoin? Do you think they will solve the problem of Bitcoin's fixed monetary supply, since they have somehow brought much more "money" into existence (or securities like stock; I'm not sure how to classify them)?

Do you have any comments about Bitcoin's scaling fight between larger blocks and 2nd-layer payment channels such as the Lightning Network?

Comment author: Lemmih 26 January 2018 05:08:51PM 0 points [-]

Site moved to https://clozecards.com/

My attention has mostly been elsewhere but my vocabulary is slowly growing.

Comment author: Joy 26 January 2018 03:30:54AM 0 points [-]

Not a specific piece, but a great resource if you appreciate animation.


Spotlights the obscure, and highlights the quirkier details of what’s mainstream.

Comment author: Gyrodiot 25 January 2018 02:14:51PM *  1 point [-]

The Community Weekend of 2017 was one of the highlights of my past year. I strongly recommend it.

Excellent discussions, very friendly organizers, awesome activities.

Signed up!

Comment author: ChristianKl 24 January 2018 09:39:16PM 0 points [-]

Most of the comments provide arguments without referencing any sources to back up their claims. The result is that this system filters for popular arguments instead of filtering for arguments that can be well supported by sources.

Comment author: ChristianKl 24 January 2018 09:00:15PM 0 points [-]

Your landing page doesn't show me any current discussion the way Reddit or Quora do. I think that's likely a bad decision.

Comment author: cousin_it 23 January 2018 12:53:50PM 0 points [-]

LW's period of fastest growth was driven by Eliezer's posts, which were accessible, advanced, entertaining, etc. Encouraging other people to do work like that could be more promising than splitting the goals as you propose.

Comment author: gwern 22 January 2018 07:19:00PM 0 points [-]
Comment author: akvadrako 21 January 2018 10:48:29AM *  0 points [-]

In any case, are you making the claim that if a neural net were able to figure out the rules of the game by examining a few million games, you would accept that it's a universal knowledge creator?

If it could figure out the rules of any game, that would be remarkable. That logic would also really help find bugs in programs or beat the stock market.

Comment author: skjoldburger 20 January 2018 10:15:07AM 0 points [-]

The internet can, I believe, fix itself. Kialo is one attempt at doing so.

The pros of Kialo appear to be that (1) participants are civil, (2) arguments are deconstructed, and (3) one can look at a topographic map of an argument. Also, the system checks whether an argument has already been made elsewhere, so as to prevent repetition.

Deeper than this is what could be called the Wikipedia effect. Though anyone can edit a page on Wikipedia, pages more or less keep getting better, particularly in areas that are not controversial. There is a constant improvement process in place.

That is Wikipedia, though. Arguments are inherently controversial, but with editors and flagging I can imagine that improvements could lead to better arguments; I cannot say whether that is in fact the case. One troubling part is that sub-arguments get voted up or down on the pros vs. cons list. Truth is not democratic.

Furthermore, as there are no barriers to entry for participants, such as a test of reasoning skills or a test to eliminate those with a pathological bias -- the hallmark of an online troll -- the voting process hinders rather than furthers the "Wikipedia effect".

Comment author: fortyeridania 19 January 2018 04:13:29AM 0 points [-]

Hey, I just saw this post. I like it. The coin example is a good way to lead in, and the non-quant teacher example is helpful too. But here's a quibble:

If we follow Bayes' Theorem, then nothing is just true. Things are instead only probable because they are backed up by evidence.

The map is not the territory; things are still true or false. Bayes' theorem doesn't say anything about the nature of truth itself; whatever your theory of truth, that should not be affected by the acknowledgement of Bayes' theorem. Rather, it's our beliefs (or at least the beliefs of an ideal Bayesian agent) that are on a spectrum of confidence.
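A tiny simulation of the distinction (my own illustration, assuming the post's coin example was along these lines): the coin's bias is simply a fact about the territory, while the credence is a fact about our map that moves with evidence.

```python
import random

random.seed(0)
true_bias = 0.7  # the territory: this is simply true, not "probable"

heads = sum(random.random() < true_bias for _ in range(100))
# Our map: posterior mean under a uniform Beta(1, 1) prior.
credence = (heads + 1) / (100 + 2)
print(f"true bias: {true_bias}, credence after 100 flips: {credence:.3f}")
```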

Comment author: UNVRSLWSDM 19 January 2018 01:28:13AM 0 points [-]

yeah, torture 1 and 0 with endless multiplication

Comment author: Good_Burning_Plastic 18 January 2018 04:18:57PM 0 points [-]

A mysterious but trustworthy agent named "Laplace's Demon" has recently appeared, and informed everyone that, to a first approximation, the world is currently in one of seven possible quantum states.

What is the word "quantum" doing there? Repeat after me: Quantum superpositions are not about epistemic uncertainty! Quantum superpositions are not about epistemic uncertainty! Quantum superpositions are not about epistemic uncertainty!

Comment author: Good_Burning_Plastic 17 January 2018 01:42:26PM 0 points [-]

The following rules are stipulated: There are four possible outcomes: "Hillary elected and US nuked", "Hillary elected and US not nuked", "Jeb elected and US nuked", "Jeb elected and US not nuked". Participants in the market can buy and sell contracts for each of those outcomes; the contract which corresponds to the actual outcome will expire at $100, and all other contracts will expire at $0.

An issue with that is that, all other things being equal, $100 will be worth more if the US is not nuked than if it is.
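For concreteness, here's the arithmetic with made-up prices (hypothetical numbers, not from any real market); the dollar-value caveat above means these implied "probabilities" are skewed toward the not-nuked outcomes.

```python
# Contract prices in dollars; each pays $100 if its outcome occurs.
prices = {
    ("Hillary", "nuked"): 2.0,
    ("Hillary", "not nuked"): 48.0,
    ("Jeb", "nuked"): 1.0,
    ("Jeb", "not nuked"): 49.0,
}
total = sum(prices.values())  # ~$100 if the market is arbitrage-free

# Risk-neutral implied probability of each joint outcome:
probs = {k: v / total for k, v in prices.items()}

# Implied P(nuked | candidate elected), by conditioning:
for cand in ("Hillary", "Jeb"):
    p_cand = probs[(cand, "nuked")] + probs[(cand, "not nuked")]
    print(f"P(nuked | {cand}) = {probs[(cand, 'nuked')] / p_cand:.3f}")
```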

Comment author: Caspar42 16 January 2018 02:20:22PM 0 points [-]

The issue with this example (and many similar ones) is that to decide between interventions on a variable X from the outside, EDT needs an additional node representing that outside intervention, whereas Pearl-CDT can simply do(X) without the need for an additional variable. If you do add these variables, then conditioning on that variable is the same as intervening on the thing that the variable intervenes on. (Cf. section 3.2.2 "Interventions as variables" in Pearl's Causality.)
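A minimal numeric sketch of this point, with a hypothetical confounded model (U causes both X and Y, and an explicit intervention node I forces X when active): conditioning on I reproduces do(X=1), while naively conditioning on X does not.

```python
import itertools

P_I = 0.5  # prior probability that the intervention node I is active

def joint():
    """Enumerate (I, U, X, Y) states with their probabilities; P(U=1)=0.5."""
    for i, u in itertools.product([0, 1], [0, 1]):
        x = 1 if i else u   # I=1 forces X=1; otherwise X copies the confounder U
        y = int(x == u)     # Y depends on both X and U
        yield (i, u, x, y), (P_I if i else 1 - P_I) * 0.5

def expect_y(cond):
    num = sum(p * s[3] for s, p in joint() if cond(s))
    return num / sum(p for s, p in joint() if cond(s))

print(expect_y(lambda s: s[0] == 1))           # condition on I=1 -> 0.5
print(sum(0.5 * int(1 == u) for u in (0, 1)))  # do(X=1): cut the U->X link -> 0.5
print(expect_y(lambda s: s[2] == 1))           # naive P(Y | X=1) -> 0.667
```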

Comment author: Larks 08 January 2018 02:13:07AM 0 points [-]

The biodeterminist's guide is now 5 years old. Does anyone know of an updated version?

Comment author: RedMan 08 January 2018 02:10:42AM *  0 points [-]

An ethical injunction doesn't work for me in this context; killing can be justified by lots of baser motives than "preventing infinity suffering".

So, instead of a blender, I could sell hats with tiny brain pulping shaped charges that will be remotely detonated when mind uploading is proven to be possible, or when the wearer dies of some other cause. As long as my marketing reaches some percentage of people who might plausibly be interested, then I've done my part.

I assess that the number is small, and that anyone seriously interested in such a device likely reads LessWrong, and may be capable of making some arrangement for brain destruction themselves. So, by making this post and encouraging a potential upload to pulp themselves prior to upload, I have some >0 probability of preventing infinity suffering.

I'm pretty effectively altruistic, dang. It's not even February.

I prefer your borg scenarios to individualized uploading. I feel like it's technically feasible using extant technology, but I'm not sure how much interest there really is in mechanical telepathy.

Comment author: Kaj_Sotala 07 January 2018 04:17:53PM *  1 point [-]

I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: 'effort should be made to expend resources on preventing suffering, maximize the ratio of suffering avoided to cost expended'

I interpret your paper as rejecting the argument advanced by Prof. Hanson that, if among all future variants of you the copies enjoying 'heaven' vastly outnumber the copies suffering 'hell', then on balance uploading is a good. Based on your paper's citation of Omelas, I assert that you would weight 'all future heaven copies' in aggregate, and all future hell copies individually.

Well, our paper doesn't really endorse any particular moral theory: we just mention a number of them, without saying anything about which one is true. As we note, if one is e.g. something like a classical utilitarian, then one would take the view by Hanson that you mention. The only way to really "refute" this is to say that you don't agree with that view, but that's an opinion-based view rather than a refutation.

Similarly, some people accept the various suffering-focused intuitions that we mention, while others reject them. For example, Toby Ord rejects the Omelas argument, and gives a pretty strong argument for why, in this essay (under the part about "Lexical Threshold NU", which is his term for it). Personally I find the Omelas argument very intuitively compelling, but at the same time I have to admit that Ord also makes a compelling argument against it.

That said, it's still possible and reasonable to end up accepting the Omelas argument anyway; as I said, I find it very compelling myself.

(As an aside, I tend to think that personal identity is not ontologically basic, so I don't think that it matters whose copy ends up getting tortured; but that doesn't really help with your dilemma.)

If you do end up with that result, my advice would be for you to think a few steps forward from the brain-shredding argument. Suppose that your argument is correct, and that nothing could justify some minds being subjected to torture. Does that imply that you should go around killing people? (The blender thing seems unnecessary; just plain ordinary death already destroys brains quite quickly.)

I really don't think so. First, I'm pretty sure that your instincts tell you that killing people who don't want to be killed, when that doesn't save any other lives, is something you really don't want to do. That's something that's at least worth treating as a strong ethical injunction, to only be overridden if there's a really really really compelling reason to do so.

And second, even if you didn't care about ethical injunctions, it looks pretty clear that going around killing people wouldn't actually serve your goal much - you'd just get thrown in prison pretty quickly, and also cause enormous backlash against the whole movement of suffering-focused ethics; anyone even talking about Omelas arguments would from that moment on get branded as "one of those crazy murderers" and everyone would try to distance themselves from them; which might just increase the risk of lots of people suffering from torture-like conditions, since a movement that was trying to prevent them would get discredited.

Instead, if you take this argument seriously, then what you should instead be doing is to try to minimize s-risks in general: if any given person ending up tortured would be one of the worst things that could happen, then large numbers of people ending up tortured would be even worse. We listed a number of promising-seeming approaches for preventing s-risks in our paper: none of them involve blenders, and several of them - like supporting AI alignment research - are already perfectly reputable within EA circles. :)

You may also want to read Gains from Trade through Compromise, for reasons to try to compromise and find mutually-acceptable solutions with people who don't buy the Omelas argument.

(Also, I have an older paper which suggests that a borg-like outcome may be relatively plausible, given that it looks like linking brains together into a borg could be relatively straightforward once we did have uploading - or maybe even before, if an exocortex prosthesis that could be used for mind-melding was also the primary uploading method.)

Comment author: Hafurelus 07 January 2018 03:52:28PM 0 points [-]

Thank you!

Comment author: Lumifer 05 January 2018 04:35:27PM *  1 point [-]

There seems to be a complexity limit to what humans can build. A full GAI is likely to be somewhere beyond that limit.

The usual solution to that problem -- see the EY's fooming scenario -- is to make the process recursive: let a mediocre AI improve itself, and as it gets better it can improve itself more rapidly. Exponential growth can go fast and far.

This, of course, gives rise to another problem: you have no idea what the end product is going to look like. If you're looking at the gazillionth iteration, your compiler flags were probably lost around the thousandth iteration and your chained monitor system mutated into a cute puppy around the millionth iteration...

Probabilistic safety systems are indeed more tractable, but that's not the question. The question is whether they are good enough.

Comment author: MaryCh 05 January 2018 03:23:17PM 0 points [-]

Well, in my life I can recall two instances off-hand. There have probably been more of them, but at the very least, they seem to be completely unrelated to attempts to raise well-being levels...

Comment author: RedMan 05 January 2018 01:32:59PM 1 point [-]

Thank you for the detailed response!

I agree that the argument you advance here is the sane one, but I have trouble reconciling it with my interpretation of Effective Altruism: 'effort should be made to expend resources on preventing suffering, maximize the ratio of suffering avoided to cost expended'

I interpret your paper as rejecting the argument advanced by Prof. Hanson that, if among all future variants of you the copies enjoying 'heaven' vastly outnumber the copies suffering 'hell', then on balance uploading is a good. Based on your paper's citation of Omelas, I assert that you would weight 'all future heaven copies' in aggregate, and all future hell copies individually.

So if the probability of one or more hell copies of an upload coming into existence for as long as any heaven copy exceeds the probability of a single heaven copy existing long enough to outlast all the hell copies, that person's future suffering will eventually exceed all suffering previously experienced by biological humans. Under the EA philosophy described above, this creates a moral imperative to prevent that scenario, possibly with a blender.

If uploading tech takes the form of common connection and uploading to an 'overmind', this can go away--if everyone is Borg, there's no way for a non-Borg to put Borg into a hell copy, only Borg can do that to itself, which is, at least from an EA standpoint, probably an acceptable risk.

At the end of the day, I was hoping to adjust my understanding of EA axioms, not be talked down from chasing my friends around with a blender, but that isn't how things went down.

SF is a tolerant place, and EAs are sincere about having consistent beliefs, but I don't think my talk title "You helped someone avoid starvation with EA and a large grant. I prevented infinity genocides with a blender" would be accepted at the next convention.

Comment author: Kaj_Sotala 05 January 2018 06:55:15AM 1 point [-]

Awesome paper.

Thank you very much!

Curious about your take on my question here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/

So, I agree that mind uploads being tortured indefinitely is a very scary possibility. And it seems very plausible that some of that is going to happen in a world with mind uploads, especially since it's going to be impossible to detect from the outside, unless you are going to check all the computations that anyone is running.

On the other hand, we don't know for sure what that world is going to be like. Maybe there will be some kind of AI in charge that does check everyone's computations, maybe all the hardware that gets sold is equipped with built-in suffering-detectors that disallow people from running torture simulations, or something. I'll admit that both of these seem somewhat unlikely or even far-fetched, but then again, someone might come up with a really clever solution that I just haven't thought of.

Your argument also seemed to me to have some flaws:

Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity. Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be 'free'.

You can certainly make the argument that, for any event with non-zero probability, over a sufficiently long lifetime that event will happen at some point. But if you are using that to argue that an upload will be captured by someone sadistic eventually, shouldn't you also hold that they will escape eventually?

This argument also doesn't seem to be unique to mind uploading. Suppose that we achieved biological immortality and never uploaded. You could also make the argument that, now that people can live until the heat-death of the universe (or at least until our sun goes out), then their lifetimes are sufficiently long that at some point in their lives they are going to be kidnapped and tortured indefinitely by someone sadistic, so therefore we should kill everyone before we get radical life extension.

But for biological people, this argument doesn't feel anywhere near as compelling. In particular, this scenario highlights the fact that even though there might be a non-zero probability for any given person to be kidnapped and tortured during their lifetimes, that probability can be low enough that it's still unlikely to happen even during a very long lifetime.
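As a quick sanity check on this point (illustrative numbers only): with a constant per-year capture risk p, the chance of at least one capture over T years is 1 - (1-p)^T, which stays small for small enough p even over very long lifetimes.

```python
# P(captured at least once in T years) at a constant per-year risk p.
for p in (1e-6, 1e-9):
    for T in (10_000, 1_000_000):
        risk = 1 - (1 - p) ** T
        print(f"p={p:g}, T={T:>9,} years -> {risk:.6f}")
# p=1e-9 yields only ~0.001 even over a million years, while p=1e-6
# yields ~0.63: everything hinges on how small p actually is.
```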

You could reasonably argue that for uploads it's different, since it's easier to make a copy of an upload undetected, etc., so the probability of being captured during one's lifetime is larger. But note that there have been times in history when there actually was a reasonable chance for a biological human to be captured and enslaved during their lifetime! Back during the era of tribal warfare, for example. But we've come a long way from those times, and in large parts of the world, society has developed in such a way as to almost eliminate that risk.

That, in turn, highlights the point that it's too simple to just look at whether we are biological or uploads. It all depends on how exactly society is set up, and how strong are the defenses and protections that society provides to the common person. Given that we've developed to the point where biological persons have pretty good defenses against being kidnapped and enslaved, to the point where we don't think that even a very long lifetime would be likely to lead to such a fate, shouldn't we also assume that upload societies could develop similar defenses and reduce the risk to be similarly small?

Comment author: RedMan 05 January 2018 12:15:25AM 1 point [-]

Curious about your take on my question here: http://lesswrong.com/lw/os7/unethical_human_behavior_incentivised_by/ Awesome paper.

Comment author: RedMan 04 January 2018 07:17:21PM *  0 points [-]

I hadn't thought about it that way.

I do think that either compile-time flags for the AI system, or a second 'monitor' system chained to the AI system to enforce the named rules, would probably limit the damage.

The broader point is that probabilistic AI safety is probably a much more tractable problem than absolute AI safety, for a lot of reasons. To further the nuclear analogy: emergency shutdown is probably a viable safety measure for a lot of the plausible 'paperclip maximizer turns us into paperclips' scenarios.

"I need to disconnect the AI safety monitoring robot from my AI-enabled nanotoaster robot prototype because it keeps deactivating it" might still be the last words a human ever speaks, but hey, we tried.

Comment author: Lumifer 04 January 2018 03:47:35PM 0 points [-]

Are you reinventing Asimov's Three Laws of Robotics?

Comment author: thefishinthetank 04 January 2018 07:27:36AM 1 point [-]

OTOH, joy is very different. It kind of just happens, unasked-for.

This is the happiness we are really searching for. The other kind is better described as pleasure.

Comment author: thefishinthetank 04 January 2018 07:23:57AM *  2 points [-]

Interesting post. I can definitely identify with the journey of exercise, supplementation, and spiritual exploration.

I would like to caution you that your connection between the calming effects of vasodilation and enlightenment might be a bit superficial. It seems you have discovered what it is like to be calm, or have equanimity. While being calm is both a prerequisite and a downstream effect of enlightenment, it is not to be confused with the deep knowledge of truth (enlightenment). Enlightenment is a deep subconscious insight that becomes more likely to happen when the mind is calm, clear, and alert.

Enlightenment is also not state dependent. It's often thought of as something you realize and don't forget. It also induces perceptual changes, like those described by Jeffrey Martin. Entering states where your mind is finding profound connections is not enlightenment, but it is a step closer to realizing that insight.

I'm posting this not to tear down your experience, but to urge you on. I'm suggesting that you may think you've sailed the seven seas, when in reality you've only seen a picture of a boat. Thinking you've found enlightenment and that it's not great is likely to steer you away from this path, which in my opinion would be unfortunate.

And how can I be so sure that you didn't find enlightenment? Those who find it don't discredit it. ;)

Comment author: RedMan 04 January 2018 02:00:03AM *  0 points [-]

Rules for an AI:

1. If an action it takes results in more than N logs of $ worth of damage to humans, or kills more than N logs of humans, transfer control of all systems it can provide control inputs to a designated backup (human, formally proven safe algorithmic system, etc.), then power down.

2. When choosing among actions which affect a system external to it, calculate the probable effect on human lives. If the probability of exceeding the N assigned in rule 1 is greater than some threshold Z, ignore that option; if no options are available, loop.

Most systems would be set to N = 1, Z = 1/10000, giving us four 9s of certainty that the AI won't kill anyone. Some systems (weapons, climate management, emergency dispatch systems) will need higher N and lower Z to maintain effectiveness.
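As a literal-minded sketch, the two rules amount to an action filter like the following (the harm estimator is hypothetical, and is of course where all the real difficulty lives):

```python
N = 1     # damage threshold: more than 10**N deaths counts as catastrophic
Z = 1e-4  # maximum acceptable probability of exceeding that threshold

def permitted(actions, estimate_p_harm):
    """Keep only actions whose estimated catastrophe probability is <= Z.
    An empty result corresponds to the 'loop' case in rule 2."""
    return [a for a in actions if estimate_p_harm(a, 10 ** N) <= Z]

# Example with a made-up estimator:
actions = ["toast bread", "reroute power grid"]
p_harm = {"toast bread": 1e-9, "reroute power grid": 3e-3}
print(permitted(actions, lambda a, threshold: p_harm[a]))  # ['toast bread']
```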

JFK had an N of something like 9 and a Z of 'something kind of high', and passed control to Lyndon B. Johnson, of 'I keep a minibar and a shotgun in the car on my farm so I can drive and shoot while intoxicated' fame. We survived that; we will be fine.

Are we done?

Comment author: RedMan 04 January 2018 01:45:11AM 0 points [-]

That's great to hear, stay safe.

This sort of data was a contributor to my choice of sport for general well being: https://graphiq-stories.graphiq.com/stories/11438/sports-cause-injuries-high-school#Intro

There is truth to it: https://www.westside-barbell.com/blogs/2003-articles/extra-workouts-2

Really grateful for the info. I never could put my finger on what exactly I didn't like about CM when I wasn't pushing myself; the stuff is amazing for preventing exercise soreness, though.

Comment author: Elo 03 January 2018 03:35:02PM 1 point [-]

Interesting that you say that about feeling bad when you're not lifting. There just wasn't any warning from anyone (there probably was, but I took no notice).

I have been back to doctors, and I do run several times a week these days. It's not set wrong, or else I couldn't run; I never got an x-ray.

I went back to trampolining 6 months later and injured myself trying to do something that I no longer had the muscles for. It strikes me as more dangerous than I was willing to admit. It's exercise that really pushes your body, and I'm not sure I'm comfortable with it compared to things that stay within the body's limits.

For example, rock climbing: you are limited by what your body lets you do, and you only lift your own weight. That's a lot closer to the safe limit than a trampoline, which is an external contraption that does things like compress your spine and shake your brain unnaturally.

Weakest-link theory was a bit of a joke, but I am sure there is some truth to it.

Comment author: RedMan 03 January 2018 12:50:47PM 0 points [-]

You back to trampolining yet?

Way to eat a broken bone and not seek medical attention for it. Someone I knew did about what you did and ended up having a doctor re-break and set the bone to fix things. Lots of 'newly fit' people, particularly teenagers, have your 'injury from stupidity' behavior pattern; this is one of the reasons professional athletes are banned from amateur sports by their contracts.

The great coach Louie Simmons is worth reading; he will expand your mind on your weakest-link theory.

My own conclusion on your magic enlightenment pill, based on my lived experience: super awesome when you're lifting, Fs you up a bit when you're not. Use it around intense exercise, otherwise avoid.

Comment author: 333kenshin 03 January 2018 09:07:21AM *  0 points [-]

As a Christian turned atheist, I can attest to the fact that church rituals do in fact encompass quite a few valid and effective techniques.

Consider the following practices, which researchers have fairly well established contribute to mental wellness (all links are to Psychology Today):

  • having a confidant
  • practicing gratitude
  • recitation
  • singing
  • being near water

Nothing surprising or new, right?

But the weird thing is when you realize each of the above practices is embedded in weekly church attendance:

  • confidant => confession
  • gratitude => grace
  • recitation => lord's prayer
  • singing => hymns
  • water => baptism (traditionally carried out down by the river)

In other words, church attendance provides a concentrated bundle of mental health benefit.

And I think this should jibe with explaining why so many people continue to adhere to religion despite its obvious downsides. The usual explanation is that they must be dumb or irrational. But now we have a simpler explanation: these mental health upsides offset the downsides. It doesn't require assuming extreme stupidity and/or irrationality (of course, it holds up just as well if they do happen to be so). As Bayesians, which is more probable: that we are all that much smarter and more rational than each and every one of them, or that they simply value happiness more than they value logic?

And yes, I know I'm presenting a false dichotomy by implying that happiness and logic are an either/or proposition. But at present, access to many of these practices is limited outside of church. For example, the only socially acceptable venues for non-professionals to sing are the shower and karaoke bars. Likewise, therapy costs an arm and a leg, and the prospects of finding someone else to confide in are spotty at best.

Which suggests what our next step should be as a community: to show that it's possible to be happy and logical. I suggest incorporating these practices into our own meetups as widely as possible - e.g. meeting at park fountains or with a rock band set. Only when we break this perceived monopoly of religion on mental well-being will people in large numbers entertain leaving the church.

Comment author: Torello 02 January 2018 06:17:50PM 1 point [-]
Comment author: Kallandras 01 January 2018 09:22:10PM 0 points [-]

I've recently begun listening to a few bands that are new to me - Parov Stelar, Tape Five, Caravan Palace, and Goldfish. I have found the upbeat tempo of electro-swing to be helpful when I want to improve my mood.

Comment author: gwern 01 January 2018 03:49:54AM 0 points [-]
Comment author: James_Miller 01 January 2018 03:45:39AM 1 point [-]

I've started creating a series of YouTube videos on the dangers of artificial general intelligence.

Comment author: ArisKatsaris 01 January 2018 02:12:48AM 0 points [-]

Short Online Texts Thread

Comment author: ArisKatsaris 01 January 2018 02:12:40AM 0 points [-]

Online Videos Thread

Comment author: ArisKatsaris 01 January 2018 02:12:36AM 0 points [-]

Fanfiction Thread

Comment author: ArisKatsaris 01 January 2018 02:12:32AM 0 points [-]

Nonfiction Books Thread

Comment author: ArisKatsaris 01 January 2018 02:12:28AM 0 points [-]

Fiction Books Thread

Comment author: ArisKatsaris 01 January 2018 02:12:22AM 0 points [-]

TV and Movies (Animation) Thread

Comment author: ArisKatsaris 01 January 2018 02:12:19AM 0 points [-]

TV and Movies (Live Action) Thread

Comment author: ArisKatsaris 01 January 2018 02:12:15AM 0 points [-]

Games Thread

Comment author: ArisKatsaris 01 January 2018 02:12:12AM 0 points [-]

Music Thread

Comment author: ArisKatsaris 01 January 2018 02:12:06AM 0 points [-]

Podcasts Thread

Comment author: ArisKatsaris 01 January 2018 02:12:02AM 0 points [-]

Other Media Thread

Comment author: ArisKatsaris 01 January 2018 02:11:56AM 0 points [-]

Meta Thread

Comment author: ChristianKl 29 December 2017 06:34:07PM 0 points [-]

Oliver Habryka (who works on programming LW2 at the moment) taught rationality to other students at his school a while back based on CFAR style ideas which at the time meant a lot of calibration and Fermi estimates.

The same would also make sense with the more recent CFAR material for anyone who took the CFAR course.

Comment author: Lu_Tong 27 December 2017 03:43:36AM *  0 points [-]

Thanks, I'll ask a couple more. Do you think UDT is a solution to anthropics? What is your ethical view (roughly, even given large uncertainty) and what actions do you think this prescribes? How have you changed your decisions based on the knowledge that multiple universes probably exist (AKA, what is the value of that information)?

Comment author: Luke_A_Somers 26 December 2017 12:27:10AM 0 points [-]

If you find an Omega, then you are in an environment where Omega is possible. Perhaps we are all simulated and QM is optional. Maybe we have easily enough determinism in our brains that Omega can make predictions, much as quantum mechanics ought to in some sense prevent predicting where a cannonball will fly but in practice does not. Perhaps it's a hypothetical where we're AI to begin with so deterministic behavior is just to be expected.

Comment author: Luke_A_Somers 26 December 2017 12:11:58AM 0 points [-]

I think the more relevant case is when the random noise is imperceptibly small. Of course you two-box if it's basically random.

Comment author: NerdyAesthete 24 December 2017 11:03:04PM *  1 point [-]

Sometimes, it almost seems like I am truly happy only when I "escape" or "triumph" over something that almost "ate me up": my husband's household, the Department that I had gone to for a PhD thesis... the genuinely nice psychiatrist who soothes my Mother's fears... Like "I am happy when I have proved that I haven't changed, because change is corruption".

I'd say that's relief from a precarious situation, which does provide happiness, but it is only temporary and not sustainable.

However, contentment (a relaxed sense of well-being) is a form of happiness that can be sustained until something distressing occurs. Sustaining contentment may require life changes; I feel many people's lives are incompatible with this sustained level of contentment, since the lack of freedom imposed by obligations tends to be more stressful than not.

Also, exhilaration is another form of happiness (similar to anxiety, but the difference is certainly noticeable) that is desirable but tricky to activate. I believe your joy is similar to my exhilaration, or maybe a gradation between contentment and exhilaration.

Comment author: Wei_Dai 24 December 2017 01:26:37AM 0 points [-]

I talked a bit about why I think multiple universes exist in this post. Aside from what I said there, I was convinced by Tegmark's writings on the Mathematical Universe Hypothesis. I can't really think of other views that are particularly worth mentioning (or haven't been talked about already in my posts), but I can answer more questions if you have them?

Comment author: morganism 23 December 2017 10:52:21PM 0 points [-]

"Destroyed Worlds" --Cause Star's Strange Dimming (VIDEO)

"A team of U.S. astronomers studying the star RZ Piscium has found evidence suggesting its strange, unpredictable dimming episodes may be caused by vast orbiting clouds of gas and dust, the remains of one or more destroyed planets."


Comment author: morganism 23 December 2017 10:44:44PM 0 points [-]
In response to Happiness Is a Chore
Comment author: MaryCh 23 December 2017 06:02:54PM 1 point [-]

I feel so much freer when I don't have to demonstrate that I am happy.

Sometimes, it almost seems like I am truly happy only when I "escape" or "triumph" over something that almost "ate me up": my husband's household, the Department that I had gone to for a PhD thesis... the genuinely nice psychiatrist who soothes my Mother's fears... Like "I am happy when I have proved that I haven't changed, because change is corruption". So yes, [feeling happy] is one of the necessary chores of self-maintenance. I don't get why I should want it more than, say, a chance to sleep in.

OTOH, joy is very different. It kind of just happens, unasked-for.

Comment author: Lu_Tong 22 December 2017 09:52:56PM 1 point [-]

Which philosophical views are you most certain of, and why? e.g. why do you think that multiple universes exist (and can you link or give the strongest argument for this?)

Comment author: Vaniver 22 December 2017 07:01:06PM 0 points [-]

I don't think that what you need has any bearing on what reality has actually given you.

As far as I can tell, I would pay Parfit's Hitchhiker because of intuitions that were rewarded by natural selection. It would be nice to have a formalization that agrees with those intuitions.

or by sneaking in different metaphysics

This seems wrong to me, if you're explicitly declaring different metaphysics (if you mean the thing by metaphysics that I think you mean). If I view myself as a function that generates an output based on inputs, and my decision-making procedure being the search for the best such function (for maximizing utility), then this could be considered as different metaphysics from trying to cause the most increase in utility for myself by making decisions, but it's not obvious that the latter leads to better decisions.

Comment author: bestazy 22 December 2017 06:19:40PM *  0 points [-]

I may be going too far afield here, but did anyone else notice the part where the author says that AI can't recognize uncertainty, so it ignores it? It brought to mind the recent self-driving crashes where an unexpected event causes a crash: while a human driver says "whoa, uncertainty, I'm slowing down while I try to figure out what this other driver is up to", the AI at that point says "I don't know what it is, so it doesn't exist". This seems consistent with some recent postings stating that algos only know what they're told, and that this is a big hurdle for the aforementioned masters of the tech universe.
