All of albeola's Comments + Replies

albeola00

There's already the option of doing this through alternate accounts.

4daenerys
Two points: 1) Alternate accounts are susceptible to manipulation (anyone can create an account claiming anything), so what they say carries little weight. A cohesive post has the added weight that the submissions will be verified as being written by actual female Less Wrongers, not sock puppets with 0 karma. Also, some posters might be OK having their name associated at the level of "I wrote A submission" versus "I wrote THIS submission." 2) If you post on your own, rather than as a group, you will still run into the difficulty of being overpowered by the number of male voices, so either few will hear you, or you'll be one against many, or you'll be taken as a "single anecdote / feminazi" rather than "The Women of Less Wrong".
9Nornagest
That seems like enough of a trivial inconvenience to deter a lot of people, even if it were being actively encouraged in some context similar to this one. Sending a PM to Daenerys seems much less inconvenient, if more work for her.
6Epiphany
Be.
3Decius
Nobody alive has died yet.
albeola40

The "FAR" keeps pushing me into far mode and then the red color keeps pulling me back into near mode. It's like a Stroop task!

albeola10

Apologies — I should have taken reinforcement into account and noted that the new algorithm is probably still a lot better than the previous one.

albeola70

Ironically, it appears the new algorithm is frequentist.

5matt
Bayesian reformulations welcome.
albeola190

I see it as being like the Chuck Berry scene in Back to the Future.

albeola220

Beck is a Mormon, and Mormons generally seem a lot friendlier to transhumanist-type ideas than standard Christians.

8Grognor
That's definitely true.
albeola00

Sure, I don't see anything here to disagree with.

albeola00

The problem of locating "the subjective you" seems to me to have two parts: first, to locate a world, and second, to locate an observer in that world. For the first part, see the grandparent; the second part seems to me to be the same across interpretations.

-1private_messaging
The point is, the code of a theory has to produce output matching your personal subjective input. The objective view doesn't suffice (and if you drop that requirement, you are back to square one, because you can iterate over all physical theories). CI has that as part of the theory; MWI doesn't, so you need extra code. The complexity argument for MWI that was presented doesn't favour MWI, it favours iteration over all possible physical theories, because that key requirement was omitted. And my original point is not that MWI is false, or that MWI has higher complexity, or equal complexity. My point is that the argument is flawed. I don't care about MWI being false or true; I am using the argument for MWI as an example of the sloppiness SI should try not to have (hopefully without this kind of sloppiness they will also be far less sure that AIs are so dangerous).
albeola00

The original justification for the Kelly criterion isn't that it maximizes a utility function that's logarithmic in wealth, but that it provides a strategy that, in the infinite limit, does better than any other strategy with probability 1. This doesn't mean that it maximizes expected utility (as your examples for linear utility show), but it's not obvious to me that the attractiveness of this property comes mainly from assigning infinite negative value to zero wealth, or that using the Kelly criterion is a similar error to the one Weitzman made.

2CarlShulman
Yes, if we have large populations of "all-in bettors" and Kelly bettors, then as the number of bets increases the all-in bettors' lead in total wealth increases exponentially, while the probability of an all-in bettor being ahead of a Kelly bettor falls exponentially. And as you go to infinity, the wealth multiplier of the all-in bettors goes to infinity, while the probability of an all-in bettor leading a Kelly bettor goes to zero. And that was the originally cited reasoning.

Now, one might be confused by "beats any other constant bankroll allocation (but see the bottom paragraph) with probability 1" and think that it implies "bettors with this strategy will make more money on average than those using other strategies," as it would in a finite case if every bettor using one strategy did better than any bettor using any other strategy. But absent that confusion, why favor probability of being ahead over wealth unless one has an appropriate utility function? One route is log utility (for which Kelly is optimal), and I argued against it as psychologically unrealistic, but I agree there are others. Bounded utility functions would also prefer the Kelly outcome to the all-in outcome in the infinite limit, and are more plausible than log utility.

Also, consider strategies that don't allocate a constant proportion on every bet, e.g. first make an all-in bet, then switch to Kelly. If the first bet has a 60% chance of tripling wealth and a 40% chance of losing everything, then the average, total, and median wealth of these mixed-strategy bettors will beat the Kelly bettors for any number of bets in a big population. These don't necessarily come to mind when people hear loose descriptions of Kelly.
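To make the comparison concrete, here is a rough simulation sketch of the running example in this thread (my own illustrative code, not anything from the original discussion): a repeated bet with a 60% chance of winning twice the stake, so an all-in bet either triples or goes to zero. The Kelly fraction for that bet works out to 0.4.

```python
import random
from statistics import mean, median

# Rough simulation sketch (illustrative, not code from the thread) of the bet
# described above: each round the bettor stakes some fraction of current
# wealth on a gamble with a 60% chance of winning twice the stake (so an
# all-in bet triples) and a 40% chance of losing the stake.
P_WIN, NET_ODDS = 0.6, 2.0
KELLY_FRACTION = (NET_ODDS * P_WIN - (1 - P_WIN)) / NET_ODDS  # = 0.4

def simulate(fraction, n_bets, n_bettors, seed):
    """Final wealths (starting from 1.0) for bettors staking `fraction` each round."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_bettors):
        wealth = 1.0
        for _ in range(n_bets):
            stake = fraction * wealth
            wealth += NET_ODDS * stake if rng.random() < P_WIN else -stake
        finals.append(wealth)
    return finals

if __name__ == "__main__":
    kelly = simulate(KELLY_FRACTION, n_bets=10, n_bettors=10_000, seed=1)
    all_in = simulate(1.0, n_bets=10, n_bettors=10_000, seed=2)

    # Expected pattern: the all-in group's *mean* wealth is far higher (driven
    # by a few survivors at 3**10), but almost every individual all-in bettor
    # is broke, so the *median* Kelly bettor ends up well ahead.
    print("Kelly  : mean %8.1f  median %6.2f" % (mean(kelly), median(kelly)))
    print("All-in : mean %8.1f  median %6.2f" % (mean(all_in), median(all_in)))
    print("All-in bettors still solvent:", sum(w > 0 for w in all_in), "of 10000")
```

This is only the finite, small-n version of the limit described above, but it shows why "Kelly is ahead with probability approaching 1" and "all-in has higher expected wealth" can both be true at once.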
albeola60

if you are seeking the lowest-complexity description of your input, your theory needs to also locate yourself within whatever stuff it generates somehow (hence an appropriate discount for something really huge like MWI)

It seems to me that such a discount exists in all interpretations (at least those that don't successfully predict measurement outcomes beyond predicting their QM probability distributions). In Copenhagen, locating yourself corresponds to specifying random outcomes for all collapse events. In hidden variables theories, locating yourself correspo... (read more)

-2private_messaging
Well, the goal is to predict your personal observations; in MWI you have a huge wavefunction on which you need to somehow select the subjective you. The predictor will need code for this, whether you call it a mechanism or not. Furthermore, you need to actually derive the Born probabilities from some first principles somehow if you want to make a case for MWI. Deriving those is what would be interesting, actually making it more compact (if the stuff you're adding as extra 'first principles' is smaller than collapse). Also, btw, CI doesn't have any actual mechanism for collapse; it's strictly a very un-physical trick.

Much more interestingly, Solomonoff probability hints that one should really try to search for something that would predict beyond probability distributions, i.e. search for objective collapse of some kind.

Other issue: QM actually has a problem at macroscopic scale, it doesn't add up to general relativity (without nasty hacks), so we are as a matter of fact missing something, and this whole issue is really a silly argument over nothing, as what we have is just a calculation rule that happens to work but that we know is wrong somewhere anyway. I think that's the majority opinion on the issue. Postulating a zillion worlds based on a known broken model would be a tad silly. I think basically most physicists believe neither in collapse as in CI (beyond believing it's a trick that works) nor in many worlds, because forming either belief would be wrong.
albeola50

There's a difference between thinking as if dimensions are linked together, and thinking as if there's "some cosmic niceness built into the universe that makes everything improve monotonically along every dimension at once" (emphasis mine). Switching between attacking moderate and extreme versions of the same claim is classic logical rudeness.

albeola50

But there isn't some cosmic niceness built into the universe that makes everything improve monotonically along every dimension at once.

Who believes this?

Nobody, stated explicitly, but the word "progress" links a lot of those dimensions together, so it's easy to think, functionally, as if they are. Wiggins and all that.

1RomeoStevens
whig history? shrug
albeola50

Is any of it transmissible? If not, is the reason why it isn't transmissible transmissible? Do your reasons carry over to other people's situations?

-18Will_Newsome
albeola-20

The commonly accepted view is that women and men are equally good at math on average

Some googling informs me that there's a gender gap on the math SAT and other standardized tests. It may be that you have in mind some way in which these tests don't reflect a real gap in average math ability, but I think it's more likely that you confused the data on math ability with the data on IQ. A 0.3-standard-deviation gap would mean about 62% of women are below the male average. I agree that this makes "most women are bad at math" an exaggeration, though the greater male spread means the numbers look worse the higher you set the bar.
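(As a quick sanity check on the 62% figure, this is my own back-of-envelope calculation, not part of the original comment: for two equal-variance normal distributions with means 0.3 standard deviations apart, the share of the lower-mean group falling below the higher-mean group's average is Φ(0.3).)

```python
from statistics import NormalDist

# Sanity check on the "62%" figure: if two equal-variance normal distributions
# have means 0.3 standard deviations apart, the share of the lower-mean group
# falling below the higher-mean group's average is Phi(0.3).
gap_in_sds = 0.3
share_below_male_mean = NormalDist().cdf(gap_in_sds)  # standard normal CDF
print(f"{share_below_male_mean:.1%}")  # -> 61.8%, i.e. roughly 62%
```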

5JoshuaZ
The math gap is much larger in the United States than it is in Northern Europe. In general, gender inequality and poorer math performance by females are correlated. Moreover, most of the gender gap has closed over time. Most relevant study (although I do remember having reservations about some aspects of their methodology the last time I looked at it in detail, and unfortunately I don't remember what they were.)
albeola00

I guess I'm hereby tapping out of the discussion.

0komponisto
Fair enough.
albeola00

OK, so compare "BLUE-CAR person" with "CLOWN-car person". They still seem different to me. (I didn't downvote, though I wouldn't blame people if they downvoted this entire sub-conversation for pedantry.)

1komponisto
I would note that the original point was specifically about the use of the hyphen; there is no need for an example to match the case of interest in every aspect in order to be illustrative of the relevant aspect(s). I don't think that's a fair characterization. No one was correcting anyone's grammar. This sub-conversation began with an inquiry by Alicorn about a particular individual's usage habits. If your implication is that the details of language are somehow not as "worthy" a subject for discussion on LW as many other similarly "esoteric" subjects discussed here, I protest.
albeola00

There are some corpses in the street. Some people are proposing to bury them, because they'll rot and cause diseases. Others are proposing to leave them there, because haha, corpses. In this situation, you may prefer cryopreservation to burial and at the same time prefer burial to non-burial, because cryo probably won't happen. (Maybe this is an insane alien hypothetical world where cryo is just really unpopular.) If you're facing a "bury yes or no" button, it may well be rational to push yes. This is true even though the probability of cryoprese... (read more)

0komponisto
I agree with the qualitative point but think it irrelevant. Not only are we not facing a "yes or no" button, but all that you claim in the above is that it "may well be rational to push yes" (emphasis added) in the event that we are faced with such a button. This says very little. Again, I reiterate the point made in the grandparent. A hypothetical about a yes-or-no button is no answer to someone specifically advocating a third alternative. If you don't think the third alternative is possible, argue against it directly; don't pretend it was never proposed.
albeola00

Preferring sidebar change to banning does not imply preferring no banning to banning, given the actual probability of sidebar change. Do you agree?

0komponisto
Actual probability of sidebar change is, I would hope, dependent on such preferences.
albeola00

So you're not saying that you prefer no banning to banning (given whatever you predict will actually happen to the sidebar)?

0komponisto
I thought I was saying that.
albeola00

True, but it would discriminate less well. It would hide many OK comments that happened to be downvoted once or twice.

Note that for this solution to be an argument against the banning solution, it would need to actually be implemented. Are you predicting that will happen?

3komponisto
I'm saying it ought to be done, if the problem is as described. Or, in other words, that I prefer such a solution over the alternative being proposed (moderator intervention to remove comments).
albeola10

Comments in the sidebar tend to be too new to have been voted below -3 or whatever the threshold is.

0komponisto
One could make the sidebar-threshold lower than the ordinary threshold....
albeola20

Doesn't feel the same to me. One is adjective noun, the other is noun noun. It affects the intonation. "I'm a blue CAR person" vs "I'm a CLOWN car person".

-2komponisto
I don't agree; if you're contrasting blue-car people with red-car people, the stress is on the first component. And if there is no context at all, I would read "blue-car person" as "BLUE-CAR person" (i.e. stress on the modifier relative to the modified, but not on either component of the modifier relative to the other).
2Crux
Compare: vs. Not the most natural-sounding example, but the point should nevertheless be intact. It's noun noun, yet still works out the same way as komponisto's original noun-adjective example.
albeola00

The question isn't whether it "exists in order to" make cracking down unnecessary, or whether it "is supposed to" replace moderator action. The question is whether it actually does those things. And it's far from perfect at doing them. Yes, heavily downvoted comments take up a little less space in the recent comments and in the thread (at least if you have the willpower not to click on them! virtue of curiosity!) But they still take up some space; they take time to be downvoted enough to be hidden; I'm pretty sure they still appear in t... (read more)

albeola30

I was assuming you'd see both colors as the same. Then a zebra crossing would just look like an ordinary stretch of road. That wouldn't kill you. What would kill you is to see an ordinary stretch of road as a zebra crossing. If that were to happen, though, it definitely wouldn't be at the next zebra crossing.

albeola00

Removing comments happens silently and without a trace. Such tools can be used by the establishment to quiet dissent.

So let's have a policy that banned commenters get to post a link to their anti-LW blog. We could list all the anti-LW blogs on a wiki page or something.

They can break existing conversations.

By removing examples of what not to do, we can no longer point at them as examples.

I don't think anyone is proposing to delete past comments.

We need more contrarians, not fewer.

If I promise to be a high-quality contrarian, can we ban the next f... (read more)

albeola40

such measures are very damaging

Why?

9thomblake
Just a few reasons:
* Removing comments happens silently and without a trace. Such tools can be used by the establishment to quiet dissent.
* They can break existing conversations.
* We need more contrarians, not fewer.
* By removing examples of what not to do, we can no longer point at them as examples.
* Even if the comments were on the whole annoying, there might be interesting stuff in there worth responding to.
* Bans, more than downvotes, outright discourage participation amongst those who are in particular need of our help.
* Freedom of speech is valuable in itself, and its presence here is aesthetically pleasing.
albeola00

prove to emself that black is white, and be killed in the next zebra crossing

You wouldn't be killed, you'd just fail to cross the street.

0RolfAndreassen
I really don't see why. A zebra crossing is a sequence of black and white stripes. Exchanging the colours just means you start with white instead of black, or vice-versa. It's the stripiness that's important, not the ordering.
0Viliam_Bur
Illusion of transparency is thinking that the contents of MIND1 and MIND2 must be similar, ignoring that MIND2 does not have information that strongly influences how MIND1 thinks. Expecting short inferential distances is underestimating the vertical complexity (information that requires knowledge of other information) of a MAP. EDIT: I don't know if there is a standard name for this, and it would not surprise me if there isn't. Seems to me that most biases are about how minds work and communicate, while "inferential distances" is about maps that did not exist in the ancestral environment.
albeola130

Please crack down earlier, harder, and more often. Nobody is going to die from it. Higher average comment quality will attract better commenters in a virtuous circle. There's no excuse for tolerating the endless nonsense that some commenters post, and those enabling them by responding to them should stop.

7komponisto
On the contrary, the karma system exists in order to make such "cracking down" unnecessary. If comments are downvoted sufficiently, they are hidden. This system is supposed to replace moderator action. If moderators are going to control content then we may as well not have voting.

I'm speaking up in this instance in particular because it seems to me that the only problem with the commenter in question is an intellectual one. The person isn't behaving badly in any sense other than arguing for an incorrect view and not noticing the higher level of their opponents (which after all can hardly be expected). It's exactly the kind of thing that downvotes alone are supposed to handle. We're not talking about a troll or spammer. The reason it's important to make this distinction is that censoring for purely viewpoint-based reasons is a Rubicon that we need not cross.

(EDIT: I'll also point out, for clarity, that I myself have not responded to any of Monkeymind's comments. Being opposed to banning a commenter is not to be confused with being in favor of engaging them.)
-2[anonymous]
At the risk of exposing myself to a severe dose of negative karma, I have to say I don't agree with that approach. This is supposed to be a blog devoted to the art of refining human rationality. If we crack down on people too heavily and too early on, before explaining why we disagree with them, I think it defeats the entire purpose of the blog. What would the point be if we just ostracized people who are not already on board with the Less Wrong view of rationality before explaining why we believe that our own approach is the best approach?

those enabling them by responding to them should stop.

This seems to be the main problem, but my recent attempts to discourage those who make high-quality contributions to hopeless or malignant conversations didn't stir much enthusiasm, so it'd probably take a lot of effort to change this.

(A specific suggestion I have is to establish a community norm of downvoting those participating in hopeless conversations, even if their contributions are high-quality.)

Please crack down earlier, harder, and more often.

This is something new for LW, in fact this app... (read more)

1[anonymous]
This. We should have been done with this several days ago.
1[anonymous]
I would expect the Fun Theory Sequence to be outcompeted by advertisements for toothpaste, Axe body spray, and sports cars, at least among the general public.
0Document
* http://lesswrong.com/lw/xp/seduced_by_imagination/ * http://lesswrong.com/lw/ye/and_say_no_more_of_it/
albeola-10

spending their life complaining about how they would do this and that if only they didn't have akrasia

Do you agree the quoted property differs from the property of "having akrasia" (which is the property we're interested in); that one might have akrasia without spending one's life complaining about it, and that one might spend one's life complaining about akrasia without having (the stated amount of) akrasia (e.g. with the deliberate intent to evade obligations)? If this inaccuracy were fixed, would your original response retain all its rhetor... (read more)

albeola130

instead of covering pending legislation or the impact it could have on your life

If "impact on your life" is the relevant criterion, then it seems to me Wong should be focusing on the broader mistake of watching the news in the first place. If the average American spent ten minutes caring about e.g. the Trayvon Martin case, then by my calculations that represents roughly a hundred lifetimes lost.

9homunq
You have a funny definition of "lost". By that measure, JRR Tolkien is worse than a mass-murderer.
albeola380

8, especially, is an especially eloquent formulation of Aumann's Agreement Theorem.

It may or may not be eloquent, but it sure as hell is not a formulation of Aumann's Agreement Theorem.

albeola90

Some people might see the descriptions below as sappy or silly, and that's a small loss that I'm happy to take; these songs (and emotions) have really improved my thinking and made me stronger, and if other people can more easily and powerfully achieve the same results by having this tool from the beginning, then I want to do what I can to make this tool available.

I appreciate your thinking here, but I'm worried that this is just going to turn into a thread where people list random songs they like. I mean, if "a cool love song" qualifies...

If ... (read more)

0FrankAdamek
Great point, and a concern. At least for me, these songs are very particular. I have about 5000 songs in my collection, and about 300 made it into this list. Distinguishing this kind of song was something I found time-intensive, and there are many songs I enjoyed but that didn't actually make me motivated to go do things. My hope is that these particular songs are likely to be helpful - I think my past self would have really been helped by having something like this.

That's an excellent point. For me, the emotions can be either good or bad, but the critical thing is that the song is about a good thing I plan to increase or a bad thing I plan to decrease, that I am fully able to make that change, and that the song is not about looking impressive. One of the main reasons I didn't include many songs is that I don't expect them to have this kind of association for others, though they do for me. And I expect other people to find songs that work for them but not for me. The only thing that really matters is that it works for you.
albeola-10

You're changing the subject. The question was whether actually having akrasia is compatible with rationality. The question was not whether someone who claims to have akrasia actually has akrasia, or whether it is rational for someone who has akrasia to complain about akrasia and treat it as not worth trying to solve.

6A1987dM
Having akrasia is no more compatible with rationality than having myopia is: saying “if only I had better eyesight” while not wearing eyeglasses is not terribly rational.
-2Shmi
I'm pretty sure I expressed my opinion on this topic precisely ("no, it's not compatible"). It's up to you how you choose to misunderstand it; I have no control over that.
albeola60

(I may try emailing Jaan Tallinn to ask him myself, depending on how others react to this post).

It seems like that might carry some risk of making him feel like he was being bugged to give more money, or something like that. Maybe it would be better to post a draft of such an email to the site first, just in case?