If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "


No problem this week, just an appreciation for the people of LessWrong who can be right when I am wrong.

Hi, I'm helping to organize this year's NYC Solstice, as raemon has moved to the Bay Area. I've been reading LW, SSC, and rationalist Tumblr since about January and going to NYC meetups semiregularly since April, but haven't yet posted on here, so I thought I'd make a quick introduction.

My name is Rachel, I'm an undergrad senior at NYU completing a communications major. I'm originally from the Bay Area. On the MBTI, I'm an INTJ. Besides rationality, my passions are graphic novels and cooking. My intellectual interests tend towards things that have a taxonomic character, like biology and linguistics. (I'm also a fan of the way that these subjects keep evading a perfect taxonomy.)

I occasionally wear sandals.

As someone who hopes to attend the NY Solstice but is too far away to offer much assistance, you have my thanks for working on that!

What graphic novel(s) have you read recently that you really liked? I haven't been paying attention to the medium since Gaiman's Endless Nights, but that's largely because a change of social circle meant the stream of recommendations dried up.

I stay barefoot as much as is remotely socially acceptable, which isn't nearly as much as I'd like.

From recent releases, I really like Tillie Walden's ultrasoft sci-fi On a Sunbeam (2015-2017) and Kieron Gillen's The Wicked + The Divine (2014-ongoing), which has a lot of similarity to American Gods.

For something rationalist-adjacent, I'd recommend Blue Delliquanti's O Human Star (2012-ongoing), which deals with LGBTQ issues in the context of FAI and transhumanism.

Would love to have you in attendance!

The synopsis of The Wicked + The Divine does look like my kind of tale. Both "Mortal becomes god" and "Mortal kills god" show up with weird frequency in my favourite stories. I'll likely check that one out first :)

I thoroughly enjoyed last year's solstice. I'm hoping to be able to take a three day weekend for it, since I know there were some meetups before and after that I had to miss since I was just in town for the one night. Do you happen to know the best spot to watch for details on the solstice or adjacent activities, once things are more organized?

[Elo]

Barefoot checking in.

Welcome to LW! We're an accepting community when it comes to footwear. I wear crocs most of the time, and shoes in winter.

With the Dota OpenAI bot, AlphaGo, and Deep Blue --- it's funny how we keep training AIs to play zero-sum war simulation games against human enemies.

They are good training tasks.

Does anyone have any tips or strategies for making better social skills habitual? I'm trying to be more friendly, compliment people, avoid outright criticism, and talk more about other people than myself. I can do these things for a while, but I don't feel them becoming habitual as I would like. Being friendly to people I do not know well is particularly hard; when I'm tired, I want to escape interaction with everyone except close friends and family.

This is not an easy-to-implement tip, but my suggestion is to try to get into a mental space where the social things that you're trying to do are easy / come naturally / are the things that you want to do in the moment.

A person who is naturally friendly, non-critical, and interested in hearing about you probably did not get that way just by practicing each of those behaviors as habits; they have some deeper motivation/perspective/emotion/something that those behaviors naturally follow from. Try to get in touch with that deeper thing.

One thing that helps with this is noticing when you've had the experience of being in a mental space where the things come more naturally (even if only briefly, or only marginally more naturally). Then you can try to get back into that mental space, and take it further.

Another thing that can help is putting yourself in different social situations, including ones that you're liable to get swept up in (that is, ones that are likely to put you in a different mental space from where you usually are). That can be a quicker way to get some experience being in different modes. Reading books (and watching videos, etc.) can also help, especially if you do things like these as you read them.

It might help to cultivate your curiosity. Who are these people? What are they doing in the moment? What are they good at that you could learn from? Why are they in the same place as you? What are they up to when they are not at the same place as you? What are they enthusiastic about?

Sometimes when I talk to people I don't know well and I'm not thinking up many comments or questions based on our shared circumstances or environment, I'll ask some questions like "Have you read any good books lately?" or "What have you been thinking about?" or ask their advice about something.

I think from your question you want to be able to do this even when you're tired, but part of the solution might be to limit the times when you have to do this when you are tired by scheduling things differently, or making sure you have rested and eaten before you have to be in a social situation, or changing how you select which social events to participate in.

"but I don't feel them becoming habitual as I would like"

Have you noticed any improvement? For example, an increase in the amount of time you feel able to be friendly? If so, then be not discouraged! If not, try changing the reward structure.

For example, you can explicitly reward yourself for exceeding thresholds (an hour of non-stop small talk --> extra dark chocolate) or meeting challenges (a friendly conversation with that guy --> watch a light documentary). Start small and easy. Or: Some forms of friendly interaction might be more rewarding than others; persist in those to acclimate yourself to longer periods of socialising.

There's a lot of literature on self-management out there. If you're into economics, you might appreciate the approach called picoeconomics.

Caution: In my own experience, building new habits is less about reading theories and more about doing the thing you want to get better at, but it's disappointingly easy to convince myself that a deep dive into the literature is somehow just as good; your experience may be similar (or it may not).

[jmh]

I'm going to come at this from a different angle than the others, I think. I don't claim it will work or be easy, as I really identify with your question -- changing myself should be easy (I control my brain, right? I make my decisions, right?) but I find that reinventing myself into the person I'd rather be is a real challenge.

There was another post here on LW, http://kajsotala.fi/2017/09/debiasing-by-rationalizing-your-own-motives/, that I think might have value in this context as well as the one it takes in the post.

We can all try making ourselves do X and, through effort and repetition, make it something of a habit. I think that works better for the young (no idea of your age). But at some point in life the habits, and especially the mental and emotional ones (which probably means the physiological chemical processes that drive these states), have become nearly hardwired. So what I'll call the brute-force approach -- just keep practicing -- faces a problem of relative proportions. Behavioural characteristics we've developed over 20, 30, 40 years (or more) will have a lot more weight than the effort to act differently for a few years (assuming one keeps up the change-myself routine).

Maybe at some point more effort in looking at "why am I acting like X" is as important as just the effort to act differently. Perhaps developing a new habit will be easier than changing old habits. But if the new habit then serves as feedback into the old habit, we set up a type of interrupt for the initial impulse to behave in a way we would rather change. That might help break the old habits we don't want but have reinforced to the point that they are no longer just habits we display but actually more "who we are".

So, this is off-the-cuff thinking, so it very likely has some gaping holes!

[Elo]

I have some research that will help you on your quest to make it easier for you to do the thing.

  1. NVC (Nonviolent Communication): https://youtu.be/l7TONauJGfc (and accompanying books)
  2. Daring Greatly - Brené Brown (book); brief review: https://youtu.be/iCvmsMzlF7o
  3. Search Inside Yourself - book (mindfulness)

NVC will keep you aware of what takes you out of the habit, vulnerability will keep you on track to a different strategy. And Search Inside Yourself will encourage practice on the topic of being thoughtful and caring about the people around you.

In this order.

I remember reading that some scientists prefer to publish their research on blogs, because they get better feedback in comments than through peer review. Is that an individual thing, or is there also a community blog somewhere where different scientists can post their research in one place? (If yes, how do they defend against crackpots?)

It seems possible to make decent face predictions based on genome data: https://phys.org/news/2017-09-identification-individuals-trait-whole-genome-sequencing.html Maybe someone can train a deep net to do paternity testing based on facial images?

I have an awesome business idea.

Set up a kiosk at a fair or something, point a webcam at the crowd, and yell at passing families: "Hey, dude, do you know the AI says these kids aren't yours?"

Has anyone studied the Red Black Tree algorithms recently? I've been trying to implement them using my Finite State technique that enables automatic generation of flow diagrams. This has been working well for several other algorithms.

But the Red Black tree rebalancing algorithms seem ridiculously complicated. Here is an image of the deletion process (extracted from this Java code) - it's far more complicated than an algorithm like MergeSort or HeapSort, and that only shows the deletion procedure!

I'm weighing two hypotheses:

  1. Keeping a binary tree balanced in O(log n) time per operation is an intrinsically complex task.
  2. There is some much simpler method to efficiently maintain balance in a binary tree, but nobody bothered looking for it after the RB tree algorithms and analysis were published.

I'm leaning toward the latter theory. It seems to me that most of the other "elementary" algorithms of computer science are comparatively simple, so the weird overcomplexity of the tool we use for binary tree balancing is some kind of oversight. Here is the Wiki page on RB trees - notice how the description of the algorithm is extremely hard to understand.
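
For comparison, insertion at least comes out fairly small in a functional setting -- this is roughly Okasaki's well-known formulation, reproduced from memory and untested, so treat it as a sketch. Deletion is the part that stays genuinely hairy.

```haskell
data Color = R | B
data Tree a = E | T Color (Tree a) a (Tree a)

-- The four symmetric cases where a red node has a red child get rewritten
-- into one red node with two black children.
balance :: Color -> Tree a -> a -> Tree a -> Tree a
balance B (T R (T R a x b) y c) z d = T R (T B a x b) y (T B c z d)
balance B (T R a x (T R b y c)) z d = T R (T B a x b) y (T B c z d)
balance B a x (T R (T R b y c) z d) = T R (T B a x b) y (T B c z d)
balance B a x (T R b y (T R c z d)) = T R (T B a x b) y (T B c z d)
balance color a x b                 = T color a x b

insert :: Ord a => a -> Tree a -> Tree a
insert x s = blacken (ins s)
  where
    ins E = T R E x E                       -- new nodes start out red
    ins (T color a y b)
      | x < y     = balance color (ins a) y b
      | x > y     = balance color a y (ins b)
      | otherwise = T color a y b
    blacken (T _ a y b) = T B a y b         -- the root is always recoloured black
    blacken E           = E
```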

An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.

For a simpler case, 2-3 trees (i.e., B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values in their sorted position, expand 2-nodes to 3-nodes if necessary, and bubble up the extra values when a 3-node should be expanded. This keeps the tree balanced.

A 2-3-4 tree just generalises the above.

Now the intuition is that red means "I am part of a bigger node." That is, red nodes represent the values contained in some higher black node. If the black node represents a 2-node, it has no red children. If it represents a 3-node, it has one red child, and if it represents a 4-node, it has 2 red children.

In this context, the "rules" of the red-black trees make complete sense. For instance, we only count black nodes when comparing branch heights, because those represent the actual B-tree nodes. I'm sure that with a bit of work, it's possible to make complete sense of the insertion/deletion rules through the B-tree lens but I haven't done it.

edit: I went through the insertion rules and they do make complete sense if you think about a B-tree while you read them.
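
A rough Haskell sketch of that correspondence, with made-up type names (RB, T234, toT234) and untested, just to make the "red means part of a bigger node" idea concrete:

```haskell
data Color = R | B
data RB a = E | T Color (RB a) a (RB a)

-- A 2-3-4 node holds one, two, or three keys (and one more subtree than keys).
data T234 a = Leaf
            | N2 (T234 a) a (T234 a)
            | N3 (T234 a) a (T234 a) a (T234 a)
            | N4 (T234 a) a (T234 a) a (T234 a) a (T234 a)

-- Collapse each black node together with its red children into the single
-- 2-3-4 node it encodes. (Clause order matters: the 4-node case comes first.)
toT234 :: RB a -> T234 a
toT234 E = Leaf
toT234 (T B (T R a w b) x (T R c y d)) =
  N4 (toT234 a) w (toT234 b) x (toT234 c) y (toT234 d)
toT234 (T B (T R a w b) x c) = N3 (toT234 a) w (toT234 b) x (toT234 c)
toT234 (T B a w (T R b x c)) = N3 (toT234 a) w (toT234 b) x (toT234 c)
toT234 (T _ a w b)           = N2 (toT234 a) w (toT234 b)
```

Read this way, "no red node has a red parent" is just "a 2-3-4 node has at most three keys", and "every path has the same number of black nodes" is "all leaves of the B-tree are at the same depth".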

[gjm]

There are other kinds of binary tree with simpler rebalancing procedures, most notably the AVL tree mentioned by cousin_it. I think red-black tends to dominate for some combination of these reasons:

  • Tradition. Some influential sources (e.g., Sedgewick's algorithms book[1], SGI's STL implementation) used, or gave more visibility to, red-black trees, and others copied them.
  • Fewer rotations in rebalancing. In some circumstances (certainly deletion; I forget whether it's true for insertion too) AVL trees may need to do Theta(log n) rotations, whereas red-black never need more than O(1).
    • Does this mean an actual advantage in performance? Maaaaybe. Red-black trees are, in the worst case at least, worse-balanced, which may actually matter more. Such benchmarks as I've seen don't suggest a very big advantage for either red-black or AVL over the other.
  • Persistence. If you want to make a persistent data structure out of a binary tree, whether for practical reasons or just to show your students in Haskell, it's easier with a red-black tree.
  • Metadata requirements. A red-black tree needs one bit per node, to store the redness/blackness. An AVL tree needs one and a half bits :-) to store the -1/0/+1 height difference. Perhaps in some implementations it's OK to "waste" one bit per node but not two. (A sketch of the per-node metadata follows below the footnote.)

[1] I think. I don't have a copy myself. Surely it must at least mention AVL trees too, but my hazy recollection is that the balanced-tree algorithm Sedgewick gives most space to is red-black.
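
To make the metadata bullet concrete, the per-node bookkeeping looks roughly like this as Haskell types (a sketch only; the names are invented):

```haskell
-- Red-black: one bit of metadata per node (two possible values).
data Color = Red | Black
data RBTree a = RBLeaf | RBNode Color (RBTree a) a (RBTree a)

-- AVL: three possible values per node (the -1/0/+1 height difference),
-- i.e. log2 3 ~ 1.58 bits -- the "one and a half bits" above.
data Balance = LeftHeavy | Balanced | RightHeavy
data AVLTree a = AVLLeaf | AVLNode Balance (AVLTree a) a (AVLTree a)
```

In practice both tags usually cost at least a whole byte (or get packed into a spare pointer bit), which is presumably part of why the difference rarely decides anything.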

Hm, my implementation of AVL trees seems persistent enough?

[gjm]

I expect it is -- I haven't looked. But AIUI the usual way of making this sort of structure persistent amounts to remembering a log of everything you did, so the requirement for more rotations means that a persistent AVL tree ends up less space-efficient than a persistent red-black tree. Something like that, anyway; I haven't attempted to build either.

[anonymous]

In Haskell, or any GC language really, you don't need a log to make a tree data structure persistent. Just always allocate new nodes instead of changing old ones, so all operations will return a new version of the whole tree. That's O(log n) allocations per operation, because most subtrees can be shared between the old and new versions, only the nodes along one path down the tree need to be new. Anyone who kept a reference to an old version's root can keep accessing the entire old version, because nodes never change. And if nobody kept a reference, GC picks it up. That way it's also easy to have branching history, which seems impossible with a log.

[This comment is no longer endorsed by its author]

Good analysis, thanks. I buy the first two points. I'd be shocked to see an implementation that actually makes use of the lower metadata requirements. Are there languages that provide a boolean primitive that uses a single bit of memory instead of a full byte? Also I don't understand what you mean by persistence.

[gjm]

Suppose you're making tree structures in a pure functional language, where there is no mutable state. Then what you need is functions that e.g. take a tree and a new element and return a new tree, sharing as much of its structure as possible with the old for efficiency's sake, that has the new element in it. These are sometimes called persistent data structures because the old versions stick around (or at least might; they might get garbage-collected once the runtime can prove nothing will ever use them).
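
A minimal sketch of what such a function looks like for a plain (unbalanced) search tree -- names made up and untested, but it shows the structure sharing (the same path-copying idea described a few comments up):

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- Returns a new tree; the argument is never modified.
insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l y r)
  | x < y     = Node (insert x l) y r   -- rebuild one node, share r unchanged
  | x > y     = Node l y (insert x r)   -- share l unchanged
  | otherwise = t                       -- already present: share the whole tree

-- Both versions stay valid and share everything off the insertion path.
example :: (Tree Int, Tree Int)
example = let t1 = insert 3 (insert 1 Leaf)
              t2 = insert 2 t1
          in (t1, t2)
```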

I always felt that AVL trees were easier to understand than red-black. Just wrote some Haskell code for you. As you can see, both insertion and deletion are quite simple and rely on the same rebalancing operation.

Very nice, thanks. Ahh... Haskell really is quite pretty.

[anonymous]

AVL trees always felt simpler to me than red-black. Here's a quick implementation in Haskell, adapted from some code I found online. Delete seems to be about as complex as insert. It might have bugs, but I'm pretty sure the correct version would be about as long.

Edit: it seems like someone has implemented deletion for RB trees in Haskell as well, and it doesn't look too complicated. I haven't checked it carefully though.

[This comment is no longer endorsed by its author]
[anonymous]

Dear LW, I had great fun brainstorming a topic this weekend all by myself, and now I invite you to join in. What's the laziest, most cynical, most hilarious way to combine these three ideas for maximum investor appeal:

  • Blockchain
  • Deep learning
  • Gathering user data
[This comment is no longer endorsed by its author]