All of Mathisco's Comments + Replies

This feels related to Hanson's recent article: https://www.overcomingbias.com/2021/04/to-beat-aliens-we-must-become-aliens.html

Which mentions that the greatest threat to successor ages is preceding ages that don't like the prospect of 'alienation'.

Some of the arguments mentioned here against technology point in that same direction.

I'm not sure I follow. Whether it's the evolving configuration of atoms or bits, both can lead to new applications. The main difference, to me, seems to be that today it is typically harder to configure atoms than bits, but perhaps that's just because of how we designed the atoms underlying the bits? If some desired information system required a specific atomic configuration, then you'd be hardware-constrained again.

Let's say that in order to build AGI we find out you actually need super power-efficient computronium, and silicon can't do that; you need carbon. Now ... (read more)

1[anonymous]
I am saying that below a certain level of abstraction it becomes a solved problem, in that you have precisely defined what correctness is and have fully represented your system. And you can trivially check any output and validate it against a model. The reason software fails constantly is that we don't have a good, computer-checkable definition of what correctness means. Software unit tests help but are not nearly as reliable as tests for silicon correctness. Moreover, software just ends up being absurdly more complex than hardware, and AI systems are worse. Part of it is "unique complexity". A big hardware system is millions of copies of the same repeating element, and locality matters: an element cannot affect another one far away unless a wire connects them. A big software system is millions of lines of often duplicated, nested, and invisibly coupled code.
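To make the "check any output and validate it against a model" point concrete, here is a toy sketch (my own illustration, not from the comment above; all names are hypothetical): a small gate-level-style adder validated bit for bit against a reference specification. The check is mechanical precisely because correctness is fully defined and the input space is finite.

```python
# Toy sketch: validating an implementation against a reference model,
# bit for bit. Names are illustrative.

def ripple_carry_add(a: int, b: int, bits: int = 4) -> int:
    """Stand-in for the hardware under test: an adder built from
    XOR/AND/OR operations, one bit position at a time."""
    result, carry = 0, 0
    for i in range(bits):
        x, y = (a >> i) & 1, (b >> i) & 1
        result |= (x ^ y ^ carry) << i               # sum bit
        carry = (x & y) | (x & carry) | (y & carry)  # carry out
    return result

def reference_model(a: int, b: int, bits: int = 4) -> int:
    """The specification: addition modulo 2**bits."""
    return (a + b) % (1 << bits)

# Exhaustive, mechanical validation over the whole 4-bit input space.
assert all(
    ripple_carry_add(a, b) == reference_model(a, b)
    for a in range(16)
    for b in range(16)
)
```

Software rarely admits a check like this, which is the comment's point: we usually lack an equally precise, machine-checkable definition of what software correctness means.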

I once read a comment somewhere that Paul Graham is not a rationalist, though he does share some traits, like writing a lot of self-improvement advice. From what I can tell, Paul considers himself a builder; a builder of code and companies. But there is some overlap with rationalists: Paul Graham mostly builds information systems. (He is somewhat disdainful of hardware, which I consider the real engineering, but I am a physicist.) Rationalists are focused on improving their own intelligence and other forms of intelligence. So both spend a great dea... (read more)

3eigen
This is a really good comment. If you care to know more about his thinking, he has a book called "Hackers and Painters" which I think sums up his views very well. But yes, it's a redistribution of wealth and power from strong people and bureaucrats to what he calls "nerds", as in people who know technology deeply and actually build things. The idea of instrumental rationality touches at the edges of what builders do, and you need it if you ever desire to act in the world.
1[anonymous]
Note that for hardware, the problem is that you need a minimum instruction set in order to make a computer work. So long as you at least implement the minimum instruction set, and for all supported instructions perform the instructions (which are all in similar classes of functions) bit for bit correctly, you're done. It's ever more difficult to make a faster computer for a similar manufacturing cost and power consumption because the physics keep getting harder. But it is in some ways a "solved problem". Whether a given instance of a computer is 'better' is a measurable parameter: the hardware people try things, and the systems and software engineers adopt the next chip if it meets their definition of 'better'. So yeah, if we want to see new forms of applications that aren't in the same class as what we have already seen, that's a software and math problem.

Thanks for taking the time to transparently write down your approach. I'm spending more and more time optimizing developer effectiveness at work, so posts like this may help me in my own behavior.

Thanks Lsusr. Thinking back, there was a post where you asked people to "pick up the glove" and you mentioned people hardly ever do. It helped kick me out of my passivity. I'm not sure I can be as risk-seeking as you have been in life, but I'm trying to create more instead of just consuming.

Woah, thanks for your confirmation.

I'll admit it's a constant struggle. This smartphone is both a blessing and a curse.

Did you ever follow those guided meditation apps? It's all about recognizing you are distracted and moving back to your breath or some other concentration exercise.

Well, I try to catch myself in the act of avoiding boredom. Reaching for my phone. Or opening some social media app. Or even going to read LessWrong. Those are cues. Instead I now stare out the window a bit, accepting the boredom, doing a micro-meditation. Or I start writing a s... (read more)

Well this is quite a tantalizing introduction.

1Dale Udall
Thank you. I'm currently playing with Excalidraw to create basic diagrams, since Venn diagrams are the best way to introduce the concepts. In fact, whenever I describe it with words, my goal is to simulate these Venn diagrams in my listeners' minds, so I'm better off just plopping them into the post. Now I just have to figure out the best way to include these drawings in the posts. SVG? PNG? Excalidraw's native JSON? I'm lurking and reading the FAQs to figure that out. When I turn it into a blog, it might be best to have my own little wiki because of the way my content and terminology are interconnected.

I read my first anti-news manifesto about 10 years ago and the meme immediately clicked with me. I haven't gone back since; my close family, friends, and colleagues inform me of relevant news.

I haven't been able to convince many others though. So I guess I'll just salute you, fellow meme spreader.

On reflection, I do this too on occasion. If it helps you then it's great, right?

Also, there is a whole literature on meditation posture. If you are prone to falling asleep while lying down, you should consider sitting. But if you are a high-energy individual, then a reclining posture can actually help. Don't feel bad about whatever turns out to work well for you after experimentation.

This reminds me of Left Brain, Right Stuff. It also has content on how overconfidence helps athletes perform something like 4% better, which is a big deal in a relative competition where small differences can make you win or lose. The author then goes on to draw business analogies.

The internal narrator is only one form of thought.

One meditation technique is to quickly label each passing thought (it's called "noting" I believe). At some point you can begin to label the narrator process itself and see it separate from your other thinking processes ("voice" I call it, though it becomes wordless at that point).

[Edit: never mind, the Focusing link actually mentions the labeling. Though I recall Focusing was more about depth of analysis, not fast, high-frequency labeling.]

Always lovely, such practical advice.

By the way, if you can live so close to work that you can cycle or walk to it, you can combine a lot of great things: more exercise, less commuting, more money. If you can then commute together with coworkers, even better.

As another commenter noted, there exists an alternative strategy, which is to organize a lot of one-on-one meetings to build consensus, and then to use a single group meeting to demonstrate that consensus and polarize the remaining minority. This may be a more efficient way to enforce cooperation.

Anyway, I wonder if there is a good method to find out the dominant forces at play here.

Is it not useful to avoid the acceptance of false beliefs? To intercept these false beliefs before they can latch on to your mind or the mind of another. In this sense you should practice spotting false beliefs until it becomes reflexive.

2Adam Zerner
If you are at risk of having fake beliefs latch on to you, then I agree that it is useful to learn about them in order to prevent them from latching on to you. However, I question whether it is common to be at risk of such a thing happening because I can't think of practical examples of fake beliefs happening to non-rationalists, let alone to rationalists (the example you gave doesn't seem like a fake belief). The examples of fake beliefs used in Map and Territory seem contrived. In a way it reminds me of decision theory. My understanding is that expected utility maximization works really well in real life and stuff like Timeless Decision Theory is only needed for contrived examples like Newcomb's problem.

How about another angle?

Most meetings are not just power games. They are pure status games. Only in such group meetings can you show off. Power plays are one way to show off.

You will speak quickly and confidently, while avoiding any commitment to action. If you attend someone else's meeting, you quickly interrupt and share your arguments in order to look confident and competent.

The low status meeting participants are mainly there to watch. They will try to quickly join the highest status viewpoints to avoid loss of more status, thereby causing casc... (read more)

7TheMajor
I certainly expect status games, above and beyond power games. Actually saying 'power games' was the wrong choice of words in my comment. Thank you for pointing this out! That being said, I don't think the situation you describe is fully accurate. You describe group meetings as an arena for status (in the office), whereas I think instead they are primarily a tool for forcing cooperation. The social aspect still dominates the decision making aspect*, but the meeting is positive sum in that it can unify a group into acting towards a certain solution, even if that is not the best solution available.   *I think this is the main reason so many people are confused by the alleged inefficiency of meetings. If you have a difficult problem and no good candidate solutions it is in my experience basically never optimal to ask a group of people at once and hope they collectively solve it. Recognizing that this is at best a side-effect of group meetings cleared up a lot of confusion for me.

This is an approach I recognize. It works well, except when many one-on-ones are happening in parallel on the same topic. Then you are either in a consensus-building race with adversaries and/or constantly re-aligning with allies.

Hah, the polarization effect explains why I always go into important meetings with a sufficient number of allies. But unfortunately that's a way to manipulate the decision-making, not to actually make better decisions.

Yes! It's all about manipulating existing systems. Startup founders are not free, they just operate in a larger system, namely human society.

It is orders of magnitude harder to cut yourself free from society. And more orders of magnitude harder to cut yourself free from earth's ecosystem.

Another reason for Zvi to paint a bleak picture is to make sure mazedom doesn't grow further, ever. Even if mazedom is low, it may still be beneficial to keep it that way.

Assuming you don't spend all your time in some rationalist enclave, then it's still useful to understand false beliefs and other biases. When communicating with others, it's good to see when they try to convince you with a false belief, or when they are trying to convince another person with a false belief.

Also, I admit I recently used a false belief when trying to explain how our corporate organization works to a junior colleague. It's just complex... In my defense, we did briefly brainstorm how to find out how it works.

2Adam Zerner
Not that I'm disagreeing -- in practice I have mixed feelings about this -- but can you elaborate as to why you think it's useful? For the purpose of understanding what it is they are saying? For the purpose of trying to convince them? For the latter, I think it is usually pretty futile to try to change someone's mind when they have a false belief. I'm not seeing how that would be a false belief. If you told me that an organization is complex I would make different predictions than if you told me that it was not complex. It seems like the issue is more that "it's just complex" is an incomplete explanation, not a fake/improper one.

I agree, it's important to create, or at least detect, well-aligned agents. You suggest we need an honesty API.

3johnswentworth
Nope, that is not what I'm talking about here. At least I don't think so. The thing I'm talking about applies even when there's only one agent; it's a question of how that agent's own internal symbols end up connected to physical things in the world, for purposes of the agent's own reasoning. Honesty when communicating with other agents is related, but sort of tangential.

I also recognize this feeling of "You have not done enough" or worse "This goal was meaningless in hindsight". It's probably very instrumental, pushing us and our genes to ever greater heights.

So should we lean into it? Accepting that happiness is forever lost behind some horizon? You will just walk around with this internal nagging feeling.

Or should we fix this bug, as you say, but risk stagnation? One way may be to become a full-time meditating monk. Then you may have a chance to turn your wetware into a personal nirvana until you pop out of existence. But that feels meaningless as well.

I'm trying to find a blend; take the edge off the suffering while moving forward.

Rust is a fascinating new language to learn, but it's not designed for scientific computing. For that, Julia is the new kid on the block, and quite easy to pick up if you know Python.

Julia's IterTools has partition with a step argument as well.

May I ask why you chose Rust to write math and algorithms? I would have chosen Julia :p

6Zack_M_Davis
Realistically?—Python and Rust are the only two languages I'm Really Good at, and I'm more Excited about Rust because I write enough Python at my dayjob? In my defense, the .windows iterator on slices was really handy here and is shamefully missing from Python's itertools.
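For anyone following along in Python, here is a minimal sketch of a sliding-window helper in the spirit of Rust's .windows (and, via the optional step argument, Julia's IterTools.partition). The name and signature are my own; nothing equivalent ships in itertools.

```python
from collections import deque
from itertools import islice

def windows(iterable, n, step=1):
    """Yield overlapping tuples of length n, advancing by `step` items.
    step=1 behaves like Rust's slice .windows iterator; larger steps
    resemble Julia's IterTools.partition(xs, n, step)."""
    it = iter(iterable)
    window = deque(islice(it, n), maxlen=n)
    if len(window) < n:
        return
    yield tuple(window)
    while True:
        chunk = tuple(islice(it, step))
        if len(chunk) < step:
            return
        window.extend(chunk)
        yield tuple(window)

# list(windows([1, 2, 3, 4], 2)) -> [(1, 2), (2, 3), (3, 4)]
```

The deque with maxlen=n keeps only the newest n items, so advancing by step is just extending the window with the next step items.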

This closely relates to the concept of black swan farming.

The typical argument I've read is that we should take more risk, because risk-taking widens the distribution and gives us more probability of ending up in the tail.

However, blind risk-taking widens the distribution symmetrically. So we need to find ways to increase the positive tail probability while taking more risk. You propose 'weak ties' and 'virtue' as a solution.
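As a small illustration of that symmetry (my own sketch, standard library only, numbers arbitrary): widening a normal distribution fattens both tails by exactly the same amount.

```python
from statistics import NormalDist

threshold = 2.0
for sigma in (1.0, 2.0):  # "blind" risk-taking: more variance, same mean
    dist = NormalDist(mu=0.0, sigma=sigma)
    upside = 1 - dist.cdf(threshold)   # P(X > +2)
    downside = dist.cdf(-threshold)    # P(X < -2)
    print(f"sigma={sigma}: upside={upside:.3f}, downside={downside:.3f}")

# sigma=1.0 -> both tails ~0.023; sigma=2.0 -> both tails ~0.159.
# Extra variance alone fattens the bad tail as much as the good one;
# only an asymmetric change favors the upside.
```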

I'm going to take the leap and assume you mean virtue signaling, or any other form of signaling that makes you look like a good ally... (read more)

1brianlui
Great comment; you're right that in addition to avoiding upside decay, we can also try to increase the positive tail! Also good spot that virtue as defined here is relative (e.g. during the Cold War the USA would be considered "virtuous" by dint of being less mean than the USSR).

As I grow older I spend more and more time teaching. I can concur with all points in this post. Sadly it contained no diagrams.

Diagrams are truly awesome. Great diagrams are absolutely amazing. High level summary diagrams are the best. I spend most of my time at work now drawing and explaining diagrams.

Thanks for your article! Improving education is a good, yet difficult goal to pursue.

I'd like to weakly signal boost dev4x.com and the founder Bodo Hoenen, another high school drop-out who became a social entrepreneur with a focus on education. I know him and wish he was more involved with EA and rationality. Maybe a great contact for your network, Samuel?

May I ask why you think you "passively consume" LW content? I notice the same behavior in myself, so I'm curious.

P.S. I hope it's still better than passively consuming most other media.

4DirectedEvolution
In one sentence, active reading produces a higher number of reactions per sentence read. In reading the posts for this exercise, I noticed myself having a far higher number of reactions to the content than normal.

I think it's worth hammering out the definition of a thread here.

Agreed. I only want to include conscious thought processes. So I am modeling myself as having a single-core conscious processor. I assume this aligns with your statement that you are only ever experiencing a single thing, where an experience is equivalent to "a thought during a specified time interval in your consciousness"? The smallest possible time interval that still constitutes a single thought I take to be the period of a conscious brainwave. This random site states a conscious b... (read more)

1Isnasene
Yes -- this fits with my perspective. The definition of the word "thought" is not exactly clear to me, but claiming that its duration is lower-bounded by brainwave duration seems reasonable to me. Yeah, it could be that our conscious attention performs temporal multi-threading -- only being capable of accessing a single one of the many normally background processes going on in the brain at once. Of course, who knows? Maybe it only feels that way because we are only a single conscious attention thread and there are actually many threads like this running in the brain in parallel. Split-brain studies are a potential indicator that this could be true: [quote from Wikipedia] Alternative hypothesis: The way our brain produces thought-words seems like it could in principle be predictive processing a la GPT-2. Maybe we're just bad at multi-tasking because switching rapidly between different topics just confuses whatever brain-part is instantiating predictive processing.

I'll examine the link!

When you say 'one thought at a time', do you mean one conscious thought? From reading all these multi-agent models I assumed the subconscious is a collection of parallel thoughts, or at least multi-threaded.

I also interpreted the Internal Double Crux as spinning up two threads and letting them battle it out.

I recall one dream where I was two individuals at the same time.

I do consider it like two parallel thoughts, though one dominates, or at least I relate my 'self' mostly with one of them. However, how do I evaluate my subjective experie... (read more)
1Isnasene
Yes. The key factor is that, while I might have many computations going on in my brain at once, I am only ever experiencing a single thing. These things flicker into existence and non-existence extremely quickly and are sampled from a broader range of parallel, unexperienced thoughts occurring in the subconscious. I think it's worth hammering out the definition of a thread here. In terms of brain-subagents engaging in computational processes, I'd argue that those are always on subconsciously. When I'm watching and listening to TV, for instance, I'd describe myself as rapidly flickering between three main computational processes: a visual experience, an auditory experience, and an experience of internal monologue. There are also occasionally threads that I give less attention to -- like a muscle being too tense. But I wouldn't consider myself as experiencing all of these processes simultaneously -- instead it's more like I'm seeing a single console output that keeps switching between the data produced by each of the processes.

I found a link in your links to Internal Double Crux. This technique I do recognize.

I recently also tried recursively observing my thoughts, which was interesting. I look at my current thought, then I look at the thought that's looking at the first thought, etc. Until it pops, followed by a moment of stillness; then a new thought arises and I start over. Any name for that?

1Isnasene
Interesting... When you do this, do you consider the experience of the thought looking at your first thought to be happening simultaneously with the experience of your first thought? If so, this would be contrary to my expectation that one only experiences one thought at a time. To quote Scott Alexander quoting Daniel Ingram: If you're interested in this, you might want to also check out Scott's review of Daniel's book.

Thanks!

I wrote with global standards in mind. My own income isn't high compared to US technology industry standards.

In the survey I also see some (social) media links that may be interesting. I have occasionally wondered if we should do something on LinkedIn for more career-related rationalist activities.

I inspired someone; yay!

Since I like profound discussions, I am now going to have to re-read IFS; it didn't fully resonate with me the first time.

I cannot come up with such a cool wolverine story I am afraid.

4Isnasene
Huzzah! To speak more broadly, I'm really interested in joining abstract models of the mind with the way that we subjectively experience ourselves. Back in the day when I was exploring psychological modifications, I would subjectively "mainline" my emotions (ie cause them to happen and become aware of them happening) and then "jam the system" (ie deliberately instigating different emotions and shoving them into that experiential flow). IFS and later Unlocking The Emotional Brain (and Scott Alexander's post on that post, Mental Mountains) helped confirm for me that the thing I thought I was doing was actually the thing I was doing. No worries; you've still got time!

Good day! I've been reading rationalist blogs for approximately 2 years. At this random moment I have decided to make a LessWrong account.

Like most human beings, I suffer and struggle in life. As a rich human, like most LessWrong users I assume (do we have user stats?), I suffer in luxury.

The main struggle is where to spend my time and energy. The opportunity cost of life I suppose. What I do:

  • Improve myself. My thinking, my energy, my health, my wealth, my career, my status.
  • Improve my nearest relationships.
  • Improve my community (a bit).
  • Improve the world (a... (read more)
3gjm
There are surveys, but I think it may have been a few years since the last one. In answer to your specific question, LWers tend to be smart and young, which probably means most are rich by "global" standards, most aren't yet rich by e.g. typical US or UK standards, but many of those will be in another decade or two. (Barring global economic meltdown, superintelligent AI singularity, etc.) I think LW surveys have asked about income but not wealth. E.g., here are results from the 2016 survey which show median income at $40k and mean at $64k; median age is 26, mean is 28. Note that the numbers suggest a lot of people left a lot of the demographic questions blank, and of course people sometimes lie about their income even in anonymous surveys, so be careful about drawing conclusions :-).

No harm done with experimenting a bit I suppose.

Do you have examples of infographics that come close to what you have in mind?

1Jalex S
These infographics feature snippets of pairs of people having difficult conversations about the morality of abortion: https://whatsmyprolifeline.com/

I would like to encourage this!

Alternative representations for a larger audience could be:

  • cartoons explaining a single concept, like XKCD or Dilbert.
  • graphical overviews, like the cognitive bias cheatsheet.

What else would be feasible?

chaosmage*100

I'm fantasizing about infographics with multiple examples of the same bias, an explanation how they're all biased the same way, and very brief talking points like "we're all biased, try to avoid this mistake, forgive others if they make it, learn more at LessWrong.com".

They could be mass-produced with different examples. Like one with a proponent of Minimum Wage and an opponent of it, arguing under intense confirmation bias as described in the table above, with a headline like "Why discussions about Minimum Wage often fail"... (read more)