Reneging prosocially by Duncan Sabien
A good post about reneging on agreements, acting when the other person reneges on you, and making agreements.
I make an isolated sandbox for it using containerization and then run it with --dangerously-skip-permissions inside it. It only has access to what's inside the sandbox.
Your bilinear attention layer is a bilinear function in which the input x is copied and used as both arguments. Using the matrices Dec, L_Enc, and R_enc the way you show is one particular way to parametrize that bilinear function. There are many other ways, the simplest of which would be to just use one tensor of shape [size-of-y, size-of-x, size-of-x]. I'm curious: why did you choose that particular parametrization?
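For concreteness, here's a minimal sketch of the two parametrizations I have in mind; the names and sizes are made up, and the factored version is just my reading of your Dec/L_Enc/R_enc setup:

import torch

d_x, d_y, d_h = 16, 8, 32  # hypothetical sizes

x = torch.randn(d_x)

# Factored parametrization (my reading of the Dec / L_Enc / R_enc setup):
L_enc = torch.randn(d_h, d_x)
R_enc = torch.randn(d_h, d_x)
dec = torch.randn(d_y, d_h)
y_factored = dec @ ((L_enc @ x) * (R_enc @ x))

# Single-tensor parametrization of the same function class:
B = torch.einsum("oh,hi,hj->oij", dec, L_enc, R_enc)
y_full = torch.einsum("oij,i,j->o", B, x, x)

print(torch.allclose(y_factored, y_full, atol=1e-4))  # True: same function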
Also, how did you initialize the model's weights? How do you initialize them to prevent exploding gradients and similar problems?
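For reference, the kinds of initialization I have in mind are PyTorch's built-in schemes; a quick sketch:

import torch
import torch.nn as nn

w = torch.empty(32, 16)
nn.init.xavier_uniform_(w)  # Glorot: variance scaled by fan-in and fan-out
nn.init.kaiming_normal_(w, nonlinearity="relu")  # He: tuned for ReLU-like activations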
I am curious about all this because my master's thesis was about a tensor-network-based alternative to CNNs.
Partner with an OnlyFans model, make a YouTube account for promoting her, use a thirst-trappy avatar for it, and post those comments using that account. Don't forget to promote her OnlyFans in that YouTube account.
I hope this argument works somewhat as an introduction
Well, it surprisingly did. Until the moment when a predicate symbol that had previously been applied only to variables representing objects in the universe of discourse (i.e., normal variables) was applied to a variable representing a predicate. It was at that moment that I understood that I don't understand this stuff.
Modal logic seems interesting. Where can I quickly learn more about it?
An ebook available only on Amazon? Bah. A normal EPUB or something that I can read in a normal ebook reader app, please.
I disagree with your claim in general.
But also, more specifically,
The quintessential example of explosive skill acquisition is foreign language learning. It's standard advice that if you really want to speak Spanish, you should do a language immersion (travel to Mexico and only speak Spanish while there) rather than practicing with apps and textbooks for an hour a week. I'd bet that the person who spent two months hanging around Tijuana, or who immersed themselves in Spanish media and telenovelas for a few months, is going to be better at Spanish than the person who has a million Duolingo points.
I did move to a Spanish-speaking country a year ago. I did try to use Spanish with other people in that country wherever I went. And no, after two months my friend with a million Duolingo points but no experience in an actual Spanish-speaking country was still better than me.
I don't think that if you don't respond to a comment arguing with you, people will think you've lost the argument. I wouldn't think like that. I would just evaluate your argument on my own and I would evaluate the counterargument in the comment on my own. I don't bother to respond to comments very often and I haven't seen anything bad come out of it.
What about some examples from your real life? Asking because we don't really know many details behind the two given examples.
Please recommend cheap (in brain-time and text length) ways to represent different levels of confidence in English text and speech. More concretely, I want English words and ways to use them in sentences without modifying the sentence's structure. Examples:
I am looking for a gears-level introductory course (or a textbook, or anything) in cooking. I want to cook tasty healthy food in an efficient way. I am already often able to cook tasty food, but other times I fail, and often I don't understand what went wrong and how cooking even works.
I've been reading "Playing with movement - Hargrove 2019". This book introduced me to "complexity science", a field that supposedly studies complex systems not by reductionism but by looking at the system as a whole. It seems that some key concepts used in complexity science are: complex system, emergence, adaptivity, nonlinearity, self-organization, constraints, attractors, feedback loops. The book pitches the idea that with many complex adaptive systems, if you want the system to achieve a certain goal, it's bad to specify a specific plan for it; instead it's better to specify or build constraints under which the system, being adaptive and all, will achieve the...
You started self-quarantining, and by that I mean sitting at home alone and barely going outside, in December or January. I wonder, how's it going for you? How do you deal with loneliness?
I want to work on technical AI alignment research. I am trying to choose an AI alignment area to get into. To do that, I want to understand the big picture of how to make AI go well through technical research. This post contains my views on this question. More precisely, I list types of AI safety research and their usefulness.
Probably many things in this post are wrong. I want to become less wrong. The main reason I am posting this is to get feedback. So, please...
Often in psychology articles I see phrases like "X is associated with Y". These articles' sections often read like the author thinks that X causes Y. But if they had evidence that X causes Y, surely they would've written exactly that. In such cases I feel an urge to punish them, so in my mind I instead read it as "Y causes X", just for contrarianism's sake. Or sometimes I imagine what variable Z might exist that causes both X and Y. I think the latter is a useful exercise.
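To make the hidden-variable exercise concrete, here's a toy simulation where Z causes both X and Y, so X and Y come out correlated even though neither causes the other (all names and numbers are made up):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)      # hidden common cause
x = z + rng.normal(size=n)  # X is driven by Z, not by Y
y = z + rng.normal(size=n)  # Y is driven by Z, not by X

print(np.corrcoef(x, y)[0, 1])  # about 0.5, with no causal arrow between X and Y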
Examples:
It appears that some types of humor are more effective than others in reducing stress. Chen and Martin (2007) found that
How to download the documentation of a programming library for offline use.
It turns out that PyTorch's pseudorandom number generator generates different numbers on different GPUs even if I set the same random seed. Consider the following file do_different_gpus_randn_the_same.py:
import torch

seed = 0
torch.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
foo = torch.randn(500, 500, device="cuda")
print(f"{foo.min():.30f}")
print(f"{foo.max():.30f}")
print(f"{foo.min() / foo.max()=:.30f}")
On my system, I get the following for two runs on two different GPUs:
$ CUDA_VISIBLE_DEVICES=0 python do_different_gpus_randn_the_same.py
-4.230118274688720703125000000000
4.457311630249023437500000000000
foo.min() / foo.max()=-0.949029088020324707031250000000
$ CUDA_VISIBLE_DEVICES=1 python do_different_gpus_randn_the_same.py
-4.230118751525878906250000000000
4.377007007598876953125000000000
foo.min() / foo.max()=-0.966440916061401367187500000000
Due to this, for reproducibility's sake I am going to generate all pseudorandom numbers on the CPU and then transfer them to the GPU, like foo = torch.randn(500, 500, device="cpu").to("cuda").
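Spelled out as a full script (same setup as above, with only the generation line changed):

import torch

torch.manual_seed(0)
# Generate on the CPU, then move to the GPU: the CPU generator produces the same
# stream regardless of which GPU the tensor ends up on.
foo = torch.randn(500, 500, device="cpu").to("cuda")
print(f"{foo.min():.30f}")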
A new piece of math notation I've invented which I plan to use whenever I am writing proofs for myself (rather than for other people).
Sometimes when writing a proof, for some long property P(x) I want to write:
It follows from foo that there exists x such that P(x). Let x be such that P(x). Then ...
I don't like that I need to write P(x) twice here. And the whole construction is too long for my liking, especially when the reason foo why such an x exists is obvious. And if I omit the first sentence ("It follows from foo that there exists x such that P(x).") and just write
Let x be such that
Is there software for goal factoring (the CFAR technique)? I want to use it to create a directed graph, which is not necessarily a tree and may have cycles (this requirement disqualifies mind mapping software). Nodes are goals; they have attached text. There's an edge from goal x to goal y iff fulfillment of x directly contributes to fulfillment of y. There should be an easy way to see the graph and modify it, preferably in a visual way.
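In case it helps, here is a minimal sketch of the structure I mean, using networkx (the goals are made-up examples; what I'm after is a tool with a visual editor on top of something like this):

import networkx as nx

g = nx.DiGraph()
g.add_node("gym", text="go to the gym three times a week")
g.add_node("health", text="stay healthy")
g.add_node("energy", text="have energy for work")
# Edge x -> y: fulfillment of x directly contributes to fulfillment of y.
g.add_edge("gym", "health")
g.add_edge("health", "energy")
g.add_edge("energy", "gym")  # cycles are allowed, so this is not a tree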
Many rationalists are interested in blockchain. This article describes important mathematical problems related to blockchain, and potential solutions to cooperation problems and philanthropy via mechanism design (quadratic voting, quadratic funding).
In my understanding, here are the main features of deep convolutional neural networks (DCNNs) that make them work really well. (Disclaimer: I am not a specialist in CNNs; I have done one master's-level deep learning course, and I have worked on accelerating DCNNs for 3 months.) For each feature, I give my probability that having this feature is an important component of DCNN success, compared to having it only to the extent that an average non-DCNN machine learning model has it (e.g., a DCNN has weight sharing; an average model doesn't).
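To illustrate the weight-sharing point with a toy parameter count (shapes are made up; the dense layer maps the same input size to the same output size, just without sharing):

import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3)  # 16 small filters shared across all spatial positions
dense = nn.Linear(3 * 32 * 32, 16 * 30 * 30)  # a separate weight for every position

print(sum(p.numel() for p in conv.parameters()))   # 448
print(sum(p.numel() for p in dense.parameters()))  # 44,251,200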
In my MSc courses the lecturer gives proofs of important theorems, while unimportant problems are given as homework. This is bad for me, because it makes me focus on actually figuring out not-too-important stuff. I think it works like this because the course instructors want to (1) make the student do at least something and (2) check whether the student has learned the course material.
Ideally I would like to study using interactive textbooks where everything is a problem to solve on my own. Such a textbook wouldn't show an important theorem's proof right away. Instead it would show me the theorem's statement and ask me to prove it. There should
Many biohacking guides suggest using melatonin. Does liquid melatonin spoil under high temperature if put in tea (95 degrees Celsius)?
More general question: how do I even find answers to questions like this one?
I've been reading the CFAR Handbook's chapter about Againstness. The chapter's idea is that when your sympathetic nervous system (SNS) is dominant over your parasympathetic nervous system (PSNS), your introspection is impaired and you tend to be less rational; hence you should learn to tell which system is currently dominant and also learn to switch to PSNS dominance.
They provide the following table of how your body, mind, and behaviour change depending on which system is dominant:

[table from the CFAR Handbook not reproduced here]
In order to learn where you are on the SNS-PSNS spectrum, they recommend observing yourself in different situations, determining which system is dominant, and trying to find patterns (e.g., maybe after doing physical exercise your SNS is
Previous post: Fundamentals of Formalisation Level 7: Equivalence Relations and Orderings. First post: Fundamentals of Formalisation Level 1: Basic Logic.
Nine months ago we at RAISE started creating a Math Prerequisites for AI Safety online course. It mostly covers MIRI-research-related subjects: set theory, computability theory, and logic, but we want to add machine-learning-related subjects in the future. For four months we added new lessons and announced them on LessWrong. Then we stopped, looked back, and decided to improve their usability. That's what we've been busy with since August.
Follow-up to Fundamentals of Formalisation Level 6: Turing Machines and the Halting Problem. First post.
This is a new lesson of our online course on math formalizations required for AI safety research.
The big ideas:
To move to the next level you need to be able to:
Why this is important:
I need instructions for finding a microwave browning skillet in a non-English-speaking country, where CorningWare might not have existed.