Journal 'Basic and Applied Social Psychology' bans p < 0.05 and 95% confidence intervals
Editorial text isn't very interesting; they call for descriptive statistics and don't recommend any particular analysis.
'MIRI' works in the search field when selecting a charity to receive 0.5% of your https://smile.amazon.com purchases.
I'm unclear on whether the 'dimensionality' (complexity) term being minimized needs revision beyond the naive 'number of nonzeros' (or continuous priors that similarly reward parameters at zero); see the sketch below.
Either:
Does this seem fair?
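Going back to the complexity-penalty question above: a minimal sketch of the 'number of nonzeros' count versus its usual continuous relaxation. This is my own illustration, not anything from the original discussion; numpy and the toy parameter vector are assumptions.

```python
import numpy as np

# Hypothetical fitted parameter vector, purely for illustration.
w = np.array([0.0, 1.5, 0.0, -0.2, 3.0])

# Naive complexity measure: the count of nonzero parameters (the "L0" count).
l0_complexity = np.count_nonzero(w)   # -> 3

# A continuous stand-in that also rewards parameters at (or near) zero:
# the L1 penalty, the usual convex relaxation of the L0 count.
l1_penalty = np.abs(w).sum()          # -> 4.7

print(l0_complexity, l1_penalty)
```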
This appears to be a high-quality book report. Thanks. I didn't see anywhere that the 'because' is demonstrated. Is it proved in the citations, or do we just have 'plausibly because'?
Physicists' experience with minimizing free energy has long inspired optimization methods in ML. Did playing with free energy actually lead physicists to new optimization methods, or is it just something people like to talk about?
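For one concrete data point, simulated annealing really did come out of statistical physics: the Metropolis acceptance rule applied with a cooling 'temperature'. A toy sketch just to show the shape of the method; the objective and all parameter values are made up.

```python
import math
import random

def simulated_annealing(f, x0, steps=20_000, t0=1.0, step_size=0.5):
    """Minimize f(x) for scalar x with a Metropolis-style acceptance rule."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(steps):
        t = max(t0 * (1 - k / steps), 1e-9)      # linear cooling schedule
        cand = x + random.gauss(0.0, step_size)  # random local proposal
        fc = f(cand)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-delta_E / T), as in the physics analogy.
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Toy multimodal objective; the global minimum is near x = -2.
print(simulated_annealing(lambda x: (x**2 - 4)**2 + x, x0=5.0))
```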
This kind of reply is ridiculous and insulting.
> We have good reason to suspect that biological intelligence, and hence human intelligence roughly follow similar scaling law patterns to what we observe in machine learning systems
No, we don't. Please state the reason(s) explicitly.
Google's production search is expensive to change, but I'm sure you're right that it is missing some obvious improvements in 'understanding' a la ChatGPT.
One valid excuse for low quality results is that Google's method is actively gamed (for obvious $ reasons) by people who probably have insider info.
IMO a fair comparison would require ChatGPT to do a better job presenting a list of URLs.
How is a discretized set of weights/activations amenable to the usual gradient-descent optimizers?
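One standard answer (my gloss, not necessarily what the parent post has in mind) is the straight-through estimator: run the discrete quantizer in the forward pass, but treat it as the identity when backpropagating, so the usual optimizers still get a gradient for the underlying real-valued weights. A minimal PyTorch sketch:

```python
import torch

class SignSTE(torch.autograd.Function):
    """Binarize in the forward pass; pass the gradient straight through."""

    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)      # discrete op with zero gradient almost everywhere

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output        # pretend the op was the identity for gradients

w = torch.randn(4, requires_grad=True)   # real-valued "latent" weights
loss = SignSTE.apply(w).sum()
loss.backward()
print(w.grad)                             # nonzero, so SGD/Adam can still update w
```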
You have the profits of the vendors selling AI tech (plus the compute supporting it), and you have the improvements to everyone's work from the AI. Presumably the improvements are worth more than the vendors' take (especially if open-source tools are used). So it's not appropriate to say that a small "sells AI" industry equates to a small impact on GDP.
But yes, obviously GDP growth climbing to 20% annually and staying there even for 5 years is ridiculous unless you're a takeoff-believer.
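A toy illustration of the accounting point; every number below is invented purely to show why vendor revenue understates the total impact.

```python
# All figures are made up, purely to illustrate the accounting distinction.
ai_vendor_revenue = 0.2e12        # what the "sells AI" industry itself takes in
buyer_productivity_gain = 1.5e12  # extra output across the buyers' own industries
world_gdp = 100e12

print(f"vendors alone: {ai_vendor_revenue / world_gdp:.1%} of GDP")
print(f"diffuse gains: {buyer_productivity_gain / world_gdp:.1%} of GDP")
```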
You don't have to compute the rotation every time for the weight matrix. You can compute it once. It's true that you have to actually rotate the input activations for every input but that's really trivial.
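A minimal numpy sketch of that point (the shapes and matrices are my own illustration): fold the rotation into the stored weights once, offline, and the only per-input cost is one extra matmul on the activations.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, batch = 8, 4, 3
W = rng.normal(size=(d_in, d_out))

# Random orthogonal (rotation) matrix, so R @ R.T = I.
R, _ = np.linalg.qr(rng.normal(size=(d_in, d_in)))

# Offline, once: fold the rotation into the weight matrix.
W_rot = R.T @ W

# Online, per input: rotate the activations (one cheap matmul).
x = rng.normal(size=(batch, d_in))
x_rot = x @ R

# The layer's output is unchanged: (x R)(R^T W) = x W.
assert np.allclose(x_rot @ W_rot, x @ W)
```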
Has anyone read "How We Reason" by Philip Johnson-Laird? He and others in his field (the "model theory" of psychology/cognitive science) claim that their studies refute the naive idea that human brains often operate in terms of logic or Bayesian reasoning (probabilistic logic). I gather they'd say that we are not Jaynes' perfect Bayesian reasoning robot, or even something resembling a computationally bounded approximation to it.
I'm intrigued by this recommendation:
> ... formal logic cannot be the basis for human reason. Johnson-Laird reviews evidence to this effect. For example, there are many valid conclusions that we never bother to draw because they are of no practical use to us. We also make systematic errors in reasoning
Adam Alter lists some evidence from people who study the effects of "disfluency" (unfamiliarity, or lack of clarity), which somewhat surprisingly leads to greater depth of thought (while you're expending the energy to understand something, you can't help but think about it), and also a willingness to depart further from immediate concrete reality (as in Robin Hanson's Near-Far). Think of the effort given to studying vague, poetic, or just incomprehensible religious materials (sometimes in their original scripts) and the investment this can generate.
Below are some of the linked claims of evidence:
> When you give the prompt ... "Think about what it would be like to be fit and to have done a lot of ...
How many people feel despair in imagining a heaven (positive singularity) that they'll miss out on if they don't survive long enough? I don't think about it, but I already have plenty of reasons to like being alive.
> In Copenhagen the summer before last, I shared a taxi with a man who thought his chance of dying in an artificial intelligence-related accident was as high as that of heart disease or cancer. No surprise if he’d been the driver, perhaps (never tell a taxi driver that you’re a philosopher!), but this was a man who has spent his career with computers.
Nothing new for LW, but interesting to see some non-sci-fi public discussion of AI risk.
Three Toed Sloth has a nice exposition on the difficulties of optimizing an economy, including the best explanation of convex optimization ever:
> If plan A calls for 10,000 diapers and 2,000 towels, and plan B calls for 2,000 diapers and 10,000 towels, we could do half of plan A and half of plan B, make 6,000 diapers and 6,000 towels, and not run up against the constraints.
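In symbols, the half-and-half plan is just a convex combination of the two plans (my paraphrase of the post's point, not a quote from it):

```latex
\tfrac{1}{2}\,(10{,}000,\ 2{,}000) \;+\; \tfrac{1}{2}\,(2{,}000,\ 10{,}000) \;=\; (6{,}000,\ 6{,}000)
```

More generally, if plans x_A and x_B are both feasible and the feasible set is convex, then every mixture λ·x_A + (1−λ)·x_B with 0 ≤ λ ≤ 1 is feasible too, which is exactly why the averaging argument works.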
Anyone else going? http://blog.daggre.org/
Looks like you can barely get direct roundtrip from LA<->DC for $600 now (probably double that if you wait a week to book).
If you'd like to see some visual representations of how conditional independence is neither necessary nor sufficient for independence, confounding causes, explaining away, etc., you should be able to view these videos from ai-class.com.
Working the exercises gave me a better understanding than the "I understand this and so don't need to actually apply it" feeling that almost satisfied me.
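As a quick check on one direction of that claim (marginally independent variables that are not conditionally independent, i.e. explaining away), here is a small enumeration. It is not taken from the videos; the distribution is made up.

```python
import itertools

# Two independent binary causes A and B; the effect is E = A or B.
p_a, p_b = 0.5, 0.5

joint = {}
for a, b in itertools.product([0, 1], repeat=2):
    e = int(a or b)
    joint[(a, b, e)] = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)

def p(pred):
    """Total probability of the outcomes satisfying pred(a, b, e)."""
    return sum(v for k, v in joint.items() if pred(*k))

# Marginally, A and B are independent: P(A,B) == P(A) P(B).
print(p(lambda a, b, e: a == 1 and b == 1),
      p(lambda a, b, e: a == 1) * p(lambda a, b, e: b == 1))   # 0.25, 0.25

# Given E = 1 they are dependent: learning A = 1 "explains away" B.
pe = p(lambda a, b, e: e == 1)
print(p(lambda a, b, e: a == 1 and b == 1 and e == 1) / pe,    # 1/3
      (p(lambda a, b, e: a == 1 and e == 1) / pe)
      * (p(lambda a, b, e: b == 1 and e == 1) / pe))           # 4/9
```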
Having a large pool of specific information you can effectively recall is a sign of mental health and quite useful. I've noticed that various successful and charismatic commentators appear to have talent in this area. It's possible that, as well as being a sign of health, it buffers brain abilities generally, and that modern recall-augmenting tools will atrophy the native facility. It seems you can test quite high on IQ as long as you're capable of remembering what words mean, without being guaranteed exceptional long-term memory capacity.