Actually, toxoplasmosis is the only real disease I know of that works for Solomon's Problem: it has negative side effects, but it also induces a preference (liking cats) whose satisfaction makes you happy. You can either pet cute kittens that have already been tested and guaranteed toxoplasmosis-free, or refrain. This ought to be our go-to real-life example against EDT!
Cold fusion exists; little doubt about that. It is just much colder than people expect: thanks to quantum tunneling, it is a question of "when", not of "if", for two hydrogen atoms to fuse. That's elementary.
Fusion perhaps a billion times colder than the so-called "cold fusion" is a fact of life.
I once had my friend calculate the probability of a single pair of hydrogen nuclei fusing in the reaction of 2H2 with O2 in a balloon (which produces a cool boom and water vapor). Despite the enormous number of molecules, and the fact that some fraction of them at the high-energy tail of the distribution should be going really fast, the probability that any pair was going fast enough to fuse was e^-somethinghuge.
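That back-of-the-envelope calculation is easy to reproduce. The Python sketch below uses figures I've assumed for illustration (a ~3000 K flame temperature and a ~300 keV Coulomb barrier for two protons); it just evaluates the exponent of the Maxwell-Boltzmann tail, which is the "somethinghuge" above.

```python
# Rough Boltzmann-tail estimate. Both numbers below are assumed,
# order-of-magnitude figures, not measured values:
#   - flame temperature of a 2H2 + O2 explosion: ~3000 K
#   - classical Coulomb barrier for two protons to touch: ~300 keV
k_B = 8.617e-5       # Boltzmann constant, eV/K
T = 3000.0           # flame temperature, K
E_barrier = 3.0e5    # Coulomb barrier, eV

# The fraction of nuclei with thermal energy above the barrier scales
# roughly as exp(-E/kT); the value itself underflows any float, so we
# report the exponent instead.
exponent = E_barrier / (k_B * T)
print(f"tail fraction ~ exp(-{exponent:.3g})")  # exp(-1.16e6)
```

Even with the ~10^23 molecules in the balloon, a per-pair probability of roughly exp(-10^6) leaves the expected number of fusion events indistinguishable from zero.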
From Zombies to Artificial Intelligence: Thinking Clearly About Truth, Morality and Winning in This And Other Worlds.
Slight rework: From AI to Zombies: Thinking Clearly About Truth, Morality and Winning in This And Other Worlds.
No problem is perfectly parallelizable in a physical sense. If you build a circuit to solve a problem, and the circuit is one light-year across, you're probably not going to solve the problem in under a year -- technically, any decision problem implemented by a circuit is at least O(n) because that's how the length of the wires scales.
Now, there are a few ways you might want to parallelize intelligence.

1. Throw many independent intelligent entities at the problem. But that requires a lot of redundancy, so the returns will not be linear.
2. Build a team of intelligent entities collaborating to solve the problem, each specializing in one aspect. But since these specialized entities are much farther from each other than the respective modules of a single general intelligence, part of the gains will be offset by massive increases in communication costs.
3. Grow an AI from within, interleaving various modules so that significant intelligence is available at every location in the AI's brain. Unfortunately, doing so requires internal scaffolding (which reduces packing efficiency and slows the AI down), and it still expands in space, with internal communication costs increasing in proportion to its size.
I mean, ultimately, even if you want to do some kind of parallel search, you're likely to use a divide-and-conquer technique with logarithmic-ish depth. But since you still have to pack the data in 3D space, each level takes longer to explore than the previous one, so past a certain point communication costs might outweigh intelligence gains, and parallelization might become something of a pipe dream.
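To make that tradeoff concrete, here is a minimal toy model in Python. Everything in it is an assumption chosen for illustration (unit compute cost per level, signal delay proportional to the linear extent of the region being merged); the only point is the shape of the result.

```python
def total_time(n, t_compute=1.0, t_signal=1.0):
    """Toy cost of a binary divide-and-conquer over n items packed in 3D.

    Each level halves the subproblem; a 3D region holding m items has
    linear extent ~ m**(1/3), so the signal delay to merge that level's
    results scales the same way.
    """
    time, size = 0.0, n
    while size >= 1:
        time += t_compute + t_signal * size ** (1 / 3)
        size //= 2
    return time

# The log-depth compute term grows slowly; the top few merge levels
# dominate, so total time scales like n**(1/3), not log(n).
for n in (10**6, 10**9, 10**12):
    print(f"n = {n:.0e}: total time ~ {total_time(n):.0f}")
```

Communication grows polynomially in problem size while the extra depth only adds a logarithmic number of levels, so past some scale the wires dominate, exactly the pipe-dream regime described above.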
technically, any decision problem implemented by a circuit is at least O(n) because that's how the length of the wires scales.
That is a pretty cool idea.
Supplementing potassium has a large effect on mental performance for some people, and it's cheap and easy enough to be well worth trying. Personally, I add a few grams of KCl (Nu-Salt) to a drink.
What is the actual evidence for this? I've only heard gwern say that Kevin said it was good. Google thinks it's for everything but mental performance.
This sounds like an awesome idea. It also sounds like a computer-assisted human EURISKO fooming device.
I'm curious how this worked for you. (The lighting, that is, not the experimental method in particular.)
Unfortunately not enough of an effect for me to claim anything. I do like it brighter though, so I will continue using the lights.
I don't think you need that - you can still profit from God's offers, even without Alex Mennen's condition.
You can profit, but that's not the goal of normative rationality. We want to maximize utility.
This is like the supremum-chasing Alex Mennen mentioned. It's possible that normative rationality simply requires that your utility function satisfy that condition, just as it requires the VNM axioms.
I'm honestly not sure. It's a pretty disturbing situation in general.
Saw the video before this post, thought to make a prediction, and was correct! :D