[LINK] "Harry Potter And The Cryptocurrency of Stars"
A blog post by Patrick McKenzie/patio11, a blogger/entrepreneur/SEO wiz/etc. frequently cited here, about Stellar, a new currency exchange protocol. [EDIT: apparently not that new.] He explains it in an ELI5 (or, more accurately, ELIRW, explain-like-I'm-Ron-Weasley) way, with plenty of humor. Bonus: a comment from Eliezer.
Some quotes:
Harry Potter: [...] Stellarmus, I trust Ron Weasley with one Weasley!GBP.
Ron Weasley: Stellarmus, send Harry Potter a quid!
...
Harry Potter: Stellarmus, what currencies does Hermione Granger accept?
Stellarmus rattles off a long, long list.
Harry Potter: What on earth is a Tokyo!ABL?
Hermione Granger: It’s a claim against an online Magic: The Gathering exchange headquartered in Tokyo for one Alpha Black Lotus, which is a card that I’ve wanted for a while.
Ron Weasley: You’d trust a random company in Tokyo to send you magic cards?
Hermione Granger: They’re not magic cards, they’re Magic cards, and yes, I’d trust that company to hold Magic cards for me. Nothing else though. It would certainly be dreadfully stupid to say “Stellarmus, I trust The Company That Must Not Be Named for 50 million USD.”
Harry Potter: Why do I get the feeling you know more about this topic than I do?
Hermione Granger: Welcome to life, Harry Potter. I know more about every topic than you do.
(Note that this is Canon!Harry, not MoR!Harry, though the defense professor is a lot closer to MoR!Quirrell.)
...
Hermione Granger: I like you, Ron, but not enough to trust you with money. Save my life a few times first and maybe we’ll talk.
...
Harry Potter: Wait, why do Hogwarts faculty trust the Defense Professor when the first rule of wizardry is “Don’t trust the Defense Professor?”
Defense Professor: Because the Hogwarts faculty are fools. Trust is for the weak, anyhow. The only real currency is a totally trustless currency.
...
Defense Professor: Granger is, of course, trusting that The Adversary never controls Hogwarts, Gringotts, and the Ministry of Magic at the same time.
Ron Weasley: That seems pretty reasonable, though.
Defense Professor: You think a far-reaching conspiracy can’t simultaneously capture all your trusted institutions? I love the young and naive.
...
Defense Professor: [...] By the eldritch rites of Satoshi, transfer to my exchange’s account three infinitely divisible currency units. I’ll bounce a fraction of them off your toy network into something that the Stellarmus spell will trade for a ChoChang!JPY, swap that for a cryptocurrency with actual value, and turn you into a value pump.
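The dialogue above is really walking through the "trust line" idea: an IOU can only move to someone who has explicitly declared trust in its issuer, up to a limit. This is NOT Stellar's actual data model, just a toy sketch of that one concept; the names and the one-GBP limit come from the quotes, everything else is made up:

```python
# Toy sketch of the "trust line" idea from the dialogue -- not Stellar's
# actual implementation. A trust line caps how many of an issuer's IOUs
# a holder is willing to accept.
trust = {}     # (holder, issuer, currency) -> maximum IOUs holder accepts
balances = {}  # (holder, issuer, currency) -> IOUs currently held

def extend_trust(holder, issuer, currency, limit):
    """'Stellarmus, I trust Ron Weasley with one Weasley!GBP.'"""
    trust[(holder, issuer, currency)] = limit

def send_iou(issuer, holder, currency, amount):
    """An issuer's IOU moves only if the recipient trusts the issuer for it."""
    key = (holder, issuer, currency)
    held = balances.get(key, 0)
    if held + amount > trust.get(key, 0):
        raise ValueError(f"{holder} does not trust {issuer} for that much {currency}")
    balances[key] = held + amount

extend_trust("Harry", "Ron", "GBP", 1)
send_iou("Ron", "Harry", "GBP", 1)    # fine: within the declared trust limit
# send_iou("Ron", "Harry", "GBP", 1)  # would fail: the limit is exhausted
```

This also illustrates Hermione's point: trusting a Tokyo exchange for one Magic card is just a tiny limit on one trust line, not blanket trust.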
[LINK] Claustrum Stimulation Temporarily Turns Off Consciousness in an otherwise Awake Patient
This paper, or more often the New Scientist's exposition of it, is being discussed online and is rather topical here. In a nutshell, stimulating one small but central area of the brain reversibly rendered one epilepsy patient unconscious without disrupting wakefulness. Impressively, this phenomenon has apparently been hypothesized before, just never tested (because it's hard and usually unethical). A quote from the New Scientist article (emphasis mine):
One electrode was positioned next to the claustrum, an area that had never been stimulated before.
When the team zapped the area with high frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn't respond to auditory or visual commands and her breathing slowed. As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments (Epilepsy and Behavior, doi.org/tgn).
To confirm that they were affecting the woman's consciousness rather than just her ability to speak or move, the team asked her to repeat the word "house" or snap her fingers before the stimulation began. If the stimulation was disrupting a brain region responsible for movement or language she would have stopped moving or talking almost immediately. Instead, she gradually spoke more quietly or moved less and less until she drifted into unconsciousness. Since there was no sign of epileptic brain activity during or after the stimulation, the team is sure that it wasn't a side effect of a seizure.
If confirmed, this hints at several interesting points. For example, a complex enough brain is not sufficient for consciousness; a sort of command-and-control structure, even a relatively small one, is required as well. The low-consciousness state of late-stage dementia sufferers might be due to damage specifically to the claustrum area, not just overall brain deterioration. The researchers speculate that stimulating the area in vegetative-state patients might help "push them out of this state". From an AI research perspective, understanding the difference between wakefulness and consciousness might be interesting, too.
[LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy
Why Talk to Philosophers? Part I. by philosopher of science Wayne Myrvold.
See also Sean Carroll's own blog entry, Physicists Should Stop Saying Silly Things about Philosophy.
Sean classifies the disparaging comments physicists make about philosophy as follows: "Roughly speaking, physicists tend to have three different kinds of lazy critiques of philosophy: one that is totally dopey, one that is frustratingly annoying, and one that is deeply depressing". Specifically:
- “Philosophy tries to understand the universe by pure thought, without collecting experimental data.”
- “Philosophy is completely useless to the everyday job of a working physicist.”
- “Philosophers care too much about deep-sounding meta-questions, instead of sticking to what can be observed and calculated.”
He counters each argument presented.
Personally, I am underwhelmed, since he does not address the point of view that philosophy is great at asking interesting questions but lousy at answering them. Typically, an interesting answer to a philosophical question requires first recasting it in a falsifiable form, so that it becomes a natural-science question, be it physics, cognitive science, AI research or something else. This is locally known as hacking away at the edges. Philosophical questions don't have philosophical answers.
[LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality
Scott suggests that ranking morality is similar to ranking web pages. A quote:
Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” without tacitly presupposing the answer. Well, it seemed to me that, as a first attempt, one could do a lot worse than the following:
A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people.
Proposed solution:
Just like in CLEVER or PageRank, we can begin by giving everyone in the community an equal number of “morality starting credits.” Then we can apply an iterative update rule, where each person A can gain morality credits by cooperating with each other person B, and A gains more credits the more credits B has already. We apply the rule over and over, until the number of morality credits per person converges to an equilibrium. (Or, of course, we can shortcut the process by simply finding the principal eigenvector of the “cooperation matrix,” using whatever algorithm we like.) We then have our objective measure of morality for each individual, solving a 2400-year-old open problem in philosophy.
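The update rule above is just power iteration, so it can be sketched in a few lines. The community, the cooperation matrix, and the convergence details below are my own illustration, not Scott's code:

```python
import numpy as np

# Hypothetical 4-person community. C[a, b] = 1 if person a cooperates
# with person b, 0 otherwise. The matrix is made up for illustration;
# note that person 3 cooperates with everyone but receives no cooperation.
C = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 1, 1, 0],
], dtype=float)

def eigenmorality(coop, iterations=100):
    """Iterate the rule: A's new credits = sum over B of coop(A, B) * B's
    credits, renormalizing each round. This converges to the principal
    eigenvector of the cooperation matrix (Perron-Frobenius)."""
    credits = np.ones(coop.shape[0])  # equal "morality starting credits"
    for _ in range(iterations):
        credits = coop @ credits
        credits /= credits.sum()      # keep the credits comparable
    return credits

print(eigenmorality(C))
```

Equivalently, one could call `np.linalg.eig(C)` and take the eigenvector of the largest eigenvalue; the iterative form just mirrors the "apply the rule over and over" description in the quote.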
He then talks about "eigenmoses and eigenjesus" and other fun ideas, like Plato at the Googleplex.
One final quote:
All that's needed to unravel the circularity is a principal eigenvector computation on the matrix of trust.
EDIT: I am guessing that after judicious application of this algorithm one would end up with the other Scott A's loosely connected components with varying definitions of morality, the Archipelago. UPDATE: He chimes in.
EDIT2: The obvious issue of equating prevailing mores with morality is discussed to death in the comments. Please read them first before raising it yet again here.
List a few posts in Main and/or Discussion which actually made you change your mind
To quote the front page:
> Less Wrong users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models, instead of being able to explain anything.
So, by that logic, one interesting metric of forum quality would be how often what is posted here makes people change their minds. Of course, most of us change our minds almost all the time, but mostly on mundane topics and in very small amounts, probably too small to pay attention to. But if something comes to mind, feel free to link a thread or two. Depending on the response, we can even try to measure how influential newer posts are vs. older ones.
EDIT: Feel free to mention the Sequence posts, as well, could be a useful benchmark.
EDIT2: Why specifically changing your mind and not just learning something new? Because unlearning is much harder than initial learning, and we, to generalize from one example, tend to forget what we unlearned and relapse into old ways of thinking and doing. (Links welcome.) Probably because the patterns etched into System 1 are not easily erased, and just knowing something intellectually does not remove the old habits. So, successfully unlearning something and internalizing a different view, concept or way of doing things is indicative of a much more significant impact than "just" learning something for the first time.
Mathematics as a lossy compression algorithm gone wild
This is yet another half-baked post from my old draft collection, but feel free to Crocker away.
There is an old adage from Eugene Wigner known as the "Unreasonable Effectiveness of Mathematics". Wikipedia:
the mathematical structure of a physical theory often points the way to further advances in that theory and even to empirical predictions.
The way I interpret it is that it is possible to find an algorithm that compresses a set of data points in a way that is also good at predicting other data points not yet observed. In other words, a good approximation is, for some reason, sometimes also a good extrapolation. The rest of this post elaborates on this anti-Platonic point of view.
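The compression-that-extrapolates idea can be made concrete with a toy example (mine, not Wigner's): "compress" many noisy observations of a falling rock into the three coefficients of a quadratic, then check that the compressed model also predicts an unobserved data point well outside the observed range:

```python
import numpy as np

# 50 noisy height measurements of a rock dropped from 100 m,
# observed only during the first second of its fall.
rng = np.random.default_rng(0)
t_observed = np.linspace(0.0, 1.0, 50)
g = 9.8
height = 100.0 - 0.5 * g * t_observed**2 + rng.normal(0.0, 0.05, 50)

# Lossy compression: 50 data points -> 3 numbers (quadratic coefficients).
coeffs = np.polyfit(t_observed, height, deg=2)

# Extrapolation: predict the height at t = 2.0 s, well outside the data.
predicted = np.polyval(coeffs, 2.0)
actual = 100.0 - 0.5 * g * 2.0**2
print(predicted, actual)  # the two agree to within the measurement noise
```

The fit discards most of the information in the 50 points (it is lossy), yet the three surviving numbers say something true about times nobody measured; that the universe permits this is the whole "unreasonable effectiveness" puzzle.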
Now, this point of view is not exactly how most people see math. They imagine it as some near-magical thing that transcends science and reality and, when discovered, learned and used properly, gives one limited powers of clairvoyance. While only a select few wizards have the power to discover new spells (they are known as scientists), the rank and file can still use some of the incantations to make otherwise impossible things happen (they are known as engineers).
This metaphysical view is colorfully expressed by Stephen Hawking:
What is it that breathes fire into the equations and makes a universe for them to describe? The usual approach of science of constructing a mathematical model cannot answer the questions of why there should be a universe for the model to describe. Why does the universe go to all the bother of existing?
Should one interpret this as presuming that math, in the form of "the equations", comes first and only then there is a physical universe for math to describe, for some values of "first" and "then", anyway? Platonism seems to reach roughly the same conclusion:
Wikipedia defines platonism as
the philosophy that affirms the existence of abstract objects, which are asserted to "exist" in a "third realm distinct both from the sensible external world and from the internal world of consciousness", and is the opposite of nominalism
In other words, math would have "existed" even if there were no humans around to discover it. In this sense, it is "real", as opposed to "imagined by humans". Wikipedia on mathematical realism:
mathematical entities exist independently of the human mind. Thus humans do not invent mathematics, but rather discover it, and any other intelligent beings in the universe would presumably do the same. In this point of view, there is really one sort of mathematics that can be discovered: triangles, for example, are real entities, not the creations of the human mind.
Of course, the debate on whether mathematics is "invented" or "discovered" is very old. Eliezer-2008 chimes in in http://lesswrong.com/lw/mq/beautiful_math/:
To say that human beings "invented numbers" - or invented the structure implicit in numbers - seems like claiming that Neil Armstrong hand-crafted the Moon. The universe existed before there were any sentient beings to observe it, which implies that physics preceded physicists.
and later:
The amazing thing is that math is a game without a designer, and yet it is eminently playable.
In the above, I assume that what Eliezer means by physics is not the science of physics (a human endeavor), but the laws according to which our universe came into existence and evolved. These laws are not the universe itself (which would make the statement "physics preceded physicists" simply "the universe preceded physicists", a vacuous tautology), but some separate laws governing it, out there to be discovered. If only we knew them all, we could create a copy of the universe from scratch, if not "for real", then at least as a faithful model. This universe-making recipe is then what physics (the laws, not science) is.
And these laws apparently require mathematics to be properly expressed, so mathematics must "exist" in order for the laws of physics to exist.
Is this the only way to think of math? I don't think so. Let us suppose that the physical universe is the only "real" thing, with none of those Platonic abstract objects. Let us further suppose that this universe is (somewhat) predictable. Now, what does it mean for the universe to be predictable to begin with? Predictable by whom or by what? Here is one approach to predictability, based on agency: a small part of the universe (you, the agent) can construct/contain a model of some larger part of the universe (say, the earth-sun system, including you) and optimize its own actions (to, say, wake up the next morning just as the sun rises).
Does waking up on time count as doing math? Certainly not by the conventional definition of math. Do migratory birds do math when they migrate thousands of miles twice a year, successfully predicting that there would be food sources and warm weather once they get to their destination? Again, certainly not by the conventional definition of math. Now suppose a ship captain lays a course to follow the birds, using maps and tables and calculations. Does this count as doing math? Why, certainly the captain would say so, even if the math in question is relatively simple. Sometimes the inputs both the birds and the humans use are the same: sun and star positions at various times of the day and night, the direction of the magnetic field, the shape of the terrain.
What is the difference between what the birds are doing and what humans are doing? Certainly both make predictions about the universe and act on them. Only birds do this instinctively and humans consciously, by "applying math". But this is a statement about differences in cognition, not about some Platonic mathematical objects. One can even say that birds perform the relevant math instinctively. But this is a rather slippery slope. By this definition amoebas solve the diffusion equation when they move along the sugar gradient toward a food source. While this view has merits, the mathematicians analyzing certain aspects of the Navier-Stokes equations might not take kindly to being compared to protozoa.
So, just as JPEG is a lossy compression algorithm for the part of the universe which creates an image on our retina when we look at a picture, the collection of Newton's laws is a lossy compression algorithm which describes how a thrown rock falls to the ground, or how planets go around the Sun. In both cases we, a tiny part of the universe, are able to model and predict a much larger part, albeit with some loss of accuracy.
What would it mean then for a Universe to not "run on math"? In this approach it means that in such a universe no subsystem can contain a model, no matter how coarse, of a larger system. In other words, such a universe is completely unpredictable from the inside. Such a universe cannot contain agents, intelligence or even the simplest life forms.
Now, to the "gone wild" part of the title. This is where the traditional applied math, like counting sheep, or calculating how many cannons you can arm a ship with before it sinks, or how to predict/cause/exploit the stock market fluctuations, becomes "pure math", or math for math's sake, be it proving the Pythagorean theorem or solving a Millennium Prize problem. At this point the mathematician is no longer interested in modeling a larger part of the universe (except insofar as she predicts that it would be a fun thing to do for her, which is probably not very mathematical).
Now, there is at least one serious objection to this "math is jpg" epistemology. It goes as follows: "in any universe, no matter how convoluted, 1+1=2, so clearly mathematics transcends the specific structure of a single universe". I am skeptical of this logic, since to me 1, +, = and 2 are semi-intuitive models running in our minds, which evolved to model the universe we live in. I can certainly imagine a universe where none of these concepts would be useful in predicting anything, and so they would never evolve in the "mind" of whatever entity inhabits it. To me mathematical concepts are no more universal than moral concepts: sometimes they crystallize into useful models, and sometimes they do not. Just as the human concept of honor would not be useful to spiders, the concept of numbers (which probably is useful to spiders) would not be useful in a universe where size is not a well-defined concept (like something based on a Conformal Field Theory).
So the "Unreasonable Effectiveness of Mathematics" is not at all unreasonable: it reflects the predictability of our universe. Nothing "breathes fire into the equations and makes a universe for them to describe", the equations are but one way a small part of the universe predicts the salient features of a larger part of it. Rather, an interesting question is what features of a predictable universe enable agents to appear in it, and how complex and powerful can these agents get.
Reflective Mini-Tasking against Procrastination
This is a slightly polished version of a draft I originally deemed not ready for posting, but given that people keep saying that the Discussion post quality bar is set unreasonably high, here it is.
Most of us have little aversion to doing something that we perceive as short and easy, even if it is not very interesting. If your English homework consisted of writing a one-line poem (this is actually a thing), you'd be less likely to put it off for later, even if writing poetry is one of your least favorite activities. We are certainly more likely to do something if we hate it less, shifting the balance between "should" and "want" toward want. To quote one of my three favorite Scott A's, the one with an unhealthy addiction to puns,
Just as drugs mysteriously find their own non-fungible money, enjoyable activities mysteriously find their own non-fungible time. If I had to explain it, I'd say the resource bottleneck isn't time but energy/willpower, and that these look similar because working hard saps energy/willpower and relaxing for a while restores it, so when I have less time I also have less energy/willpower. But some things don't require energy/willpower and so are essentially free.
And so there are various anti-akrasia proposals based on increasing the want/should ratio (or should it be the want-should difference?) by reducing the perceived willpower expenditure needed to accomplish a task, and/or sweetening it with a tacked-on reward, such as checking off an item on a to-do list or finishing a pomodoro. These definitely work for a time for some people, but the effect tends to wear off. As one of my coworkers described his attempt to switch from coffee to decaf, the body is fooled for the first few cups, but then it catches on and stops finding decaf enjoyable. (Your experience may vary.) The reason is probably related to negative feedback, also known as punishment in Skinner's operant conditioning model.
I think of many of these attempts to shorten/sweeten a should-task as "mini-tasking". It is also commonly known as "just putting one foot in front of the other" and "taking it day-by-day".
What I find hard is not the process of working through a ready-made set of mini-tasks, but actually breaking a large task down into small ones. So instead I tend to switch from a should-task, like finding a bug in my code, to a want-task (like writing this). I suspect that if I had a bug-finding to-do list in front of me, where once I finished and checked off each short item the larger project would be completed, I would be less inclined to take breaks for fun before feeling guilty and switching back to "work". Unfortunately, creating such a list is a non-trivial and fairly involved task in itself, so I rarely get it done, preferring instead to, say, just dive into the code and hope for the best.
If only I had a way to reflectively (reflexively?) mini-task, where no single action is perceived as long and/or tedious...
[LINK] No Boltzmann Brains in an Empty Expanding Universe
Another link to Sean Carroll's blog: Squelching Boltzmann Brains (And Maybe Eternal Inflation). The discussion of Boltzmann brains has come up many times on LW, starting from this post by Eliezer. Now Sean and his collaborators argue that in an empty expanding universe:
Quantum fluctuations are not dynamical processes inherent to a system, but instead reflect the statistical nature of measurement outcomes. Making a definite measurement requires an out-of-equilibrium, low-entropy detection apparatus that interacts with an environment to induce decoherence. Quantum variables are not equivalent to classical stochastic variables. They may behave similarly when measured repeatedly over time, in which case it is sensible to identify the nonzero variance of a quantum-mechanical observable with the physical fluctuations of a classical variable. In a truly stationary state, however, there are no fluctuations that decohere. We conclude that systems in such a state, including, in particular, the Hartle-Hawking vacuum, never fluctuate into lower-entropy states, including false vacua or configurations with Boltzmann brains.
Although our universe, today or during inflation, is of course not in the vacuum, the cosmic no-hair theorem implies that any patch in an expanding universe with a positive cosmological constant will asymptote to the vacuum. Within QFT in curved spacetime, the Boltzmann brain problem is thus eliminated: a patch in eternal de Sitter can form only a finite (and small) number of brains on its way to the vacuum.
In other words, in an empty universe no macroscopic areas of low entropy can form. And a non-vacuum expanding universe like ours becomes vacuum after a time too short to form more than a few Boltzmann brains.
[LINK] Sean Carroll Against Afterlife
Well, not quite, but close. The debate starts 1 hour after this post is up. From Sean's blog post:
Is There Life After Death? A Debate
No, there’s not. In order to believe otherwise, you would have to be willing to radically alter our fundamental understanding of physics on the basis of almost no evidence. Which I’m not willing to do. But others feel differently! So we’re going to have a debate about it tonight — to be live-streamed.
Note that Sean did extremely well against W.L. Craig (LW discussion), so this should be interesting. His co-debater Steven Novella runs The Skeptics' Guide to the Universe podcast, well worth listening to.
[LINK] Sean Carroll's reflections on his debate with WL Craig on "God and Cosmology"
I previously mentioned this debate a month ago and predicted that Sean Carroll is unlikely to do very well. The debate happened last Friday and Sean posted his post-debate reflections on his popular blog (the full video will be posted soon). Some excerpts:
I think it went well, although I can easily think of several ways I could have done better. On the substance, my major points were that the demand for “causes” and “explanations” is completely inappropriate for modern fundamental physics/cosmology, and that theism is not taken seriously in professional cosmological circles because it is hopelessly ill-defined (no matter what happens in the universe, you can argue that God would have wanted it that way). He defended two of his favorite arguments, the “cosmological argument” and the fine-tuning argument; no real surprises there. In terms of style, from my perspective things got a bit frustrating, because the following pattern repeated multiple times: Craig would make an argument, I would reply, and Craig would just repeat the original argument.
The cosmological argument has two premises: (1) If the universe had a beginning, it has a transcendent cause; and (2) The universe had a beginning. [...] My attitude toward the above two premises is that (2) is completely uncertain, while the “obvious” one (1) is flat-out false. Or not even false, as I put it, because the notion of a “cause” isn’t part of an appropriate vocabulary to use for discussing fundamental physics. [Emphasis mine]
The Aristotelian analysis of causes is outdated when it comes to modern fundamental physics; what matters is whether you can find a formal mathematical model that accounts for the data.
Sean goes over a couple of mistakes he thinks he made in the debate, basically being blindsided by WLC bringing up obscure papers and misinterpreting them to suit his argument.
Sean's reflections are very detailed and worth reading, though I found them hard to summarize. It looks like WLC did his homework better than SC, but it's hard to tell whether it mattered until the video is made public and various interested parties give their feedback. Another couple of quotes, with my emphasis:
For my closing statement, I couldn’t think of many responses to Craig’s closing statement that wouldn’t have simply been me reiterating points from my first two speeches. So I took the opportunity to pull back a little and look at the bigger picture. Namely: we’re talking about “God and Cosmology,” but nobody really becomes a believer in God because it provides the best cosmology. They become theists for other reasons, and the cosmology comes later. That’s because religion is enormously more than theism. Most people become religious for other (non-epistemic) reasons: it provides meaning and purpose, or a sense of community, or a way to be in contact with something transcendent, or simply because it’s an important part of their culture. The problem is that theism, while not identical to religion, forms its basis, at least in most Western religions. So — maybe, I suggested, tentatively — that could change. I give theists a hard time for not accepting the implications of modern science, but I am also happy to give naturalists a hard time when they don’t appreciate the enormous task we face in answering all of the questions that we used to think were answered by God. [...]
To me, Craig’s best moment of the weekend came at the very end, as part of the summary panel discussion. Earlier in the day, Tim Maudlin (who gave a great pro-naturalism talk, explaining that God’s existence wouldn’t have any moral consequences even if it were true) had grumped a little bit about the format. His point was that formal point-counterpoint debates aren’t really the way philosophy is done, which would be closer to a Socratic discussion where issues can be clarified and extended more efficiently. And I agree with that, as far as it goes. But Craig had a robust response, which I also agree with: yes, a debate like this isn’t how philosophy is done, but there are things worth doing other than philosophy, or even teaching philosophy. He said, candidly, that the advantage of the debate format is that it brings out audiences, who find a bit of give-and-take more exciting than a lecture or series of lectures. It’s hard to teach subtle and tricky concepts in such a format, but that’s always a hard thing to do; the point is that if you get the audience there in the first place, a good debater can at least plant a few new ideas in their heads, and hopefully inspire them to take the initiative and learn more on their own.
Sean concurs: "If we think we have good ideas, we should do everything we can to bring them to as many people as possible."
I hope Luke or someone else will find time to watch the video once posted and give their impressions.