Would a series of several posts on astrobiology and the Fermi paradox be appreciated? Each would consist of a link to an external post on a personal blog I have just established to hold my musings on the subject and related matters.
Is anyone interested in another iterated prisoner's dilemma tournament? It has been nearly a year since the last one. Suggestions are also welcome.
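For anyone who wants to tinker before entering, here is a minimal round-robin sketch in Python. The strategies, payoff matrix, and round count are my own illustrative choices, not the rules of any past LW tournament.

```python
import itertools

# Standard PD payoffs: (my_points, their_points) keyed by (my_move, their_move).
# 'C' = cooperate, 'D' = defect. These values are illustrative.
PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
           ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'

def grudger(my_history, their_history):
    # Cooperate until the opponent defects once, then defect forever.
    return 'D' if 'D' in their_history else 'C'

def play_match(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pts_a, pts_b = PAYOFFS[(move_a, move_b)]
        score_a += pts_a
        score_b += pts_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(strategies, rounds=100):
    # Round-robin: every strategy plays every other strategy once.
    totals = {s.__name__: 0 for s in strategies}
    for a, b in itertools.combinations(strategies, 2):
        sa, sb = play_match(a, b, rounds)
        totals[a.__name__] += sa
        totals[b.__name__] += sb
    return sorted(totals.items(), key=lambda kv: -kv[1])

print(tournament([tit_for_tat, always_defect, grudger]))
```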
Just an amusing anecdote:
I do work in exoplanet and solar system habitability (mostly Mars) at a university, in a lab group with four other professional researchers and a bunch of students. The five of us met for lunch today, and it came out that three of the five had independently read HPMoR to its conclusion. After commenting that Ibyqrzbeg'f Iblntre cyndhr gevpx was a pretty good idea, our PI mentioned that some of the students at Caltech used a variant of this on the Curiosity rover: they etched graffiti into hidden corners of the machine ('under cover of calibrations'), so that now their names have an expected lifespan of at least a few million years against Martian erosion. It's a funny story, and also pretty neat to see just how far Eliezer's pernicious influence goes in some circles.
I just listened to a podcast by Sam Harris called "Leaving the Church: A Conversation with Megan Phelps-Roper". It's a phenomenal depiction of the perspective of someone who was born in, but then left, the fundamentalist Westboro Baptist Church.
Most interesting is Megan's clear perspective on what it was like before she left; many LWers will recognize concepts like there being no evidence that could possibly have convinced her that her worldview was wrong. Basically, many things EY warns of in the Sequences, like motivated cognition, are things she went through, and she's great at articulating them.
So the head of BGI, famous for extremely ambitious & expensive genetics projects which are a Chinese national flagship, is stepping down to work on AI because genetics is just too boring these days: http://www.nature.com/news/visionary-leader-of-china-s-genomics-powerhouse-steps-down-1.18059
I haven't been following estimates lately, but how much do people think it would cost in GPUs to approximate a human brain at this point given all the GPU performance leaps lately? I note that deep learning researchers seem to be training networks with up to 10b parameters using a 4 GPU setup costing, IIRC, <$10k, and given the memory improvements NVIDIA & AMD are working on, we can expect continued hardware improvements for at least another year or two.
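For concreteness, here is the Fermi version of that question as I'd set it up; every input (synapse count, FLOPs per synaptic event, GPU throughput and price) is a rough assumption to be argued with, not an established figure.

```python
# Back-of-envelope: GPUs needed for brain-scale computation.
# All inputs are rough assumptions; tweak them freely.
SYNAPSES = 1e14           # ~100 trillion synapses (common rough estimate)
FIRING_RATE_HZ = 1        # average synaptic events per second (assumed)
FLOPS_PER_EVENT = 10      # FLOPs to model one synaptic event (assumed)
GPU_FLOPS = 5e12          # ~5 TFLOPS for a 2015-era high-end GPU (assumed)
GPU_PRICE = 2500          # dollars per GPU, matching a 4-GPU <$10k rig

brain_flops = SYNAPSES * FIRING_RATE_HZ * FLOPS_PER_EVENT
gpus_needed = brain_flops / GPU_FLOPS
print(f"~{brain_flops:.1e} FLOP/s -> ~{gpus_needed:.0f} GPUs, "
      f"~${gpus_needed * GPU_PRICE:,.0f}")
```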
(Schmidhuber's group is also now training networks with 100 layers using their new 'highway network' design; I have to wonder if that has anything to do with Schmidhuber's new NNAISENSE startup, beyond just DeepMind envy... EDIT: probably not, if it was founded in September 2014 and the first highway network paper was pushed to arXiv in May 2015, unless Schmidhuber et al. set it up to clear the way for commercializing their next innovati...
There have been a lot of data breaches recently. Is this because of incompetence, or is it really difficult to maintain a secure database? If I'm going to let at least 100 people have access to a database, and intelligent hackers really want to get access for themselves, do I have much of a chance of stopping the hackers? Restated: have the Chinese and Russians probably hacked into almost every database they really want?
I am not close to being an expert in security, but my reading of one is that yes, the NSA et al. can get into any system they want to, even if it is air-gapped.
Dilettanting:
Like many others, I see that we are currently living in a neural network (NN) renaissance. NNs are not as good as one might wish them to be; in fact, sometimes they seem quite funny.
Still, after some unexpected advances over the last year, they look quite unstoppable to me. Further advances are plausible, and their application to the game of Go, for example, could bring some very interesting achievements. Even a big surprise is possible here.
Does anybody else share my view?
You are not alone. I think NNs are definitely the best approach to AI, and recent progress is quite promising. They have had a lot of success on a number of different AI tasks, from machine vision to translation to video-game playing. They are extremely general-purpose.
Here's a recent quote from Schmidhuber (who I personally believe is the most likely to create AGI):
Schmidhuber and Hassabis identified sequential decision making as the next important research topic. Schmidhuber’s example of Capuchin monkeys was both inspiring and fun (not only because he mistakenly pronounced it as "cappuccino monkey"). In order to pick a fruit at the top of a tree, a Capuchin monkey plans a sequence of sub-goals (e.g., walk to the tree, climb the tree, grab the fruit, …) effortlessly. Schmidhuber believes that we will have machines with animal-level intelligence (like a Capuchin smartphone?) in 10 years.
Schmidhuber’s answer was the most unique one here. He believes that the code for truly working AI agents will be so simple and short that eventually high school students will play around with it. In other words, there won’t be any worry of industries monopolizing AI and its research. Nothing to worry about at all!
Other agents are dangerous to me to the extent that (1) they don't share my values/goals, and (2) they are powerful enough that in pursuing their own goals, they have little need to take game theoretic consideration of my values. ANN based AI will be similar to other humans in (1), and regarding (2) they are likely to be more powerful than humans since they'll be running on faster, more capable hardware than human brains, and probably have better algorithms as well.
Schmidhuber's best case scenario for superintelligence is that they take no interest in humanity, colonize space and leave us to survive on Earth. What's your best case scenario? Does it seem not much worse to you than the best case scenario for FAI (i.e., if humanity could coordinate to solve the cosmic tragedy of the commons problem and wait until we know how to safely build an AGI that shares some compromise, e.g., weighted average, of all human values)?
Here comes the future, unevenly distributed. For crime-fighting purposes, Kuwait intends to record the genome of all of its citizens.
Random analysis! From the fact that they anticipate using $400 million to record and track about 4 million people, you can tell they are talking about using microarrays to log SNP profiles (like 23andMe), or microsatellite repeat lengths, or some otherwise cheap and easy marker-based approach, rather than de novo sequencing. De novo sequencing of that many people would produce more human DNA sequence data than has ever been produced in the history of the world, would clog up the current world complement of high-throughput sequencers for a long time, would be no more useful for legal purposes, and would probably cost $40 billion+ (probably more to develop infrastructure).
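The arithmetic behind that inference, spelled out (the per-genome de novo cost is my own assumed round number for 2015):

```python
budget = 400e6          # announced budget, dollars
people = 4e6            # citizens to be profiled
per_person = budget / people
print(per_person)       # -> $100/person: microarray/marker territory

de_novo_cost = 10_000   # assumed rough 2015 cost of one de novo human genome
print(de_novo_cost * people)  # -> $40 billion, matching the estimate above
```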
DSCOVR is finally at L1 and transmitting back photos. I'm using that one as my new desktop background.
I remember being excited about this more than a decade ago; it's somewhat horrifying to realize that it took longer than New Horizons to reach its destination, though it was traveling through politics, rather than space.
(The non-spectacle value of this mission is at least twofold: the other side of it does solar measurements and replaces earlier CME early warning systems, and this side of it gives us a single temperature and albedo measurement for the Earth, helping with a handful of problems in climate measurement, and thus helping with climate modeling.)
This question is inspired by the surprisingly complicated Wikipedia page on correlation and dependence. Can you explain distance correlation and Brownian covariance, as well as the 'Randomized Dependence Coefficient', in layman's terms, along with their applications, particularly for rationalists? How about the 'correlation ratio', 'polychoric correlation', and 'coefficient of determination'?
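To make at least one of these concrete: distance correlation is zero exactly when the two variables are independent, so unlike Pearson's r it detects nonlinear dependence. Below is a minimal numpy sketch of the standard double-centering formula from Székely et al. (2007); it's my own illustration, not a library call.

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation for 1-D samples x and y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    # Pairwise distance matrices.
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # Double-center each matrix: subtract row/column means, add grand mean.
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()                    # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(distance_correlation(x, x**2))  # clearly > 0: dependence detected
print(np.corrcoef(x, x**2)[0, 1])     # Pearson's r is near 0 here
```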
Is EY's Cognitive Trope Therapy for real or a parody?
It sounds like a parody, yet comes across as weirdly workable. There is a voice in my head telling me I should not respect myself until I become more of a classical tough-guy type, full of courage and strength. However, it does not sound like my father did; it sounds a lot like a teenage bully, actually. My father sounded a lot more like "show yourself respect by expecting a bit more courage or endurance from yourself." Hm. Carl Jung would have a field day with it.
I don't think that in the past people were taking self-help advice from Heracles and Achilles, or in the modern world from Neo and Luke Skywalker.
I don't know how the ancient Greeks related to their legends (although I'm sure that historians of the period do, and it would be worth knowing what they say), but The Matrix and Star Wars are certainly used in that way. Just google "red pill", or "Do or do not. There is no try." And these things aren't just made up by the storytellers. The ideas have long histories.
Literature is full of such practical morality. That is one of its primary functions, from children's fairy tales ("The Ugly Duckling", "The Little Red Hen", "Stone Soup") to high literature (e.g. Dostoevsky, Dickens, "1984"). Peter Watts ("Blindsight") isn't just writing an entertaining story, he's presenting ideas about the nature of mind and consciousness. Golden age sensuwunda SF is saying "we can and will make the world and ourselves vastly better", and has indeed been an inspiration to some of those who went out and did that.
Whenever you think you're just being entertained, look again.
One of our cats (really, my cat) escaped a few days ago after a cat carrier accident. In between working to find her and having emotional breakdowns, I find myself wanting to know what the actual odds of recovering her are. I can find statistics for "the percentage of pets at a shelter for whom original owners were found", but not "the percentage of lost pets that eventually make it back to their owners by any means." Can anyone do better? I don't like fighting unknown odds.
Additionally, if anyone has experience-based advice for locating los...
In my one experience with such a situation, we found our cat (also female, but an outdoor cat) a few days later in a nearby tree. I've seen evidence that other cats also may stay in a single tree for days when scared, notably when a neighbor's indoor cat escaped and was found days later stuck up a tree. Climbing down is more difficult than climbing up, so inexperienced cats getting stuck in trees is somewhat common. My best advice is to check all the nearby trees very thoroughly.
Also, food-related sounds may encourage her to approach, if there are any she is accustomed to, such as food rattling in a dish or tapping on a can of cat food with a fork.
If you think you have come up with a solid, evidence-based reason that you personally should be furious, self-hating, or miserable, bear in mind that these conditions may make you unusually prone to confirmation bias.
Tim Ferriss interviews Josh Waitzkin
The whole thing is interesting, but there's a section which might be especially interesting to rationalists about observing sunk cost fallacies about one's own strategies-- having an idea that looks good and getting so attached to it that one fails to notice the idea is no longer as good as it looked at the beginning.
Unfortunately, I can't find the section quickly-- I hope someone else does and posts the time stamp.
There is an interesting startup that tries to turn cities into villages by getting neighbors to help each other. You need to verify your address via a scanned document, a neighbor, or a code on a postcard they send you. I think the primary reason they find that verification important is that people are allowed to see the full name, picture, and address of people in their own neighborhood, and presumably they don't want to share that with people who are not actually neighbors. This seems to be the key selling point of this startup - this is how it d...
This seems quite absurd. Why would I give my data to an obscure startup (who'll probably sell it sooner or later) and hope people in my neighborhood make the same choice, when I can probably have way better results simply inviting my neighbors for a BBQ?
Have any snake oil salesmen been right?
I usually immediately disregard anyone who has the following cluster of beliefs:
1. The relevant experts are wrong.
2. I have no relevant expertise in this area.
3. My product/idea/invention is amazing in a world-changing way.
4. I could prove it if only the man didn't keep me down.
Characteristic 2 is somewhat optional, but I'm not sure about it. Examples of snake-oil ideas include energy healing, salt water as car fuel, and belief in a flat earth. Ignoring 2, Ludwig Boltzmann is not an example (he did not ...
The healthcare startup scene surprises me.
Why doesn't the free home doctor service put free (bulk-billed) medical clinics out of business?
Why did MetaMed go out of business?
Coincidence or Correlation?
A couple of months ago, I postponed an overnight camping trip due to a gut feeling. I still haven't taken that particular trip, having focused on other activities.
Today, my local newspaper is reporting that a body was found in that park this morning. My natural human instinct is to think "That could have been me!"... but, of course, instincts are less trustworthy than other forms of thinking.
What are the odds that I'm a low-probability-branch Everett Immortality survivor? Do you think I should pay measurably more attenti...
The clusterfuck in medical science, with some well-intentioned attempts to do it better: not actually well, but somewhat better.
Edited to add: A follow-up on the deworming wars (which might be of interest to EAs, as I think deworming was considered to be a very effective intervention) in this blog -- and read the discussion in the comments.
As far as I can tell, utility functions are not standard in financial planning. I think this is dumb (that is, the neglect is dumb; utility functions are smart). Am I right? Sure, you don't know the correct utility function, but see the case for made-up numbers. My guess is to use log of wealth with extra loss-aversion penalties. Wealth is something between 'net worth' and 'disposable savings'.
I had reason to think about this recently from observing a debate over a certain mean/volatility tradeoff. The participants didn't seem to realize that the right dec...
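As one concrete version of the proposal above, here is a minimal sketch; the log-plus-penalty functional form and the 2x loss-aversion multiplier are my illustrative choices, not a standard planner's model.

```python
import numpy as np

def utility(wealth, baseline, loss_aversion=2.0):
    """Log-wealth utility with an extra penalty for ending below baseline.
    The loss_aversion multiplier (~2) is an assumed Kahneman-Tversky-style value."""
    u = np.log(wealth)
    shortfall = np.log(baseline) - u          # positive when below baseline
    return np.where(wealth < baseline, u - (loss_aversion - 1) * shortfall, u)

# Compare a safe plan to a risky mean/volatility tradeoff by expected utility.
rng = np.random.default_rng(1)
baseline = 100_000
safe = np.full(10_000, 105_000)                       # +5%, no volatility
risky = baseline * rng.lognormal(0.10, 0.40, 10_000)  # higher mean, high vol
print(utility(safe, baseline).mean(), utility(risky, baseline).mean())
```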
I have realized I don't understand the first thing about evolutionary psychology. I used to think the selfish gene of a male will want to get planted into as many wombs as possible, and that this is our most basic drive. But actually, any gene that results in having many children but not so many great-great-grandchildren, due to the "quality" of those children being low, would get crowded out by genes that do produce many great-great-grandchildren. Having 17 sons of the Mr. Bean type may not be such a big reproductive success down the road.
Since most women managed to reproduce, we can ass...
I have realized I don't understand the first thing about evolutionary psychology.
If you're really curious, I recommend picking up an evolutionary psychology textbook rather than speculating/seeking feedback on speculations from non-experts. Lots of people have strong opinions about Evo Psych without actually having much real knowledge about the discipline.
Anecdotally, in more traditional societies what men typically want is not a huge army of children but a high-status male heir.
I don't really believe in this anecdote; large numbers of children are definitely a point of pride in traditional cultures.
Since most women managed to reproduce, we can assume a winner strategy is having a large number of daughters
Surely you don't think daughters are more reproductively successful than sons on average?
Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman's principle.
Since most women managed to reproduce, we can assume a winner strategy is having a large number of daughters
But if everyone adopts this strategy, in a few generations women will by far outnumber men, and suddenly having sons is a brilliant strategy instead. You have to think about what strategies are stable in the population of strategies - as you begin to point towards with the comments about game theory. Yes, game theory has of course been used to look at this type of stuff. (I'm certainly not an expert so I won't get into details on how.)
If you haven't read The Selfish Gene by Richard Dawkins, it's a fun read and great for getting into this subject matter. How The Mind Works by Steven Pinker is also a nice readable/popular intro to evolutionary psychology and covers some of the topics you're thinking about here.
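To make the sex-ratio equilibration argument concrete, here is a toy simulation in the spirit of Fisher's principle; all parameter values are arbitrary, and this is a sketch rather than a serious population-genetics model.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2000
# Each individual carries a heritable probability of producing daughters.
p = np.full(N, 0.9)                 # start heavily daughter-biased
female = rng.random(N) < 0.9

for gen in range(100):
    # Every child has exactly one mother and one father, so per-capita
    # parentage is higher for whichever sex is rarer.
    mothers = rng.choice(np.where(female)[0], N)
    fathers = rng.choice(np.where(~female)[0], N)
    # Child inherits the mean of its parents' daughter-bias, plus mutation;
    # clipped away from 0/1 so both sexes always exist.
    p = np.clip((p[mothers] + p[fathers]) / 2 + rng.normal(0, 0.02, N),
                0.05, 0.95)
    female = rng.random(N) < p
    if gen % 20 == 0:
        print(gen, round(female.mean(), 2))  # drifts toward ~0.5
```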
One question/concern I have been monitoring for a while now is the response from conservative Christianity. It's not looking good. Google "Singularity image of the beast" to get an idea.
What kind of problems do you think this will lead to, down the line?
Hopefully none - but the conservative protestant faction seems to have considerable political power in the US, which could lead to policy blunders. Due to that one stupid book (Revelation), the xian biblical worldview is almost programmed to lash out at any future system which offers actual immortality. The controversy over stem cells and cloning is perhaps just the beginning.
On the other hand, out of all religions, liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally.
As an example, consider this quote:
It is a serious thing to live in a society of possible gods and goddesses, to remember that the dullest and most uninteresting person you talk to may one day be a creature which, if you saw it now, you would be strongly tempted to worship.
This sounds like something a transhumanist might say, but it's actually from C.S. Lewis:
The command Be ye perfect is not idealistic gas. Nor is it a command to do the impossible. He is going to make us into creatures that can obey that command. He said (in the Bible) that we were "gods" and He is going to make good His words. If we let Him—for we can prevent Him, if we choose—He will make the feeblest and filthiest of us into a god or goddess, a dazzling, radiant, immortal creature, pulsating all through with such energy and joy and wisdom and love as we cannot now imagine, a bright stainless mirror which reflects back to God perfectly (though, of course, on a smaller scale) His own boundless power and delight and goodness. The process will be long and in parts very painful; but that is what we are in for. Nothing less. He meant what He said.
Divinization or apotheosis is one of the main belief currents underlying xtianity, emphasized to varying degrees across sub-variations and across time.
[We already create lots of new agents with different beliefs ...]
This is true, but:
- I'm not comparing ANN-based AGI to the status quo, but to a future with some sort of near-optimal FAI.
The practical real world FAI that we can create is going to be a civilization that evolves from what we have now - a complex system of agents and hierarchies of agents. ANN-based AGI is a new component, but there is more to a civilization than just the brain hardware.
- The new agents we currently create aren't much more powerful than ourselves, and cannot take over the universe and foreclose the possibility of a better outcome.
Humanity today is enormously more powerful than our ancestors from say a few thousand years ago. AGI just continues the exponential time-acceleration trend, it doesn't necessarily change the trend.
From the perspective of humanity of a thousand years ago, friendliness mainly boils down to a single factor: will the future posthuman civ resurrect them into a heaven sim?
- Humans or humanity as a whole seem capable of making moral and philosophical progress, and this capability is likely to persist in future generations. I'm not sure the same will be true of ANN-based AGIs.
Why not?
One of the main implications of the brain being a ULM is that friendliness is not just a hardware issue. There is a hardware component in terms of the value learning subsystem, but once you solve that, it is mostly a software issue. It's a culture/worldview/education issue. The memetic software of humanity is the same software that we will instill into AGI.
That being said, I do believe that the AGI we create will be far more aligned with our values than our children are.
I look forward to your post explaining this, but again my fear is that since to a large extent I don't know what my own values are (especially when it comes to post-Singularity problems like how to reorganize the universe on a large scale ...
I don't see how that is a problem. You may not know yourself completely, but you have some estimate or distribution over your values. As long as you continue to exist into the future, and as long as you have a significant share in the future decision structure (i.e., wealth or voting rights), that should suffice - you will have time to figure out your long-term values.
Are you not worried that during this time, the AGIs will take over the universe and reorganize it according to their imperfect understanding of our values, which will look disastrous when we become superintelligences ourselves and figure out what we really want?
This is a potential worry, but it can probably be prevented.
The brain is reasonably efficient in terms of intelligence per unit energy. Brains evolved from the bottom up, and biological cells are near-optimal nanocomputers (near optimal both in terms of storage density in DNA and in terms of energy cost per irreversible bit op in DNA copying and protein computations). The energetic cost of computation in brains and modern computers alike is dominated by wire energy dissipation, in terms of bits/J/mm. Moore's law is approaching its end, which will result in hardware that is on par with or a little better than the brain. With huge investments in software cleverness, we can close the gap and achieve AGI. In 5 years or so, let's say that 1 AGI runs, amortized, on 1 GPU (neuromorphics doesn't change this picture dramatically). That means an AGI will only require 100 watts of energy and say $1,000/year. That is about a 100x productivity increase, but in a pinch humans can survive on only $10,000 a year.
Today the foundry industry produces about 10 million mid-to-high-end GPUs per year. There are about 100 million human births per year, around 4 million of them in the US. If we consider only humans with IQ > 135, there are only about 1 million high-IQ humans born per year. This puts some constraints on the transition time, which is likely measured in years.
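Spelling out the arithmetic in the two paragraphs above (all inputs are the estimates given there):

```python
# Numbers taken from the two paragraphs above; all are rough estimates.
agi_cost_per_year = 1_000      # $/year to run 1 AGI on 1 GPU
human_cost_per_year = 10_000   # bare-minimum $/year to sustain a human
print(human_cost_per_year / agi_cost_per_year)   # -> 10x cost advantage

gpus_per_year = 10e6           # mid-to-high-end GPUs produced annually
births_per_year = 100e6        # human births per year worldwide
high_iq_births = 1e6           # births with IQ > 135 (~1% of total)
print(gpus_per_year / high_iq_births)  # -> 10 "AGI births" per high-IQ birth
```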
We don't need to instill values so perfectly that we can rely on our AGI to solve all of our problems until the end of time - we just need AGI to be similar enough to us that it can function as at least a replacement for future human generations and fulfill the game theoretic pact across time of FAI/god/resurrection.
liberal xtianity is perhaps closest to transhumanism, and could be its greatest ally
There's some truth in the first half of that, but I'm not so sure about the second. Expecting that God will at some point transform us into something beyond present-day humanity is a very different thing from planning to make that transformation ourselves. That whole "playing God" accusation probably gets worse, rather than better, if you're actually expecting God to do the thing in question on his own terms and his own schedule.
For a far-from-perfect analogy, ...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.