Is there any advantage to a keyboard with no labels on the keys besides showing off?
It levels the playing field for those who use non-standard layouts.
For anyone who has ever argued over mechanical-switch and buckling-spring keyboards, made the hard choice between vi and Emacs, or manually reassigned a capslock key to control: this is for you.
They are also available in EPUB and Kindle formats.
First, thank you for publishing this illuminating exchange.
I must say that Pei Wang sounds far more convincing to an uninitiated but curious and mildly intelligent layperson (that would be me). That does not mean he is right, but he certainly makes sense.
When Luke goes on to make a point, I often get lost in jargon ("manifest convergent instrumental goals") or have to look up a paper that Pei (or other AGI researchers) does not hold in high regard. When Pei Wang makes an argument, it is intuitively clear and does not require going through a complex chain of reasoning outlined in the works of one Eliezer Yudkowsky and not vetted by the AI community at large. That is, of course, no guarantee of its validity, but it sure is easier to follow.
Some of the statements are quite damning, actually: "The “friendly AI” approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it." If one were to replace AI with physics, I would tend to dismiss EY as a crank just based on this statement, assuming it is accurate.
What makes me trust Pei Wang more than Luke is common-sense statements like "to make AGI safe, to control their experience will probably be the main approach (which is what “education” is all about), but even that cannot guarantee safety." and "unless you get a right idea about what AGI is and how it can be built, it is very unlikely for you to know how to make it safe". Similarly, the SIAI position of “accelerate AI safety research and decelerate AI capabilities research so that we develop safe superhuman AGI first, rather than arbitrary superhuman AGI” rubs me the wrong way. While that does not necessarily mean it is wrong, the inability to convince outside experts that it is right is not a good sign.
This might be my confirmation bias, but I would be hard pressed to disagree with "To develop a non-trivial education theory of AGI requires a good understanding about how the system works, so if we don’t know how to build an AGI, there is no chance for us to know how to make it safe. I don’t think a good education theory can be “proved” in advance, pure theoretically. Rather, we’ll learn most of it by interacting with baby AGIs, just like how many of us learn how to educate children."
As a side point, I cannot help but wonder if the outcome of this discussion would have been different were it EY and not LM involved in it.
The cryonics approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most people. People have ignored it, not because it is indisputable, but because people have not bothered to criticize it.
Edit: Yeah, this was meant as a quote.
The question is whether "AGI researchers" are experts on "AI safety". If the answer is "yes", we should update in their direction simply because they are experts. But if the answer is "no", then Pei Wang is committing argumentum ad populum. In that case, not only should we not pay attention, we should point this out to him.
Merely one example: A calibration game (pulling facts from a database) for Mac, PC, iOS, and Android.
Wouldn't it be easier to implement it as a web application? Then you would need only one code base plus a browser, and it would work on all of those platforms. Distribution is easy: you just email a URL. Updates happen on the server and wouldn't need to be distributed at all.
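Whatever the platform, the scoring core of such a calibration game is small. Here is a minimal sketch (all function names and data shapes are hypothetical, not taken from any existing app): each answer is a stated confidence plus whether it was correct, scored with the Brier rule, and answers are grouped into confidence buckets to compare stated confidence against observed accuracy.

```python
def brier_score(confidence: float, correct: bool) -> float:
    """Squared error between stated confidence and the 0/1 outcome (lower is better)."""
    outcome = 1.0 if correct else 0.0
    return (confidence - outcome) ** 2

def calibration_buckets(answers, n_buckets: int = 10) -> dict:
    """Group (confidence, correct) pairs into buckets and report observed accuracy.

    Returns {bucket midpoint: fraction correct}. A well-calibrated player's
    observed accuracy tracks the bucket midpoint.
    """
    buckets: dict[int, list[bool]] = {}
    for confidence, correct in answers:
        b = min(int(confidence * n_buckets), n_buckets - 1)
        buckets.setdefault(b, []).append(correct)
    return {
        (b + 0.5) / n_buckets: sum(v) / len(v)
        for b, v in sorted(buckets.items())
    }
```

A server-side version of this is all that needs updating when questions or scoring change, which is the distribution advantage of the web-app route.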
“However, Alcor remains something of a shadowy organization that many within the cryonics community are suspicious of.”
Really? That’s a remarkable statement. Alcor has a long history of open communication with its members and the cryonics community in general. Among the ways Alcor does this:
- Cryonics magazine
- Alcor News emailings
- RSS feed
- conferences
- case reports
- extremely detailed website with information on finances, governance… everything
- Facebook page
- Member Forums
See: http://www.alcor.org/newatalcor.html
“Mike Darwin, a former Alcor president, has written at length on both organizations at http://www.chronopause.com, and on the whole, at least based on what I've read, Alcor comes across looking less competent, less trustworthy, and less open than CI.”
Darwin is a member of Alcor, not CI. How do you explain that? Darwin thoroughly enjoys criticizing Alcor (rightly or not) but remains a member. In a related comment, ahartnell says “from what I have read both seem to provide basically the same service”.
This is a remarkable belief. Alcor uses the most advanced cryoprotectant, M22, to perfuse whole bodies and neuros. CI uses a less advanced (and cheaper) cryoprotectant and cryoprotects ONLY THE HEAD, leaving the rest of the body to be straight frozen with massive damage. That is especially odd since many CI members are insistent about being whole-body patients rather than neuros.
Also, and VERY importantly, ischemic time matters hugely. CI members can get standby and transport services from SA by paying a fee (one that makes Alcor neuros significantly LESS expensive). Otherwise, except for CI members undergoing clinical death in the Detroit area, this means long ischemic times and tremendous damage. When I was at CI’s 2011 AGM, Aschwin and Chana de Wolf presented their research findings showing the frightening damage done by extended ischemic time. They also showed that a large majority of CI patients experienced that damage. Staggeringly, no one objected, challenged them, or seemed the least concerned.
You mention Mike Darwin, yet note that in Figure 11 of a recent analysis, he reports that 48 percent of patients in Alcor's present population experienced "minimal ischemia." Of CI, Mike writes, "While this number is discouraging, it is spectacular when compared to the Cryonics Institute, where it is somewhere in the low single digits."
As to Ralph Merkle’s comments: His frank assessment of past practices contradicts the claim that Alcor is secretive. His comments were also about past practices. Unlike CI, Alcor has created robust practices and mechanisms for long-term maintenance and growth of the Patient Care Trust Fund and the Endowment Fund. Go take a look at CI’s financial reports. See how little money is available for the indefinite care and eventual revival of each patient. Also look at the returns on investment of those funds.
For those interested in comparing Alcor and CI, plenty of basic factual information is available here:
maxmore, since you're here, I have a question:
How much life insurance do I need?
The cost for whole body is $200,000. So do I need $200,000, or do I need what it will cost at time of death? Historical data suggests the cost doubles roughly every 20 years.
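Taking the doubling claim at face value, the needed coverage is easy to project. A back-of-the-envelope sketch (the $200,000 figure and the 20-year doubling period come from the comment above; everything else is assumption, and a level-premium policy's actual face value is a separate actuarial question):

```python
def projected_cost(current_cost: float, years: float,
                   doubling_period: float = 20.0) -> float:
    """Project a cost that doubles every `doubling_period` years."""
    return current_cost * 2 ** (years / doubling_period)

WHOLE_BODY_NOW = 200_000  # today's whole-body price, per the comment above

# If death occurs 40 years out, the cost would be 4x today's:
need_in_40y = projected_cost(WHOLE_BODY_NOW, 40)  # 800000.0
```

So a policy sized to today's price would cover only a quarter of the projected cost at a 40-year horizon; either the face value, the assumed horizon, or the policy's growth has to absorb the difference.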
I think when you have a question that fits the first three criteria, it always devolves into mindkilling. (Operating systems are a good example.)
The only time this doesn't happen is when the question is not popular/important. If you want to find an example, you're going to have to let go of either #2 or #5.
Or if I were to try the existence-of-God question, I predict half the population would say it's 100% certain God exists, and the other half would say the opposite.
That sounds like a massive overestimate of the percentage of atheists.
It depends what you mean by 'God'.
Are you planning to provide training for people who are already running meetups?