"Give your listeners the facts—the Family Facts from the experts at The Heritage Foundation." I'm completely reassured.
I'm pretty sure they are sourced from census data. I check the footnotes on websites like that.
Tagline: Coursera for high school
Mission: The economist Eric Hanushek has shown that if the USA could replace the worst 7% of K-12 teachers with merely average teachers, it would have the best education system in the world. What if we instead replaced the bottom 90% of teachers in every country with great instruction?
The Company: Online learning startups like Coursera and Udacity are in the process of showing how technology can scale great teaching to large numbers of university students (I've written about the mechanics of this elsewhere). Let's bring a similar model to high school.
This Company starts in the United States and ties into existing home school regulations with a self-driven web learning program that requires minimal parental involvement and results in a high school diploma. It cloaks itself as merely a tool to aid homeschool parents, similar to existing mail-order tutoring materials, hiding its radical mission to end high school as we know it.
The result is high-quality education for every student. In addition to the high quality, it gives the student schedule flexibility to pursue other interests outside of high school. Many exceptional young people I know dodge the traditional schools early in life. This product gives everyone that opportunity.
By lowering the cost of homeschooling, this product will enlarge the home school market and threaten traditional educrats while producing more exceptional minds.
With direct access to millions of students, the website will be able to monetize through one-on-one tutoring markets, college prep services, and other means.
Course material can be bootstrapped by constructing a curriculum out of free videos provided through sources like the Khan Academy. The value-add of the Company will be to tailor the curriculum to the home-school requirements of the particular state of the student.
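The state-tailoring value-add described above can be sketched as a simple mapping from each state's requirements to playlists of free videos. Everything here is illustrative: the subject names, playlist identifiers, and state requirements are made up, not real regulations or real Khan Academy URLs.

```python
# Hypothetical sketch: assemble a state-compliant curriculum from free video playlists.
# All subjects, playlists, and requirements below are invented for illustration.
FREE_VIDEOS = {
    "algebra": ["khan/algebra-1", "khan/algebra-2"],
    "us_history": ["khan/us-history"],
    "biology": ["khan/biology"],
}

STATE_REQUIREMENTS = {
    "TX": ["algebra", "us_history"],
    "CA": ["algebra", "biology"],
}

def build_curriculum(state):
    """Return the playlists covering one state's required subjects."""
    return {subject: FREE_VIDEOS[subject] for subject in STATE_REQUIREMENTS[state]}

print(build_curriculum("TX"))
```

The point of the sketch is that the content itself is free; the Company's product is the mapping layer.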
My background: I cofounded a company that's had reasonable success. I'm not much of a Less Wrong fan - I find the community to be an intellectual monoculture, dogmatic, and full of blind spots to flaws in the philosophy it preaches. BUT this is an idea that needs to happen, as it will provide much value to the world. Contact me at firstname lastname gmail if you have lots of money or can hack. Or hell, steal the idea and do it yourself. Just make it happen.
the sexual norms based on sacralized individual autonomy end up working very badly in practice, so that we end up with the present rather bizarre situation where we see an unprecedented amount of hand-wringing about all sorts of sex-related problems, and at the same time proud insistence that we have reached unprecedented heights of freedom, enlightenment, and moral superiority in sex-related matters.
The unprecedented amount of hand-wringing might not be indicative of an increase in the number or magnitude of sex-related problems if it turns out that previous norms also discouraged public discussions of such problems. What are the other metrics by which we can say that the current set of norms is working badly in practice? Are there fewer people having sex, are they having less enjoyable sex, are their sexual relationships less fulfilling and shorter-lived, or are these norms destabilising society in other ways?
Out-of-wedlock birth rates have exploded with sexual freedom:
-http://www.familyfacts.org/charts/205/four-in-10-children-are-born-to-unwed-mothers
Marriage is way down:
If an AGI research group were close to success but did not respect friendly AI principles, should the government shut them down?
"I mean... if an external objective morality tells you to kill babies, why should you even listen?"
This is an incredibly dangerous argument. Consider this: "I mean... if some moral argument, whatever the source, tells me to prefer 50 years of torture to any number of dust specks, why should I even listen?"
And we have seen many who literally made this argument.
I'm glad I found this comment. I suffer from an intense feeling of cognitive dissonance when I browse LW and read the posts which sound sensible (like this one) and contradictory posts like the dust specks. I hear "don't use oversimplified morality!" and then I read a post about torturing people because summing utilons told you it was the correct answer. Mind=>blown.
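The "summing utilons" move the comment objects to is just this arithmetic: for any nonzero per-person disutility, a large enough number of people makes the aggregate exceed any single harm. The numbers below are invented placeholders (the original argument uses 3^^^3, unimaginably larger than anything representable here); this is a sketch of the argument's form, not an endorsement.

```python
# Naive utilon-summing, the form of argument the comment finds troubling.
# All magnitudes are arbitrary illustrative choices, not claims about real disutility.
speck_disutility = 1e-9       # assumed tiny harm per dust speck
torture_disutility = 1e6      # assumed harm of 50 years of torture
N = 10**20                    # stand-in for a very large population (3^^^3 is far larger)

# For large enough N, the summed specks "outweigh" the torture:
specks_total = N * speck_disutility
print(specks_total > torture_disutility)
```

Whether cross-person aggregation like this is even valid is exactly what's contested between the "don't use oversimplified morality" posts and the dust-specks post.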
Hello, everyone!
I'd been religious (Christian) my whole life, but was always plagued with the question, "How would I know this is the correct religion, if I'd grown up with a different cultural norm?" I concluded, after many years of passive reflection, that, no, I probably wouldn't have become Christian at all, given that there are so many good people who do not. From there, I discovered that I was severely biased toward Christianity, and in an attempt to overcome that bias, I became atheist before I realized it.
I know that last part is a common idiom that's usually hyperbole, but I really did become atheist well before I consciously knew I was. I remember reading HPMOR, looking up lesswrong.com, reading the post on "Belief in Belief", and realizing that I was doing exactly that: explaining an unsupported theory by patching the holes, instead of reevaluating and updating, given the evidence.
It's been more than religion, too, but that's the area where I really felt it first. Next projects are to apply the principles to my social and professional life.
Welcome!
The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations.
The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo-jumbo as opposed to western mumbo-jumbo. Stoicism might be the most rational joy machine I can find.
Let me know if you ever un-convert.
100% agreed.
I have an enormous amount of sympathy for us humans, who are required to make these kinds of decisions with nothing but our brains. My sympathy increased radically during the period of my life when, due to traumatic brain injury, my level of executive function was highly impaired and ordering lunch became an "above my pay grade" decision. We really do astonishingly well, for what we are.
But none of that changes my belief that we aren't especially well designed for making hard choices.
It's also not surprising that people can't fly across the Atlantic Ocean. But I expect a sufficiently well designed aircraft to do so.
It's interesting that we view those who do make the tough decisions as virtuous - i.e. the commander in a war movie (I'm thinking of Bill Adama). We recognize that it is a hard but valuable thing to do!
This reminds me of a thought I had recently - whether or not God exists, God is coming - as long as humans continue to make technological progress. Although we may regret it (for one, brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the Theist God.
The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but it is meaningless if his creations don't have the ability to choose not-joy. When his creations did choose not-joy, he was sad but he knew it was a possibility. So he gave them help to make it easier to get back to joy.
I know that LW is sensitive to extended religious reference. Please forgive me for skipping the step of translating interesting moral insights from theology into non-religious speak.
I do hope that the beings we make which are orders of magnitude more powerful than us have some sort of complex value system, and not anything as simple as naive algebraic utilitarianism. If they value freedom first, then joy, then they will not enslave us to the joy machines - unless we choose it.
(Side note: this post is tagged with "shut-up-and-multiply". That phrase trips the warning signs for me of a fake utility function, as it always seems to be followed by some naive algebraic utilitarian assertion that makes ethics sound like a solved problem).
edit: Whoa, my expression of my emotional distaste for "shut up and multiply" seems to be attracting down-votes. I'll take it out.
I can't speak for anyone else, but I expect that a sufficiently well designed intelligence, faced with hard choices, makes them. If an intelligence is designed in such a way that, when faced with hard choices, it fails to make them (as happens to humans a lot), I consider that a design failure.
And yes, I expect that it makes them in such a way as to maximize the expected value of its choice... that is, so as to do, insofar as possible, what is worth doing and pursue what is worth pursuing. Which presumes that at any given moment it will at least have a working belief about what is worth doing and worth pursuing.
If an intelligence is designed in such a way that it can't make a choice because it doesn't know what it's trying to achieve by choosing (that is, it doesn't know what it values), I again consider that a design failure. (Again, this happens to humans a lot.)
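The decision rule being described, always pick the option with the highest expected value rather than stalling, can be sketched in a few lines. The options, probabilities, and values below are made-up illustrative numbers.

```python
# Minimal sketch of an agent that never fails to choose: it computes the
# expected value of each option and commits to the maximum.
# Outcome lists are (probability, value) pairs; all numbers are invented.
options = {
    "option_a": [(0.5, 10), (0.5, -2)],
    "option_b": [(0.9, 3), (0.1, 0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

# The hard-choice failure mode is avoided by construction: max() always returns.
best = max(options, key=lambda name: expected_value(options[name]))
print(best)
```

Note that this presumes exactly what the comment says it presumes: a working belief about values, encoded here as the numbers in the outcome lists.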
A common problem that faces humans is that they often have to choose between two different things that they value (such as freedom vs. equality), without an obvious way to make a numerical comparison between the two. How many freeons equal one egaliton? It's certainly inconvenient, but the complexity of value is a fundamentally human feature.
It seems to me that it will be very hard to come up with utility functions for fAI that capture all the things that humans find valuable in life. The topologies of the two systems don't match up.
Is this a design failure? I'm not so sure. I'm not sold on the desirability of having an easily computable value function.
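The freeons-vs-egalitons problem above has a standard formal face: without an exchange rate between the two values, neither option need dominate the other, so no single-number comparison is available. The policies and scores below are invented for illustration.

```python
# Illustrative sketch of value incomparability: two policies scored on two
# incommensurable axes (all numbers are made up).
policy_a = {"freedom": 8, "equality": 3}
policy_b = {"freedom": 4, "equality": 7}

def dominates(x, y):
    """True if x is at least as good as y on every axis."""
    return all(x[k] >= y[k] for k in x)

# Neither policy dominates the other, so without a fixed
# "freeons per egaliton" rate there is no forced ranking.
print(dominates(policy_a, policy_b), dominates(policy_b, policy_a))
```

A utility function forces a ranking anyway, which is exactly the "easily computable value function" whose desirability the comment questions.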
The nice thing about this is that it works on an existing market, while leveraging the successful tactics discovered through hard work by Coursera & the like to bring advances to the domain.
Of course, techniques designed for university courses may not precisely transfer.
I'm skeptical about 'leveraging' videos from Khan Academy for a for-profit education system. Makes it sound half-baked.
This idea may fit with the general spaced-repetition enthusiasm I am seeing in other proposals.
...And you just blew your cover. :)
Nobody of any importance reads Less Wrong :)