I'm game!
These examples emphasize the benefit of frequently taking calibration tests, where we assign probabilities to answers and then check those answers for calibration errors. Perhaps someone could create a website where we could do this regularly? Just collect a large list of questions like the ones above — questions with true answers, but where we have intuitions about what the answer might be — then have us answer those questions with probabilities, and then show us a calibration chart for the last X questions. Granted, collecting the good questions will be most of the work.
What if I were to try to create such a web app? Should I take 5 minutes every lunch break asking friends and colleagues to brainstorm questions? Maybe write a LW post asking for questions? Maybe there could be a section of the site dedicated to collecting and curating good questions (crowdsourced or centrally moderated).
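The calibration chart such a site would show can be sketched in a few lines: bin each (stated probability, was-correct) record by confidence, then compare average stated confidence against observed accuracy in each bin. This is a minimal sketch with hypothetical names, not a spec for the proposed site:

```python
from collections import defaultdict

def calibration_bins(records, bin_width=0.1):
    """Group (probability, correct) records into confidence bins and
    return (avg stated confidence, observed accuracy, count) per bin."""
    bins = defaultdict(list)
    n_bins = int(1 / bin_width)
    for prob, correct in records:
        # Clamp so prob == 1.0 falls into the top bin.
        idx = min(int(prob / bin_width), n_bins - 1)
        bins[idx].append((prob, correct))
    chart = []
    for idx in sorted(bins):
        entries = bins[idx]
        stated = sum(p for p, _ in entries) / len(entries)
        observed = sum(c for _, c in entries) / len(entries)
        chart.append((round(stated, 2), round(observed, 2), len(entries)))
    return chart

# For a well-calibrated answerer, observed accuracy tracks stated confidence.
records = [(0.85, True), (0.85, False), (0.55, True), (0.55, True)]
print(calibration_bins(records))
```

Plotting stated confidence against observed accuracy per bin (with the diagonal as the perfectly calibrated reference) gives the usual calibration chart.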
I received an e-mail saying I wasn't selected a couple days ago. Maybe your spam folder?
No matter. Just received word!
We'll let people know by a week from today (i.e., by Monday, May 2). If anyone needs to know before then, please message me privately and I'll see if we can fast-track your application.
I guess I wasn't selected if I haven't received an email by now? Or are you staying up late sorting applications? Will you email just the selectees or all applicants?
I had been uninterested in reading the original post, but this comment changed that. The concrete example makes the abstract concept clear.
I had the same experience.
Gene therapy of the type we do at the moment always works through an engineered virus. But as the technique progresses, you won't have to be a nation state anymore to do genetic engineering. A small group of super-empowered individuals might be able to do it.
Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military get there first? This has probably been discussed elsewhere here on LW though...
I'm not sure quite what you're advocating here but 'dealing with the 10% of sticklers in a firm but fair way' has very ominous overtones to me.
Well, presumably Roko means we would be restricting the freedom of the irrational sticklers - possibly very efficiently due to our superior intelligence - rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).
An AI that forced anything on humans 'for their own good' against their will would not count as friendly by my definition. A 'friendly AI' project that would be happy building such an AI would actually be an unfriendly AI project in my judgement and I would oppose it. I don't think that the SIAI is working towards such an AI but I am a little wary of the tendency to utilitarian thinking amongst SIAI staff and supporters as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.
I definitely seem to have a tendency toward utilitarian thinking. Could you give me a reading recommendation on the ethical philosophy you subscribe to, so that I can evaluate it more in depth?
I think that AI with greater than human intelligence will happen sooner or later and I'd prefer it to be friendly than not so yes, I'm for the Friendly AI project.
In general I don't support attempting to restrict progress or change simply because some people are not comfortable with it. I don't put that in the same category as imposing compulsory intelligence enhancement on someone who doesn't want it.
Well, the AI would "presume to know" what's in everyone's best interests. How is that different? It's smarter than us, that's it. Self-governance isn't holy.
I'm working on a site that lets people map the logical relations between ideas in a massively collaborative environment.
My background is web development (I run the site songlyrics.com) and philosophy (I'm in a MA program at UChicago doing philosophy of language, logic, and epistemology).
The project is currently just shy of a working prototype. The idea is to do things differently than sites like debategraph.org, where arguments are organized hierarchically. We want to simply have propositions, and logical relations between propositions. Our aim is to develop a single contiguous map of ideas: a graph rather than a hierarchy. If anyone's interested in hearing more, let me know.
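The flat model described above — propositions as nodes, logical relations as labeled edges, with no per-debate hierarchy — can be sketched like this. All names here are hypothetical illustrations, not the project's actual design:

```python
from collections import defaultdict

class IdeaMap:
    """One contiguous graph: propositions plus labeled relations between them."""

    def __init__(self):
        self.propositions = set()
        # proposition -> list of (relation, other proposition)
        self.relations = defaultdict(list)

    def add_proposition(self, text):
        self.propositions.add(text)

    def relate(self, premise, relation, conclusion):
        """Record a logical relation, e.g. 'supports' or 'contradicts'."""
        for p in (premise, conclusion):
            self.add_proposition(p)
        self.relations[premise].append((relation, conclusion))

    def neighbors(self, proposition):
        return self.relations[proposition]

m = IdeaMap()
m.relate("Socrates is a man", "supports", "Socrates is mortal")
m.relate("Socrates is immortal", "contradicts", "Socrates is mortal")
print(m.neighbors("Socrates is a man"))
```

Because every relation lands in the same graph, two independently entered debates that share a proposition automatically become connected — which is the contrast with a hierarchical, per-debate structure.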
Side question, does anyone have any data on how big the community is here at LW? I'm applying for a social business competition and I consider the users of this site to be squarely positioned within my target audience.
I am trying to build a collaborative argumentation-analysis platform. It sounds like we want almost exactly the same thing. Who are you working with? What is your detailed vision?
Please join our FB group at https://www.facebook.com/groups/arguable or contact me at branstrom at gmail.com.