What about the problem that if you admit that logical propositions are only probable, you must admit that the foundations of decision theory and Bayesian inference are only probable (and treat them accordingly)? Doesn't this leave you unable to complete a deduction because of a vicious regress?
I think most formulations of logical uncertainty give axioms and proven propositions probability 1, or 1-minus-epsilon.
Question: Is there such a thing as mathematical ethics? I mean rigorous modelling of moral choices based on mathematical objects (let's call them virtue functions) and derivation of qualitative and/or quantitative properties of these objects using standard math tools like derivatives, order theory, statistics, or whatever.
I'm asking because yesterday I had an interesting discussion about ethics which involved modelling subjective value judgements as a function. I'd like to relate this to possibly existing work.
I did find these links:
http://www.evolutionaryethics.com/chapter7.html (this seems to be more about analogy between math and ethics development)
http://www.utilitarian.org/maths.html (this is mathematical, but seems to apply only to utilitarianism)
modelling subjective value judgements as a function.
Like a utility function?
Provably-secure computing is undervalued as a mechanism for guaranteeing Friendliness from an AI.
I'm not sure what you mean by provably-secure, care to elaborate?
It sounds like it might be required, but it is certainly not sufficient.
I've often considered a self-assessment system where the sitter is prompted with a series of terms from the topic at hand, and asked to rate their understanding on a scale of 0-5, with 0 being "I've never heard of this concept", and 5 being "I could build one of these myself from scratch".
The terms are provided in a random order, and include red-herring terms that have nothing to do with the topic at hand but sound plausible. Whoever provides the dictionary of terms should have some idea of the relative difficulty of each term, but you could refine it further and calibrate it against a sample of known diverse users (novices, high-schoolers, undergrads, etc.).
When someone sits the test, you report their overall score relative to your calibrated sitters ("You scored 76, which puts you at undergrad level"), but you also report something like the Spearman rank coefficient of their answers against the difficulty of the terms. This provides a consistency check for their answers. If they frequently claim greater understanding of advanced concepts than basic ones, their understanding of the topic is almost certainly off-kilter (or they're lying). The presence of red-herring terms (which should all have a canonical score of 0) means the rank-coefficient consistency check is still meaningful for domain experts, or for people hitting the same value for every term.
Actually, this seems like a very good learning-a-new-web-framework dev project. I might give this a go.
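The consistency check above could be sketched roughly like this. It's a hypothetical illustration, not a spec: the term difficulties and ratings are made up, and Spearman's coefficient is computed from scratch (with average ranks for ties) just to keep the sketch self-contained.

```python
def average_ranks(values):
    """Rank values 1..n; ties receive the average of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation on the ranks.
    Assumes neither list is constant (otherwise the denominator is 0)."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up example: canonical difficulties vs. a sitter's 0-5 self-ratings.
difficulty = [1, 2, 3, 4, 5, 0]   # last term is a red herring (canonical 0)
ratings    = [5, 4, 3, 2, 1, 5]   # inverted, and fooled by the red herring
print(spearman(difficulty, ratings))  # strongly negative: off-kilter or lying
```

In production you'd just use `scipy.stats.spearmanr`; the point is only that one number summarises whether the sitter's claimed understanding tracks the terms' actual difficulty ordering.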
Look up Bayesian Truth Serum, not exactly what you're talking about but a generalized way to elicit subjective data. Not certain on its viability for individual rankings, though.
Well, I somewhat strongly disagree (will I get cookies now?).
Assuming a similar amount of information is exchanged*, introducing strong social conventions will overshadow the arguments and skew their results, probably without the participants even noticing. Being on someone's home turf, being served food, the ballast of thousands of years of social evolution: all of these detract from factual debate. Yes, the experience will be more pleasant, filled with polite "Aaaah, I see what you mean!" interjections, and everyone will leave fuzzily happy, well fed, socially status-affirmed and postprandially somnolent. Quirrell would approve, as the cookie-dispensing host. But when you're looking for the correct -- even if unpopular -- deduction?
There's a reason I have more respect for people who implicitly embrace Crocker's Rules and have no need to be coddled, especially when such coddling will (in my opinion) inevitably sway the course of the arguments.
When you're invited to debate a couple of friendly Jehovah's Witnesses (See me adhering to online social conventions? In fact, any religion could be inserted here.) over some tea and cookies, and everybody leaves fuzzily happy thinking "what a great exchange!", you did something wrong. If truth were your objective, that is.
* (otherwise the medium allowing for more exchange mostly wins by default; a long series of Facebook exchanges would allow for more mutual updates than a short tête-à-tête)
I think there are two opposing effects that might happen if you try something like this.
People get less defensive about the identity politics of the debate, which opens both sides to actually engaging with the other side, not automatically rejecting the other side, treating arguments less like soldiers, etc.
People are more likely to let statements they disagree with slide, and the depth and vigor of the discussion is reduced, by focusing on agreements and amicability, rather than disagreements.
A lot of other factors are at play here, but depending on what your biggest problems are in debate, and how much this sort of change will affect them, it might still be a good idea. If the debate is already an actual debate and argument, rather than political attacks and rhetoric, then changing the context to something like this would probably be counterproductive. If the debate is political attacks and rhetoric, on the other hand, a little bit of humanity and amicability is probably not a bad idea.
Anki is a great way to solve the problem you're having for topics like cognitive science and neuroscience. If you manage to translate the book you are reading into Anki flashcards, and you successfully learn those flashcards, you have the knowledge. Anki gives you automatic testing.
What about practical knowledge and skills you might want to practice from those fields? Anki is an excellent substitute for the "short answer" side of standardized testing, but there's more to it than that if you want to apply it, and it's often difficult to find systematic ways to practice such things.
Can you set multiple questions to the same card in Anki? Like, if I wanted to practice something like factoring quadratic equations, would I be able to copy a whole bunch of problems of that type to Anki, and not have each one as an independent card to be memorized?
For high school level knowledge, finding Cambridge International Exams past papers is a fairly good option. The exams are done twice a year, and go back to about 2003 IIRC.
I've already written a binary LMSR prediction market contract which could run on ethereum.
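For readers unfamiliar with the mechanism, here's a minimal Python sketch of the binary LMSR (Hanson's logarithmic market scoring rule) that such a contract would implement. This is an illustration of the pricing rule only, not the actual contract code; the liquidity parameter `b` and all quantities are made-up examples.

```python
import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b):
    """Instantaneous YES price (= market probability of YES):
    e^(q_yes/b) / (e^(q_yes/b) + e^(q_no/b))."""
    e_y, e_n = math.exp(q_yes / b), math.exp(q_no / b)
    return e_y / (e_y + e_n)

def buy_cost(q_yes, q_no, b, d_yes):
    """Cost a trader pays to buy d_yes YES shares: C(q + d) - C(q)."""
    return lmsr_cost(q_yes + d_yes, q_no, b) - lmsr_cost(q_yes, q_no, b)

# Fresh market with liquidity parameter b = 100 (illustrative).
print(price_yes(0, 0, 100))        # 0.5: no trades yet, even odds
print(buy_cost(0, 0, 100, 10))     # cost of 10 YES shares, a bit over 5
print(price_yes(10, 0, 100))       # YES price nudged above 0.5
```

The judge's role then reduces to reporting the outcome, at which point the contract pays 1 unit per winning share; the market maker's worst-case loss is bounded by b * ln(2).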
Skimming the code, it looks like it uses centralized judges. Wouldn't Truthcoin be better to work on?
Yeah, lately I've been trying to implement TruthCoin's SVD voting algorithm in Ethereum. Had a few hiccups so I'm putting it on the back burner for now.
But, unless I did something to the code I've forgotten, the judge can be an arbitrary contract here: a panel of judges, a single judge, a simple proof-of-stake voting system, a TruthCoin-like system, etc.
Is your prediction market code open source? If so, where can we find it?
If this is a communal setting, the logical step for the UDT agents is to coordinate, build a mutual blackmail-prevention fund, and clearly signal their membership. And I'd guess such a thing exists.
That only works if UDT agents make up a significant proportion of agents in the setting. With 10 UDT agents plus 1000 CDT agents, say, the UDT agents are still vulnerable.