The guy who runs OpenDNS says he won't fight SOPA.
That's sad. Here he was quoted as saying
"We can reincorporate as a Cayman Islands company and offer the same great service and reliability and not be a U.S. company anymore."
DynDNS has an article on their site, written by their CEO, Jeremy Hitchcock, called SOPA: What You Should Know & Why Dyn Opposes It. It's not at all ambivalent but doesn't make any promises.
Is anyone else worried about SOPA? Trying to do anything about it?
Referring, of course, to the proposed U.S. legislation which could cause severe damage to the Internet—at least, that's what a lot of people are saying. See, e.g., this Open Letter From Internet Engineers to the U.S. Congress (the first signatory listed is Vint Cerf). On Wikipedia, people including Jimbo Wales are discussing strategies as extreme as blanking the entire site (except for an explanatory message) to get people's attention, and thereby perhaps incite them to action, such as calling their Congressional representative.
I just happened to find out about all this a few hours ago, being someone who tries to avoid distractions like most kinds of news, so possibly others here with similar habits will appreciate having it called to their attention. Or possibly they won't. But to those of you who possess relevant kinds of expertise:
- Is it possible to project likely consequences of this legislation's being passed?
- What are those consequences?
- Assuming they are on net undesirable, what can be done that would be most likely to prevent its passage?
(I think this subject can be discussed without political advocacy, in which I am mostly not at all interested anyway. It just looks like a practical problem to me.)
Edited to Add: I forgot to include a fourth bullet point:
- Again assuming they are undesirable, what can be done to ameliorate or circumvent them?
It seems to have been assumed by many commenters, nevertheless.
“There are a number of people who have knowledge in this field that estimate humanity’s chance at making it through this century at about 50 percent,” Schwall says. “Even if that number is way off and it’s one in a billion, that’s too high for me.”
Presumably he meant something different.
I am working on finishing up a philosophy paper about whether "fine-tuning" (the claim that the physical constants and initial conditions that permit the evolution of life and conscious observers are rare in the space of physically possible parameters) supports "multiverse" hypotheses according to which the cosmos is huge and is heterogeneous in its local conditions. One major argument for the view that fine-tuning does not support multiverse hypotheses is due to Ian Hacking, who claimed that this inference is analogous to an "inverse gambler's fallacy" where a gambler enters a casino, witnesses a roll of dice resulting in double-sixes, and concludes that the people must have been throwing dice for a while.
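(To make the structure of Hacking's charge explicit, here is a minimal Bayesian sketch of why the casino inference fails, assuming fair, independent dice; this is my own illustration, not Hacking's wording.)

```latex
% Let D = ``the witnessed roll shows double-sixes'' and
% N = the number of rolls made before the gambler walked in.
% Independence of the dice from the casino's history gives
P(D \mid N = n) = \tfrac{1}{36} \quad \text{for every } n,
% so the likelihood is constant in n, and by Bayes' theorem
P(N = n \mid D) \;=\; \frac{P(D \mid N = n)\,P(N = n)}{P(D)} \;=\; P(N = n).
```

Since the posterior over $N$ equals the prior, witnessing the double-sixes gives the gambler no evidence at all about how long the dice have been thrown; the disputed question is whether the fine-tuning/multiverse inference shares this structure.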
While going through Nick Bostrom's book Anthropic Bias, I've found his discussion of Hacking's argument (and of a significantly improved recent version by Roger White, available here) somewhat unilluminating, although I thought there must be something wrong with the argument. Going through the existing replies to this argument in the literature, I've found counterarguments that either fail straightforwardly or (more commonly) render fine-tuning irrelevant to whether multiverse hypotheses are confirmed, degenerating into an almost a priori argument that I find very implausible. I've found a fairly simple way of seeing how exactly the Hacking/White argument goes wrong, by combining Bostrom's self-sampling assumption with a technical fix independently arrived at by a few other philosophers. This solution does not generate the implausible a priori argument for the multiverse that previous approaches in the literature do, as long as the reference class (for applying the self-sampling assumption) satisfies some weak requirements.
The result is a critical review paper going through the literature while building up the concepts needed to understand the proposed solution. I've produced all the content by now, and am now mostly working on finishing a draft, integrating notation across sections, making it readable to philosophers with at least rudimentary knowledge of Bayesianism, and in general improving the paper to meet top-tier journal standards.
Yeah, this is an important subject. I'll probably read your paper.
I've found a fairly simple and apparently workable solution
To what, exactly?
Here's a summary and discussion of the affair, with historical comparison to the Gödel results and their reception (as well as comments from several luminaries, and David Chalmers) on a philosophy of mathematics blog whose authors seem to take the position that the reasons for consensus in the mathematical community are mysterious. (It is admitted that "arguably, it cannot be fully explained as a merely socially imposed kind of consensus, due to homogeneous ‘indoctrination’ by means of mathematical education.") This is a subject that needs to be discussed more on LW, in my opinion.
Yes, but just to iterate: it's a failure to empathize not a failure of empathy.
What does it mean to "cultivate an X based morality" and why should we do it? Why should we have an any-one-thing based morality? Obviously picking one moral emotion and only teaching and encouraging that is likely to leave important moral judgments out. I don't think even Peter Singer is recommending that. Nonetheless, empathy seems to have a central if not exclusive role in the motivation and development of lots of really important moral judgments. That empathy is not necessary for all moral judgments does not mean that it can be systematically replaced by other moral emotions in cases where it is central. Helping people is good! We should teach children to help people and laud those who do.
I'm not sure section 5 says... anything at all. All of the things said about empathy in this section are true of people. Try substituting one for the other. Which is to say, they're true for lots of other behaviors and emotions as well. Pointing out that biases affect empathy isn't helpful unless one has found a different moral emotion which inspires an extensionally similar moral judgment (one that leads to the same behaviors), combining the motivational force of empathy without the vulnerability to bias. Anyone have candidates for that?
Edit: Prinz's suggestion is "outrage". He says we should get angry and indignant at the causes of suffering, claiming that this has more motivational power than empathy. This may be the case, but outrage tends to come with empathy (unless the outrage is directed at something causing oneself harm), so it isn't clear how to evaluate this claim. More importantly, I see no reason at all to think outrage is less subject to bias. It can certainly be subject to in-group bias, proximity effects, and salience effects. It can be easily manipulated. It also leads to people looking for an enemy where there isn't necessarily one. This leads to people ignoring causes of suffering like economic inefficiencies and institutional ineffectiveness in favor of targeting people perceived as greedy. A bit richly, he condemns the 'empathy-inspired' moral system of collectivism by referencing collectivist atrocities... as if they had nothing to do with outrage.
So he's outraged by people basing their moral decisions on empathy? I'm... not sure how to empathise with that emotion.
Lacking self-empathy sounds a bit like alexithymia.
Interesting. But one could have the awareness, understanding, and ability to describe, but also an attitude of not caring, with regard to one's own emotions. Or at least some of them, sometimes.
On the other hand, I'm not sure the word 'emotions' means the same thing to everyone. I'm not even sure that what I take it to mean hasn't changed substantially.
ETA: Here I seem to be defining 'empathy' in yet another way. It's odd how my intuition about what a word means can vary situationally. It seems to me right now that I would want to claim I usually think 'empathising with X' is '(accurately) modelling the internal state of X'. But perhaps in contexts where the distinction is irrelevant I may also have been identifying the conjunct of '(accurately) modelling the internal state of X' and 'caring about the result' as 'empathising with X'. And then here I took 'empathy' to be just the 'caring about the result' part.
There seems to be a general tendency here to conflate 'empathy' with 'the particular (biased, inconsistent) ways humans tend to (attempt to) practise empathy'. The latter is obviously far less capable of constituting a basis for morality than the former, on just about any reasonable construal of 'morality' (another term the ambiguous employment of which obviates the usefulness of many an argument on such topics...).
What makes you believe that the US won't go as far?
As executions? As far as I know, the U.S. has only ever had capital punishment for murder and treason. Defining 'use of technology that could be used to circumvent copyright protection' as 'treason' does not appear to be on the horizon yet. I think.