This is Hari's business. She takes innocuous ingredients and makes you afraid of them by pulling them out of context.... Hari's rule? "If a third grader can't pronounce it, don't eat it." My rule? Don't base your diet on the pronunciation skills of an eight-year-old.
From http://gawker.com/the-food-babe-blogger-is-full-of-shit-1694902226
It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.
I only recently realized that evolution works, for the most part, by changing the processes of embryonic development. There are some exceptions-- things like neoteny and metamorphosis-- but most changes are genetic differences leading to differences in, say, how long a process of growth is allowed to occur in the embryo.
This is exactly the point of asking "What Would Jesus Do?" Christians are asking themselves what a perfectly moral, all-knowing person would do in this situation, and using the machinery their brains have for simulating a person to find out the answer, instead of using the general-purpose reasoner that is so easily overworked. Of course, simulating a person (especially a god) accurately can be kind of tricky. Religious people use similar thoughts to get themselves to do things that they want in the abstract but that are hard in the moment: What would I do if I were the kind of person I want to become? What would a perfectly moral, all-knowing person think about what I'm about to do?
I think that arguments like this are a good reason to doubt computationalism. That means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations even in our brains are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.
Sorry, I was just trying to paraphrase the paper in one sentence. The point of the paper is that there is something wrong with computationalism. It attempts to prove that two systems with the same sequence of computational states must have different conscious experiences. It does this by taking a robot brain that calculates the same way as a conscious human brain and transforming it, always using computationally equivalent steps, into a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every moment having the same conscious experiences as the human brain, or we accept that computational equivalence does not determine experience.
Check out "Counterfactuals Can't Count" for a response to this. Basically, if a recording differs in what it experiences from a live run of the computation, then two computations that calculate the same thing in the same way, except that one contains bits of code that never run, must also experience things differently.
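To make the "code that never runs" point concrete, here is a toy illustration of my own (not from the paper): two functions that go through identical steps on every input they ever actually receive, but differ in a branch that never executes.

```python
def elaborate_subroutine(x):
    # Stand-in for an arbitrarily complex computation that is never invoked.
    return sum(i * i for i in range(10**6))

def double_v1(x):
    # Doubles x; nothing else.
    return 2 * x

def double_v2(x):
    # Performs the exact same steps as double_v1 on every input we
    # actually pass it, but carries a dead branch.
    if isinstance(x, str):  # never true: we only ever pass integers
        return elaborate_subroutine(x)
    return 2 * x

# On every actual run, the two functions go through identical steps:
assert all(double_v1(n) == double_v2(n) for n in range(100))
```

If a recording and a live computation differ in what they experience, then by the same reasoning double_v1 and double_v2 should differ too, which is hard to swallow.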
One rational ability that people are really good at, and that is hard to automate (i.e. we haven't made much progress on it), is applying common-sense knowledge to language understanding. Here's a collection of sentences where the referent of a pronoun is ambiguous, but we don't even notice, because we are able to match it up as quickly as we read (the classic example is "The city councilmen refused the demonstrators a permit because they feared violence," where "they" could grammatically refer to either group): http://www.hlt.utdallas.edu/~vince/data/emnlp12/train-emnlp12.txt
You can read a paper on EURISKO here. My impression is that the program quickly exhausted the insights Lenat put in as heuristics, and began journeying down eccentric paths that were of no interest to a human mathematician.
Yes, that's what I'm saying. The other ones are meant to prove a point. This one is just to make you laugh, just like the one it is named after. http://www.mindspring.com/~mfpatton/Tissues.htm
"Unlike these other highly-contrived hypothetical scenarios we invent to test extreme corner-cases of our reasoning, this highly-contrived hypothetical scenario is a parody. If you ever find yourself in the others, you have to take it seriously, but if you find yourself in this one, you are under no such obligation."
We know that some complex processes in our own brains happen unaccompanied by qualia. This is uncontroversial. It doesn't seem unlikely to me that all the processes needed to fake perceptual consciousness convincingly could be implemented using a combination of such processes. I don't know what causes qualia in my brain, and so I'm not certain it would be captured by the emulation in question-- for example, the emulation might not be at a high enough level of detail, might not exploit quantum mechanics in the appropriate way, or whatever. Fading and dancing qualia...
I turned in my PhD dissertation. Here's the title and first paragraph of the abstract:
PRODUCTIVE VISION: METHODS FOR AUTOMATED IMAGE COMPREHENSION
Image comprehension is the ability to summarize, translate, and answer basic questions about images. Using original techniques for scene object parsing, material labeling, and activity recognition, a system can gather information about the objects and actions in a scene. When this information is integrated into a deep knowledge base capable of inference, the system becomes capable of performing tasks that, when performed by students, are considered by educators to demonstrate comprehension.
(Basically it is computer vision combined with Cyc.)
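As a toy sketch of how those pieces fit together (every function and class below is a hypothetical stand-in; the real system uses trained vision modules and a Cyc-style inference engine, not these stubs):

```python
# Toy skeleton of the image-comprehension pipeline from the abstract.
# All names here are invented stand-ins for illustration only.

def parse_scene_objects(image):
    # Stand-in for scene object parsing.
    return [("isa", "obj1", "Dog"), ("isa", "obj2", "Ball")]

def label_materials(image):
    # Stand-in for material labeling.
    return [("madeOf", "obj2", "Rubber")]

def recognize_activities(image):
    # Stand-in for activity recognition.
    return [("performs", "obj1", "Chasing"),
            ("objectActedOn", "Chasing", "obj2")]

class KnowledgeBase:
    # Stand-in for a Cyc-like knowledge base capable of inference.
    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)

    def query(self, term):
        # Real inference would chain rules; here we just pattern-match.
        return [f for f in self.facts if term in f]

def comprehend(image):
    kb = KnowledgeBase()
    for fact in (parse_scene_objects(image) + label_materials(image)
                 + recognize_activities(image)):
        kb.assert_fact(fact)
    return kb.query("Chasing")  # e.g. "what is the dog doing?"

print(comprehend(image=None))  # prints the facts involving "Chasing"
```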
I think the reason AI and nanotech often go together in discussions of the future is summed up in this quote by John Cramer: "Nanotechnology will reduce any manufacturing problem, from constructing a vaccine that cures the common cold to fabricating a starship from the elements contained in sea water, to what is essentially a software problem."
When people make purchasing decisions, pricing models that are too complex make them less likely to purchase. If it's too confusing to figure out whether something is a good deal or not, we tend to just assume it's a bad deal. See http://ideas.repec.org/p/ags/ualbsp/24093.html (Choice Environment, Market Complexity and Consumer Behavior: A Theoretical and Empirical Approach for Incorporating Decision Complexity into Models of Consumer Choice), for example.
I occasionally read the blog of Scott Adams, the author of Dilbert. He claims to believe that the world is a simulation, but who can blame him? His own situation is so improbable that he must cast about for some explanation. I predict that among celebrities (and the exceptionally successful in other fields), the belief that things will come to you just by wanting them hard enough is unusually common-- because, like everyone else, they wished for something in life, but unlike most people, they actually got it.
Perhaps Columbus's "genius" was simply to take action. I've noticed this in executives and higher-ranking military officers I've met-- they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn't, but the success rate is a lot better than for people who never take action at all.
This sort of argument was surprisingly common in the 18th and 19th centuries compared to today. The Federalist Papers, for example, lay out the problem as a set of premises leading inexorably to a conclusion. I find it hard to imagine a politician successfully using such a form of argument today.
At least that's my impression; perhaps appeals to authority and emotion were just as common in the past as today but selection effects prevent me from seeing them.
I really enjoyed the first part of the post-- just thinking about the fact that my future goals will be different from my present ones is a useful idea. I found the bit of hagiography about E.Y. at the end weird and not really on topic. You might just use a one- or two-sentence example: he wanted to build an A.I., and then later he didn't want to.
Regarding Cyberpunk, Gibson wasn't actually making a prediction, not in the way you're thinking. He was always making a commentary on his own time by exaggerating certain aspects of it. See here, for instance: http://boingboing.net/2012/09/13/william-gibson-explains-why-sc.html
I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I'd like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?
Can you give me a concrete course of action to take when I am writing a paper reporting my results? Suppose I have created two versions of a website and timed 30 people completing a task on each one. The people on the second website were faster. I want my readers to believe that this wasn't merely a statistical coincidence. Normally, I would do a t-test to show this. What are you proposing I do instead? I don't want a generalization like "use Bayesian statistics," but a concrete example of how one would test the data and report it in a paper.
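For concreteness, the conventional analysis I have in mind looks something like this (a minimal sketch; the timing numbers are invented, and the real study would have 30 per group):

```python
# Conventional approach: independent two-sample t-test on task
# completion times. The data below is invented for illustration.
from scipy import stats

times_site_a = [41.2, 39.8, 45.1, 38.0, 42.7, 40.3]  # seconds per participant
times_site_b = [33.5, 36.1, 31.9, 35.0, 30.8, 34.2]

t_stat, p_value = stats.ttest_ind(times_site_a, times_site_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# With the real 30-per-group data this would be reported as
# something like "t(58) = ..., p < .05".
```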
I think a lot of people are misunderstanding the linked xkcd, or maybe I am. The way I see it, it's not about misusing the word "logic." It's about people coming in from the outside, thinking that just because they are smart, they know how to solve problems in a field that they are completely inexperienced in and have spent very little time thinking about, compared to those who think about it as a full-time job.
Look, I like Less Wrong. It's fun. But if you want to have an influence on the world, you need to engage with the discussions the professionals are having. You need to publish in scientific journals. You need to play the game that's out there well enough to win. I don't think people should feel insulted by my suggesting this. Getting insulted by ideas that make us uncomfortable isn't what I feel this place is about.
I tried to search for it before I posted, but failed to find it. Nice to see at least one other person felt the same way on reading the comic. I feel like we as a group are sometimes guilty of trying to reinvent the wheel instead of participating in the scholarly philosophy and AI communities by publishing papers. It's a lot easier this way, and there's less friction, but some of this has been said before, and smart people have already thought about it.
"it doesn't share any of the characteristics that make you object to murder of the usual sort." I disagree -- it shares the most salient aspect of murder, namely the harm it does to the future of the human being being murdered. The other features are also objectionable, but a case of murder that doesn't have any of those features (say, the painless murder of a baby with no close acquaintances, friends or family) is still rightfully considered murder. This is why most abortion advocates (unlike the author of this article) do not consider a fetus a "human being" at all. If they did, they would have to confront this argument head on.
One book that takes a very mechanical approach to story plot is Dramatica Theory (free, online, see link below). If I were to try to write a program to write fiction, I'd start with this and see what I could automate.
http://www.dramatica.com/theory/theory_book/dtb.html
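If I did attempt that, a first step might look something like this (a toy sketch: only the four throughlines come from the Dramatica theory book, and the concerns and act-by-act beats are invented for illustration):

```python
# Toy sketch: encoding a story skeleton in Dramatica-like terms.
from dataclasses import dataclass, field

# Dramatica's four perspectives on a story.
THROUGHLINES = ["Overall Story", "Main Character",
                "Influence Character", "Relationship Story"]

@dataclass
class Throughline:
    name: str
    concern: str                 # what this perspective struggles with
    beats: list = field(default_factory=list)

def outline(concerns):
    # One throughline per perspective, then a crude three-act pass
    # that gives each perspective a beat in every act.
    lines = [Throughline(n, c) for n, c in zip(THROUGHLINES, concerns)]
    for act in (1, 2, 3):
        for tl in lines:
            tl.beats.append(f"Act {act}: {tl.name} grapples with {tl.concern}")
    return lines

for tl in outline(["obtaining the treasure", "memory", "impact", "trust"]):
    print(tl.beats)
```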
cultureulterior is talking about plots to overthrow governments.
No effect from practice? How would the necessary mental structures get built for the mapping from the desired sound to the finger motions for playing the violin? Are you saying this is all innate? What about language learning? Anyone can write like Shakespeare in any language without practice? Sorry, I wouldn't believe it even if such an AI told me that.
This kind of attitude is common among my friends who are more technical, but it can really damage communications with most people. "You're an idiot" doesn't just communicate "you're wrong"; it says that you lack the ability to think at all, so all of your conclusions, whether related to this subject or not, are worthless. A good friend might take that in the way you intend, but there's no reason anyone else should. What is being called a Dark Art is something that Hermione would use; something that shows that you care about the other person.
Under theories like loop quantum gravity, doesn't some "fabric of spacetime" exist? I would call that a refinement of the idea of the ether. It has odd properties in order to allow relativity, but it hasn't been ruled out.