All of summerstay's Comments + Replies

Under theories like loop quantum gravity, doesn't some "fabric of spacetime" exist? I would call that a refinement of the idea of the ether. It has odd properties in order to allow relativity, but it hasn't been ruled out.

This is Hari's business. She takes innocuous ingredients and makes you afraid of them by pulling them out of context.... Hari's rule? "If a third grader can't pronounce it, don't eat it." My rule? Don't base your diet on the pronunciation skills of an eight-year-old.

From http://gawker.com/the-food-babe-blogger-is-full-of-shit-1694902226

It would be a lot harder to make a machine that actually is conscious (phenomenally conscious, meaning it has qualia) than it would be to make one that just acts as if it is conscious (in that sense). It is my impression that most LW commenters think any future machine that acts conscious probably is conscious.

1Sabiola
How would you tell the difference? I act like I'm conscious too; how do you know I am?
2hyporational
I haven't gotten that impression. The p-zombie problem those other guys talk about is a bit different, since human beings aren't made with a purpose in mind and you'd have to explain why evolution would lead to brains that only mimic conscious behavior. However, if human beings make robots for some purpose, it seems reasonable to program them to behave in a way that mimics behavior that would be caused by consciousness in humans. This is especially likely since we have hugely popular memes like the Turing test floating about. I tend to believe that much simpler processes than we traditionally attribute consciousness to could be conscious in some rudimentary way. There might even be several conscious processes in my brain working in parallel and overlapping. If this is the case, looking for human-like traits in machines becomes a moot point.
2polymathwannabe
EY has declared that P-zombies are nonsense, but I've had trouble understanding his explanation. Is there any consensus on this?

I only recently realized that evolution works, for the most part, by changing the processes of embryonic development. There are some exceptions-- things like neoteny and metamorphosis-- but most changes are genetic differences leading to differences in, say, how long a process of growth is allowed to occur in the embryo.

There's a reason everyone started calling it "the hard problem." Chalmers explained the problem so clearly that we now basically just point and say "that thing Chalmers was talking about."

This is exactly the point of asking "What Would Jesus Do?" Christians are asking themselves what a perfectly moral, all-knowing person would do in this situation, and using the machinery their brains have for simulating a person to find out the answer, instead of using the general-purpose reasoner that is so easily overworked. Of course, simulating a person (especially a god) accurately can be kind of tricky. Religious people use similar thoughts to get themselves to do things that they want in the abstract but find hard in the moment: What would I do if I were the kind of person I want to become? What would a perfectly moral, all-knowing person think about what I'm about to do?

0Shmi
Right. Unfortunately, they know they are not as good as Jesus, so this fails more often than not. However, simulating oneself with just one small difference, the way the OP suggests, is probably much easier and so is likely to be more successful.

I assumed that was the intention of the writers of Donnie Darko. The actual shapes coming out of their chests that we got were not right, but you could see that this is what they were trying to do.

I think that arguments like this are a good reason to doubt computationalism. That means accepting that two systems performing the same computations can have different experiences, even though they behave in exactly the same way. But we already should have suspected this: it's just like the inverted spectrum problem, where you and I both call the same flower "red," but the subjective experience I have is what you would call "green" if you had it. We know that most computations even in our brains are not accompanied by conscious perceptual experience, so it shouldn't be surprising if we can make a system that does whatever we want, but does it unconsciously.

Sorry, I was just trying to paraphrase the paper in one sentence. The point of the paper is that there is something wrong with computationalism. It attempts to prove that two systems with the same sequence of computational states must have different conscious experiences. It does this by taking a robot brain that calculates the same way as a conscious human brain, and transforms it, always using computationally equivalent steps, to a system that is computationally equivalent to a digital clock. This means that either we accept that a clock is at every mome... (read more)

Check out "Counterfactuals Can't Count" for a response to this. Basically, if a recording is different in what it experiences than running a computation, then two computations that calculate the same thing in the same way, but one has bits of code that never run, experience things differently.

0asr
The reference is a good one -- thanks! But I don't quite understand the rest of your comments. Can you rephrase more clearly?

I found the draft via this post from the end of June 2013.

One rational ability that people are really good at, and that is hard to automate (i.e. we haven't made much progress on it), is applying common-sense knowledge to language understanding. Here's a collection of sentences where the referent is ambiguous, but we don't even notice because we are able to match it up as quickly as we read: http://www.hlt.utdallas.edu/~vince/data/emnlp12/train-emnlp12.txt
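A classic illustration of the kind of pair involved (the well-known trophy/suitcase schema; I'm not claiming this exact pair appears in the linked file):

```python
# Two sentences differing by a single word; the referent of "it" flips.
# A human resolves the pronoun instantly from common-sense knowledge about sizes.
winograd_pair = [
    {"sentence": "The trophy doesn't fit in the suitcase because it is too big.",
     "pronoun": "it", "referent": "the trophy"},
    {"sentence": "The trophy doesn't fit in the suitcase because it is too small.",
     "pronoun": "it", "referent": "the suitcase"},
]

for item in winograd_pair:
    print(item["sentence"], "->", item["referent"])
```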

You can read a paper on EURISKO here. My impression is that the program quickly exhausted the insights Lenat put in as heuristics, and began journeying down eccentric paths that were not of interest to a human mathematician.

Here's my advice: always check Snopes before forwarding anything.

5Shmi
I wish there was a checkbox in email sites and clients "check incoming messages against known urban myths". Probably no harder to implement than the current automatic scam and spam filtering.
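A toy sketch of the matching step, assuming a purely hypothetical list of known hoax snippets (a real filter would use a proper database and better text similarity):

```python
# Toy sketch: flag an incoming message if it closely matches a known hoax.
# The snippets below are made-up placeholders, not a real hoax database.
from difflib import SequenceMatcher

known_hoaxes = [
    "bill gates is sharing his fortune forward this email to ten friends",
    "missing child last seen at the mall please forward to everyone you know",
]

def looks_like_hoax(message, threshold=0.6):
    msg = message.lower()
    return any(SequenceMatcher(None, msg, hoax).ratio() > threshold
               for hoax in known_hoaxes)

print(looks_like_hoax("Bill Gates is sharing his fortune! Forward this email to ten friends!"))
```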

Yes, that's what I'm saying. The other ones are meant to prove a point. This one is just to make you laugh, just like the one it is named after. http://www.mindspring.com/~mfpatton/Tissues.htm

3sixes_and_sevens
So if you found yourself in the unlikely scenario of a regular Newcomb's Problem, you have an answer for it; but if you found yourself in the unlikely scenario of this problem, you wouldn't feel obliged to be able to answer it?

I think most of the commenters aren't getting that this is a parody. Edit: It turns out I was wrong.

I'm at the current MIRI workshop, and the Ultimate Newcomb's Problem is not a parody.

"Unlike these other highly-contrived hypothetical scenarios we invent to test extreme corner-cases of our reasoning, this highly-contrived hypothetical scenario is a parody. If you ever find yourself in the others, you have to take it seriously, but if you find yourself in this one, you are under no such obligation."

It's a life and death matter: if the upload won't be ikrase, then he will be killed in the process of uploading. Naturally he doesn't care as much about whether a new person will be created as about whether he will continue to exist.

4ikrase
If I am killed in the process of uploading (thus creating an immortal child of my mind), that is far, far, far, better than dying utterly, but not as good as continuous consciousness. In particular, most uploading techniques seem like they would allow the unlimited duplication of people and would not necessarily destroy the original, which worries me. (Hanson cites this as an advantage of the em-verse, which convinces me of his immorality). However, I am not yet convinced that I would be willing to casually upload.

We know that some complex processes in our own brains happen unaccompanied by qualia. This is uncontroversial. It doesn't seem unlikely to me that all the processes needed to fake perceptual consciousness convincingly could be implemented using a combination of such processes. I don't know what causes qualia in my brain and so I'm not certain it would be captured by the emulation in question-- for example, the emulation might not be at a high enough level of detail, might not exploit quantum mechanics in the appropriate way, or whatever. Fading and dancing... (read more)

I turned in my PhD dissertation. Here's the title and first paragraph of the abstract:

PRODUCTIVE VISION: METHODS FOR AUTOMATED IMAGE COMPREHENSION

Image comprehension is the ability to summarize, translate, and answer basic questions about images. Using original techniques for scene object parsing, material labeling, and activity recognition, a system can gather information about the objects and actions in a scene. When this information is integrated into a deep knowledge base capable of inference, the system becomes capable of performing tasks that, when performed by students, are considered by educators to demonstrate comprehension.

(Basically it is computer vision combined with Cyc.)
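A very rough sketch of the data flow the abstract describes; every function here is a placeholder stub rather than the actual system, and it only shows how vision modules feed an inference-capable knowledge base:

```python
# Hypothetical outline: vision modules populate a knowledge base,
# which is then queried to demonstrate "comprehension" tasks.

def parse_scene_objects(image):
    return [{"label": "mug", "bbox": (10, 20, 50, 60)}]   # stub

def label_materials(image, objects):
    return {obj["label"]: "ceramic" for obj in objects}    # stub

def recognize_activities(image):
    return ["person pouring coffee"]                        # stub

class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def assert_fact(self, fact):
        self.facts.add(fact)

    def query(self, term):
        # A real system would run inference over a Cyc-style ontology here.
        return [f for f in self.facts if term.lower() in f.lower()]

def comprehend(image, kb):
    objects = parse_scene_objects(image)
    for obj in objects:
        kb.assert_fact(f"object({obj['label']})")
    for label, material in label_materials(image, objects).items():
        kb.assert_fact(f"material({label}, {material})")
    for activity in recognize_activities(image):
        kb.assert_fact(f"activity({activity})")
    return kb

kb = comprehend(image=None, kb=KnowledgeBase())
print(kb.query("mug"))
```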

Perhaps a good place to start would be the literature on life satisfaction and happiness. Statistically speaking, what changes in life that can be made voluntarily lead to the greatest increase in life satisfaction at the least cost in effort/money/trouble?

I think the reason AI and nanotech often go together in discussions of the future is summed up in this quote by John Cramer: "Nanotechnology will reduce any manufacturing problem, from constructing a vaccine that cures the common cold to fabricating a starship from the elements contained in sea water, to what is essentially a software problem."

When people make purchasing decisions, pricing models that are too complex make them less likely to purchase. If it's too confusing to figure out whether something is a good deal or not, we generally tend to just assume it's a bad deal. See http://ideas.repec.org/p/ags/ualbsp/24093.html (Choice Environment, Market Complexity and Consumer Behavior: A Theoretical and Empirical Approach for Incorporating Decision Complexity into Models of Consumer Choice), for example.

I occasionally read the blog of Scott Adams, the author of Dilbert. He claims to believe that the world is a simulation, but who can blame him? His own situation is so improbable he must cast about for some explanation. I predict that among celebrities (and the unusually successful in other fields), there is an unusually high amount of belief that just by wanting things hard enough they will come to you-- because, like everyone else, they wished for something in life, but unlike most people, they actually got it.

Perhaps Columbus's "genius" was simply to take action. I've noticed this in executives and higher-ranking military officers I've met-- they get a quick view of the possibilities, then they make a decision and execute it. Sometimes it works and sometimes it doesn't, but the success rate is a lot better than for people who never take action at all.

6wedrifid
Executives and higher ranking military officers also happen to have the power to enforce their decisions. Making decisions and acting on them can be possible without that power but the political skill required is far greater, the rewards lower, the risks of failure greater and the risks of success non-negligible.

This sort of argument was surprisingly common in the 18th and 19th century compared to today. The Federalist Papers, for example, lay out the problem as a set of premises leading inexorably to a conclusion. I find it hard to imagine a politician successfully using such a form of argument today.

At least that's my impression; perhaps appeals to authority and emotion were just as common in the past as today but selection effects prevent me from seeing them.

1ChristianKl
Today's politicians don't use writing as their primary means of convincing other people. Airplane travel is cheap. It doesn't cost much to get a bunch of people into a room behind closed doors and talk through an issue.
6Eugine_Nier
Also, in the past the people you were trying to convince were likely to be better educated.

I really enjoyed the first part of the post-- just thinking about the fact that my future goals will be different from my present ones is a useful idea. I found the bit of hagiography about E.Y. at the end weird and not really on topic. You might just use a one- or two-sentence example: He wanted to build an A.I., and then later he didn't want to.

0Mimosa
Not exactly. The core idea remains the same, but the method by which he's getting there has changed, and so has the type of mind he wants to create.

Regarding cyberpunk, Gibson wasn't actually making a prediction, at least not in the way you're thinking. He was always making a commentary on his own time by exaggerating certain aspects of it. See here, for instance: http://boingboing.net/2012/09/13/william-gibson-explains-why-sc.html

Great! This means that in order to develop an AI with a proper moral foundation, we just need to reduce the following statements of ethical guidance to predicate logic, and we'll be all set:

  1. Be excellent to each other.
  2. Party on, dudes!
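(A rough first pass at rendering the first commandment in first-order logic, glossing over what "excellent" actually means, which is of course the entire problem:)

```latex
\forall x\, \forall y\, \big( \mathrm{Person}(x) \wedge \mathrm{Person}(y) \wedge x \neq y \rightarrow \mathrm{ExcellentTo}(x, y) \big)
```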
-2MugaSofer
He does say that if you need more detailed knowledge you should read the metaethics sequence.
5BerryPick6
Is this the first time that movie's ever been mentioned in the context of this site? Well done.

I think trying to understand organizational intelligence would be pretty useful as a way of getting a feel for the variety of possible intelligences. Organizations also have a legal standing as artificial persons, so I imagine that any AI that wanted to protect its interests through legal means would want to be incorporated. I'd like to see this explored further. Any suggestions on good books on the subject of corporations considered as AIs?

-2MugaSofer
I agree with your main point, but I'm not sure why an AI would want to acquire the corporate form of personhood. After all, you still need a human to sign contracts and, at least on paper, make decisions; all they'd get out of it is a bunch of rules about the best interest of the shareholders and so on.
2latanius
... Accelerando by Charles Stross, while not exactly being a scientific analysis, had some ideas like this. It also wasn't bad.
2TimS
I'm not sure an AI would want to be incorporated - mostly because I'm not sure what legal effects you are trying to describe. If the AI were an asset of the corporation, it would be beholden to the interests of the shareholders of the corporation. If the AI were a shareholder, it would presumably already have the legal rights of a person that motivated consideration of the corporate form. More generally, incorporation is a legally approved way of apportioning liability. If my law firm was incorporated, I would not be liable for actions taken by my firm, even if I was the only shareholder. But I can't duck liability for my own actions, like if I committed legal malpractice, regardless of the legal formalities I used. (That's one reason I didn't make the effort to incorporate the firm). But an AI isn't initially concerned with avoiding legal liability. That only matters after the law recognizes the AI's ability to be held responsible at all. My laptop can neither enter into nor enforce a contract. Competence to enter a contract is the legal status an AGI would desire.

Perhaps you would suggest showing the histograms of completion times on each site, along with the 95% confidence error bars?

2jsteinhardt
Presumably not actually 95%, but, as gwern said, a threshold based on the cost of false positives.
2gwern
I'd suggest more of a scattergram than a histogram; superimposing 95% CIs would then cover the exploratory data/visualization & confidence intervals. Combine that with an effect size and one has made a good start.

Can you give me a concrete course of action to take when I am writing a paper reporting my results? Suppose I have created two versions of a website, and timed 30 people completing a task on each website. The people on the second website were faster. I want my readers to believe that this wasn't merely a statistical coincidence. Normally, I would do a t-test to show this. What are you proposing I do instead? I don't want a generalization like "use Bayesian statistics," but a concrete example of how one would test the data and report it in a paper.

6XFrequentist
You could use Bayesian estimation to compute credible differences in mean task completion time between your groups. Described in excruciating detail in this pdf.
2summerstay
Perhaps you would suggest showing the histograms of completion times on each site, along with the 95% confidence error bars?
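For concreteness, here is a minimal sketch of reporting such a comparison the frequentist way (Welch's t-test, an effect size, and a confidence interval for the difference), on made-up placeholder data; the Bayesian estimation XFrequentist links to is the fuller alternative:

```python
# Minimal sketch: compare task completion times on two sites.
# The data below are synthetic placeholders, not real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
site_a = rng.normal(loc=95, scale=20, size=30)   # seconds, hypothetical
site_b = rng.normal(loc=80, scale=20, size=30)

# Welch's t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(site_a, site_b, equal_var=False)

# Effect size: Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((site_a.var(ddof=1) + site_b.var(ddof=1)) / 2)
cohens_d = (site_a.mean() - site_b.mean()) / pooled_sd

# Approximate 95% confidence interval for the difference in means
diff = site_a.mean() - site_b.mean()
se = np.sqrt(site_a.var(ddof=1) / site_a.size + site_b.var(ddof=1) / site_b.size)
ci_low, ci_high = stats.t.interval(0.95, df=site_a.size + site_b.size - 2,
                                   loc=diff, scale=se)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}, "
      f"95% CI for difference = ({ci_low:.1f}, {ci_high:.1f})")
```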

I think a lot of people are misunderstanding the linked xkcd, or maybe I am. The way I see it, it's not about misusing the word "logic." It's about people coming in from the outside, thinking that just because they are smart, they know how to solve problems in a field that they are completely inexperienced in, and have spent very little time thinking about compared to those who think about it as a full-time job.

0prase
I don't disagree about the intended message of the linked xkcd. By a "strawman rationalist with a silly conception of logic" I meant exactly that: a person who assumes that every problem can be solved by "logical thinking" and underestimates the role of expertise. (Since logic alone in the proper sense isn't sufficient to produce results - right or wrong - except in a small subclass of problems, this attitude needs some abuse of the word "logic", if one indeed uses the phrase "to think logically" to denote one's own behaviour. But that's beside the point.) The disagreement arises when you claim that the described failure mode is typical for LW. To me it seems more typical for the rather ordinary sort of crackpots who think they can do everything better than anybody else and, when they lose in a direct competition, claim that the rules have been stacked against them (another part the xkcd is mocking). To show that LW is like this, it doesn't suffice to point out that we (for whatever meaning of "we") aren't influential enough. You have additionally to show that we are unaware of the difficulties or are finding inane excuses for our lack of success. Edit: to further clarify, I don't treat LW as a means of influencing the world.

I thought that Randall Munroe might be talking about LW, but I wasn't sure, so I asked if anyone else had the same impression. At least one other person did. Most people didn't.

Look, I like Less Wrong. It's fun. But if you want to have an influence on the world, you need to engage with the discussions the professionals are having. You need to publish in scientific journals. You need to play the game that's out there well enough to win. I don't think people should feel insulted by my suggesting this. Getting insulted by ideas that make us uncomfortable isn't what I feel this place is about.

5prase
But this is a different critique, isn't it? Not being able to significantly influence the world for whatever reasons is one thing, being a strawman rationalist with a silly conception of "logic" is another thing. You may be right that LW isn't a highly influential community, but that's not what the linked xkcd is about.
3wedrifid
If I were remotely interested in taking what you say personally, the offense I would take would be at the presumptive condescension. I imagine the phrase "No shit!" may even spring to mind.
2ArisKatsaris
It's one thing to believe LW isn't immune to such failures to win; it's another thing to suggest that Randall Munroe had us specifically in mind when he was writing this. If you are to offer the first criticism, perhaps you oughtn't present it as a criticism specifically targeted at us by Randall Munroe.

I tried to search for it before I posted, but failed to find it. Nice to see at least one other person felt the same way on reading the comic. I feel like we as a group are sometimes guilty of trying to reinvent the wheel instead of participating in the scholarly philosophy and AI communities by publishing papers. It's a lot easier this way, and there's less friction, but some of this has been said before, and smart people have already thought about it.

"it doesn't share any of the characteristics that make you object to murder of the usual sort." I disagree -- it shares the most salient aspect of murder, namely the harm it does to the future of the human being being murdered. The other features are also objectionable, but a case of murder that doesn't have any of those features (say, the painless murder of a baby with no close acquaintances, friends or family) is still rightfully considered murder. This is why most abortion advocates (unlike the author of this article) do not consider a fetus a "human being" at all. If they did, they would have to confront this argument head on.

0Dolores1984
Or, in some cases, we consider it to be definitely human -- just not a person.

Interviewer: How do you answer critics who suggest that your team is playing god here?

Craig Venter: Oh... we're not playing.

There are very few people who would have understood in the 18th century, but Leibniz would have understood in the 17th. He underestimated the difficulty of creating an AI, like everyone did before the 1970s, but he was explicitly trying to do it.

0[anonymous]
Your definition of "explicit" must be different from mine. Working on prototype arithmetic units and toying with the universal characteristic is AI research? He subscribed wholeheartedly to the ideographic myth; the most he would have been capable of is a machine that passes around LISP tokens. In any case, based on the Monadology, I don't believe Leibniz would consider the creation of a godlike entity to be theologically possible.

Oh, it's not so bad a quote. If we define sanity around here as being more Bayesian (that's the waterline we're trying to raise, right?), then defining insanity as refusal to update when more data comes in would make sense.

I was thinking the same thing. The things he thinks should be obvious by now (such as the Quirrell/Voldemort connection) ought to be made explicit from an appropriate point of view so we can puzzle over the things that he wants the reader to be puzzling over.

When we exert willpower or mental effort, it uses up glucose from the blood in the brain. One way you could explain the exhaustion that comes from using magic is that it requires mental effort to the point of creating dangerously low levels of blood sugar in the brain.

I'm kidding, by the way. Anyone who has seen it would know that it has a lot of broad slapstick humor.

Fawlty Towers is a good example of the understated and deadpan nature of British comedy.

0summerstay
I'm kidding, by the way. Anyone who has seen it would know that it has a lot of broad slapstick humor.

One book that takes a very mechanical approach to story plot is Dramatica Theory (free, online, see link below). If I were to try to write a program to write fiction, I'd start with this and see what I could automate.

http://www.dramatica.com/theory/theory_book/dtb.html
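If I did, a natural first step would be representing the story elements as explicit data structures a program can enumerate; here is a purely hypothetical sketch (my own placeholder scheme, not Dramatica's actual model):

```python
# Purely hypothetical sketch: story elements as data that a program can
# enumerate into trivial three-act outlines. Not Dramatica's terminology.
from dataclasses import dataclass, field
from itertools import product

@dataclass
class Character:
    name: str
    goal: str

@dataclass
class Outline:
    protagonist: Character
    obstacle: str
    acts: list = field(default_factory=list)

def generate_outlines(characters, obstacles):
    # Brute-force enumeration of (protagonist, obstacle) pairs.
    for char, obstacle in product(characters, obstacles):
        yield Outline(protagonist=char, obstacle=obstacle,
                      acts=[f"{char.name} wants {char.goal}.",
                            f"{char.name} is blocked by {obstacle}.",
                            f"{char.name} overcomes {obstacle}."])

for outline in generate_outlines([Character("Ada", "to build a thinking machine")],
                                 ["a skeptical academy", "a broken engine"]):
    print(outline.acts)
```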

cultureulterior is talking about plots to overthrow governments.

0Paulovsk
Yeah, I noticed that, but maybe it can be useful. Your link has more directly relevant knowledge. Thanks for that.

No effect from practice? How would the necessary mental structures get built for the mapping from the desired sound to the finger motions for playing the violin? Are you saying this is all innate? What about language learning? Anyone can write like Shakespeare in any language without practice? Sorry, I couldn't believe it even if such an AI told me that.

3MugaSofer
Clearly, we all learn really fast.

This kind of attitude is common among my friends who are more technical, but it can really damage communications with most people. "You're an idiot" doesn't just communicate "you're wrong"; it says that you lack the ability to think at all, so all of your conclusions, whether related to this subject or not, are worthless. A good friend might take that in the way you intend, but there's no reason anyone else should. What is being called a Dark Art is something that Hermione would use; something that shows that you care about the other pe... (read more)

-2wedrifid
You seem to have misread what I said. In fact you have it approximately backwards. The opening of "but that doesn't necessary mean it is a bad thing. Just that is normal social behavior." makes it rather clear that the disagreement you present here is not with me.

Summa Theologica is a good example of what happens when you have an excellent deductive system (Aquinas was great at syllogisms) and flawed axioms (a literal interpretation of the Bible).
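A toy illustration of the point (not one of Aquinas's actual arguments): a perfectly valid syllogism whose conclusion is only as good as its premises.

```latex
\begin{align*}
&\text{P1 (false axiom): All heavenly bodies orbit the Earth.} && \forall x\,\big(H(x) \rightarrow O(x, \mathrm{Earth})\big)\\
&\text{P2: The Sun is a heavenly body.} && H(\mathrm{Sun})\\
&\text{C (validly derived, but false): The Sun orbits the Earth.} && O(\mathrm{Sun}, \mathrm{Earth})
\end{align*}
```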

6Jayson_Virissimo
Aquinas probably meant something different by "literal interpretation" than you think. For instance, I'm pretty sure he agreed with Augustine that the six days of creation were not literally six periods of 24 hours.

Dolphins do in fact engage in infanticide, among other behaviors we would consider evil if done by a human. But no one suggests we should be policing them to keep this from happening.
