Comment author: Elo 30 March 2016 06:41:48AM 4 points

This is a trade-off that we make for partially completed survey data. On the one hand, the total number of questions was mentioned at the start (maybe it could have been highlighted more), and there is a progress bar at the top of each page. I agree that this is not ideal; does the trade-off make more sense now?

In response to comment by Elo on Lesswrong 2016 Survey
Comment author: Kaj_Sotala 02 April 2016 07:58:23AM 1 point

This is a trade-off that we make for partially completed survey data.

Not sure what you mean by that?

But thanks for mentioning the progress bar, I didn't notice it at first. That helps somewhat.

Comment author: Kaj_Sotala 30 March 2016 06:18:19AM 2 points

I notice that the fact that I can't see all the questions on one page makes me feel more averse towards taking this survey. It makes me feel like there's a potentially infinite amount of content to be answered, lurking out of sight, whereas if it were all on one page I'd always be clear on how many more questions were left.

This format also makes it hard to answer questions out of order, skipping a hard one until I'm done with all the easy ones.

Comment author: Gram_Stone 20 March 2016 04:43:21PM 1 point

Here is my ideological Turing test rendition of your comment:

People usually use the word intuition to refer to vague impressions that are not amenable to the same sort of justification as deliberative judgments, so these are different from the example that you provided of quickly inventing a deliberative rule and making errors in the process. This makes the purported counterexample less persuasive to me than you seem to expect it to be. Evaluate this comment in the context that we both still anticipate the same experiences, so this is likely a disagreement over word usage, and not likely to be highly significant.

I think this is a very productive criticism. I find that emphasis in italics makes it easier for me to write because it makes my writing more similar to the way that I speak, so please don't interpret the emphasis as aggressive. My thinking goes down this path as follows:

I have to make the qualification that I don't believe that intuitions are vague feelings that cannot be justified, but rather vague feelings that have not been justified. There is always some fact of the matter as to whether a given intuition can be justified, in some sense. Once again, this is probably something we would consider a disagreement over word usage, but I think it's an important boundary to draw. From Evans (2006):

If intuition means based on feelings without access to explicit reasoning, then that sounds like a type 1 process. But in some applications it seems to mean naïve judgement, which could be based on explicit rules or heuristics that occur to an untrained judge, in which case they would be type 2.

People often use the word 'intuition' to refer to confident beliefs retrieved from cached memory, and the idea is that when you go wrong, it's because intuitions are unreliable. I'm getting at the possibility that while that's what people say, it's not the whole picture.

Say that you're a judge on Pop Idol or something like that, you have no experience doing it, and you want to quickly come up with a rule. You retrieve the reliable intuition that pop idols are usually very physically attractive, and then invent a deliberative rule that uses your subjective rating of each candidate's physical attractiveness as a measure of their general Pop Idol factor. Suppose that physical attractiveness actually does not correlate perfectly with the true general Pop Idol factor. Then you would have begun with a reliable intuition, put it into an unreliable deliberative process, and obtained an 'unreliable' result, in the sense that it does not optimize for the purported normative criterion of Pop Idol judging panels, which is the selection of the best Pop Idol; you would have picked the most attractive candidate instead. And you would have made a mistake on a higher level than using an unreliable intuition: you would have combined reliable intuitions in a deliberative but unreliable way.

This is closely related to the 'System 1 is fast, System 2 is slow' distinction. Reasoning that looks like fast, unreliable intuitive reasoning can really just be fast, unreliable deliberative reasoning. So the main point is not that there are a lot of counterexamples to 'intuitive' reasoning being System 1, but that if you want to do real work, the category 'intuitive' won't cut it, because it's still a leaky generalization, even if it isn't that leaky. Does that all make sense?
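
(To make the structure of the Pop Idol example concrete, here's a minimal sketch; the candidates, the scores, and the strength of the proxy correlation are all invented for illustration.)

```python
import random

random.seed(0)

# Hypothetical candidates: true_quality is the (unobserved) general
# Pop Idol factor; attractiveness correlates with it, but imperfectly.
candidates = []
for i in range(10):
    quality = random.gauss(0, 1)
    attractiveness = quality + random.gauss(0, 1)  # noisy proxy
    candidates.append({"name": f"candidate_{i}",
                       "true_quality": quality,
                       "attractiveness": attractiveness})

# The deliberative rule: pick the most attractive candidate.
picked = max(candidates, key=lambda c: c["attractiveness"])
# The normative criterion: pick the candidate with the highest quality.
best = max(candidates, key=lambda c: c["true_quality"])

# A reliable intuition (attractiveness tracks quality), combined in an
# unreliable deliberative way, frequently selects the wrong candidate.
print("rule picked:", picked["name"], "| actual best:", best["name"])
```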

Comment author: Kaj_Sotala 30 March 2016 06:14:46AM 1 point

I liked your rephrasing of my comment. :) I felt that it was an accurate summary of what I meant.

I believe that we're in agreement about everything.

Comment author: gjm 20 March 2016 02:43:31AM 1 point

HN has a mechanism for giving an article your seal of approval: it's called upvoting. More than that is only necessary if you expect your approval specifically to weigh more highly than that of other users.

Comment author: Kaj_Sotala 20 March 2016 05:33:22PM 4 points

Seeing comments from (say) three people who explicitly say that they agree or think I've done good work feels much better than just seeing three upvotes on my comment / post. I know that there are other people who feel the same. Our minds aren't good at visualizing numbers.

I think that "if you are particularly happy about something, you can indicate this with an explicit comment in addition to the upvote" is a good norm to have. Giving people extra reward for doing particularly good work is good.

Comment author: Kaj_Sotala 20 March 2016 03:59:02PM 1 point

The fourth common confusion is that Type 1 processes involve 'intuitions' or 'naivety' and Type 2 processes involve thought about abstract concepts. You might describe a fast-and-loose rule that you made up as a 'heuristic' and naively think that it is thus a 'System 1 process', but it would still be the case that you invented that rule by deliberative means, and thus by means of a Type 2 process. When you applied the rule in the future it would be by means of a deliberative process that placed a demand on working memory, not by some behavior that is based on association or procedural memory, as if by habit.

I suspect that we're disagreeing about the definitions of words rather than having any substantial difference in expectations, but: I think the way "intuition" is commonly used refers to vague feelings that you can't quite justify explicitly, not explicit heuristics that you've generated by deliberation. So your example doesn't really feel like a counterexample to the claim that intuitions are a Type 1 process.

Comment author: Kaj_Sotala 19 March 2016 01:11:51PM 2 points

Great work!

A clarifying question - is this more of a "here are the changes that we're going to make unless people find serious problems with them" kind of document (implying that ~everything in it will be implemented), or more of a "here are changes that we think seem the most promising, later on we'll decide which ones we'll actually implement" type of document (implying that only some limited subset will be implemented)?

Comment author: TheAncientGeek 15 March 2016 09:58:44AM 0 points

When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?

Do you think they could be deployed by basement hackers, or only by large organisations?

Do you think an organisation like the military or a business has a motivation to deploy them?

Do you agree that there are dangers to an FAI project that goes wrong?

Do you have a plan B to cope with an FAI that goes rogue?

Do you think that having an AI potentially running the world is an attractive idea to a lot of people?

Comment author: Kaj_Sotala 18 March 2016 10:58:28AM 0 points

When you say autonomous AIs, do you mean AIs that are autonomous and superintelligent?

AIs that are initially autonomous and non-superintelligent, and then gradually develop towards superintelligence. (With the important caveat that it's unclear whether an AI would need to be generally superintelligent in order to pose a major risk to society. It's conceivable that superintelligence in some narrower domain, like cybersecurity, would be enough - particularly in a sufficiently networked society.)

Do you think they could be deployed by basement hackers, or only by large organisations?

Hard to say. The way AI has developed so far, it looks like the capability might be restricted to large organizations with lots of hardware resources at first, but time will likely drive down the hardware requirements.

Do you think an organisation like the military or a business has a motivation to deploy them?

Yes.

Do you agree that there are dangers to an FAI project that goes wrong?

Yes.

Do you have a plan B to cope with an FAI that goes rogue?

Such a plan would seem to require lots of additional information about both the specifics of the FAI plan and the state of the world at that time, so not really.

Do you think that having an AI potentially running the world is an attractive idea to a lot of people?

Depends on how we're defining "lots", but I think that the notion of a benevolent dictator has often been popular in many circles, which have also acknowledged its largest problems to be that 1) power tends to corrupt, and 2) even if you got a benevolent dictator, you would also need a way to ensure that all of their successors were benevolent. Both problems could be overcome with an AI, so on that basis at least I would expect lots of people to find it attractive. I'd also expect it to be considered more attractive in e.g. China, where people seem to be more skeptical towards democracy than they are in the West.

Additionally, if the AI wouldn't be the equivalent of a benevolent dictator, but rather had a more hands-off role that kept humans in power and only acted to e.g. prevent disease, violent crime, and accidents, then that could be attractive to a lot of people who preferred democracy.

Comment author: [deleted] 12 March 2016 02:08:00AM 2 points

The Civ 5 AI does cheat insofar as it doesn't have to deal with the fog of war, IIRC.

The XCOM AI seems to cheat because it doesn't report the actual probability.

Comment author: Kaj_Sotala 14 March 2016 07:07:44PM 0 points

Right, I meant that Civ doesn't cheat when it comes to die rolls - e.g. if it displays a 75% chance for the player to win a battle, then the probability really is at least 75%.

It does cheat in a number of other ways.
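
Incidentally, that kind of honesty is checkable from the outside. A minimal sketch, assuming you had collected a hypothetical log of (displayed probability, outcome) pairs over many battles; the log format and the function here are made up for illustration, not any actual game API:

```python
from collections import defaultdict

def check_displayed_odds(battle_log):
    """Compare displayed win probabilities with empirical outcomes.

    battle_log is a hypothetical list of (displayed_prob, won) pairs,
    e.g. [(0.75, True), (0.75, False), ...]. If the engine is honest,
    each bucket's empirical win rate should be at least as high as the
    probability that was displayed.
    """
    buckets = defaultdict(lambda: [0, 0])  # displayed_prob -> [wins, total]
    for displayed_prob, won in battle_log:
        buckets[displayed_prob][0] += int(won)
        buckets[displayed_prob][1] += 1
    for displayed_prob, (wins, total) in sorted(buckets.items()):
        print(f"displayed {displayed_prob:.0%}: "
              f"won {wins}/{total} = {wins / total:.0%}")
```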

Comment author: TheAncientGeek 13 March 2016 09:04:21PM 1 point

If you allow for autonomously acting AIs, then you could have Friendly autonomous AIs tracking down and stopping Unfriendly / unauthorized AIs.

You could, but if you don't have autonomously acting agents, you don't need Gort AIs. Building an agentive superintelligence that is powerful enough to take down any other, as MIRI conceives it, is a very risky proposition, since you need to get the value system exactly right. So it's better not to be in a place where you have to do that.

This of course depends on people developing the Friendly AIs first, but ideally it'd be enough for only the first people to get the design right, rather than depending on everyone being responsible.

The first people have to be able, as well as willing, to get everything right. Safety through restraint is easier and more reliable -- you can omit a feature more reliably than you can add one.

Business (which by nature covers just about every domain in which you can make a profit, which is to say just about every domain relevant for human lives), warfare, military intelligence, governance...

These organisations have a need for widespread intelligence gathering, and for agentive AI, but that doesn't mean they need both in the same package. The military don't need their entire intelligence database in every drone, and don't want drones that change their mind about who the bad guys are in mid-flight. Businesses don't want HFT applications that decide capitalism is a bad thing.

We want agents to act on our behalf, which means we want agents that are predictable and controllable to the required extent. Early HFT had problems which led to the addition of limits and controls. Control and predictability are close to safety. There is no drive to power that is also a drive away from safety, because uncontrolled power is of no use.

Based on the behaviour of organisations, there seems to be a natural division between high-level, unpredictable decision information systems and lower-level, faster-acting agentive systems. In other words, they voluntarily do some of what would be required for an incremental safety programme.

Comment author: Kaj_Sotala 14 March 2016 09:28:44AM 0 points

I agree that it would be better not to have autonomously acting AIs, but not having any autonomously acting AIs would require a way to prevent anyone from deploying them, and so far I haven't seen a proposal for that which seems even remotely feasible.

And if we can't stop them from being deployed, then deploying Friendly AIs first looks like the scenario that's more likely to work - which still isn't to say very likely, but at least it seems to have a chance of working even in principle. I don't see even an in-principle way for "just don't deploy autonomous AIs" to work.

Comment author: skeptical_lurker 11 March 2016 08:45:14AM 2 points

Zealots/muta/dragoons/Hydralisks is just a standard rock/paper/scissors game theory thing, and it shouldn't be too hard to calculate an approximate Nash equilibrium. The problem is that there are micro, macro, game theory, and imperfect information, and an AI has to tie all these different aspects together (as well as perhaps use some perceptual chunking to reduce the complexity), so it's a real challenge for combining different cognitive modules. This is too close to AGI for comfort IMO.
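
To illustrate the easy part, the equilibrium calculation by itself, here's a minimal sketch using fictitious play; the payoff matrix is the textbook rock/paper/scissors one, not actual StarCraft unit matchups, which you'd have to estimate from battle simulations:

```python
import numpy as np

# Zero-sum payoff matrix for plain rock/paper/scissors, from the row
# player's perspective: rows/columns are (rock, paper, scissors).
PAYOFF = np.array([
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
])

def fictitious_play(payoff, iterations=100_000):
    """Approximate a Nash equilibrium of a two-player zero-sum game:
    each player repeatedly best-responds to the other's empirical mix."""
    rows, cols = payoff.shape
    row_counts = np.zeros(rows)
    col_counts = np.zeros(cols)
    row_counts[0] += 1  # arbitrary opening moves
    col_counts[0] += 1
    for _ in range(iterations):
        # Row player maximises expected payoff against the column mix.
        row_counts[np.argmax(payoff @ col_counts)] += 1
        # Column player minimises the row player's expected payoff.
        col_counts[np.argmin(row_counts @ payoff)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_mix, col_mix = fictitious_play(PAYOFF)
print(row_mix, col_mix)  # both approach (1/3, 1/3, 1/3)
```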

Comment author: Kaj_Sotala 11 March 2016 07:20:42PM 5 points

This is too close to AGI for comfort IMO.

Pretty sure it's still comfortably narrow AI. People used to think that chess required AGI-levels of intelligence, too.
