All of billswift's Comments + Replies

I think fictional evidence isn't terribly convincing.

Indeed. Try Hans-Hermann Hoppe's Democracy: The God that Failed or Graham's The Case Against Democracy. Neither is all that convincing that monarchy is much better than democracy, but they make a decent case that it is at least marginally better. Note that Hoppe's book obviously started as a collection of articles; it is seriously repetitive. Both books are short and fairly easy reads.

You are grossly over-simplifying anti-intellectualism, some streams of which are extremely valuable. Your claim only fits the "thalamic anti-intellectual", one of at least five broad types Eric Raymond discusses.

The most important and useful to society is the "epistemic-skeptical anti-intellectual", whose "complaint is that intellectuals are too prone to overestimate their own cleverness and attempt to commit society to vast utopian schemes that invariably end badly." Of course lefties who want to change society to fit their theories try... (read more)

1Epiphany
You sound like you've researched this. If I wanted to get a really good idea of what both sides mean by elitism and understand the problem better, is there some reading you could recommend for that?
0Epiphany
Interesting link, however, this looks like a tangent. If this is more related than I realize, please point out the connection.
0cata
Thanks for this link. I think it just boils down to more arguing about words -- as far as I can tell, I agree with what you and he are actually saying, but I was using "intellectual" more sloppily to refer to people who interact with culture via argument, ideas, and art, regardless of whether they dabble in politics, perform what Eric criticizes as "ceaseless questioning," or whether they have an inclination toward "vast utopian schemes." It was sort of a throwaway remark and not very well thought-through.

I only recently ran into a good, simple explanation for Bayes: that the more detailed a prediction becomes, the less likely it is to be true.

That looks like a good way of explaining the conjunction and narrative fallacies, too. They could easily be looked at as adding details to a simpler argument. I wonder what other fallacies could be "generalized" similarly?
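A minimal numeric sketch of that generalization (the events and probabilities below are invented purely for illustration): every detail conjoined onto a prediction multiplies in a factor of at most one, so the joint probability can only shrink.

```python
# Each added detail conjoins another event, so the joint probability can
# only stay the same or fall: P(A and B) = P(A) * P(B|A) <= P(A).
# The events and numbers below are made up for illustration.
details = [
    ("it will rain tomorrow", 0.4),
    ("...starting in the morning", 0.5),  # P(this detail | all earlier ones)
    ("...and lasting past noon", 0.6),
]

p = 1.0
for description, conditional_prob in details:
    p *= conditional_prob
    print(f"P(story so far, through {description!r}) = {p:.3f}")
# 0.400 -> 0.200 -> 0.120: the more detailed story is strictly less probable.
```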

One thing I think we should be working on is a way of organizing the mass of fallacies and heuristics. There are too many to keep straight without some sort of organizing principles.

Go to Google Scholar and search on "argument maps" and "argument diagram"; you'll get plenty of hits.

0roryokane
Links:
* Google Scholar search for "argument maps"
* Google Scholar search for "argument diagram"

The survey has ended and he has posted the results: A Survey Question.

Another possibility I saw, though it probably wasn't intended, is that both pickled and stewed are slang for drunk; maybe they are really powerful fruits.

billswift120

You might find this useful. It isn't a source of papers; it is first-hand accounts by autistics of what life and other people were like to them. This one, Don't Mourn For Us, is probably the best general description. A quote from it:

You try to relate to your autistic child, and the child doesn't respond. He doesn't see you; you can't reach her; there's no getting through. That's the hardest thing to deal with, isn't it? The only thing is, it isn't true.

Look at it again: You try to relate as parent to child, using your own understanding of normal child

... (read more)

Not really. If you look at a periodic table, the vast majority actually are metals.

0wedrifid
The vast majority are metals, and saying they all are is wrong (except in as much as authority within the clique is able to redefine such things). It's also distasteful and lazy to formalise the misuse. I'd be embarrassed if I were an astronomer.
billswift-20

The world (including brains) is strictly deterministic. The only sources of our mental contents are our genetics and what we are "taught" by our environments (and the interactions between them). The only significant difference between rat and human brains for the purpose of uploading should be the greater capacity and more complex interactions supported by human brains.

At least for the three examples you cited, I seem to remember them being called approximations, not "correct".

4Stuart_Armstrong
What's the difference between a singularity, and an approximate singularity? :-)

My comment from July 5, "Go Bayes! So if you just make your priors big enough, you never have to change your mind.", was rather snarky, but it illustrates a real problem. If your priors are not reasonably accurate, it takes a lot of new information and updating to get it straightened out. That is one reason a lot of introductions to Bayes rule use medical decision making which has reasonably well-established base-rates (priors) to begin with.
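As a rough sketch of why those introductions lean on medical base rates (the numbers below are invented, not from any real test): with a well-established prior the update is straightforward, while a badly wrong prior takes the same evidence somewhere very different.

```python
# Bayes' rule with a well-established base rate as the prior.
# Illustrative numbers only: 90% sensitivity, 5% false-positive rate.
def posterior(prior, sensitivity=0.90, false_pos=0.05):
    p_positive = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_positive

print(f"good prior (0.01): P(disease | +) = {posterior(0.01):.3f}")  # ~0.154
print(f"bad prior  (0.50): P(disease | +) = {posterior(0.50):.3f}")  # ~0.947
# Same test, same result; a badly chosen prior takes a lot of further
# evidence and updating to straighten out.
```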

0johnlawrenceaspden
Not quite never, and the predictions of your various theories are also priors. So suppose I'm a physicist in the 19th century. And I've got two theories 'Classical Physics' and 'We're wrong about everything'. My prior for classical physics will be truly immense because of all its successful predictions, and little bits of evidence like seeing clocks on trains running a bit slow won't affect my beliefs in any noticeable way, because I'll always be able to explain them in much more sensible ways than 'physics is broken'. But once I realise that I literally can't come up with any classical explanation for the observed motion of Mercury, then my immense prior gets squashed out of existence by the hideous unlikeliness of seeing those results if classical physics is true. Something somewhere is broken, and all my probability mass moves over into 'we don't understand'. If you've got an immense prior belief in a theory that can explain anything at all, then yes, that's hard to shift.
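In odds form, the arithmetic of that squashing is stark; the numbers below are invented for illustration.

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# Invented numbers: classical physics favored a million to one, hit by an
# observation a billion times likelier if "we're wrong about everything".
prior_odds = 1e6          # classical : we're-wrong, before Mercury's orbit
likelihood_ratio = 1e-9   # P(observation | classical) / P(observation | wrong)

posterior_odds = prior_odds * likelihood_ratio
print(f"posterior odds = {posterior_odds:g}")  # 0.001: the immense prior flips
```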

Off-topic, but in the context of "best mistake", here is John Ringo's definition of serendipity from The Last Centurion:

We were saved by serendipity. (Which is a term meaning "I fucked up but things came out better than if I hadn't.")

Derek Lowe also commented on the studies. Repeating my comment there:

So the comparison of the two experiments shows that underfeeding results in life extension over monkeys that over-eat, but not over monkeys that eat a normal diet. Where is the surprise there?

ADDED: I just noticed the paragraph here is missing a key bit of information needed to make sense of my comment. The WNPRC experiment, which found positive results from calorie restriction, fed their controls ad libitum, as much as they wanted to eat. The newer NIA experiment fed the controls a standard, healthy diet, and found no effect of diet restriction.

  1. Evolution doesn't stop. We have continued to evolve, adapting to new environments, including foods.
0[anonymous]
But it's really slow; some new traits are lactase persistence in adulthood and gluten tolerance, but rerouting your entire metabolism is something else altogether. Besides, optimising from evolution's point of view (i.e. maximising reproduction) is not the same as optimising from most people's point of view (e.g. living longer, being healthy past middle age).

I don't disagree with anything in this comment; I was just pointing out that "deliberate practice" has several requirements, including practice being separate from execution, that make it less usable, or even totally unusable, for some areas, such as decision making and choosing. The other main requirements are that it has a specific goal, that it should not be enjoyable, and, as you pointed out, that it is challenging. Another point, not part of the original requirements but encompassed by them, is that you are not practicing when you are in "flow".

In some places the "deliberate practice" idea breaks down; choosing and decision making are among them. There is no way to "practice" them except by actually making choices and decisions; separating practice from normal execution is not possible.

0wattsd
I agree that the only way to practice decisions is to make them, but I think there is more to it than that. The deliberate part of deliberate practice is that you are actively trying to get better. The deliberate performance paper I linked to touches on this a bit, in that deliberate practice is challenging for professionals and that something else might work better (they advocate the first 5 methods in that paper). Beyond making decisions, you need to have an expectation of what will happen, otherwise hindsight bias is that much harder to overcome. It's the scientific method: hypothesis->test->new hypothesis. Without defining what you expect ahead of time, it is much easier to just say "Oh yeah, this makes sense" and normalize without actually improving understanding.

Colleges have a breadth requirement; one source I read suggested using that to take a writing-heavy course in history or philosophy that requires lots of short papers, in order to improve your writing.

0Kindly
It's also possible to get lots of writing experience by taking advanced foreign language courses.

Except that if the simulation really is accurate, his response should already be taken into account. Reality is deterministic; an adequately accurate and detailed program should be able to predict it exactly. Human free will relies on the fact that our behavior has too many influences to be predicted by any past or current means. Currently, we can't even define all of the influences.

2Xachariah
If the simulation is really accurate, then the GLUT would enter an infinite loop if he uses an 'always do the opposite' strategy. I.e., "Choose either heads or tails. The oracle predicts you will choose [...]." If his strategy is 'choose heads because I like heads' then the oracle will correctly predict it. If his strategy is 'do what the oracle says', then the oracle can choose either heads or tails, and the oracle will predict that and get it correct. If his strategy is 'flip a coin and choose what it says' then the oracle will predict that action and, if it is a sufficiently powerful oracle, get it correct by modeling all the physical interactions that could change the state of the coin. However, if his strategy is 'do the opposite', then the oracle will never halt. It will get into an infinite recursion, choosing heads, then tails, then heads, then tails, etc., until it crashes. It's no different than an infinite loop in a computer program. It's not that the oracle is inaccurate. It's that a recursive GLUT cannot be constructed for all possible agents.
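A minimal sketch of that non-halting simulation; the functions and strategy names here are hypothetical, not anyone's actual proposal.

```python
def contrarian(prediction):
    """Agent strategy: always do the opposite of whatever is predicted."""
    return "tails" if prediction == "heads" else "heads"

def oracle(agent, max_rounds=10):
    """A 'perfect' oracle must simulate the agent reacting to its own
    prediction; it can halt only at a fixed point the agent fulfills."""
    guess = "heads"
    for _ in range(max_rounds):
        response = agent(guess)
        if response == guess:
            return guess      # fixed point: the prediction comes true
        guess = response      # revise the prediction and re-simulate
    raise RuntimeError("no fixed point: heads -> tails -> heads -> ...")

print(oracle(lambda prediction: "heads"))  # 'heads': a trivial fixed point

try:
    oracle(contrarian)        # heads -> tails -> heads -> ... forever
except RuntimeError as err:
    print(err)
```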

You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. There are more elaborate systems that could likely work better for many particular situations, but even this simple system suggests there is little substance to your criticism. See Minsky's Society of Mind, or some papers on modularity in evolutionary psych, for more details.
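A toy sketch of that two-modules-plus-supervisor arrangement, loosely in the spirit of Society of Mind; every class and rule below is invented for illustration.

```python
# Two parallel modules, each proposing an action with a confidence score,
# plus a supervisory module that arbitrates. All names are invented.
class KeepLaneModule:
    def propose(self, world):
        return ("steer_straight", 0.6)          # (action, confidence)

class AvoidObstacleModule:
    def propose(self, world):
        if world.get("obstacle_ahead"):
            return ("swerve_left", 0.9)
        return ("no_op", 0.0)

class Supervisor:
    """Arbitrates between modules: picks the most confident proposal."""
    def __init__(self, modules):
        self.modules = modules

    def decide(self, world):
        proposals = [m.propose(world) for m in self.modules]
        action, _ = max(proposals, key=lambda p: p[1])
        return action

agent = Supervisor([KeepLaneModule(), AvoidObstacleModule()])
print(agent.decide({"obstacle_ahead": False}))  # steer_straight
print(agent.decide({"obstacle_ahead": True}))   # swerve_left
```

A real arbiter would need something smarter than "highest confidence wins", but even this crude rule shows how a new concern can be bolted on without redesigning the existing modules.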

0Dolores1984
Sure you can add more modules. Except that then you've got a car-driving module, and a walking module, and a stacking-small-objects module, and a guitar-playing module, and that's all fine until somebody needs to talk to it. Then you've got to write a Turing-complete conversation module, and (as it turns out) having a self-driving car really doesn't make that any easier.

The reason for expanding a narrow AI is the same as for a tool agent not staying restricted: the narrow domain they are designed to function in is embedded in the complexity of the real world. Eventually someone is going to realize that the agent/AI can provide better service if it understands more about how its job fits into the broader concerns of its passengers/users/customers, and decide to do something about it.

0atucker
AIXI is able to be widely applicable because it tries to model every possible program that the universe could be running, and then it eventually starts finding programs that fit. Driverless cars may start modeling things other than driving, and may even start trying to predict where their users are going to be, but I suspect that they would just track user habits or their smartphones, rather than trying to figure out their owners' economic and psychological incentives for going to different places. Trying to build a car that's generally capable of driving and figuring out new things about driving might be dangerous, but there's plenty of useful features to give people before they get there. Just wondering: is your intuition coming from the tighter tie to reality that a driverless car would have?

I think a working AGI is more likely to result from expanding or generalizing from a working driverless car than from an academic program somewhere. A program to improve the "judgement" of a working narrow AI strikes me as a much more plausible route to AGI.

1Eliezer Yudkowsky
There are proverbs about how trying to generalize your code will never get to AGI. These proverbs are true, and they're still true when generalizing a driverless car. I might worry to some degree about free-form machine learning algorithms at hedge funds, but not about generalizing driverless cars.
4Kaj_Sotala
Our evolutionary history would seem to support this view - to a first approximation, it would seem to me like general intelligence effectively evolved by stacking one narrow-intelligence module on top of another. Spiders are pretty narrow intelligence, rats considerably less so.
0atucker
Narrow-AI driverless cars will probably not decide that they need to take over the world in order to get to their destination in the most efficient way. Even if it would be better, I would be very surprised if they decided to model the world that generally for the purposes of driving. There's only so much modeling of the world/general capability you need in order to solve very domain-specific problems.
2Douglas_Knight
Note that the driverless car itself came from "an academic program somewhere."
0jmmcd
Has LW, or some other forum, held any useful previous discussion on this topic?

Listen to actual conversation sometime; most of it is excruciatingly boring if you think about it in terms of information. But as other posters have pointed out, most conversation is about social bonding, not exchanging information.

Or for representing phenomena in an altered "format". For example, I have read a description of the bimetallic spring in a thermostat as a model of the room's temperature presented in a way that the furnace can make use of it.
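A small sketch of that idea in code (the setpoint and band are invented): the thermostat's "model" of the room is just the temperature re-expressed as the one signal the furnace can act on.

```python
# The bimetallic spring "models" room temperature in a format the furnace
# can use: a single on/off contact, with hysteresis so it doesn't chatter.
# Setpoint and band are invented for illustration.
def furnace_command(temp_c, furnace_on, setpoint=20.0, band=1.0):
    if temp_c < setpoint - band:
        return True           # too cold: contact closes, furnace on
    if temp_c > setpoint + band:
        return False          # too warm: contact opens, furnace off
    return furnace_on         # inside the band: hold the current state

state = False
for reading in [18.0, 19.5, 20.5, 21.5, 20.5]:
    state = furnace_command(reading, state)
    print(f"{reading:>4} C -> furnace {'ON' if state else 'OFF'}")
```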

Humans normally get away with their biases by not examining them closely, and, when the biases are pointed out to them, by denying that they, personally, are biased. Willful ignorance and denial of reality seem to be two of the most common human mental traits.

That has a link to a new article by Sylvia Engdahl, who has written on the importance of space for years: http://www.sylviaengdahl.com/space.htm

I think this would be the most useful, even if it were only partially completed, since even a partial database would help greatly both with finding previously unrecognized biases and with the logic checking AI. It may even make the latter possible without the natural language understanding that Nancy thinks would likely be needed for it.

0Epiphany
What I'm seeing is that a rational agreement software would require some kind of objective method for marking logical fallacies, which the logic checking AI would obviously be helpful with. Not sure why the rationalist agreement database would help with creating the logic checking AI, unless you mean it can act like a sort of "wizard" where you can go through your document with it one piece at a time and have a sort of "chat" with it about what the rationalist agreement database contains, fed to you in little carefully selected bits.

I criticize FAI because I don't think it will work. But I am not at all unhappy that someone is working on it, because I could be wrong or their work could contribute to something else that does work even if FAI doesn't (serendipity is the inverse of Murphy's law). Nor do I think they should spread their resources excessively by trying to work on too many different ideas. I just think LessWrong should act more as a clearinghouse for other, parallel ideas, such as intelligence amplification, that may prevent a bad Singularity in the absence of FAI.

Everybody does that anyway; it is usually called second-guessing yourself. The best rule is to not decide under pressure unless you really have to; take the time to think things through.

it depends upon your past self having more information than your current self.

Or maybe you just spent more time thinking it through before. "Never doubt under pressure what you have calculated at leisure." I think that previous states should have some influence on your current choices. As the link says:

If your evidence may be substantially incomplete you shouldn't just ignore sunk costs -- they contain valuable information about decisions you or others made in the past, perhaps after much greater thought or access to evidence than that of which you are currently capable.

0OrphanWilde
That presumes you've forgotten why you did something to begin with, your reasoning having created that information. Again, given the precise conditions, I think it's a perfectly fine argument. I just don't find those conditions more probable than the converse, which is to say, having more information.
2drethelin
Also remember the corollary that any decision made under pressure could probably stand to be reviewed at leisure.
billswift-10

I see you found yet another problem: with no way to get more utilons, you die when those in the box are used up. And utility theory says you need utility to live, not just to give you a reason to live.

2rocurley
This is contrary to my understanding. Can you explain, please?
billswift-10

There are no other ways to get utilons.

That is a weakness in your argument. Either you can survive without utilons, a contradiction to utility theory, or you wait until your "pre-existing" utilons are used up and you need more to survive.

6wedrifid
Utilons don't need to be associated with survival. Survival can be a mere instrumental good used to increase the amount of actual utilons generated (by making, say, paperclips). I get the impression that you mean something different by the word than what the post (and the site) mean.
selylindi140

This suggests a joke solution: Tell people about the box, then ask them for a loan which you will repay with proceeds from the box. Then you can live off the loan and let your creditors worry about solving the unsolvable.

0Mestroyer
What's wrong with not having any more reason to live after you get the utilons?

Even worse, unlike your examples, rationality isn't a single, focused "skillset", but a broad collection of barely related skills. Learning to avoid base rate neglect helps little, if at all, with avoiding honoring sunk costs, which helps little with avoiding the narrative fallacy. You need to tackle them almost independently. That is one reason why I tend to emphasize the need to stop and think, when you can. Even if you have not mastered the particular fallacy that may be about to trip you up, you are more likely to notice a potential problem if you get in the habit of thinking through what you are doing.

they are always too brittle and inflexible to carry you on in any meaningful, long-term sort of way.

What you need to do is to capture it, then use it to help you take the next step; then keep taking those next steps.

The very first thing you need to do is to STOP reading, write down whatever caused your epiphany, and think about the next step. Too much of the self-help and popular psychological literature is written like stories, which, while making it more readable and more likely to be read, tends to encourage readers to keep on reading through it all. If you are reading for change, you need to read it like a textbook, for the information, rather than entertainment.

pjeby140

Too much of the self-help and popular psychological literature is written like stories, which, while making it more readable and more likely to be read, tends to encourage readers to keep on reading through it all. If you are reading for change, you need to read it like a textbook, for the information, rather than entertainment.

This is why most of the successful self-help gurus pack their books full of stories and insights, but leave the actual training for in-person workshops, or at least for higher-bandwidth or interactive media. Most of the challen... (read more)

Studies against the effectiveness of preventative medicine aren't new; they have been published repeatedly for decades, and I have read several myself, as early as 1993. And of course there is the RAND study that Robin discussed repeatedly.

I don't know if it will help you develop a helpful phrase, but another thing to keep in mind is that the link between the information you have and the problem you want to solve is often not obvious. You often need to play around with the information before you can figure out how it can be used to solve the problem.

And the complexity of real world problems can confuse the issue even more, so it helps to first try to simplify or generalize the problem, so you can see what the core of the problem actually is.

Next we come to what I’ll call the epistemic-skeptical anti-intellectual. His complaint is that intellectuals are too prone to overestimate their own cleverness and attempt to commit society to vast utopian schemes that invariably end badly. Where the traditionalist decries intellectuals’ corrosion of the organic social fabric, the epistemic skeptic is more likely to be exercised by disruption of the signals that mediate voluntary economic exchanges. This position is often associated with Friedrich Hayek; one of its more notable exponents in the U.S. is Thomas Sowell, who has written critically about the role of intellectuals in society.

From Eric Raymond

-7Shmi
2OrphanWilde
"Politics is the Mindkiller" is an irrational mantra... well, let's test that theory. I'm going to construct an article to test the thesis.
-1TrE
Here in Germany, we've been living in a social market economy (probably about what you mean) for decades, and so far it has worked fine. Just to provide a datapoint: the economic landscape has multiple local maxima when ordered along the two dimensions of economic left/right and social libertarian/authoritarian (as The Political Compass does).

Well, the question is, do the specific effects of damage look more like the effects that the "radio receiver" hypothesis would predict, or the ones that the "electronic brain" hypothesis would predict?

There is a big difference between an audio distortion and a semantic distortion. The radio-receiver hypothesis predicts that we can introduce audio distortion, but not that we can make the voice stop talking about vegetables. If we can only get the former sort of effect, then we are messing with a device that didn't understand vegetables i... (read more)

gjm270

I don't understand. Did you read the rest of what I wrote, where I gave some specific examples of the kind of damage we're talking about? (Note: they weren't intended to be neurologically perfectly accurate.) Do you not agree that if you had a device that produced such effects when damaged, it would be grossly unreasonable to think it was a radio rather than an AI?

[EDITED to add: Of course I agree that there are situations, quite different from what we see in the real world, that could also -- just barely -- be described by saying "particular kinds of... (read more)

OTOH, there are downsides to being too secure: you're less likely to be kidnapped, but it's likely to be worse if you ARE.

Indeed, for a recent, real-world example, the improvement in systems to make cars harder to steal led directly to the rise of carjacking in the 1990s.

If you read the second sentence, I do too; it's just a very weak disadvantage when compared to almost any suffering. If I didn't consider it at least somewhat disadvantageous, I wouldn't be around now to write about it.

1Richard_Kennaway
That seems to imply that you would rather commit suicide than, say, endure a toothache for a few days. Really?

"Anything is easy if you're not the one that has to do it." Claiming something is easy, without giving an actual means of doing it, is a cheap rhetorical trick, one of the "dark arts".

Her point was that honest people who know that many people do steal would be penalized.

2Viliam_Bur
I guess something similar is already used by people (consciously or unconsciously), and that honest people with exceptional knowledge/experience already are penalized. Reading this article may help them recognize why, and reduce the penalty.

Except for possible disutility to family and friends, oblivion has a lot to recommend it; not least that you won't be around to regret it afterward. It isn't something to seek, since you won't have any positive utility afterward, but it isn't something that is worth enduring much suffering to avoid either.

-1Richard_Kennaway
I judge that a disadvantage.

The genome is the ultimate spaghetti code. I would not be at all surprised if some genes code for the direct opposite characteristics in the presence of other genes. It is going to take more than just running relatively simple correlation studies to untangle the functions of most genes. We are going to have to understand what proteins they code for and how they work.
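A toy simulation of why simple correlation studies miss that kind of interaction; the two-gene XOR effect below is deliberately extreme and purely illustrative.

```python
# Toy epistasis: each gene's effect reverses in the presence of the other
# (an XOR interaction), so either gene's marginal correlation with the
# trait is ~0 even though the pair fully determines it.
import random
random.seed(0)

population = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(10_000)]
trait = [a ^ b for a, b in population]  # trait present iff exactly one gene is

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    return cov / (var_x * var_y) ** 0.5

print(correlation([a for a, _ in population], trait))  # ~0: gene A looks inert
print(correlation([b for _, b in population], trait))  # ~0: gene B looks inert
```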

Sounds are waves transmitted by air. Waves can reinforce or cancel each other, but cancelling can only go so far (to zero), so what is left is the sound resulting from the reinforced waves.
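A quick numeric sketch of that superposition (the sample count and amplitudes are arbitrary):

```python
# Sound pressures add pointwise. In phase the waves reinforce (double
# amplitude); perfectly out of phase they cancel all the way to zero.
import math

N = 8  # samples over one cycle of a sine wave
wave = [math.sin(2 * math.pi * t / N) for t in range(N)]
in_phase = [a + b for a, b in zip(wave, wave)]
out_of_phase = [a - b for a, b in zip(wave, wave)]

print(max(abs(s) for s in in_phase))      # ~2.0: reinforced
print(max(abs(s) for s in out_of_phase))  # 0.0: fully cancelled
```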

If you have nothing to lose, consider desperate high risk options. If you are in a comfortable position consolidate and avoid risk.

I have, in fact, seen this given as investment advice. If you are going to go broke in a big way, take risks; this is the time to play the lottery: you probably won't win, but you might, and by this point you have nothing to lose. If you have plenty of wealth, and aren't playing for the thrills, this is when you play safe; there is no more need to take significant risks at this point.

1Shmi
That strategy successfully killed a number of large banks, Barings being a classic example.
1Manfred
Hm. I would disagree - you only ask "how do I lose" when you're close to winning and winning is a "stable" state (like going on a fun vacation), and you only ask "how do I win" when you're close to losing and losing is a known stable state (like having to call a tow truck to pull you out of a muddy canyon). When you're in situations without a convenient "floor" (or ceiling) to stand on, then you stick with the middle question, which is "what will put me ahead?" I've edited something like this into the post.
wedrifid180

I have, in fact, seen this given as investment advice. If you are going to go broke in a big way, take risks

Absolutely. This also applies if you have an insane government that will bail you out if your risky investment doesn't pay off.

Not really, because I don't think they are distinct in the way you suggested; rather, I think safety issues are a subset of "things I'll likely regret".

ADDED: Or at least safety issues where things actually do go wrong are "things I'll likely regret".

0handoflixue
The set of regrets NOT related to safety, and the set of regrets over safety, are two separate sets. Or, if you must, two separate subsets of "things I'll likely regret." Most people seem to intuitively understand the idea of "emotional" vs "safety" regrets when it comes to sex. i.e. the difference between "I wish I hadn't slept with her, because it ruined our friendship" vs "I wish I hadn't slept with her, because now it burns when I pee."

They are not mutually exclusive. I can't think of anything I would regret more than causing a permanent injury to myself or another person.

1Paul Crowley
Is there a better word for the distinction I'm trying to draw?

There is one significant question about ethics that has been skirted around, but, as far as I remember, never specifically addressed here. "Why should any particular person follow any ethical or moral rule?" Kai Nielsen has an entire book, Why Be Moral?, devoted to the issue, but doesn't come to a good reason.

Humans' inherited patterns of behavior are a beginning (Nielsen only addresses purely philosophical issues in the book), but still not adequate for what then becomes the question, "Why not defect?"

0OrphanWilde
I believe the answer to this question is "Because the rule maximizes one's ethical values." (Without getting into the act versus rule argument, which figures into my post, where I am, to some extent, arguing against act utilitarianism on the grounds that it is too computationally expensive.) Of course, that leads directly into the question, "Why should any particular person hold any particular ethical value?" I don't believe this question has an answer that doesn't lead directly into another ethical value, which is why I hold ethical values as axioms.
  • I have often regretted my speech, never my silence. -- Publilius Syrus

  • Regret for the things we did can be tempered by time; it is regret for the things we did not do that is inconsolable. -- Sydney Harris

  • My version: You will regret missed opportunities far more than anything you actually do.

1wedrifid
Totally not true. Things I actually do have far more salience. Things I don't actually do I usually don't even remember and if I do they certainly don't drag around much in the way of emotional weight. Perhaps my regret mechanism is different?
4Viliam_Bur
But that's probably a bias. You often don't realize what you missed; and even if you do, the missed things are usually in a far mode.