All of jpet's Comments + Replies

jpet80

Parrots and other birds seem to be about that intelligent, and octopi are close.

Perhaps that's an argument for the difficulty of the chimp-to-human jump: we have (nearly) ape-level intelligence evolving multiple times, so it can't be that hard, but most lineages plateaued there.

2jacob_cannell
The conditions for the chimp-to-human jump require a series of changes where each brain increase enables better language/tools that pays for the increased costs. Parrots/birds don't seem to have a feasible path like that - light bodies designed for flight, lack of hands. Cetaceans can easily grow and support large brains, but fire doesn't work under water and most tool potentials are limited. Elephants seem to be the most likely runner-up, if primates weren't around - perhaps in a few tens or hundreds of millions of years there could have been a pachyderm civilization. So yeah - it might be somewhat rare, but it's hard to say, as it didn't take that long on earth.
jpet00

That doesn't look right--if she just flipped H, then THT is also eliminated. So the renormalization should be:

HH: 1/2

HT: 0

THH: 1/4

THT: 0

TTH: 1/4

TTT: 0

Which means the coin doesn't actually change anything.
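A minimal sketch of the renormalization jpet describes, assuming the priors implied by the numbers above (1/4 for each two-flip sequence, 1/8 for each three-flip sequence; the original post's exact setup isn't reproduced here):

```python
# Renormalize after zeroing out the sequences jpet treats as eliminated
# by the observed heads flip. Priors are inferred from the quoted numbers.
priors = {"HH": 1/4, "HT": 1/4, "THH": 1/8, "THT": 1/8, "TTH": 1/8, "TTT": 1/8}
eliminated = {"HT", "THT", "TTT"}

total = sum(p for seq, p in priors.items() if seq not in eliminated)
posterior = {seq: (0 if seq in eliminated else p / total) for seq, p in priors.items()}
print(posterior)  # {'HH': 0.5, 'HT': 0, 'THH': 0.25, 'THT': 0, 'TTH': 0.25, 'TTT': 0}
```

DanielLC's reply below amounts to saying that THT does not belong in the eliminated set, which would change the resulting numbers.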

0DanielLC
In the THT case, on Monday she flips heads. Thus, if she flips heads and has no way of knowing whether or not it's Monday, she can't eliminate the possibility of THT.
jpet280

Took it. Comments:

  • Hopefully you have a way to filter out accidental duplicates (e.g. a hidden random ID field or some such), because I submitted the form by accident several times while filling it out. (I was doing it from my phone, and basically any slightly missed touch on the UI resulted in an accidental submission).

  • Multiple-choice questions should always have a "none" option of some kind, because once you select a radio button option there's no way to deselect it. Most of them did, but not all.

  • I answered "God" with a significant p

... (read more)
3Kurros
It defined "God" as supernatural, didn't it? In what sense is someone running a simulation supernatural? Unless you think for some reason that the real external world is not constrained by natural laws?
jpet100

Your "dimensionless" example isn't dimensionless; the dimensions are units of (satandate - whalefire).

You only get something like a Reynolds number when the units cancel out, so you're left with a pure ratio that tells you something real about your problem. Here you aren't cancelling out any units; you're just neglecting to write them down and scaling things so that outcomes of interest happen to land at 0 and 1. Expecting special insight to come out of that operation is numerology.
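For what it's worth, here is a minimal sketch of that unit-cancellation point; the toy exponent-dict bookkeeping and the particular SI units are my own illustration, not from the original exchange:

```python
# Toy unit bookkeeping: a unit is a dict of SI base-unit exponents.
def combine(*factors):
    """Multiply quantities' units by summing exponents; drop anything that cancels."""
    out = {}
    for unit in factors:
        for base, exp in unit.items():
            out[base] = out.get(base, 0) + exp
    return {b: e for b, e in out.items() if e != 0}

def inverse(unit):
    return {b: -e for b, e in unit.items()}

# Reynolds number Re = rho * v * L / mu
rho = {"kg": 1, "m": -3}            # density, kg/m^3
v   = {"m": 1, "s": -1}             # velocity, m/s
L   = {"m": 1}                      # characteristic length, m
mu  = {"kg": 1, "m": -1, "s": -1}   # dynamic viscosity, Pa*s = kg/(m*s)

print(combine(rho, v, L, inverse(mu)))  # {} -> every base unit cancels; Re is a pure ratio
```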

Great article other than that, though. I hadn't seen this quote ... (read more)

3[anonymous]
Hmm. You are right, and I should fix that. When we did that trick in school, we always called it "dimensionless", but you are right it's distinct from the pi-theorem stuff (Reynolds number, etc.). I'll rethink it. Edit: Wait a minute, on closer inspection, your criticism seems to apply to radians (why radius?) and the Reynolds number (characteristic length and velocity are rather arbitrary in some problems). Why are some unit systems "dimensionless", and others not? More relevantly, taboo "dimensionless": why are radians better (as they clearly are) than degrees or grads or arc-minutes? Why is it useful to pick the obvious characteristic lengths and velocities for Re, as opposed to something else? For radians, it seems to be something to do with Euler's identity and the mathematical foundations of sin and cos, but I don't know how arbitrary those are, off the top of my head. For Re, I'm pretty sure it's exactly so that you can do numerology by comparing your Reynolds number to Reynolds numbers in other problems where you used the same characteristic length (if you used D for your L in both cases, your numerology will work; if not, not). I think this works the same in my "dimensionless" utility tricks. If we are consistent about it, it lets us do (certain forms of) numerology without hazard.
jpet210

Hi all, I'm Jeff.

I've started mentally rewarding myself with a happy thought and a smile when I catch myself starting a bad habit ("Hey! I noticed!") instead of castigating myself ("Doh! I'm doing it again!"). Seems to work so far; we'll see how it goes.

I started using the Pomodoro technique today (pick a task, work on it for 25 minutes, break for 5, repeat). I had to adjust it somewhat to deal with interruptions during the day, but that wasn't too hard: when I get done with the interruption, I just have less time before the next bre... (read more)

3Mercurial
Awesome job putting yourself forward this way! This is flooding, from Critch's session on overcoming aversions. :-) (This is Valentine, by the way. I'll see if I can get my handle here changed since "Mercurial" just isn't well-associated with me.)
jpet60

During-meetup report: is the meetup still on? Brandon and his sign aren't here, and I don't see a likely group. The waitress had no idea who I was asking about.

Two different baby showers, though. I could join one of those instead.

Update: located one other LWer. We talked about the sequences and whatnot for an hour; then I had to go. On my way out discovered the table with five more folks.

Lesson for next meetup: bigger sign.

0BrandonReinhart
I posted below. I got sick with the flu this week and have been in bed. I wasn't in any condition to go. :(
jpet180

I don't see how this differs at all from Searle's Chinese room.

The "puzzle" is created by the mental picture we form in our heads when hearing the description. For Searle's room, it's a clerk in a room full of tiles, shuffling them between boxes; for yours, it's a person sitting at a desk scratching on paper. Since the consciousness isn't that of the human in the room, where is it? Surely not in a few scraps of paper.

But plug in the reality of how complex such simulations would actually have to be, if they were to actually simulate a human brain... (read more)

jpet270

One of the most popular such ideas is to replicate the brain by copying the neurons and seeing what happens. For example, IBM's Blue Brain project hopes to create an entire human brain by modeling it neuron for neuron, without really understanding why brains work or why neurons do what they do.

No, the Blue Brain project (no longer affiliated with IBM, AFAIK) hopes to simulate neurons to test our understanding of how brains and neurons work, and to gain more such understanding.

If you can simulate brain tissue well enough that you're reproducing the actu... (read more)

jpet70

Where in this system would you place a thorough and accurate, but superficial, model that described the phenomenon? If I've made a lot of observations, collected a lot of data, and fit very good curves to it, I can do a pretty good job of predicting what's going to happen--probably better than you, in a lot of cases, if you're constrained by a model that reflects a true understanding of what's going on inside.

If we're trying to predict where a baseball will land, I'm going to do better with my practiced curve-fitting than you are with your deep understanding ... (read more)
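As a hypothetical illustration of the curve-fitting approach (the trajectory numbers below are made up for the example, not from the comment):

```python
# A black-box predictor of where a projectile lands: fit a quadratic to
# observed (distance, height) points and read off where the height reaches
# zero, with no reference to gravity or drag.
import numpy as np

# Hypothetical observations of a ball's height at several horizontal distances.
x_obs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
h_obs = np.array([1.5, 9.0, 12.5, 12.0, 7.5])

a, b, c = np.polyfit(x_obs, h_obs, deg=2)        # fit h ~ a*x^2 + b*x + c
landing = max(np.roots([a, b, c]).real)          # larger root = where height hits zero
print(f"predicted landing distance: {landing:.1f} m")  # ~24.5 with these numbers
```

No physics appears anywhere in the model; it is just a fitted curve, which is the kind of accurate-but-superficial predictor being described.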

2SilasBarta
Yes, those are all examples of stage 1 -- where you have some system that gives answers, and works, even though you can't say why. They are extensions of the "primitively understood" part of reality that I mention in Level 2 (but which counts as Level 1). I don't know why you say they're not "generative" -- when you make a prediction with the black box inside you, you've generated a prediction. However, as I mentioned in another comment, there can be partial progress on the levels. For example, if your experience with people allows you to make predictions in very different areas of human behavior, in such a way that the predictions relate to each other and have implications for each other, that would be progress into Level 2. (Though this would still be a shallow connection to your other models because it only connects to phenomena involving human behavior.)
jpet10

Ah, I misunderstood the comment. I just assumed that Gallo was in on it, and the claim was that customers of Gallo failing to complain constituted evidence of wine tasting's crockitude.

If Gallo's wine experts really did get taken in, then yes, that's pretty strong evidence. And since Gallo is the largest winery, I'm sure they have many experts checking their wines regularly--too many to realistically be "in" on such a scam.

So you've convinced me. Wine tasting is a crock.

jpet20

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

They were passing off as expensive, something that's actually cheap. Where else would that work so easily, for so long?

I think it's closer to say they were passing off as cheap, something that's actually even cheaper.

Switch the food item and see if your criticism holds:

Wonderbread, America's top bread maker,... (read more)

0Douglas_Knight
If people who can tell the difference are a big enough demographic to sell to, then they are employed by all wineries, regardless of quality. But an alternate explanation is that Gallo was tacitly in on the scam - they got as much PN as Sideways demanded, without moving the market.
1SilasBarta
If people made such a huge deal about the nuances in the taste of bread, while it also "happened" to have psychoactive effects that, gosh, always have to be present for the bread to be "good enough" for them, and cheap breads were still normally several times the cost of comparable-nutrition food, then yes, the cases would be parallel. (Before anyone says it: Yes, I know bread has trace quantities of alcohol; we're all proud of what you learned in chemistry.)
jpet160

Part of the problem stems from different uses of the word "caution".

There is a range of possible outcomes for the earth's climate (and the resulting cost in lives and money) over the next century, from "everything will be fine" to "catastrophic"; there is also uncertainty over the costs and benefits of any given intervention. So what should we do?

Some say, "Caution! We don't know what's going to happen; let's not change things too fast. Keep our current policies and behaviors until we know more."

Others say, "... (read more)

1whpearson
I have proposed things similar to those you have suggested as arguments against runaway AI, mainly to show how little we actually understand about what it takes to be intelligent with finite resources. I wouldn't use these as arguments that it isn't going to be a problem, just that working to understand real-world intelligence might be a more practical activity than trying to build safeguards against scenarios we don't have a strong inside view for.
jpet40

Another form of argumentus interruptus is when the other suddenly weakens their claim, without acknowledging the weakening as a concession

I used to do this quite often. Usually in personal conversations rather than online, because I would get caught up in trying to win. I didn't really notice I was doing it until I heard someone grumbling about such behavior and realized I was among the guilty. Now I try to catch myself before retreating, and make sure to acknowledge the point.

So not much to add, other than the encouraging observation that people can occasionally improve their behavior by reading this sort of stuff.

jpet40

It seems like you missed one hypothesis: maybe you're mistaken about the people in question, and they actually never were all that intelligent. They achieved their status via other means. It's an especially plausible error because they have high status--surely they must have got where they are by dint of great intellect!

0magfrump
Some people seem to or claim to gain status from intelligence, especially academics. Aside from that I believe (though I don't really have evidence) that it is in evidence that high status and intelligence correlate somewhat, though this correlation breaks down at very high levels of intelligence (but not at high levels of status as far as I know).
4thomblake
It is dangerously tempting to think that way - "He is high status because he is good at seeking status, rather than being intelligent" which sort of implies "I am low status because I am intelligent, rather than being good at seeking status". cf. Nietzsche's master/slave morality.
jpet00

Define a "representative" item sample as one coming from a study containing explicit statements that (a) a natural environment had been defined and (b) the items had been generated by random sampling of this environment.

Can you elaborate on what this actually means in practice? It doesn't make much sense to me, and the paper you linked to is behind a paywall.

(It doesn't make much sense because I don't see how you could rigorously distinguish between a "natural" and an "unnatural" environment for human decision-making. But maybe they're just looking for cases where experimenters at least tried, even without rigor?)

3Kaj_Sotala
If I understood the paper correctly, the following situation would be analogous. (I'll have to recheck it tomorrow to make sure this example does match what they're actually saying - it's too late here for me to do it now.) Imagine that you know that 30% of the people living in a certain city are black, and 70% are white. Next you're presented with questions where you have to guess whether a certain inhabitant of the city is black or white. If you don't have any other information, you know that consistently guessing "white" in every question will get you 70% correct. So when the questionnaire also asks you for your calibration, you say that you're 70% certain for each question. Now, assuming that the survey questions had been composed by randomly sampling from all the inhabitants of the city (a "representative" sampling), then you would indeed be correct about 70% of the time and be well-calibrated. But assume that instead, all the people the survey asked about live in a certain neighborhood, which happens to be predominantly black (a "selected" sampling). Now you might have only 40% right answers, while you indicated a confidence of 70%, so the researchers behind the survey mark you as overconfident. Of course, in practice this is a bit more complicated as people don't only use the ecological base rate but also other information that they happen to have at hand, but since the other information acts to modify their starting base rate (the prior), the same logic still applies.
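A minimal simulation of that example, assuming a hypothetical neighborhood that is 60% black / 40% white (the split is arbitrary; the 70/30 city-wide rate is from the comment above):

```python
# Simulate the "always guess white, state 70% confidence" strategy under two
# ways of choosing the survey questions: a representative sample of the whole
# city vs. a selected sample from a mostly-black neighborhood.
import random

random.seed(0)
N = 100_000

def accuracy(p_white):
    """Fraction of correct answers when always guessing 'white'."""
    hits = sum(random.random() < p_white for _ in range(N))
    return hits / N

print("stated confidence:        0.70")
print("representative sampling: ", round(accuracy(0.70), 3))  # ~0.70: well calibrated
print("selected sampling:       ", round(accuracy(0.40), 3))  # ~0.40: looks overconfident
```

The stated 70% confidence is well calibrated only when the questions are sampled the same way the base rate was computed.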
jpet30

Serious nitpicking going on here. The whole point of my post is that from the information provided, one should arrive at probabilities close to what I said.

It's not "nitpicking" to calibrate your probabilities correctly. If someone was to answer innocent with probability 0.999, they should be wrong about one time in a thousand.

So what evidence was available to achieve such confidence? No DNA, no bloodstains, no phone calls, no suspects fleeing the country, no testimony. Just a couple of websites. People make stuff up on websites all the time.... (read more)

jpet00

I've seen the paper, but it assumes the point in question in the definition of partially rational agents in the very first paragraph:

If these agents agree that their estimates are consistent with certain easy-to-compute consistency constraints, then... [conclusion follows].

But people's estimates generally aren't consistent with his constraints, so even for someone who is sufficiently rational, it doesn't make any sense whatsoever to assume that everyone else is.

This doesn't mean Robin's paper is wrong. It just means that faced with a topic where we wo... (read more)

jpet60

I think there's another, more fundamental reason why Aumann agreement doesn't matter in practice. It requires each party to assume the other is completely rational and honest.

Acting as if the other party is rational is good for promoting calm and reasonable discussion. Seriously considering the possibility that the other party is rational is certainly valuable. But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.

1TAG
Assuming honesty is pretty problematic, too. In real-world disputes, participants are likely to disagree about what constitutes evidence ("the Bible says..."), aren't rational, and suspect each other's honesty.
2gwern
I believe Hanson's paper on 'Bayesian wannabes' shows that even only partially rational agents must agree about a lot.
jpet140

I was unfamiliar with the case. I came up with:

1 - 20%

2 - 20%

3 - 96%

4 - probably in the same direction, but no idea how confident you were.

From reading other comments, it seems like I put a different interpretation on the numbers than most people. Mine were based on times in the past that I've formed an opinion from secondhand sources (blogs etc.) on a controversial issue like this, and then later reversed that opinion after learning many more facts.

Thus, about 1 time in 5 when I'm convinced by a similar story of how some innocent person was falsel... (read more)

0Sebastian_Hagen
I disagree. Do we have specific data about the correlation between the controversy of jury rulings, and their accuracy (or some half-decent proxy, like the likelihood of the rulings being sustained in appeal)? Most of the controversy in this specific case appears to originate from people who have significantly worse access to the factual evidence than the jury; and it's likely to be in the interest of some entities reporting about this case to play up the controversy to attract readers. I don't think there's any strong evidence to be gained from this, and consider the original ruling to still be significant evidence even after taking the controversy into account.
5Cyan
This is a very interesting analysis -- I like your choice of reference set and your Outside View approach.