Comment author: see 18 April 2015 09:11:39PM 2 points [-]

Er, a few species of placental mammal are hardly "widely separated lineages". Trying to draw conclusions about completely alien biologies by looking at convergent evolution inside a clade with a single common ancestor in the last 2-or-3% of the history of life on Earth is absurd. And the fact that the Placentalia start with an unusually high EQ among vertebrates-as-a-whole makes it a particularly unsuitable lineage for estimating the possibilities of independent evolution of high animal intelligence.

Comment author: jpet 19 April 2015 01:24:34AM 5 points [-]

Parrots and other birds seem to be about that intelligent, and octopi are close.

Perhaps that's an argument for the difficulty of the chimp to human jump: we have (nearly) ape-level intelligence evolving multiple times, so it can't be that hard, but most lineages plateaued there.

In response to Anthropic Atheism
Comment author: DanielLC 13 January 2014 03:14:28AM 11 points [-]

Suppose Sleeping Beauty secretly brings a coin into the experiment and flips it when she wakes up. There are now six possible combinations of heads and tails, each with its own probability:

HH: 1/4

HT: 1/4

THH: 1/8

THT: 1/8

TTH: 1/8

TTT: 1/8

When she wakes up and flips the coin, she notices it lands on heads. This eliminates two of the possibilities. Renormalizing the remaining values:

HH: 2/5

HT: 0

THH: 1/5

THT: 1/5

TTH: 1/5

TTT: 0

She can conclude that the coin landed on tails with 60% probability, rather than the normal 50%. She could flip the coin more times; doing so, she will asymptotically approach a 2/3 probability that it landed on tails.

Perhaps she gets caught with the coin, and has it taken away. This isn't a problem. She can just look at dust specks, or anything else she can't predict and that won't be consistent between awakenings. For all intents and purposes, she's using SSA. There's a difference if she's woken so many times that it's likely she'll make exactly the same observations more than once, but that takes her being woken on the order of 10^million times.
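The renormalization above can be checked with a few lines of Python. This is a sketch of DanielLC's update rule as I read it (an assumption, not stated explicitly in the comment): eliminate the worlds in which no awakening produced the observed flip sequence, then renormalize by prior weight. With one flip per awakening it reproduces the 3/5; with more flips it approaches 2/3:

```python
from fractions import Fraction

def p_tails(n):
    """P(experiment coin = tails) after Sleeping Beauty observes one
    particular sequence of n smuggled-coin flips, keeping only worlds
    where at least one awakening shows that sequence."""
    q = Fraction(1, 2**n)                     # chance one awakening matches
    p_heads_world = Fraction(1, 2) * q        # heads: a single awakening
    p_tails_world = Fraction(1, 2) * (1 - (1 - q) ** 2)  # tails: two chances
    return p_tails_world / (p_heads_world + p_tails_world)

print(p_tails(1))          # 3/5, the 60% from above
print(float(p_tails(20)))  # ~0.666666, approaching 2/3
```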

In response to comment by DanielLC on Anthropic Atheism
Comment author: jpet 16 January 2014 07:52:37AM 0 points [-]

That doesn't look right--if she just flipped H, then THT is also eliminated. So the renormalization should be:

HH: 1/2

HT: 0

THH: 1/4

THT: 0

TTH: 1/4

TTT: 0

Which means the coin doesn't actually change anything.
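The corrected renormalization is simple arithmetic; a minimal sketch, assuming jpet's elimination of HT, THT, and TTT:

```python
from fractions import Fraction

# jpet's elimination: after she flips heads, only HH, THH, TTH survive
survivors = {"HH": Fraction(1, 4), "THH": Fraction(1, 8), "TTH": Fraction(1, 8)}
total = sum(survivors.values())  # 1/2
posterior = {world: p / total for world, p in survivors.items()}

p_tails = posterior["THH"] + posterior["TTH"]
print(p_tails)  # 1/2: the extra coin changes nothing
```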

Comment author: jpet 25 November 2013 08:03:52PM 23 points [-]

Took it. Comments:

  • Hopefully you have a way to filter out accidental duplicates (i.e. a hidden random ID field or some such), because I submitted the form by accident several times while filling it out. (I was doing it from my phone, and basically any slightly missed touch on the UI resulted in accidental submission).

  • Multiple choice questions should always have a "none" option of some kind, because once you select a radio button option there's no way to deselect it. Most of them did but not all.

  • I answered "God" with a significant probability because, the way the definition is phrased, I would say it includes whoever is running the simulation if the simulation hypothesis is true. I'm sure many people interpreted it differently. I'd suggest making this distinction explicit one way or the other next time.

In response to Pinpointing Utility
Comment author: jpet 30 January 2013 06:33:09PM 8 points [-]

Your "dimensionless" example isn't dimensionless; the dimensions are units of (satandate - whalefire).

You only get something like a Reynolds number when the units cancel out, so you're left with a pure ratio that tells you something real about your problem. Here you aren't cancelling out any units; you're just neglecting to write them down, and scaling things so that outcomes of interest happen to land at 0 and 1. Expecting special insight to come out of that operation is numerology.
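For contrast, here is what a genuinely dimensionless number looks like: in the Reynolds number the units cancel completely, leaving a pure ratio. A minimal sketch with assumed illustrative values (water in a small pipe; none of these numbers come from the article):

```python
# Reynolds number Re = rho * v * L / mu -- dimensionless because
# (kg/m^3) * (m/s) * m / (kg/(m*s)) cancels to a pure number.
rho = 1000.0   # density of water, kg/m^3
v = 2.0        # flow speed, m/s
L = 0.05       # pipe diameter, m
mu = 1.0e-3    # dynamic viscosity of water, kg/(m*s)

Re = rho * v * L / mu
print(Re)  # 100000.0 -- no units left to write down
```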

Great article other than that, though. I hadn't seen this quote before: "We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate." For me that really captures the essence of it.

Comment author: jpet 16 May 2012 03:07:54AM *  15 points [-]

Hi all, I'm Jeff.

I've started mentally rewarding myself with a happy thought and a smile when I catch myself starting a bad habit ("Hey! I noticed!") instead of castigating myself ("Doh! I'm doing it again!"). Seems to work so far; we'll see how it goes.

I started using the Pomodoro technique today (pick a task, work on it for 25 minutes, break for 5, repeat). I had to adjust it somewhat to deal with interruptions during the day, but that wasn't too hard: when I get done with an interruption, I just have less time before the next break. (I'm keeping the breaks at :25 and :55 to make it easier to keep track.)

There are a number of minor tasks that I'd been putting off for weeks (or months) that I finished today, just because I was stuck in the middle of a 25-minute assignment and wasn't allowing myself to switch to something "more important" until it ended. So far, Pomodoro is very promising.

I allocated the first time block this morning to scraping together my notes and planning for the week. I didn't get a plan made, but I did realize how ridiculously overcommitted I was once I started thinking of tasks in terms of available half-hour slots.

I also have a strong aversion to posting my writing publicly, especially if it reveals anything personal about myself. So this post right here is a direct attempt to overcome that by just doing it. I'm not sure if this is using any specific technique from the minicamp, or just making use of the crazy mental energy from the camp while I'm still feeling it.

Comment author: jpet 20 February 2011 10:22:16PM *  4 points [-]

During-meetup report: is the meetup still on? Brandon and his sign aren't here, and I don't see a likely group. The waitress had no idea who I was asking about.

Two different baby showers, though. I could join one of those instead.

Update: located one other LWer. We talked about the sequences and whatnot for an hour; then I had to go. On my way out I discovered the table with five more folks.

Lesson for next meetup: bigger sign.

Comment author: jpet 21 August 2010 09:00:38PM *  12 points [-]

I don't see how this differs at all from Searle's Chinese room.

The "puzzle" is created by the mental picture we form in our heads when hearing the description. For Searle's room, it's a clerk in a room full of tiles, shuffling them between boxes; for yours, it's a person sitting at a desk scratching on paper. Since the consciousness isn't that of the human in the room, where is it? Surely not in a few scraps of paper.

But consider how complex such simulations would actually have to be to simulate a human brain. Picture what the scenarios would look like running on sufficient fast-forward that we could converse with the simulated person.

You (the clerk inside) would be utterly invisible; you'd live billions of subjective years for every simulated nanosecond. And, since you're just running a deterministic program, you would appear no more conscious to us than an electron appears conscious as it "runs" the laws of physics.

What we might see instead is a billion streams of paper, flowing too fast for the eye to follow, constantly splitting and connecting and shifting. Cataracts of fresh paper and pencils would be flowing in, somehow turning into marks on the pages. Reach in and grab a couple of pages, and we could see how the marks on one seemed to have some influence on those nearby, but when we try to follow any actual stimulus through to a response we get lost in a thousand divergent flows, that somehow recombine somewhere else moments later to produce an answer.

It's not so obvious to me that this system isn't conscious.

Comment author: jpet 22 July 2010 07:30:10PM *  21 points [-]

> One of the most popular such ideas is to replicate the brain by copying the neurons and seeing what happens. For example, IBM's Blue Brain project hopes to create an entire human brain by modeling it neuron for neuron, without really understanding why brains work or why neurons do what they do.

No, the Blue Brain project (no longer affiliated with IBM, AFAIK) hopes to simulate neurons to test our understanding of how brains and neurons work, and to gain more such understanding.

If you can simulate brain tissue well enough that you're reproducing the actual biological spike trains and long-term responses to sensory input, you can be pretty sure that your model is capturing the relevant brain features. If you can't, it's a pretty good indication that you should go study actual brains some more to see if you're missing something. This is exactly what the Blue Brain project is: simulate a brain structure, compare it to an actual rat, and if you don't get the same results, go poke around in some rat brains until you figure out why. It's good science.

Comment author: jpet 23 March 2010 06:18:52PM *  4 points [-]

Where in this system would you place a thorough and accurate, but superficial model that described the phenomenon? If I've made a lot of observations, collected a lot of data, and fit very good curves to it, I can do a pretty good job of predicting what's going to happen--probably better than you, in a lot of cases, if you're constrained by a model that reflects a true understanding of what's going on inside.

If we're trying to predict where a baseball will land, I'm going to do better with my practiced curve-fitting than you are with your deep understanding of physics.

Or for a more interesting example, someone with nothing but pop-psychology notions of how the brain works, but lots of experience working with people, might do a far better job than me at modeling what another person will do, no matter how much neuroscience I study.

...to answer myself, I guess this could be seen as a variation on stage 1: you have a formula that works really well, but you can't explain why it works. It's just that you've created the formula yourself by fitting it to data, rather than being handed it by someone else.

[Edit: changed "non-generative" to "superficial"]

Comment author: Douglas_Knight 21 February 2010 07:34:21AM 0 points [-]

If "top winery" means "largest winery", as it does in this story, I don't see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren't drinking Gallo in the first place.

If people who can tell the difference are a big enough demographic to sell to, then they are employed by all wineries, regardless of quality. But an alternate explanation is that Gallo was tacitly in on the scam - they got as much PN as Sideways demanded, without moving the market.

Comment author: jpet 22 February 2010 01:33:43AM 1 point [-]

Ah, I misunderstood the comment. I just assumed that Gallo was in on it, and the claim was that customers of Gallo failing to complain constituted evidence of wine tasting's crockitude.

If Gallo's wine experts really did get taken in, then yes, that's pretty strong evidence. And being the largest winery, I'm sure they have many experts checking their wines regularly--too many to realistically be "in" on such a scam.

So you've convinced me. Wine tasting is a crock.
